“We have Amazon affiliate links in this article and we earn commission from qualifying purchases at zero cost to you. We only recommend products that we use ourselves and would never compromise the integrity of your build. This helps us bring you quality content and keep this site running.”
If you read the SSD buying guide, you will notice that I actually do NOT recommend consumer drives for plotting. Generally, they are optimized for bursty performance, employ caching algorithms, and are tuned for low power and battery life (even the high-performance, desktop-only M.2 variants support up to 8.25W, compared with 25W for U.2 NVMe). Most importantly, they have much less endurance than data center and enterprise SSDs.
I had to buy a consumer NVMe for the NUC build, as well as for my brother-in-law's build (since I needed something on Amazon Prime quickly). A few models are actually faring pretty well so far. I also realize buying used data center drives on eBay is not everyone's thing, so I'll give a few easy options to order on Amazon for quick delivery (and speedy plotter builds!)
Beware of all these consumer models! There are a lot of different SKUs (variants) with slightly different names and different NAND, performance, and TBW (endurance). For instance, the MP600 has a European model whose model string is off by one digit, and it has half the endurance. The Inland Premium has a sibling model called Platinum that is very bad for plotting. Please click only the links here, or make sure you are searching for the exact model!
Here are a few good options:

| Model | Capacity | TBW | Price | Status |
| --- | --- | --- | --- | --- |
| Corsair MP600 | 2TB | 3600 | $335 | Tested by JM! |
| Inland Premium | 2TB | 3600 | $231 | Tested by JM! |
| Seagate FireCuda 520 | 2TB | 2800 | $367 | Tested by Keybase users |
Corsair MP600
Corsair MP600 2TB NVMe M.2 80mm
Here is the system I used:
Windows 10 20H2
Intel® Core™ i9-10850K Processor
64GB DDR4 3200
2x Corsair MP600 2TB NVMe M.2 80mm
Doing 8 plots from the Windows GUI to test on a RAID0 of two 2TB MP600s. This is 4TiB a day out of the box in Windows with no tuning…
Total time = 17194.226 seconds. CPU (144.460%) Mon Apr 19 22:22:50 2021
Total time = 17425.093 seconds. CPU (147.280%) Tue Apr 20 02:26:42 2021
Total time = 18024.871 seconds. CPU (143.810%) Mon Apr 19 23:36:42 2021
Total time = 15474.692 seconds. CPU (146.420%) Mon Apr 19 19:54:10 2021
Total time = 18469.125 seconds. CPU (140.920%) Tue Apr 20 00:44:07 2021
Total time = 15631.662 seconds. CPU (150.720%) Tue Apr 20 03:56:48 2021
Total time = 16473.111 seconds. CPU (151.050%) Tue Apr 20 03:10:50 2021
Total time = 16264.099 seconds. CPU (145.150%) Mon Apr 19 21:07:20 2021
Total time = 17880.898 seconds. CPU (144.280%) Tue Apr 20 01:34:19 2021
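A quick sanity check on that "4TiB a day" figure; this is a back-of-the-envelope sketch that assumes 8 plots are always running in parallel at the average pace of the logged runs:

```python
# Total times (seconds) from the nine finished plots logged above
times = [17194.226, 17425.093, 18024.871, 15474.692, 18469.125,
         15631.662, 16473.111, 16264.099, 17880.898]

PARALLEL = 8        # plots running at once
PLOT_GIB = 101.4    # size of a finished k=32 plot, in GiB

avg_seconds = sum(times) / len(times)            # ~17,000 s per plot
plots_per_day = PARALLEL * 86400 / avg_seconds   # ~40 plots/day
tib_per_day = plots_per_day * PLOT_GIB / 1024
print(f"{tib_per_day:.1f} TiB/day")              # ~4.0 TiB/day
```

Real throughput depends on stagger and phase overlap, but the steady-state estimate lands right at the claimed 4TiB/day.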
I've used these two drives to plot him 43.4TiB, and each drive has consumed 405TB of writes, or 8% of its endurance. This was done mostly on chia 1.0.3, where each k=32 writes roughly 1.8TiB of temp data, so 43.4TiB of plots equals 438 k=32 plots and approximately 702TBW. So why am I consuming 810TB total (2x 405TB)? This is called write amplification, folks; read up about it in the SSD Endurance Wiki and the SNIA SSD endurance page (I'm the author).
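Spelled out, using the article's own figures (the 1.8TiB-per-plot temp write number is an approximation, and SMART's Data Units Written is what the drives report, not a lab measurement):

```python
PLOT_TIB = 101.4 / 1024        # finished k=32 plot is ~101.4 GiB
plots_tib = 43.4               # total plotted on this pair of drives
n_plots = plots_tib / PLOT_TIB          # ~438 k=32 plots

expected_host_tb = 702         # approximate host writes per the article
reported_tb = 2 * 405          # SMART "Data Units Written", both drives
waf = reported_tb / expected_host_tb    # the ~1.15x gap attributed to write amp
print(round(n_plots), round(waf, 2))
```

A ~15% overhead is quite good for a sustained, mostly sequential workload like plotting.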
=== START OF INFORMATION SECTION ===
Model Number: Force MP600
Serial Number: 21028230000xxxxxxx
Firmware Version: EGFM13.0
PCI Vendor/Subsystem ID: 0x1987
IEEE OUI Identifier: 0x6479a7
Total NVM Capacity: 2,000,398,934,016 [2.00 TB]
Unallocated NVM Capacity: 0
Controller ID: 1
NVMe Version: 1.3
Number of Namespaces: 1
Namespace 1 Size/Capacity: 2,000,398,934,016 [2.00 TB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 6479a7 455020017e
Local Time is: Tue Apr 20 21:00:16 2021 PDT
Firmware Updates (0x12): 1 Slot, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005d): Comp DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x08): Telmtry_Lg
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 90 Celsius
Critical Comp. Temp. Threshold: 95 Celsius
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 9.78W - - 0 0 0 0 0 0
1 + 6.75W - - 1 1 1 1 0 0
2 + 5.23W - - 2 2 2 2 0 0
3 - 0.0490W - - 3 3 3 3 2000 2000
4 - 0.0018W - - 4 4 4 4 25000 25000
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 2
1 - 4096 0 1
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 36 Celsius
Available Spare: 100%
Available Spare Threshold: 5%
Percentage Used: 8%
Data Units Read: 884,709,353 [452 TB]
Data Units Written: 791,078,494 [405 TB]
Host Read Commands: 8,022,828,443
Host Write Commands: 6,863,072,435
Controller Busy Time: 15,869
Power Cycles: 28
Power On Hours: 556
Unsafe Shutdowns: 17
Media and Data Integrity Errors: 0
Error Information Log Entries: 10
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
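The bracketed terabyte figures come straight from the raw counters: per the NVMe spec, one "data unit" is 1,000 512-byte blocks (512,000 bytes). A quick check against the Data Units Written line above:

```python
DATA_UNIT_BYTES = 512 * 1000          # NVMe spec: 1 data unit = 1000 x 512-byte blocks
data_units_written = 791_078_494      # from the SMART log above
tb_written = data_units_written * DATA_UNIT_BYTES / 1e12
print(f"{tb_written:.0f} TB")         # matches smartctl's [405 TB]
```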
Inland Premium
Inland Premium 2TB NVMe on Amazon (In Stock)
This one became famous in my NUC build. After many months of people claiming this drive is good, I finally caved and bought one to test with the new NUC, since I needed an M.2 80mm. I've plotted about 40TB on the NUC, which is nuts because I literally bought it just to play around with and do some testing. I have not seen any performance degradation or any variation in the plotting output on the NUC. It does not appear to be as fast as the Corsair MP600 for plotting, but it has 3600TBW on the 2TB model, is M.2 80mm with no heatsink (so it fits nicely in the NUC), and has some nice modern NVMe features. Overall, I'm actually impressed with this little guy.
$ sudo smartctl -a /dev/nvme0n1
=== START OF INFORMATION SECTION ===
Model Number: PCIe SSD
Serial Number: xxxxxxxxxxxxx
Firmware Version: ECFM13.3
PCI Vendor/Subsystem ID: 0x1987
IEEE OUI Identifier: 0x6479a7
Total NVM Capacity: 2,048,408,248,320 [2.04 TB]
Unallocated NVM Capacity: 0
Controller ID: 1
Number of Namespaces: 1
Namespace 1 Size/Capacity: 2,048,408,248,320 [2.04 TB]
Namespace 1 Formatted LBA Size: 4096
Namespace 1 IEEE EUI-64: 6479a7 4120300a54
Local Time is: Tue Apr 20 21:18:58 2021 PDT
Firmware Updates (0x12): 1 Slot, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005d): Comp DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 75 Celsius
Critical Comp. Temp. Threshold: 80 Celsius
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 9.51W - - 0 0 0 0 0 0
1 + 6.47W - - 1 1 1 1 0 0
2 + 4.96W - - 2 2 2 2 0 0
3 - 0.0490W - - 3 3 3 3 2000 2000
4 - 0.0018W - - 4 4 4 4 25000 25000
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 - 512 0 2
1 + 4096 0 1
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 27 Celsius
Available Spare: 100%
Available Spare Threshold: 5%
Percentage Used: 17%
Data Units Read: 1,626,085,234 [832 TB]
Data Units Written: 1,524,434,185 [780 TB]
Host Read Commands: 4,947,906,538
Host Write Commands: 1,581,467,338
Controller Busy Time: 30,718
Power Cycles: 17
Power On Hours: 991
Unsafe Shutdowns: 6
Media and Data Integrity Errors: 0
Error Information Log Entries: 10
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
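Percentage Used in the NVMe health log is the vendor firmware's own endurance estimate, so it can be cross-checked against Data Units Written. A rough sketch with the Inland numbers above; this implies the firmware's internal endurance budget, which need not match the 3600TBW spec-sheet number:

```python
DATA_UNIT_BYTES = 512 * 1000   # NVMe spec: 1 data unit = 1000 x 512-byte blocks
data_units_written = 1_524_434_185
tb_written = data_units_written * DATA_UNIT_BYTES / 1e12   # ~780 TB, as reported
pct_used = 17                  # "Percentage Used" from the SMART log above

implied_endurance_tb = tb_written / (pct_used / 100)
print(round(implied_endurance_tb))   # ~4600 TB internal budget
```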
I do not own either of these, but some community members have said they have had good luck with both of the models. The TBW checks out.
Seagate FireCuda 520 2TB all have 3600TBW
I’d throw this Sabrent with 3600 TBW into the mix: https://www.amazon.com/Sabrent-Internal-Extreme-Performance-SB-ROCKET-NVMe4-2TB/dp/B07TN1MNJ4
We will! I couldn't remember the model. Sabrent (if you're listening) also has a lot of garbage and QLC and very confusing product names; I didn't want to confuse people.
Why are you using a Z590? I am currently building exactly the same rig, but with a Z490. Does the Z590 have any benefits?
I thought it would be nicer than the Z490 Prime, but that stupid Realtek 2.5GbE requires either Ubuntu 21 or Windows, and Ethernet doesn't work out of the box on either… very lame. I would actually suggest just using the Z490; I've had great luck with that one on many different 10th-gen CPUs.
Okay, thank you for the fast reply. I think I will go for the ASUS ROG Strix Z490-E, but only because I will use it later for a gaming PC build.
Did you get Ethernet to work? If yes, what did you have to do? Any suggestions would be greatly appreciated, as I just bought the Z590 motherboard.
Hello, I've been researching Chia farming for about 4 hours now, but I still don't understand what kind of system I need to build.
For example:
Would a system built with a 14TB HDD be better, or one built with a 2TB NVMe drive? I'd really appreciate your help.
You need to build a system with both a 2TB NVMe (it makes sense to pick the one with the highest rated write endurance) and a 14TB HDD. The more CPU cores, the more RAM, and the faster the NVMe, the better.
Is Ubuntu Server OK to use? Thanks!
Intel i9 10850K 3.60GHz LGA1200 20M CPU
Asus ROG STRIX Z490-G GAMING Z490 DDR4 M.2 DP/HDMI PCIe 3.0
G.SKILL 64GB (4x16GB) Trident Z DDR4 3200MHz CL16 1.35V Dual Kit RGB LED RAM
XPG Gammix S70 AGAMMIXS70-2T-C 2TB 7400/6400MB/s NVMe PCIe M.2 SSD x2
4TB HDD
GIGABYTE P750GM 750W power supply
This is the system I'm thinking of building… What do you think of it, and roughly what would it yield? Thanks a lot in advance.
Any thoughts on the Corsair MP510 series? I bought the 2TB.
"Seagate FireCuda 520 2TB all have 3600TBW" – isn't the TBW rated at up to 2800TB? The link from your article also opens this drive with a spec of up to 2800TB TBW.
If you want consumer drives to perform at the level of data center grade drives, you need more of them, for example 4-6 versus 2 DC drives. In many cases, this means using an adapter and at least your GPU slot (each drive needs 4 PCIe lanes). It's mostly going to cost you more and still be significantly less durable (e.g. 3.6PB vs. >8PB of rated writes). JM makes the point: consumer drives are not engineered to do well with sustained, high-load workloads like plotting. DC and enterprise grade drives are.
Most people ignored the first paragraph of this post 😅
Looks romantic. I have similar HW… plotting time is 30k s, or about 10 hours… but it is Windows. I am going to try Linux.
Should I buy two 1TB SSDs or one 2TB SSD? Which build is faster?
Two is better than one!!
I have a very similar setup (i9 10850K). I noticed that you do 8 plots every time with 8 threads per plot. What is your stagger time? My phase 1 time is always around 7000s; how does your phase 1 achieve 5389s? And thank you so much for being helpful all the time!
Don't worry about times as much as output. I go into detail here: https://youtu.be/yVLdR03-0bQ
Hello, I need help.
I bought:
Asus Z590-Plus
CPU: i9 10850
RAM: 2x32GB 3200
And plotting is very slow:
6h for 8 plots,
which is about 2.5TiB per 24h.
2x NVMe: Sabrent 2TB Rocket 4 Plus NVMe 4.0 Gen4 PCIe M.2 internal SSD, extreme performance R/W 7100/6600 MB/s (SB-RKT4P-2TB)
NVMe write speed maxes at 3200MB/s. Why?
Can you show me your setup?
Please help.
Check the SSD's sustained performance: https://www.tomshardware.com/reviews/sabrent-rocket-4-plus-m2-nvme-ssd-review/3
That being said, your system looks OK. I haven't tested that SSD. Did you overclock?
Hello Storage_jm, thanks for the good guides!
I have a build with 32GB RAM (3200MHz), a Corsair MP600 2TB (the European version with 1800TBW), a Ryzen 3700X (8 cores, 16 threads), and an ASUS PRIME X570-PRO.
When plotting 3 or 4 parallel plots, I/O on the MP600 is already the bottleneck. I do 3 parallel plots in about 28,000 seconds. Here is a write benchmark of the MP600.
(base) tim@tim-desktop:~$ dd if=/dev/zero of=/mnt/tmp_nvme/tmp bs=64k count=20k; rm -f /mnt/tmp_nvme/tmp
20480+0 records in
20480+0 records out
1342177280 bytes (1,3 GB, 1,2 GiB) copied, 5,66301 s, 237 MB/s
At first, I thought the MP600 was thermal throttling, but it's constantly at 45°C.
Is there something to enable in the BIOS to get higher sustained writes? Or is my MP600 defective?
Thanks in advance!
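One caveat worth noting for dd benchmarks like the one in this comment: a 1.3 GB write from /dev/zero mostly measures the drive's SLC cache and the OS page cache, not sustained media speed. A rough sketch of a more representative test follows; `TARGET` and `COUNT_MB` are placeholders, and for a real sustained-write test `COUNT_MB` should be raised to several times the drive's SLC cache size (tens of GB):

```shell
# Placeholder path and size; point TARGET at the NVMe mount for a real test.
TARGET="${TARGET:-/tmp/ddtest.bin}"
COUNT_MB="${COUNT_MB:-1024}"

# conv=fdatasync forces the data to media before dd reports a speed,
# so the page cache is not counted in the result.
dd if=/dev/zero of="$TARGET" bs=1M count="$COUNT_MB" conv=fdatasync
rm -f "$TARGET"
```

A purpose-built tool like fio with `--direct=1` gives more control (queue depth, block size) than dd for this kind of measurement.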
Hi there Storage_jm,
Is there any external SSD you recommend for plotting?
thanks
Doing 8 plots from the Windows GUI to test on 2x RAID0 of 2TB MP600.
--------------------------------
Can you tell me how to set up the 2x MP600 RAID0 on a Z590? GIGABYTE (Z590 AORUS Elite) support said other brands of M.2 drives cannot be set up in RAID, only Intel ones.
Where did you get that case from? It would be ideal for the build I am hoping to achieve.
Such a good tutorial!
Do you reckon it is better to have two 2TB M.2 drives, or one 512GB (for the system) and one 2TB (for temp plotting)?
Thanks
I'm seeing Corsair sells three versions of the MP600:
a Force, a Core, and a new Pro version.
The Core version uses QLC NAND, so that one is out.
I suppose the new Pro version has the newer Phison E18 controller over the Phison E16 in the Force version.
Am I right that this post refers to the Corsair Force MP600?
I'm using the Gigabyte AORUS 2TB NVMe; speed-wise it's slightly faster than the Corsair Force MP600, and it has a bronze-plated heatsink which helps with cooling.
The TBW is the same as the Corsair's.
TBW is not properly 'tested' by the manufacturer, so I think it's just an estimate.
Some say the Samsung 970 PRO is the best one, even though it might list a lower TBW.
But then, we don't have data to prove this, so take it with a pinch of salt.
All SSD vendors use the JESD219 workload specification from JEDEC for spec-sheet TBW. People are confusing host TBW (the amount of data the host writes) with drive-rated TBW (the JEDEC workload, which is typically harsher than everyday use).
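To put spec-sheet TBW in plotting terms, here is a rough lifetime estimate using the throughput and write-amplification figures quoted earlier in this article; every input is an approximation from this post, not a vendor rating:

```python
TBW_RATING = 3600        # rated TB written for the 2TB models above
PLOTS_PER_DAY = 40       # ~4 TiB/day of finished k=32 plots
TIB_PER_PLOT = 1.8       # temp writes per k=32 plot (chia 1.0.3 era)
TIB_TO_TB = 1.0995       # TiB -> TB conversion factor
WAF = 1.15               # write amplification observed in this article
DRIVES = 2               # the RAID0 pair splits the writes

tb_per_drive_day = PLOTS_PER_DAY * TIB_PER_PLOT * TIB_TO_TB * WAF / DRIVES
days_to_rated_tbw = TBW_RATING / tb_per_drive_day
print(round(days_to_rated_tbw))    # roughly 80 days of nonstop plotting
```

That's the whole argument for enterprise drives in one number: a consumer drive plotting flat out burns through its rated endurance in a couple of months.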