I’m sure there are data science / data center people who can appreciate this. For me, all I’m thinking is how hot it runs, and how much I wish 20TB SSDs would soon be priced like HDDs.
The trouble with ridiculous R/W numbers like these is not that there’s no theoretical benefit to faster storage, it’s that the quoted numbers are always for sequential access, whereas most desktop workloads skew heavily toward random access, which flash memory kinda sucks at. Even really good SSDs only deliver ~100MB/sec in pure random access scenarios. This is why you don’t really feel any difference between a decent PCIe 3.0 M.2 drive and one of these insane-o PCIe 5.0 drives, unless you’re doing a lot of bulk copying of large files on a regular basis.
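You can see the gap yourself with a few lines of Python. This is just a rough sketch, not a proper benchmark: it reads through the OS page cache (a real tool like fio would use direct I/O), the file and chunk sizes are arbitrary choices of mine, and the numbers will vary wildly by machine. But the pattern — big sequential chunks vs. small reads at random offsets — is the same one the quoted spec-sheet numbers hide.

```python
import os
import random
import tempfile
import time

SEQ_CHUNK = 1024 * 1024       # 1 MiB sequential reads
RND_CHUNK = 4096              # 4 KiB random reads (typical desktop I/O size)
FILE_SIZE = 64 * 1024 * 1024  # 64 MiB throwaway test file

def throughput_mb_s(path: str, offsets, chunk: int) -> float:
    """Read `chunk` bytes at each offset; return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(chunk))
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

# Create a temp file full of random bytes to read back.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(FILE_SIZE))
    path = tmp.name

try:
    # Sequential: walk the file front to back in 1 MiB chunks.
    seq_offsets = range(0, FILE_SIZE, SEQ_CHUNK)
    # Random: the same number of reads, but 4 KiB at random offsets.
    rnd_offsets = [random.randrange(0, FILE_SIZE - RND_CHUNK)
                   for _ in range(FILE_SIZE // SEQ_CHUNK)]

    print(f"sequential 1M: {throughput_mb_s(path, seq_offsets, SEQ_CHUNK):9.1f} MB/s")
    print(f"random 4K:     {throughput_mb_s(path, rnd_offsets, RND_CHUNK):9.1f} MB/s")
finally:
    os.unlink(path)
```

On a cached 64 MiB file both numbers will look inflated; point it (carefully) at a large cold file on the actual drive and the random-4K figure collapses toward that ~100MB/sec ballpark while sequential stays near the spec-sheet number.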
It’s also why Intel Optane drives became the steal of the century when they went on clearance after Intel abandoned the tech. Optane is basically as fast in random access as in sequential access, which means that in some scenarios even a PCIe 3.0 Optane drive can feel much, much snappier than a PCIe 4.0 or 5.0 SSD that looks faster on paper.
Why was Optane so good with random access? Why did Intel abandon the tech?
Agreed, 1 lane of PCIe 4.0 per M.2 SSD is enough.
Give me more slots instead.
IMO this is another example of pushing numbers ahead of what’s actually needed, benefiting manufacturers way more than the end user. Get this for bragging rights? Sure, you do you. Some server/enterprise niche use case? Maybe. But I’m sure that for 90% of people, including even those with somewhat demanding storage requirements, a PCIe 4.0 NVMe drive is still plenty in terms of throughput. At the same time, SSD prices have been hovering around the same point for the past three to five years, and there hasn’t been significant development in capacity: 8 TB models are still rare and disproportionately expensive, almost exotic. I personally would be much more excited to see a cool, efficient and reasonably priced 8/16 TB PCIe 4.0 drive than a pointlessly fast 1/2/4 TB PCIe 5.0 one.
I never understood this kind of objection. You yourself state that maybe 10% of users can find some good use for this — and that means we should stop developing the technology until some arbitrary, higher threshold is met? 10% of users is a huge number of people! Why is that too few for this development to make sense?
I’m not saying “don’t make progress”, I’m saying “try to make progress across the board”.
That’s not how R&D works. It’s really rare to get “progress across the board”; usually you get incremental improvements in specific areas that come together into an across-the-board improvement.
So we’d be getting improvements more slowly, since there’s much less profit in individual advancements if they can’t be released on their own. What’s the advantage here?