Faster storage is rarely the first priority for gamers planning their next upgrade, and historically it seldom has been. Cast your mind back (if you're old enough) to an era when owning the fastest storage device meant loading onto a server so quickly that you'd wait what seemed like an eternity for your friends to arrive.
Commercial SCSI drives weren't financially viable, and the interface hadn't trickled down to consumer motherboards. This left gamers with a sea of IDE and SATA drives that were often indistinguishable. In 2003, Western Digital finally broke the mould with the Raptor, punching in with a rapid 10,000 rpm spindle and 37 GB of capacity on a single platter. This was an industry first for a consumer drive, with the best alternatives topping out at 7,200 rpm.
Surprisingly, the drive wasn't a true SATA device, using a Marvell bridge chip to convert its native PATA interface to SATA. Despite this, it was fast enough to tread on the toes of the commercial SCSI drives of the period, making it very attractive for enthusiasts after blistering storage performance. Having four in a RAID array was a feast for the senses, with head clatter akin to a thousand Geiger counters thrown into the heart of a nuclear reactor. The noise, however, was a small price to pay for record-shattering speeds.
By today's standards, these monsters are a relic of the past in the enthusiast space. Very few people will have these steampunk metallic bricks crunching away inside their water-cooled systems. Mechanical drives are now reserved solely for capacity and redundancy, either gathering dust in a living room NAS or playing a more crucial role in a managed cloud.
The concept of solid-state storage had been established decades before it eventually reached the living room PC. Having a personal system composed entirely of solid-state storage was a glorious thing: one of the biggest bottlenecks in the computational chain was finally unshackled, no longer restricted to the speed of moving mechanical parts. Fast forward to today, and the options are almost endless. SATA 3.0 SSDs are available in capacities worthy of the most expansive Steam library, and even past-gen NVMe PCIe drives in large capacities are becoming a viable option for builds on a tighter budget.
My trusty 1TB Gen 3.0 NVMe Sabrent Rocket has served me well. Spanning no less than four motherboards over the last few years, it has scoured the depths of the Steam store, churning through many terabytes in a constant mental struggle over which games must enter the deleted shadow realm. It also offers speeds that ol' clunks-a-lot or a conventional SATA 3.0 SSD could only dream of, with blistering 3,400 MB/s sequential reads. But how does an ageing Gen 3.0 NVMe drive stack up in 2023?
Crucial's new T700 is the industry's first Gen 5.0 NVMe drive. Offering up to 12,400 MB/s sequential reads and up to 11,800 MB/s sequential writes, it leaves our Gen 3.0 Sabrent dead in the water, at least on paper.
Sequential and Random Data Patterns: What's the difference?
The metric that matters most to you depends on your workflow and how you intend to use the drive. Sequential read and write speeds matter for transferring large files: the drive accesses blocks of data clumped together, making light work of the transfer. Random read and write performance measures access to many small files scattered across different locations on the drive. This kind of access is more costly but, in most cases, more indicative of day-to-day use, such as opening a Word document or launching a game.
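As a rough sketch of the difference (plain Python file I/O, not CrystalDiskMark's actual methodology, and with illustrative sizes), the two patterns below move identical data; only the order of the requests changes:

```python
import os
import random
import tempfile

BLOCK = 4096        # 4 KiB per block
NUM_BLOCKS = 256    # 1 MiB scratch file in total

# Write a scratch file to read back; sizes here are illustrative only.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(os.urandom(BLOCK * NUM_BLOCKS))
tmp.close()

def read_blocks(order):
    """Read the file one block at a time, in the given block order."""
    data = bytearray()
    with open(tmp.name, "rb") as f:
        for i in order:
            f.seek(i * BLOCK)    # a scattered order forces a seek per block
            data += f.read(BLOCK)
    return bytes(data)

sequential = read_blocks(range(NUM_BLOCKS))   # blocks 0, 1, 2, ... in order
shuffled = list(range(NUM_BLOCKS))
random.shuffle(shuffled)
scattered = read_blocks(shuffled)             # the same blocks, random order

# Both patterns move the same total data; only the layout of the requests
# differs, which is exactly what the two benchmark numbers capture.
os.remove(tmp.name)
```

On a mechanical drive the scattered order pays a head-seek penalty per block; on an SSD the gap is far smaller, but random access still carries per-request overhead.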
Queue Depth and Threads
Queue depth is a critical factor in optimizing I/O request handling, dictating the number of active queues for data transfer. The greater the number of queues processing data concurrently, the higher the potential for achieving faster transfer speeds. By default, CrystalDiskMark assesses performance at queue depths of 1, 8, and 32. However, you have the flexibility to manually raise the queue depth to conduct tests according to your preferences. To visualize this, envision a queue as an individual adept at filing documents – the more individuals you have, the swifter the filing process becomes.
Raising the queue depth often yields higher transfer speeds, regardless of the block size or thread count, and the effect is particularly pronounced in random workloads. Drawing on the filing-cabinet analogy, two people filing papers in tandem are notably quicker than one. Moving from a queue depth of one to 32 can yield up to 10 times the transfer speed, a substantial improvement.
Threads, on the other hand, reside in the CPU rather than the storage domain. CPUs consist of cores, each typically equipped with one or two threads – analogous to queues in the storage realm. A surplus of threads facilitates multitasking, enabling efficient management of multiple tasks concurrently. In the context of CrystalDiskMark, threads play a less prominent role, as the default configuration predominantly employs a single thread count across seven out of eight tests. Only one test employs a thread count of 16, showcasing their limited influence on this benchmark.
By optimizing both queue depth and threads, you can unlock enhanced system performance, effectively harnessing the potential of your hardware for improved data handling and transfer speeds.
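To make the queue/worker analogy concrete, here's a minimal Python sketch that uses a thread pool as a stand-in for a command queue, with `max_workers` playing the role of queue depth. It's illustrative only and won't reproduce real NVMe queueing behaviour; the point is that a deeper "queue" allows more requests to be in flight at once:

```python
import concurrent.futures
import os
import tempfile

BLOCK = 4096      # 4 KiB per request
NUM_BLOCKS = 64   # 256 KiB scratch file, sizes illustrative only

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(os.urandom(BLOCK * NUM_BLOCKS))
tmp.close()

# Keep one descriptor open and use os.pread (POSIX), which takes an explicit
# offset, so concurrent workers never fight over a shared seek position.
fd = os.open(tmp.name, os.O_RDONLY)

def read_block(i):
    return os.pread(fd, BLOCK, i * BLOCK)

def run_at_depth(depth):
    """Issue every block read with at most `depth` requests in flight."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=depth) as pool:
        return list(pool.map(read_block, range(NUM_BLOCKS)))

shallow = run_at_depth(1)    # "queue depth 1": one request at a time
deep = run_at_depth(32)      # "queue depth 32": many requests in flight

os.close(fd)
os.remove(tmp.name)
```

Both runs return identical data; the deeper pool simply keeps more requests outstanding, which is where real drives find their extra throughput.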
Block Sizes
We'll try to stay away from Duplo analogies here! Data files are composed of blocks, which represent the chunks of information moved during an input/output (I/O) operation. CrystalDiskMark's default tests encompass two block sizes, offering insights into performance variations: 1 MiB blocks for the sequential tests and 4 KiB blocks for the random tests.
Counterintuitively, larger block sizes correlate with faster transfer speeds. This can be likened to transporting individual pieces of paper versus moving an entire folder into a filing cabinet. Larger block sizes are commonly employed for sequential file transfers, while smaller block sizes are used for random workloads. Note, however, that block size and access pattern are independent: CrystalDiskMark pairs large blocks with its sequential tests and small blocks with its random tests by convention, not because a given block size is inherently sequential or random.
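A quick sketch of why block size matters, using plain Python reads rather than the benchmark itself: moving the same megabyte takes 256 I/O calls at 4 KiB but a single call at 1 MiB, and every call carries fixed overhead.

```python
import os
import tempfile

FILE_SIZE = 1 << 20   # a 1 MiB scratch file, size illustrative only
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(os.urandom(FILE_SIZE))
tmp.close()

def count_reads(block_size):
    """Read the whole file in `block_size` chunks, counting I/O calls."""
    calls = 0
    with open(tmp.name, "rb") as f:
        while f.read(block_size):
            calls += 1
    return calls

small = count_reads(4 * 1024)      # 4 KiB blocks, as in the random tests
large = count_reads(1024 * 1024)   # 1 MiB blocks, as in the sequential tests

# The same megabyte needs 256 small reads but only one large read.
print(small, large)
os.remove(tmp.name)
```

Fewer, bigger requests amortise the per-call overhead, which is the filing-folder analogy in code form.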
Benchmark Metrics & Large File Transfers
CrystalDiskMark Default Preset: Test Conditions
Sequential reads and writes on the Crucial T700 leave the Gen 3.0 drives in the dust, making light work of large file transfers. In CrystalDiskMark's Real World profile, we're seeing 121% and 102% uplifts over the Sabrent Rocket in sequential read and write respectively, making the Gen 5.0 drive very appealing for productivity. The WD Raptor cameo and Samsung 860 SSD serve as a history lesson here in terms of sustained read and write performance: SATA 3.0 SSDs and mechanical drives simply cannot compete with PCIe.
Do Gen 5.0 drives get hot?
Using the PCIe 5.0 add-in card and heatsink that ship with the ROG Maximus Z790 Apex, NVMe drive temperatures aren't an issue. Under CrystalDiskMark, we're unable to push the Crucial T700 past 40 degrees Celsius.
Ready, Steady, Load
Although the T700 takes the crown, traditional storage options like SATA 3.0 SSDs typically offer satisfactory performance. This is because games often utilize compressed assets during loading, and any potential bottlenecks tend to emerge from the CPU's processing capacity. Even with substantial read speeds, achieving significant gains can be challenging. The effectiveness hinges on the decompression technique and file sizes employed by developers.
However, as our sequential transfer results in CrystalDiskMark show, if your workload involves a lot of productivity and frequent manipulation of large files, Gen 5.0 drives can offer a massive uplift in transfer speeds.