Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read, and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; thus we have the four Iometer tests we use in all of our reviews.

Note that we've updated our C300 results on our new Sandy Bridge platform for these Iometer tests. As a result you'll see some higher scores for this drive (mostly with our 6Gbps numbers) for direct comparison to the m4 and other new 6Gbps drives we've tested.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
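
For the curious, the workload can be approximated outside of Iometer. The sketch below is illustrative only: it's Python on a POSIX system, single-threaded (so effectively QD=1 rather than the 3 concurrent IOs we use), and without O_DIRECT the OS page cache will inflate the numbers. The path and sizes are placeholders, not our actual test configuration:

```python
import os
import random
import time

def random_write_mbps(path, span_bytes, duration_s, block=4096):
    """Write 4KB blocks of incompressible data at random 4KB-aligned
    offsets within the first span_bytes of a file, then report MB/s.
    A rough stand-in for the Iometer 4KB random-write test (QD=1 here)."""
    payload = os.urandom(block)          # incompressible, worst case for SandForce
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.ftruncate(fd, span_bytes)     # preallocate the LBA span
        slots = span_bytes // block
        written = 0
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            os.pwrite(fd, payload, random.randrange(slots) * block)
            written += block
        os.fsync(fd)                     # flush the page cache before reporting
    finally:
        os.close(fd)
    return written / duration_s / 1e6

# e.g. random_write_mbps("scratch.bin", 8 << 30, 180) for a full 8GB/3-minute run
```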

Iometer—4KB Random Write, 8GB LBA Space, QD=3

If there's one thing Crucial focused on with the m4 it's random write speeds. The 256GB m4 is our new king of the hill when it comes to random write performance. It's actually faster than a Vertex 3 when writing highly compressible data. It doesn't matter if I run our random write test for 3 minutes or an hour, the performance over 6Gbps is still over 200MB/s.

Let's look at average write latency during this 3 minute run:

Average Write Latency—4KB Random Write

On average it takes Crucial's m4 0.06ms to complete a 4KB write with three concurrent IOs spread out over an 8GB LBA space. The original C300 was pretty fast here already at 0.07ms—it's clear that these two drives are very closely related. Note that OCZ's Vertex 3 has a similar average latency, but it's not actually writing most of the data to NAND—remember this is highly compressible data; most of it never hits NAND.
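
The latency and throughput figures are mutually consistent, by the way. Here's a back-of-envelope check using Little's Law (sustained throughput equals outstanding IOs divided by average completion latency), plugging in the numbers from these charts:

```python
# Little's Law sanity check: with 3 IOs in flight and 0.06ms average
# completion latency, how much 4KB random-write throughput results?
queue_depth = 3
avg_latency_s = 0.06e-3                      # 0.06 ms
block_bytes = 4096                           # 4KB transfers
iops = queue_depth / avg_latency_s           # 50,000 IOPS
mb_per_s = iops * block_bytes / 1e6          # ~204.8 MB/s
print(f"{iops:.0f} IOPS -> {mb_per_s:.1f} MB/s")
```

That works out to roughly 205MB/s, which lines up with the 200MB/s+ sustained random write speed we measured above.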

Now let's look at max latency during this same 3 minute period:

Max Write Latency—4KB Random Write

You'll notice a huge increase in max latency compared to average latency; that's because this is when a lot of drives do some real-time garbage collection. If you don't periodically clean up your writes, you'll end up increasing max latency significantly. You'll notice that even the Vertex 3 with SandForce's controller has a pretty high max latency compared to its average latency. This is where the best controllers do their work. However, not all OSes deal with these occasional high latency blips all that well. I've noticed that OS X in particular doesn't handle unexpectedly high write latencies very well, usually resulting in you having to force-quit an application.

Note the extremely low max latency of the m4 here: 4.3ms. Either the m4 is ultra quick at running through its garbage collection routines or it's putting off some of the work until later. I couldn't get a clear answer from Crucial on this one, but I suspect it's the latter. I'm going to break the standard SSD review mold here for a second and take you through our TRIM investigation. Here's what a clean sequential pass looks like on the m4:

Average read speeds are nearing 400MB/s, average write speed is 240MB/s. The fluctuating max write speed indicates some clean up work is being done during the sequential write process. Now let's fill the drive with data, then write randomly across all LBAs at a queue depth of 32 for 20 minutes and run another HDTach pass:

Ugh. This graph looks a lot like what we saw with the C300. Without TRIM the m4 can degrade to a very, very low performance state. Windows 7's Resource Monitor even reported instantaneous write speeds as low as 2MB/s. The good news is that the performance curve trends upward: the m4 is trying to clean up its performance. Write sequentially to the drive and its performance should start to recover. The bad news is that Crucial appears to be delaying this garbage collection work a bit too long.

Remember that the trick to NAND management is balancing wear leveling with write amplification. Clean blocks too quickly and you burn through program/erase cycles; clean them too late and you risk high write amplification (and reduced performance). Each controller manufacturer decides the best balance for its SSD. Typically the best controllers do a lot of intelligent write combining and organization early on and delay cleaning as much as possible. The C300 and m4 both appear to push the limits of delayed block cleaning, however. Based on the very low max random write latencies from above, I'd say that Crucial is likely doing most of the heavy block cleaning during sequential writes and not during random writes. Note that in this tortured state, max random write latencies can reach as high as 1.4 seconds.
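
To make the write amplification side of that tradeoff concrete, here's a toy calculation; the numbers are hypothetical, not anything measured from the m4. Valid pages still living in a block have to be copied elsewhere before the block can be erased, and every copied byte is a NAND write the host never asked for:

```python
def write_amplification(host_writes, gc_relocated):
    """Write amplification = total NAND writes / host writes.
    gc_relocated: bytes of still-valid data the controller had to copy
    while erasing blocks (a hypothetical bookkeeping figure)."""
    return (host_writes + gc_relocated) / host_writes

# Cleaning at a bad moment: blocks hold a mix of valid and invalid pages,
# so each erase drags a lot of live data along with it.
assert write_amplification(100, 300) == 4.0   # 4 NAND writes per host write
# Cleaning when blocks are mostly stale: very little copying is needed.
assert write_amplification(100, 10) == 1.1
```

Delaying cleaning lets more pages go stale before a block is erased, which is why the m4's strategy can pay off, right up until the drive runs out of clean blocks mid-workload.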

Here's a comparison of the same torture test run on Intel's SSD 320:

The 320 definitely suffers, just not as badly as the m4. Remember the 320's higher max write latencies from above? I'm guessing that's why: Intel seems to be doing more cleanup along the way.

And just to calm all fears—if we do a full TRIM of the entire drive performance goes back to normal on the m4:

What does all of this mean? It means that it's physically possible for the m4, if hammered with a particularly gruesome workload (or a mostly naughty workload for a longer period of time), to end up in a pretty poor performance state. I had the same complaint about the C300, if you'll remember from last year. If you're running an OS without TRIM support, the m4 is a definite pass. Even with TRIM enabled, if your workload is sufficiently random, you'll want to skip the m4 as well.

I suspect for most desktop workloads this worst case scenario won't be a problem and with TRIM the drive's behavior over the long run should be kept in check. Crucial still seems to put off garbage collection longer than most SSDs I've played with, and I'm not sure that's necessarily the best decision.

Forgive the detour, now let's get back to the rest of the data.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 to 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer—4KB Random Write, 8GB LBA Space, QD=32

High queue depth 4KB random write numbers continue to be very impressive, although here the Vertex 3 actually jumps ahead of the m4.

Iometer—4KB Random Read, QD=3

Random read performance is actually lower than on the C300. Crucial indicated that it reduced random read performance in favor of increasing sequential read performance on the m4. We'll see what this does to real world performance shortly.


  • Nentor - Thursday, March 31, 2011 - link

    "And for those of you asking about my thoughts on the recent OCZ related stuff that has been making the rounds, expect to see all of that addressed in our review of the final Vertex 3."

    Too late Anand, and you well know it. It has no place hidden in some unwritten review about next generation hardware either.

    I don't think the people talking about that matter care so much about your thoughts on it as about you speaking out on a product you reviewed that turned out to be very good and is now available in shops in the same box and shell, but with different hardware inside and different performance.

    Anyone might end up buying one of these things based on your good review of it and end up with quite another product when they return home.

    That is the point, and you failed it quite horribly, professionally and personally.
    Reply
  • Anand Lal Shimpi - Thursday, March 31, 2011 - link

    The 25nm fiasco happened while I was out of the country covering MWC. I was thousands of miles away from any testbeds. When it happened I immediately contacted OCZ's CEO and asked for his plan to make it right. To date I believe they have addressed all present concerns by allowing users to exchange drives with 64Gbit 25nm NAND for 32Gbit drives. It's my understanding that small capacity 64Gbit die drives have been discontinued. There are still some 64Gbit devices in the channel and I pushed for a name change on the impacted product but it looks like the best OCZ is willing to do is point you at the model number to (possibly) determine what you're getting.

    I finally got a pair of 25nm drives in this week - I wasn't going to make any public statements based on product I hadn't tested personally. Unfortunately both drives, the 60GB 'E' and the 120GB non-E, use 32Gbit NAND devices.

    OCZ shouldn't have handled this the way it did initially. Lower performing drives should never have hit the market and they shouldn't have tried to charge people for replacements. However the company did respond quickly and I believe has made things right for those users who are impacted based on what I've seen here:

    http://www.ocztechnologyforum.com/forum/showthread...

    Regardless this is another check in the wrong column for OCZ and it will be addressed - not hidden - (as well as the SpecTek memory stuff) in an upcoming article. My original plan was to wrap that, the m4, Corsair P3 and Samsung 470 all into our Intel 320 review however being at CTIA last week left me with little time to get all of that done.

    I would've liked to have been on top of all of this from the start, and had OCZ not made things right publicly early on I would've stepped in (there was a lot of prodding from me behind the scenes during MWC week). The timing was unfortunate and I'm looking to bring on a regular storage editor to help ensure this sort of thing doesn't happen in the future. With all of the growth in SSDs as well as the increase in demand for HDD coverage, it's time to grow the storage team on AT.

    Take care,
    Anand
    Reply
  • cactusdog - Thursday, March 31, 2011 - link

    Anand, OCZ have only made things partially right; the issue is not solved. OCZ are swapping drives to meet IDEMA specs, but performance is still slower. So they only made it 50% right.

    It's not the fact that it's a slower drive, but that they are using the same branding, making it impossible for users to know which NAND is being used before the drive is purchased.

    The bigger issue is whether it is ethical for a company to change specs and use the same branding. After all, Intel and Corsair saw fit to rebrand their 25nm drives. Other companies at least changed the model number.

    The SpecTek issue is another can of worms for OCZ, but it raises the same kind of questions about OCZ's ethics and transparency.
    Reply
  • Anand Lal Shimpi - Thursday, March 31, 2011 - link

    This is why I wanted to get drives in house. With 25nm 120GB and 60GB drives in hand now I can start looking at performance. In theory with the same number of die there shouldn't be any performance difference. If there is, something else is at play.

    It is absolutely unethical for a manufacturer to change performance and sell under the same product name. Let me do some testing and I'll touch on this very soon.

    Take care,
    Anand
    Reply
  • Gami - Thursday, March 31, 2011 - link

    The problem with your new test is which drives you're getting:

    the first version of the 25nm SSDs that they tried to secretly get through,

    or the final version after all the complaints and the switch to the bigger NAND chips.

    The very first ones only used 8 channels to connect those NAND chips; you need to get one of these drives as well.

    After all the complaints, they finally admitted what they had done and said you could trade in for proper sized SSDs, but you had to pay the difference in price, even though you had already paid for what you're finally being given.

    After another run of bad press and complaints, they backed off the whole pay-the-difference thing and just gave you a new SSD with the right configs.

    It's still less space and less performance than the original Vertex 2 that you originally bought, and it also has a shorter lifespan.

    (If you had bought these in the first few months of the change to 25nm, you were also paying the full price of the 32nm chips.) There was no savings; you paid more for a worthless new SSD that had the same markings on it as the one that was rated the number one SSD of the year.
    Reply
  • mikato - Friday, April 1, 2011 - link

    "However the company did respond quickly and I believe has made things right for those users who are impacted based on what I've seen here"

    Actually, it looks like they only made things right for the people that noticed they were tricked a bit and complained. One might argue that there was no impact to those who didn't notice since hey, they got an OCZ Vertex 2 didn't they, but most of us wouldn't agree with that because they could put a Vertex 2 label on a box of dog crap if they wanted.

    FWIW I bought a Vertex 2 120GB in early January. I'm happy with it and I'm pretty sure it's not 25nm based on what I've read for the releases etc. but I haven't checked on it myself with any test. If it turned out to be 25nm and with worse specs, I probably won't return it to avoid the hassle but that doesn't mean I didn't get shortchanged.
    Reply
  • anevmann - Thursday, March 31, 2011 - link

    "seequential" in the Performance vs transfer size :P

    But a great ssd review as always Anand ;)
    Reply
  • Anand Lal Shimpi - Thursday, March 31, 2011 - link

    Thanks for the comment and the heads up :)

    Take care,
    Anand
    Reply
  • anevmann - Thursday, March 31, 2011 - link

    Any news on when/if TRIM will be supported in RAID in the future?
    Reply
  • forgotdre - Thursday, March 31, 2011 - link

    Samsung 470 review please! I haven't heard much about it and it seems like a great drive!
    Reply
