Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not have consistent IO latency with SSDs is that all controllers must eventually perform some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour and we record instantaneous IOPS every second.
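The article does not name the benchmarking tool, but the workload described above can be reconstructed as an fio job file. Everything here, including the device path and runtime, is my approximation of the described test, not AnandTech's actual configuration:

```ini
# Approximate reconstruction of the consistency workload as an fio job.
# Precondition: secure-erase the drive, then fill it sequentially once
# so every user-accessible LBA is mapped before this job runs.
[global]
ioengine=libaio
direct=1
# Target drive; this job destroys all data on it.
filename=/dev/sdX

[consistency]
# 4KB random writes across all LBAs at queue depth 32.
rw=randwrite
bs=4k
iodepth=32
# Regenerate write buffers each submission so the data stays incompressible.
refill_buffers
# "Just over half an hour" of time-based running.
time_based
runtime=2000
# Log average IOPS once per second, matching the per-second sampling.
write_iops_log=consistency
log_avg_msec=1000
```

The resulting `consistency_iops.*.log` file contains the per-second samples that graphs like the ones below are built from.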

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
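To make the over-provisioning arithmetic concrete, here is a small sketch. The capacities and the 75% LBA limit are hypothetical illustrations, not the SSD340's actual NAND configuration or the exact restriction used in the test:

```python
def over_provisioning(raw_bytes, usable_bytes):
    """Spare area expressed as a fraction of the user-visible capacity."""
    return (raw_bytes - usable_bytes) / usable_bytes

GiB, GB = 2**30, 10**9
raw = 256 * GiB     # physical NAND on a hypothetical 256GB-class drive
stock = 256 * GB    # advertised (decimal) capacity

# Factory OP from the binary-vs-decimal capacity gap alone: ~7.4%
print(round(over_provisioning(raw, stock) * 100, 1))

# Limiting the workload to 75% of the LBAs leaves the rest as spare area
limited = stock * 3 // 4
print(round(over_provisioning(raw, limited) * 100, 1))
```

The controller can treat any never-written (or TRIMed) LBAs as extra spare area, which is why simply leaving part of the drive empty improves worst-case consistency.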

Each of the three graphs has its own purpose. The first covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but use different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
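As an illustration of what "steady state" means here, the snippet below (my own sketch, not AnandTech's tooling, with a synthetic log) takes a per-second IOPS log and reports the minimum and average after the t=1400s mark:

```python
def steady_state_stats(iops_log, t_start=1400):
    """Return (min, mean) IOPS over the steady-state tail of a per-second log.

    iops_log: one IOPS sample per second, index = elapsed seconds.
    t_start:  second at which steady state is assumed to begin.
    """
    window = iops_log[t_start:]
    return min(window), sum(window) / len(window)

# Synthetic log: fresh-drive performance decaying for ~1400 s,
# then a noisy steady state oscillating between 1000 and 1600 IOPS.
log = [40000 - 25 * t for t in range(1400)]
log += [1000 + (t % 7) * 100 for t in range(600)]

worst, mean = steady_state_stats(log)
print(worst, round(mean))
```

The minimum over this window is what the "around 300 IOPS" and "around 1,000 IOPS" figures in the analysis below refer to: it is the worst case a user would actually feel.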

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: IO consistency over the full test duration, log scale. Buttons select the source data: Transcend SSD340, JMicron JMF667H (Toshiba NAND), JMicron JMF667H (IMFT NAND), Samsung SSD 840 EVO mSATA, Crucial MX100; default or 25% OP.]

Right from the start, things do not look too promising. Compared to the reference design with the same IMFT NAND, the IO consistency is considerably lower. The reference design manages around 1,000 IOPS minimum, whereas in the SSD340 the minimum performance is around 300 IOPS. Increasing the over-provisioning helps a bit, but the consistency is still poor compared to the other value drives (like the MX100). The older firmware definitely isn't doing Transcend any favors here – quite the opposite, in fact.

[Graph: IO consistency at the onset of steady state, log scale; same source-data buttons as above.]


[Graph: IO consistency at the onset of steady state, linear scale; same source-data buttons as above.]

TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA, QD=32) for 30 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to verify that TRIM is functional.

And it is. 



Comments

  • jabber - Monday, August 4, 2014 - link

    Hmm, for most general use it's reads, not writes. So I doubt most normal folks would notice.
  • hojnikb - Monday, August 4, 2014 - link

    Again, install an OS (Win To Go is pretty easy to set up, for example) to a cheap flash drive and come back :)

    Even though there are plenty more reads than writes in the client world, it's still important that random writes don't suck, because the moment the OS tries to write something is the moment everything freezes (think JMicron 602).
  • TheWrongChristian - Monday, August 4, 2014 - link

    Random writes suck if they block reads. That was the problem with the old JMicron controllers: a high-latency write would block everything, including reads.

    With good command queuing, and non-blocking writes, reads should still be low latency, and for boot and application startup, it's read latency that counts. The OS can mask write latency pretty well, to the point that you're unlikely to notice much difference on a desktop.

    On a server, you're much more likely to notice write latencies however. Think database servers writing log data, or a file server waiting for a file write before acknowledging a sync. But even there, a file server can batch write file updates from many clients (or use the sequential journal for data) and the database similarly decomposes synchronous writes to sequential log files.

    So all in all, so long as writes don't block unrelated reads, you should be fine.
  • jabber - Tuesday, August 5, 2014 - link

    As it happens, I rebuilt a Sony all-in-one PC with one of the exact drives in this review. It worked fine and installed swiftly with no issues. There are benchmarks... and then there is using it in the real world, and often the real world is very different from those.
  • Friendly0Fire - Tuesday, August 5, 2014 - link

    The point is that, according to the table in this review, you can get a flat-out better SSD *for the same price*, unless you're looking at the 64GB size, in which case a measly $20 will upgrade you to 128GB. The value proposition just isn't there.
  • jabber - Wednesday, August 6, 2014 - link

    Well, I got mine for £65 and the next cheapest 200+GB SSD was £85, so it was worth it. That's pounds... not dollars. That's a $32 difference for very little difference in general usage.
  • MrFixitx - Monday, August 4, 2014 - link

    I am honestly not at all surprised by these results. Transcend has for years been a maker of "value" NAND-based products, from camera memory cards to USB thumb drives.

    I have been burned by their CompactFlash cards before and would not recommend their flash-based products for anything where reliability is critical.
  • velanapontinha - Monday, August 4, 2014 - link

    Hi, Kristian.

    Any chance of reviewing the SSD370 line anytime soon? These are dirt cheap and should prove a lot better overall than the SSD340.
  • Kristian Vättö - Monday, August 4, 2014 - link

    I don't have the drive yet but it's certainly on the list of SSDs to review.
  • saliti - Tuesday, August 5, 2014 - link

    What about Samsung 845 DC Pro review?
