Three months ago we previewed the first client-focused SF-2200 SSD: OCZ's Vertex 3. The 240GB sample OCZ sent for the preview was four firmware revisions older than what ended up shipping to retail last month, but we hoped the preview numbers would be indicative of final performance.

The first drives off the line when OCZ went to production were 120GB capacity models. These drives have 128GiB of NAND on board and 111.8GiB of user accessible space; the remaining ~12.7% is used for redundancy in the event of NAND failure and as spare area for bad block allocation and block recycling.
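The spare area percentage falls out of simple arithmetic. A quick sketch, using the raw and user capacities from the spec table later in this article:

```python
# Spare area on the 120GB Vertex 3: 128GiB of raw NAND, of which
# 111.8GiB (120GB decimal) is user accessible.
raw_gib = 128.0
user_gib = 111.8
spare_fraction = (raw_gib - user_gib) / raw_gib
print(f"spare area: {spare_fraction:.1%}")  # -> spare area: 12.7%
```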

Unfortunately the 120GB models didn't perform as well as the 240GB sample we previewed. To understand why, we need to understand a bit about basic SSD architecture. SandForce's SF-2200 controller has 8 channels that it can access concurrently; the layout looks something like this:

Each arrowed line represents a single 8-bit channel. In reality, SandForce routes all NAND channels from one side of the chip, so on shipping hardware you'll see all NAND devices to the right of the controller.

Even though there are 8 NAND channels on the controller, you can put multiple NAND devices on a single channel. Two NAND devices can't be actively transferring data at the same time. Instead what happens is one chip is accessed while another is either idle or busy with internal operations.

When you read from or write to NAND you don't write directly to the pages, you instead deal with an intermediate register that holds the data as it comes from or goes to a page in NAND. The process of reading/programming is a multi-step endeavor that doesn't complete in a single cycle. Thus you can hand off a read request to one NAND device and then while it's fetching the data from an internal page, you can go off and program a separate NAND device on the same channel.

Because of this parallelism that's akin to pipelining, with the right workload and a controller that's smart enough to interleave operations across NAND devices, an 8-channel drive with 16 NAND devices can outperform the same drive with 8 NAND devices. Note that the advantage can't be a full doubling, since ultimately only one device per channel can transfer data at a time, but there's room for meaningful improvement. Confused?

Let's look at a hypothetical SSD where a read operation takes 5 cycles. With a single die per channel, an 8-bit wide data bus and no interleaving, that gives us a peak bandwidth of 8 bits every 5 clocks. With a large workload, after 15 clock cycles we could get at most 24 bits of data from this NAND device.

Hypothetical single channel SSD, 1 read can be issued every 5 clocks, data is received on the 5th clock

Let's take the same SSD with the same latency, but double the number of NAND devices per channel and enable interleaving. Assuming the same large workload, after 15 clock cycles we would've read 40 bits - an increase of 66%.

Hypothetical single channel SSD, 1 read can be issued every 5 clocks, data is received on the 5th clock, interleaved operation

This example is overly simplified and it makes a lot of assumptions, but it shows you how you can make better use of a single channel through interleaving requests across multiple NAND die.
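As a sanity check, the idealized numbers above can be reproduced with a toy scheduler. This is only a sketch under the same simplifying assumptions (5-cycle reads, one 8-bit bus transfer per completed read, dies staggered evenly, bus contention ignored); `completed_reads` is a name invented here:

```python
def completed_reads(cycles, dies, latency=5):
    """Toy model of one NAND channel: each die completes a read every
    `latency` cycles, and the dies are staggered evenly in time.
    Bus contention is ignored, as in the article's simplified example."""
    total = 0
    for i in range(dies):
        offset = i * latency / dies              # even stagger between dies
        total += int((cycles - offset) // latency)
    return total

BUS_BITS = 8  # one 8-bit transfer per completed read in this model

single = completed_reads(15, dies=1) * BUS_BITS       # 3 reads -> 24 bits
interleaved = completed_reads(15, dies=2) * BUS_BITS  # 5 reads -> 40 bits
print(single, interleaved, f"{interleaved / single - 1:.1%}")  # -> 24 40 66.7%
```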

The same sort of parallelism applies within a single NAND device. The whole point of the move to 25nm was to increase NAND density; you can now get a 64Gbit NAND device with only a single 64Gbit die inside. If you need more than 64Gbit per device, however, you have to bundle multiple die in a single package. Just as we saw at the 34nm node, it's possible to offer configurations with 1, 2 and 4 die in a single NAND package. With multiple die in a package, it's possible to interleave read/program requests within the individual package as well. Again, you don't get 2x or 4x performance improvements since only one die can be transferring data at a time, but interleaving requests across multiple die does help fill any bubbles in the pipeline, resulting in higher overall throughput.

Intel's 128Gbit 25nm MLC NAND features two 64Gbit die in a single package

Now that we understand the basics of interleaving, let's look at the configurations of a couple of Vertex 3s.

The 120GB Vertex 3 we reviewed a while back has sixteen NAND devices, eight on each side of the PCB:

OCZ Vertex 3 120GB - front

These are Intel 25nm NAND devices; looking at the part number tells us a little bit about them.

You can ignore the first three characters in the part number; they simply tell you that you're looking at Intel NAND. Characters 4 - 6 (counting from 1) indicate the density of the package - in this case 64G means 64Gbit, or 8GB. The next two characters indicate the device bus width (8 bits). The ninth character is the important one - it tells you the number of die inside the package. These parts are marked A, which corresponds to one die per device. The second to last character is also important: here E stands for 25nm.
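Those decoding rules are easy to capture in code. A minimal sketch, assuming the field positions described above; the full part number strings are illustrative (only the fields the article describes are decoded, and the remaining suffix characters are guesses):

```python
# Decoder for the Intel NAND part number fields described above.
# Positions are 1-indexed in the text; Python slices are 0-indexed.
DIE_COUNT = {"A": 1, "C": 2, "J": 4}   # ninth character: die per package
PROCESS = {"E": "25nm"}                # second-to-last character

def decode(part):
    density = part[3:6]                     # characters 4-6, e.g. "64G" or "16B"
    bus_width = int(part[6:8])              # characters 7-8, "08" = 8-bit bus
    dies = DIE_COUNT.get(part[8], "?")      # ninth character
    process = PROCESS.get(part[-2], "?")    # second-to-last character
    return density, bus_width, dies, process

# Hypothetical part strings following the scheme:
print(decode("29F64G08AAME1"))  # -> ('64G', 8, 1, '25nm')  120GB drive's NAND
print(decode("29F16B08CCME1"))  # -> ('16B', 8, 2, '25nm')  240GB drive's NAND
```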

Now let's look at the 240GB model:

OCZ Vertex 3 240GB - Front

OCZ Vertex 3 240GB - Back

Once again we have sixteen NAND devices, eight on each side. OCZ standardized on Intel 25nm NAND for both capacities initially. The density string on the 240GB drive is 16B for 16Gbytes (128 Gbit), which makes sense given the drive has twice the capacity.

Look at the ninth character on these chips and you'll see the letter C, which in Intel NAND nomenclature stands for 2 die per package (J stands for 4 die per package, if you were wondering).

While OCZ's 120GB drive can interleave read/program operations across two NAND die per channel, the 240GB drive can interleave across a total of four NAND die per channel. The end result is a significant performance advantage for the 240GB drive, something we first noticed in our review of the 120GB model.
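The per-channel die counts follow directly from the device counts. A quick sketch of the arithmetic:

```python
# Both drives spread 16 NAND devices across the SF-2200's 8 channels.
channels = 8
devices = 16
devices_per_channel = devices // channels          # 2 devices per channel

die_per_channel_120 = devices_per_channel * 1      # 1 die per device -> 2
die_per_channel_240 = devices_per_channel * 2      # 2 die per device -> 4
print(die_per_channel_120, die_per_channel_240)    # -> 2 4
```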

OCZ Vertex 3 Lineup
Specs (6Gbps)             120GB           240GB           480GB
Raw NAND Capacity         128GB           256GB           512GB
Spare Area                ~12.7%          ~12.7%          ~12.7%
User Capacity             111.8GB         223.5GB         447.0GB
Number of NAND Devices    16              16              16
Number of Die per Device  1               2               4
Max Read                  Up to 550MB/s   Up to 550MB/s   Up to 530MB/s
Max Write                 Up to 500MB/s   Up to 520MB/s   Up to 450MB/s
4KB Random Read           20K IOPS        40K IOPS        50K IOPS
4KB Random Write          60K IOPS        60K IOPS        40K IOPS
MSRP                      $249.99         $499.99         $1799.99

The big question we had back then was how much of the 120/240GB performance delta was due to the final firmware vs. the 120GB drive's lack of physical die. With a final, shipping 240GB Vertex 3 in hand I can say that its performance is identical to our preview sample's - in other words, the 240GB drive's advantage is purely due to the benefits of intra-device die interleaving.

If you want to skip ahead to the conclusion, feel free to - the results on the following pages are nearly identical to what we saw in our preview of the 240GB drive. I won't be offended :)

The Test


CPU:
    Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
    Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard:
    Intel DX58SO (Intel X58)
    Intel H67 Motherboard
Chipset:
    Intel X58 + Marvell SATA 6Gbps PCIe
    Intel H67
Chipset Drivers:
    Intel + Intel IMSM 8.9
    Intel + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64


Random & Sequential Performance
Comments

  • ggathagan - Friday, May 6, 2011 - link

    Your takeaway is correct.

    I see a lot of comments in this and other SSD articles that talk about Sandforce not being ready for prime time.
    As Anand has repeatedly mentioned, one of the drawbacks for a small, niche-technology company is an inability to match companies like Intel in the R&D arena.
    The consumer, in essence, has been the R&D department for Sandforce.

    When it comes to the average user, there are no performance examples that I can think of where a Sandforce-based drive will make enough difference to make it worth the risk over Intel.
    Cutting edge technology is not worth it in this particular arena.
  • FunBunny2 - Friday, May 6, 2011 - link

    -- When it comes to the average user, there are no performance examples that I can think of where a Sandforce-based drive will make enough difference to make it worth the risk over Intel.

    And, given that Enterprise data is frequently encrypted/compressed into databases, they won't be much use in Enterprise. Marvell might end up the winner as OEM controller vendor.
  • AlainD - Friday, May 6, 2011 - link


    I've found that AS-SSD has a compression benchmark. Would be nice to add those results to the reviews. Probably only 6Gbps is useful.

    I'm surprised that the new SandForce controller is capable of sequential reads of approx. 500MB/s even if the data is compressed. Writing seems to be another story, and there the AS-SSD compression benchmark could provide some extra info.
  • Casper42 - Friday, May 6, 2011 - link

    Bought my wife a new HP DV6T Quad Core Sandy and bought it with a traditional 640GB drive because I knew I wanted a (what I call) 3rd generation SSD, which they don't offer yet.

    However, I noticed the Intel Storage Manager is in the System Tray, which leads me to believe the Intel SATA driver is probably loaded too. I would think they wouldn't load this due to the previous issues with the driver breaking TRIM support under 7, but then thought perhaps they load a slightly different image on machines they ship with factory SSDs.

    Anyway, to cut to the point, Do the latest Intel SATA drivers still break Win7 TRIM support? Or has that all been fixed and I just wasn't paying attention?

  • InsaneScientist - Saturday, May 7, 2011 - link

    As of version 9.6 (March of last year, I think), yes, the Intel drivers will work with TRIM if the drives aren't in a RAID array.
  • Casper42 - Friday, May 6, 2011 - link

    Due to the speed differences in the 120 vs 240 and the nature of SF drives and incompressible data.....

    I am wondering what your recommendation would be for a 120GB drive going into a Windows laptop that will be running a very traditional desktop software load. Not a lot of games, not a lot of Video Editing. Mainly just surfing the net and running Office type apps?

    Budget is less of a concern than speed and reliability, but within reason. I will take a 10-20% hit on price to get a better product but approach 50% and things change.
  • anonapon - Friday, May 6, 2011 - link

    I'm curious how this will affect SSD caching. When they come out, I'm hoping to put together a system as soon as possible with a Z68 board and I've been thinking about having a really great drive like the Vertex 3 as my system and application drive, either one 240 or two 120s in RAID0, and using a small and reliable but otherwise mediocre SSD to cache a single large mechanical drive; but, as far as I understand it, SSD caching will require the latest Intel 10.x RST drivers, and OCZ's firmware doesn't work with the latest Intel 10.x RST drivers.

    Is this other people's understanding as well, and does anyone know if there will be a solution soon?
  • mars2k - Monday, May 9, 2011 - link

    I have IRST loaded (F6 during a clean Win7 SP1 x64 install) on a SB 2820QM with a 240GB Vertex 3. Loaded Win7 from a thumb drive. It took ten minutes or less from first boot for the install to browsing the internet. Way cool!
    I am experiencing some hangs. There is a fix for this on the OCZ forums. Will try.
  • BLU82 - Friday, May 6, 2011 - link

    Great review as always Anand, but have a couple of questions for you. Do you know if you will get a sample of the 480GB V3? Not that any of us could afford one, but since these are parallel devices I would be interested to see how its performance would stack up against the 120 and 240. I am assuming that they just use the "J" designated chips over the "C"???

    Also, do you know what differences the newer "Max IOPS' versions would perform over these first releases? I returned a vanilla 120GB version right after I saw where OCZ introduced the MAX IOPS version but still not sure whether to go that route, try and RAID 0 two 120GB vanilla versions, MAX IOPS version, or get one 240GB vanilla or MAX IOPS version??? I liked the 120GB version and really find it hard to try and run my Sandy Bridge on a Velociraptor again, but since it was a first release and what you pointed out between the 120 and 240 versions made me think I should have just waited on a 240. But man, that price tag. Which, brings me back to the 480GB version. Don't hear anyone talking about it (due to the price I'm sure), but I would be very curious about the performance and if there are any differences between it and the 240.

    Thanks again Anand. I've actually been reading your site since its inception way back in the mid-late 90's. You do a great job and always have.
  • neotiger - Saturday, May 7, 2011 - link

    In your random read benchmark, Vertex 3 120GB clocked in at 35MB/s while Corsair Force 120GB achieved a much better performance at 58MB/s.

    The Corsair Force actually uses the last generation of SandForce. So why is the new generation of SandForce so much slower than the last generation product?

    Was there an error in the benchmark?
