The first version of the Non-Volatile Memory Express (NVMe) standard was ratified almost five years ago, but its development didn't stop there. While SSD controller manufacturers have been hard at work implementing NVMe in more and more products, the protocol itself has acquired new features. Most are optional, and most are intended for enterprise scenarios like virtualization and multi-path I/O, but one feature introduced in the NVMe 1.2 revision has been picked up by a controller that will likely see use in the consumer space.

The Host Memory Buffer (HMB) feature in NVMe 1.2 allows a drive to request exclusive access to a portion of the host system's RAM for the drive's private use. This kind of capability has been around forever in the GPU space under names like HyperMemory and TurboCache, where it served a similar purpose: to reduce or eliminate the dedicated RAM that needs to be included on peripheral devices.
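To make the request-and-fallback behavior concrete, here is a minimal Python sketch of how a host NVMe driver might decide what to grant. HMPRE (preferred size) and HMMIN (minimum size) are real fields in the NVMe 1.2 Identify Controller data, reported in 4 KiB units, but the grant policy, function name, and numbers below are illustrative assumptions, not taken from any actual driver:

```python
# Hypothetical sketch of HMB size negotiation. HMPRE and HMMIN are real
# Identify Controller fields (4 KiB units); the policy below (grant the
# preferred size if possible, fall back to a smaller grant that still
# meets the minimum, else decline) is one plausible host driver choice.

PAGE = 4096  # HMPRE/HMMIN are expressed in 4 KiB units

def negotiate_hmb(hmpre_units: int, hmmin_units: int, host_spare_bytes: int) -> int:
    """Return the number of bytes of host RAM to grant, or 0 to decline."""
    if hmpre_units == 0:
        return 0                  # drive does not request an HMB
    preferred = hmpre_units * PAGE
    minimum = hmmin_units * PAGE
    if host_spare_bytes >= preferred:
        return preferred          # give the drive everything it asked for
    if host_spare_bytes >= minimum:
        return host_spare_bytes   # partial grant, still meets the minimum
    return 0                      # can't spare enough; drive runs DRAM-less
```

A grant of 0 is always legal, which is why the drive's firmware has to remain fully functional without any host memory at all.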

Modern high-performance SSD controllers use a significant amount of RAM, typically in a ratio of 1GB of RAM for every 1TB of flash. Controllers are usually conservative about using that RAM as a cache for user data (to limit the damage of a sudden power loss); instead, most of it stores the organizational metadata the controller needs to keep track of what data is stored where on the flash chips. The goal is that when the drive receives a read or write request, it can determine which flash memory location to access with a much quicker lookup in the controller's DRAM, and it doesn't need to update the metadata copy stored on the flash after every single write operation completes. For fast, consistent performance, the data structures are chosen to minimize the amount of computation and the number of RAM lookups required, at the expense of requiring more RAM.
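A quick back-of-the-envelope check shows where that 1GB-per-1TB ratio comes from, assuming a page-level mapping table with 4 KiB pages and 4-byte physical addresses (a common, but by no means universal, FTL design; the figures are illustrative):

```python
# Sanity-check of the 1 GB RAM : 1 TB flash rule of thumb, assuming a
# flat page-level mapping table: one 32-bit physical address per 4 KiB
# logical page.

FLASH_BYTES = 1 << 40      # 1 TiB of flash
PAGE_BYTES = 4 * 1024      # 4 KiB mapping granularity
ENTRY_BYTES = 4            # one 32-bit physical address per page

entries = FLASH_BYTES // PAGE_BYTES   # 268,435,456 pages to track
table_bytes = entries * ENTRY_BYTES   # exactly 1 GiB of mapping metadata
```

The mapping table alone fills the drive's DRAM, which is why cutting the DRAM forces a fundamentally different (and slower) metadata strategy.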

At the low end of the SSD market, recent controller configurations have chosen instead to cut costs by not including any external DRAM. There are combined savings of die size and pin count for the controller in this configuration, as well as reduced PCB complexity for the drive and eliminating the DRAM chip from the bill of materials, which can add up to a competitive advantage in the product segments where performance is a secondary concern and every cent counts. Silicon Motion's DRAM-less SM2246XT controller has stolen some market share from their own already cheap SM2246EN, and in the TLC space almost everybody is moving toward DRAM-less options.

The downside is that without ample RAM, it is much harder for SSDs to offer high performance. With clever firmware, DRAM-less SSDs can cope surprisingly well using just the on-chip buffers, but they are still at a disadvantage. That's where the Host Memory Buffer feature comes in. With only two NAND channels, the 88NV1140 probably can't saturate its PCIe 3.0 x1 link even under the best circumstances, so there will be bandwidth to spare for other transfers with the host system. PCIe transactions and host DRAM accesses are measured in tens or hundreds of nanoseconds, compared to tens of microseconds for reading from flash, so a Host Memory Buffer can clearly be fast enough to be useful for a low-end drive.
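As a rough illustration of that latency gap (the specific figures here are assumptions for the sake of arithmetic, not measurements of the 88NV1140):

```python
# Even a conservative host-DRAM access over PCIe is roughly two orders
# of magnitude faster than a NAND page read, so a mapping-table lookup
# in an HMB adds little to the total request service time.

hmb_lookup_ns = 500       # PCIe round trip + host DRAM access (assumed)
nand_read_ns = 50_000     # typical NAND page read time (assumed)

speedup = nand_read_ns / hmb_lookup_ns   # lookup is ~1% of the flash read
```

In other words, the lookup over PCIe is noise next to the flash access it helps locate, so the HMB costs little even though it is far slower than on-controller DRAM.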

The trick then is to figure out how to get the most out of a Host Memory Buffer while remaining prepared to operate in DRAM-less mode if the host's NVMe driver doesn't support HMB or if the host decides it can't spare the RAM. SSD suppliers are universally tight-lipped about the algorithms used in their firmware, and Marvell controllers are usually paired with custom or third-party licensed firmware anyway, so we can only speculate about how an HMB will be used with this new 88NV1140 controller. Furthermore, the requirement of driver support on the host side means this feature will likely be used in embedded platforms long before it finds its way into retail SSDs, and this particular Marvell controller may never show up in a standalone drive. But in a few years' time it might be standard for low-end SSDs to borrow a bit of your system's RAM. That becomes less of a concern as successive platforms put more DRAM in a standard system.

Source: Marvell


  • close - Wednesday, January 13, 2016 - link

    No, the 5.25" SSD was a joke for ddriver :). We need products for every segment because not all people can afford the high end models. It doesn't mean they should not have at least a lower end SSD with no cache. As long as it does the job calling them lousy is a gross exaggeration.
  • Flunk - Tuesday, January 12, 2016 - link

    Windows dropped support for all hardware sound acceleration as of Vista. The only way to utilize hardware acceleration in sound devices is to use the fairly unpopular OpenAL standard. No current model sound cards support hardware acceleration, the last card to do so being the Creative Labs X-Fi which is 2 generations behind now.

    You can get hardware network cards, but they're normally only used in servers because they cost at least $200.
  • extide - Tuesday, January 12, 2016 - link

    While you are correct about the sound cards (sadly), pretty much all decent client network cards (i.e. stuff from Intel, Broadcom, Qualcomm Atheros, etc.) will support several forms of hardware offloading (like TCP/IP checksums, and a few other things).
  • ddriver - Tuesday, January 12, 2016 - link

    Professional sound cards (also known as audio interfaces) completely bypass that pile of rubbish known as "windows apis".
  • ddriver - Tuesday, January 12, 2016 - link

    Thus MS decided to stop supporting hardware accelerated audio, because hardware accelerated audio has long bypassed windows completely. There is no need to support something that bypasses your APIs. That doesn't mean there is no hardware accelerated audio and there is no benefit from it.
  • smilingcrow - Tuesday, January 12, 2016 - link

    "Bring on cheap TLC 2TB drives!"

    Yeah, because it's the cost of that 1GB of RAM per TB that's keeping SSD prices high for sure.
    RAM at retail is currently ~£4 per GB. 1TB SSD at retail is £200+. You do the maths. ;)
  • eldakka - Tuesday, January 12, 2016 - link

    As the article stated, it's not just the BoM of the RAM itself. Putting the RAM on the SSD itself incurs costs in:
    1) PCB space to put the RAM on (which could be used for more flash, some M.2 and mSATA SSDs can't physically put more flash on them);
    2) Pinouts on the SSD controller chip to interface with the RAM, taking up more space, more pins, therefore more $$;
    3) the traces between the controller and the RAM, again taking up more space, adding complexity, having to be applied to the PCB and designed in, etc.

    So that $6 RAM chip might cost another $4 to actually install on the PCB and increase the cost of the controller (due to extra pinouts etc.) by another $2, effectively raising the sale price of the consumer SSD by $20, which when a drive might be $130 rather than $150 is quite significant.
  • ddriver - Tuesday, January 12, 2016 - link

    Oh wow, you boosted the cost by like 50 cents. It is hard to argue with such massive numbers :)

    Today SSDs borrow RAM from your PC, can't wait for the bright future, when SSDs will borrow storage space from your PC. I bet it will drop end user prices by at least 5% and profit margins by at least 5000%. A brilliant business strategy.
  • ddriver - Tuesday, January 12, 2016 - link

    *and BOOST profit margins by at least 5000%

    Come on AT, it is the 21st century, where is the edit button? Or are you saving that for the upcoming century?
  • ImSpartacus - Tuesday, January 12, 2016 - link

    This sounds like a reasonable way to get ssds into places that they aren't currently occupying. Very neat.
