Fusion-io NVMFS

SanDisk/Fusion-io's NVMFS file system, formerly known as the Direct File System (DFS), accesses flash memory through a virtualized flash storage layer instead of the traditional block layer API. The file system has two main novel features. First, it lays out files directly in a very large virtual storage address space. Second, it leverages the virtual flash storage layer to perform block allocations and atomic updates. As a result, NVMFS performs better and is much simpler than a traditional Unix file system with similar functionality. This approach also avoids the log-on-log performance problems that arise when a log-structured file system runs on top of a log-structured flash translation layer.

Microbenchmark results show that NVMFS can deliver 94,000 I/O operations per second (IOPS) for direct reads and 71,000 IOPS for direct writes with the virtualized flash storage layer on top of a first-generation Fusion-io ioDrive. For direct access, NVMFS consistently outperforms ext3 on the same platform, sometimes by 20%; for buffered access it is also consistently faster, sometimes by over 149%. Application benchmarks show that NVMFS outperforms ext3 by 7% to 250% while requiring less CPU power, and I/O latency is lower with NVMFS than with ext3.
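To make the first feature concrete, here is a minimal C sketch of direct file layout in a sparse virtual address space, assuming a 64-bit address space carved into fixed-size per-file extents. The names, the 4 GiB extent size, and the inode-indexed layout are illustrative assumptions, not the actual NVMFS on-flash format.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: each file owns a fixed-size virtual extent selected by its
     * inode number, so the file system needs no per-file block maps; the
     * virtualized flash storage layer backs only the written pages. */

    #define EXTENT_BITS 32                        /* 4 GiB extent per file */
    #define EXTENT_SIZE ((uint64_t)1 << EXTENT_BITS)

    /* Virtual base address of a file: the inode number picks the extent. */
    static uint64_t file_virtual_base(uint64_t inode_no)
    {
        return inode_no << EXTENT_BITS;
    }

    /* Virtual address of a byte offset within a file: base + offset. */
    static uint64_t file_virtual_addr(uint64_t inode_no, uint64_t offset)
    {
        return file_virtual_base(inode_no) + (offset & (EXTENT_SIZE - 1));
    }

    int main(void)
    {
        /* File 7, byte offset 8192 -> one flat 64-bit virtual address. */
        printf("vaddr = 0x%016llx\n",
               (unsigned long long)file_virtual_addr(7, 8192));
        return 0;
    }

Because a file's extent is derived arithmetically from its inode number, translating a file offset to a storage address is a single shift and add, which is part of what makes the file system much simpler than a conventional design.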

Flash Memory API

The API used by NVMFS to access flash memory consists of the following; a sketch of these primitives in C appears after the list:

  • An address space that is several orders of magnitude larger than the storage capacity of the flash memory.
  • Read, append, and trim (deallocate/discard) primitives.
  • Atomic writes.
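The following compilable C declarations sketch what such an interface might look like. These signatures are assumptions for illustration only, not the actual Fusion-io driver API.

    #include <stddef.h>
    #include <stdint.h>

    /* Addresses index a sparse 64-bit virtual space that is far larger
     * than the physical flash capacity. */
    typedef uint64_t vflash_addr_t;

    /* Read len bytes at a virtual address into buf. */
    int vflash_read(vflash_addr_t addr, void *buf, size_t len);

    /* Append/write len bytes at a virtual address; the layer places the
     * data in log order on physical flash and updates its mapping. */
    int vflash_append(vflash_addr_t addr, const void *buf, size_t len);

    /* Trim (deallocate/discard) a virtual range so the garbage collector
     * may reclaim the physical pages backing it. */
    int vflash_trim(vflash_addr_t addr, size_t len);

    /* Atomically write a batch of discontiguous ranges: after a crash,
     * either all of the ranges are durable or none of them are. */
    struct vflash_iov { vflash_addr_t addr; const void *buf; size_t len; };
    int vflash_atomic_write(const struct vflash_iov *iov, int iovcnt);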
The layer that provides this API is called the virtualized flash storage layer in the DFS paper. This layer is responsible for block allocation, wear leveling, garbage collection, crash recovery, and address translation, and for making the address translation data structures persistent.
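As a rough illustration of the layer's central mechanism, the toy C model below remaps virtual pages to physical pages that are written out of place in log order. The page counts, names, and in-memory map are assumptions; a real layer would also persist the map for crash recovery and run a garbage collector to reclaim superseded pages.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define VPAGES 16                  /* virtual pages in this toy */
    #define PPAGES 64                  /* physical pages in this toy */
    #define UNMAPPED UINT32_MAX

    static uint32_t vmap[VPAGES];      /* virtual page -> physical page */
    static uint32_t log_head;          /* next free page in the append log */

    static void layer_init(void)
    {
        for (int v = 0; v < VPAGES; v++)
            vmap[v] = UNMAPPED;
    }

    /* Out-of-place write: allocate the next log page, then remap. The
     * previously mapped page, if any, becomes garbage for the collector. */
    static uint32_t layer_write(uint32_t vpage)
    {
        assert(log_head < PPAGES);     /* a real layer recycles pages via GC */
        vmap[vpage] = log_head++;
        return vmap[vpage];
    }

    int main(void)
    {
        layer_init();
        layer_write(3);                /* first write of virtual page 3 */
        uint32_t p = layer_write(3);   /* rewrite lands on a new page */
        printf("virtual page 3 -> physical page %u\n", p);
        return 0;
    }

Writing out of place in this way is what lets one mechanism serve both wear leveling, since writes are spread across the log rather than rewriting a page in place, and atomic updates, since new data can become visible only when the mapping is switched.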
