OneFS distributed file system

Developer(s): Isilon Systems
Full name: OneFS
Introduced: 2003 with OneFS 1.0 (based on FreeBSD)
Directory contents: B+ trees
File allocation: B+ trees
Max. volume size: 15 PB+ (143+ nodes at 108 TB each); theoretical limit of 65,535 nodes

The OneFS file system is a parallel distributed networked file system designed by Isilon Systems for use in its Isilon IQ storage appliances. OneFS is a FreeBSD variant and uses zsh as its shell. The system is administered through its own specialized command set; with the exception of the Isilon extensions to the Unix "ls" and "chmod" commands, every command in that set starts with "isi".


On-disk Structure

All data structures in the OneFS file system maintain their own protection information. This means that, within the same file system, one file may be protected at +1 (basic parity protection), another at +4 (resilient to four failures), and yet another at 2x (mirroring); this feature is referred to as FlexProtect. FlexProtect is also responsible for automatically rebuilding data in the event of a failure. The protection levels available depend on the number of nodes in the cluster and follow the Reed–Solomon algorithm. Blocks for an individual file are spread across the nodes; for example, block 0 may be on node 3, block 1 on node 1, and the related parity block on node 5. This allows entire nodes to fail without losing access to any data.

File metadata, directories, snapshot structures, quota structures, and a logical inode mapping structure are all based on mirrored B+ trees. Block addresses are generalized 64-bit pointers that reference (node, drive, blknum) tuples. The native block size is 8192 bytes; inodes are 512 bytes on disk.
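The sketch below illustrates, in C, how a generalized 64-bit block pointer of this kind could pack a (node, drive, blknum) tuple. The 12/8/44-bit field split, the helper names, and the example addresses are assumptions chosen for illustration only; they are not taken from the OneFS source.

    /*
     * Illustrative sketch only: packing a (node, drive, blknum) tuple into a
     * generalized 64-bit block pointer. The 12/8/44-bit split is an assumed
     * layout for demonstration, not the actual OneFS format.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define ONEFS_BLOCK_SIZE 8192   /* native block size in bytes  */
    #define ONEFS_INODE_SIZE 512    /* on-disk inode size in bytes */

    static uint64_t baddr_pack(uint64_t node, uint64_t drive, uint64_t blknum) {
        return (node << 52) | (drive << 44) | (blknum & ((1ULL << 44) - 1));
    }

    static void baddr_unpack(uint64_t addr, uint64_t *node, uint64_t *drive,
                             uint64_t *blknum) {
        *node   = addr >> 52;
        *drive  = (addr >> 44) & 0xFF;
        *blknum = addr & ((1ULL << 44) - 1);
    }

    int main(void) {
        /* A data block stored on node 1 and its parity block on node 5,
         * mirroring the example above. Drive and block numbers are arbitrary. */
        uint64_t data   = baddr_pack(1, 3, 42);
        uint64_t parity = baddr_pack(5, 0, 99);
        uint64_t n, d, b;

        baddr_unpack(data, &n, &d, &b);
        printf("data   block -> node %llu, drive %llu, block %llu\n",
               (unsigned long long)n, (unsigned long long)d, (unsigned long long)b);
        baddr_unpack(parity, &n, &d, &b);
        printf("parity block -> node %llu, drive %llu, block %llu\n",
               (unsigned long long)n, (unsigned long long)d, (unsigned long long)b);
        return 0;
    }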

One distinctive characteristic of OneFS is that metadata is spread homogeneously across the nodes; there are no dedicated metadata servers. The only piece of metadata replicated on every node is the list of addresses of the root B+ tree blocks of the inode mapping structure. Everything else can be found from that starting point by following the generalized 64-bit pointers.
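As a rough illustration of that lookup path, the following C sketch resolves a logical inode number (LIN) against one mirrored copy of the root of the inode mapping structure. The single-level array standing in for the B+ tree, the lin_entry_t layout, and all addresses are assumptions made for demonstration; the real structure is a multi-level mirrored B+ tree on disk.

    /*
     * Illustrative sketch only: resolving a logical inode number (LIN) to an
     * on-disk inode address starting from a mirrored root block that every
     * node holds a copy of.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t lin;        /* logical inode number                       */
        uint64_t inode_addr; /* generalized (node, drive, blknum) pointer  */
    } lin_entry_t;

    /* One mirror of the root block; any node's copy can serve the lookup. */
    static const lin_entry_t root_mirror_node0[] = {
        { 2, 0x0010000000000100ULL },
        { 5, 0x0050000000000240ULL },
        { 9, 0x0030000000000388ULL },
    };

    static int lin_lookup(const lin_entry_t *leaf, size_t n,
                          uint64_t lin, uint64_t *addr_out) {
        /* Binary search over a sorted leaf, standing in for a B+ tree walk. */
        size_t lo = 0, hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (leaf[mid].lin == lin) { *addr_out = leaf[mid].inode_addr; return 1; }
            if (leaf[mid].lin < lin) lo = mid + 1; else hi = mid;
        }
        return 0;
    }

    int main(void) {
        uint64_t addr;
        if (lin_lookup(root_mirror_node0, 3, 5, &addr))
            printf("LIN 5 -> inode block address 0x%016llx\n",
                   (unsigned long long)addr);
        return 0;
    }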

Clustering

Nodes running OneFS must be connected with a high-performance, low-latency back-end network for optimal performance. OneFS 1.0 through 3.0 used Gigabit Ethernet as that back-end network. Starting with OneFS 3.5, Isilon offered InfiniBand models, and all nodes sold now use an InfiniBand back-end.

Data, metadata, locking, transaction, group management, allocation, and event traffic go over the back-end RPC system. All data and metadata transfers are zero-copy. All modification operations to on-disk structures are transactional and journaled.
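A minimal write-ahead journaling sketch in C is shown below. It assumes a simple redo-log design in which the after-image of a block is staged, a commit record is made, and only then is the block rewritten in place; it is not OneFS's actual journal format and exists only to illustrate what "transactional and journaled" modifications mean.

    /*
     * Minimal redo-style write-ahead journal sketch (assumed design, not the
     * OneFS journal format). The ordering is: stage the after-image, commit,
     * then apply the change to the on-disk structure.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 8192

    typedef struct {
        uint64_t txn_id;
        uint64_t block_addr;              /* generalized 64-bit block pointer */
        uint8_t  after_image[BLOCK_SIZE]; /* new contents of the block        */
        int      committed;
    } journal_entry_t;

    static journal_entry_t journal[16];
    static size_t journal_len;

    /* Stage a block modification as a redo record in the journal. */
    static uint64_t txn_write(uint64_t block_addr, const uint8_t *data) {
        journal_entry_t *e = &journal[journal_len++];
        e->txn_id = journal_len;
        e->block_addr = block_addr;
        memcpy(e->after_image, data, BLOCK_SIZE);
        e->committed = 0;
        return e->txn_id;
    }

    /* Commit: once the commit record is durable, the change can be replayed
     * after a crash; only then is the block rewritten in place. */
    static void txn_commit(uint64_t txn_id) {
        journal[txn_id - 1].committed = 1;
    }

    int main(void) {
        uint8_t block[BLOCK_SIZE] = {0};
        strcpy((char *)block, "updated directory entry");

        uint64_t t = txn_write(0x0030000000000388ULL, block);
        txn_commit(t);

        printf("txn %llu committed for block 0x%016llx\n",
               (unsigned long long)t,
               (unsigned long long)journal[t - 1].block_addr);
        return 0;
    }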

Protocols

OneFS provides access to storage over NFS, CIFS/SMB, FTP, HTTP, iSCSI, and HDFS. It can use non-local authentication sources such as Active Directory, LDAP, and NIS, and it can interface with backup devices using NDMP.
