RDMA over Converged Ethernet

RDMA over Converged Ethernet (RoCE) is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. There are two RoCE versions, RoCE v1 and RoCE v2. RoCE v1 is an Ethernet link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE v2 is an internet layer protocol which means that RoCE v2 packets can be routed. Although the RoCE protocol benefits from the characteristics of a converged Ethernet network, the protocol can also be used on a traditional or non-converged Ethernet network.
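
On Linux, the RoCE version in use is a per-entry property of a port's GID table rather than a global setting. The sketch below is a minimal illustration, assuming the sysfs layout exported since kernel 4.5; the device name mlx5_0 and port number 1 are placeholders for a real RoCE adapter.

    #include <stdio.h>

    /* Read the first line of a sysfs attribute into buf; returns 0 on success. */
    static int read_attr(const char *path, char *buf, int len)
    {
        FILE *f = fopen(path, "r");
        if (!f)
            return -1;
        int ok = fgets(buf, len, f) != NULL;
        fclose(f);
        return ok ? 0 : -1;
    }

    int main(void)
    {
        char path[128], gid[64], type[32];

        for (int i = 0; i < 8; i++) {
            /* The GID itself (an IPv6-style address)... */
            snprintf(path, sizeof(path),
                     "/sys/class/infiniband/mlx5_0/ports/1/gids/%d", i);
            if (read_attr(path, gid, sizeof(gid)))
                break;    /* index ran past the table exposed by the kernel */

            /* ...and whether that entry uses RoCE v1 or RoCE v2 framing. */
            snprintf(path, sizeof(path),
                     "/sys/class/infiniband/mlx5_0/ports/1/gid_attrs/types/%d", i);
            if (read_attr(path, type, sizeof(type)))
                continue;

            printf("GID %d: %s", i, gid);     /* values keep their trailing newline */
            printf("  type: %s", type);
        }
        return 0;
    }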

Background

Network-intensive applications like networked storage or cluster computing need a network infrastructure with a high bandwidth and low latency. The advantages of RDMA over other network application programming interfaces such as Berkeley sockets are lower latency, lower CPU load and higher bandwidth. The RoCE protocol allows lower latencies than its predecessor, the iWARP protocol. There exist RoCE HCAs (host channel adapters) with a latency as low as 1.3 microseconds, while the lowest known iWARP HCA latency in 2011 was 3 microseconds.
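
Applications reach RDMA hardware through the verbs API (libibverbs on Linux) rather than through Berkeley sockets; buffers are registered with the adapter up front so it can move data without involving the CPU. The sketch below is a minimal illustration, assuming libibverbs is installed (compile with -libverbs); the buffer size and access flags are arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        /* Open the first adapter and allocate a protection domain for it. */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
        if (!pd)
            return 1;

        /* Register a buffer so the adapter may access it directly. */
        char *buf = calloc(1, 4096);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                       IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
        if (!mr)
            return 1;

        /* The rkey is what a remote peer presents to target this memory. */
        printf("registered 4 KiB: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }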

RoCE v1

The RoCE v1 protocol is an Ethernet link layer protocol with ethertype 0x8915. This means that the frame length limits of the Ethernet protocol apply: 1500 bytes for a regular Ethernet frame and 9000 bytes for a jumbo frame.
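
The Ethernet frame size caps the RDMA path MTU a RoCE port reports through the verbs API: with standard 1500-byte frames the active MTU is typically 1024 bytes, while 9000-byte jumbo frames allow 4096 bytes. A minimal sketch, assuming libibverbs (compile with -libverbs) and a single adapter queried on port 1:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_port_attr attr;

        /* Port 1 is assumed; active_mtu is an enum starting at IBV_MTU_256 == 1,
           so shifting 128 left by the enum value yields the size in bytes. */
        if (ctx && ibv_query_port(ctx, 1, &attr) == 0)
            printf("active RDMA path MTU: %d bytes\n", 128 << attr.active_mtu);

        if (ctx)
            ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }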

RoCE v2

The RoCE v2 protocol exists on top of either the UDP/IPv4 or the UDP/IPv6 protocol. The UDP destination port number 4791 has been reserved for RoCE v2. Since RoCE v2 packets are routable, the RoCE v2 protocol is sometimes called Routable RoCE or RRoCE. Although in general the delivery order of UDP packets is not guaranteed, the RoCE v2 specification requires that packets with the same UDP source port and the same destination address must not be reordered. In addition, RoCE v2 defines a congestion control mechanism that uses the IP ECN bits for marking and CNP frames for the acknowledgment notification. Software support for RoCE v2 is still emerging: Mellanox OFED 2.3 and later supports RoCE v2, as does Linux kernel v4.5.
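
A RoCE v2 packet on the wire is therefore an ordinary UDP datagram sent to destination port 4791 whose payload begins with the InfiniBand Base Transport Header (BTH). The sketch below is illustrative only, assuming IPv4/UDP framing; the field grouping in the struct is a simplification rather than a complete parser.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ROCEV2_UDP_PORT 4791    /* destination port reserved for RoCE v2 */

    /* InfiniBand Base Transport Header (12 bytes), carried as the UDP payload. */
    struct roce_bth {
        uint8_t  opcode;         /* RDMA operation (SEND, RDMA WRITE, ...)          */
        uint8_t  flags;          /* solicited event, migration, pad count, version  */
        uint16_t pkey;           /* partition key, network byte order               */
        uint8_t  resv_destqp[4]; /* 8 reserved bits + 24-bit destination QP number  */
        uint8_t  ack_psn[4];     /* ack-request bit + 24-bit packet sequence number */
    };
    _Static_assert(sizeof(struct roce_bth) == 12, "BTH is 12 bytes");

    /* Trivial classification: RoCE v2 traffic is UDP to port 4791 carrying a BTH. */
    static int is_rocev2(uint16_t udp_dst_port, const uint8_t *payload, size_t len)
    {
        return udp_dst_port == ROCEV2_UDP_PORT &&
               payload != NULL && len >= sizeof(struct roce_bth);
    }

    int main(void)
    {
        uint8_t payload[64] = { 0 };    /* placeholder packet payload */
        printf("classified as RoCE v2: %d\n",
               is_rocev2(4791, payload, sizeof(payload)));
        return 0;
    }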

RoCE versus InfiniBand

RoCE defines how to perform RDMA over Ethernet while the InfiniBand architecture specification defines how to perform RDMA over an InfiniBand network. RoCE was expected to bring InfiniBand applications, which are predominantly based on clusters, onto a common Ethernet converged fabric. Others expected that InfiniBand would keep offering a higher bandwidth and lower latency than what is possible over Ethernet.

The technical differences between the RoCE and InfiniBand protocols are:

  • Congestion Control: RoCE relies on a lossless network via Ethernet flow control or priority flow control (PFC); a sketch of steering RoCE v2 traffic into such a priority class follows this list. RoCE v2 defines a congestion control protocol that uses ECN for marking and CNP frames for acknowledgments. InfiniBand uses a credit-based algorithm to guarantee lossless HCA-to-HCA communication.
  • Latency: available InfiniBand switches have always had a lower latency than Ethernet switches. Port-to-port latency for one particular type of Ethernet switch is 230 ns versus 100 ns for an InfiniBand switch with the same number of ports.
  • Configuration: configuring a DCB Ethernet network can be more complex than configuring an InfiniBand network.
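
In RoCE v2 deployments, the DSCP (or VLAN priority) of outgoing packets determines which switch priority, and therefore which PFC and ECN policy, the traffic falls under. The following is a minimal sketch, assuming librdmacm (compile with -lrdmacm); the DSCP value 26 is a placeholder for whatever class a given fabric treats as lossless.

    #include <stdint.h>
    #include <stdio.h>
    #include <rdma/rdma_cma.h>

    int main(void)
    {
        struct rdma_event_channel *ch = rdma_create_event_channel();
        struct rdma_cm_id *id;

        if (!ch || rdma_create_id(ch, &id, NULL, RDMA_PS_TCP)) {
            fprintf(stderr, "rdma_cm not available\n");
            return 1;
        }

        /* DSCP occupies the upper six bits of the IP ToS byte,
           so DSCP 26 becomes ToS 26 << 2 == 104. */
        uint8_t tos = 26 << 2;
        if (rdma_set_option(id, RDMA_OPTION_ID, RDMA_OPTION_ID_TOS,
                            &tos, sizeof(tos)))
            perror("rdma_set_option");
        else
            printf("connections on this id will send IP packets with ToS %u\n", tos);

        rdma_destroy_id(id);
        rdma_destroy_event_channel(ch);
        return 0;
    }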

RoCE versus iWARP

While the RoCE protocols define how to perform RDMA using Ethernet and UDP/IP frames, the iWARP protocol defines how to perform RDMA over a connection-oriented transport like the Transmission Control Protocol (TCP). RoCE v1 is limited to a single Ethernet broadcast domain, while RoCE v2 and iWARP packets are routable. The memory requirements of a large number of connections, along with TCP's flow and reliability controls, lead to scalability and performance issues when using iWARP in large-scale datacenters and for large-scale applications (e.g. large enterprises, cloud computing, web 2.0 applications). Also, multicast is defined in the RoCE specification, while the current iWARP specification does not define how to perform multicast RDMA.
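
On Linux, RoCE multicast membership is set up through the RDMA connection manager. The following is a minimal sketch, assuming librdmacm and libibverbs (compile with -lrdmacm -libverbs); the bind address 192.0.2.10 and group 239.0.0.1 are placeholders, and error handling is trimmed.

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rdma/rdma_cma.h>

    /* Block for the next CM event, require the expected type, and acknowledge it. */
    static void expect_event(struct rdma_event_channel *ch,
                             enum rdma_cm_event_type expected)
    {
        struct rdma_cm_event *ev;
        if (rdma_get_cm_event(ch, &ev) || ev->event != expected) {
            fprintf(stderr, "unexpected CM event\n");
            exit(1);
        }
        rdma_ack_cm_event(ev);
    }

    int main(void)
    {
        struct rdma_event_channel *ch = rdma_create_event_channel();
        struct rdma_cm_id *id;
        rdma_create_id(ch, &id, NULL, RDMA_PS_UDP);

        /* Resolve the multicast group address through the local RoCE interface. */
        struct sockaddr_in src = { .sin_family = AF_INET };
        struct sockaddr_in grp = { .sin_family = AF_INET };
        inet_pton(AF_INET, "192.0.2.10", &src.sin_addr);
        inet_pton(AF_INET, "239.0.0.1", &grp.sin_addr);
        if (rdma_resolve_addr(id, (struct sockaddr *)&src,
                              (struct sockaddr *)&grp, 2000)) {
            perror("rdma_resolve_addr");
            return 1;
        }
        expect_event(ch, RDMA_CM_EVENT_ADDR_RESOLVED);

        /* Multicast uses an unreliable datagram (UD) queue pair. */
        struct ibv_pd *pd = ibv_alloc_pd(id->verbs);
        struct ibv_cq *cq = ibv_create_cq(id->verbs, 16, NULL, NULL, 0);
        struct ibv_qp_init_attr attr = {
            .qp_type = IBV_QPT_UD,
            .send_cq = cq, .recv_cq = cq,
            .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
        };
        if (rdma_create_qp(id, pd, &attr)) {
            perror("rdma_create_qp");
            return 1;
        }

        /* Join; once the event arrives the fabric delivers the group's frames. */
        if (rdma_join_multicast(id, (struct sockaddr *)&grp, NULL)) {
            perror("rdma_join_multicast");
            return 1;
        }
        expect_event(ch, RDMA_CM_EVENT_MULTICAST_JOIN);
        printf("joined multicast group 239.0.0.1\n");

        rdma_leave_multicast(id, (struct sockaddr *)&grp);
        rdma_destroy_qp(id);
        rdma_destroy_id(id);
        rdma_destroy_event_channel(ch);
        return 0;
    }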

Criticism

Some aspects that could have been defined in the RoCE specification have been left out. These are:

  • How to translate between primary RoCE v1 GIDs and Ethernet MAC addresses.
  • How to translate between secondary RoCE v1 GIDs and Ethernet MAC addresses. It is not clear whether it is possible to implement secondary GIDs in the RoCE v1 protocol without adding a RoCE-specific address resolution protocol.
  • How to implement VLANs for the RoCE v1 protocol. Current RoCE v1 implementations store the VLAN ID in the twelfth and thirteenth byte of the sixteen-byte GID, although the RoCE v1 specification does not mention VLANs at all.
  • How to translate between RoCE v1 multicast GIDs and Ethernet MAC addresses. Implementations in 2010 used the same address mapping that was specified for mapping IPv6 multicast addresses to Ethernet MAC addresses (see the sketch after this list).
  • How to restrict RoCE v1 multicast traffic to a subset of the ports of an Ethernet switch. As of September 2013, an equivalent of the Multicast Listener Discovery protocol has not yet been defined for RoCE v1.
  • At least one vendor that offers an RDMA over Ethernet solution has chosen a wire protocol other than RoCE.
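
A minimal sketch of the multicast mapping mentioned in the list above, assuming the RFC 2464 style mapping those 2010 implementations used: the low 32 bits of the GID are placed behind the 33:33 prefix.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Map a multicast GID to an Ethernet MAC the way RFC 2464 maps IPv6
       multicast addresses: 33:33 followed by the last four bytes. */
    static void mcast_gid_to_mac(const uint8_t gid[16], uint8_t mac[6])
    {
        mac[0] = 0x33;
        mac[1] = 0x33;
        memcpy(&mac[2], &gid[12], 4);
    }

    int main(void)
    {
        /* Illustrative multicast GID; only the final four bytes matter here. */
        uint8_t gid[16] = { 0xff, 0x12, [12] = 0x0a, 0x0b, 0x0c, 0x0d };
        uint8_t mac[6];

        mcast_gid_to_mac(gid, mac);
        printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        return 0;
    }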