InfiniBand: 10 Little-known Facts

7 August 2013

InfiniBand is a scalable, switched, serial fabric used as a system interconnect in High Performance Computing (HPC) and in GE Intelligent Platforms’ case, High Performance Embedded Computing (HPEC). It can be carried over copper cables, fiber optics or, as is most prevalent in mil/aero systems, copper backplanes.

1. InfiniBand is not Ethernet, although the two have many things in common. Both can carry TCP/IP, the lingua franca of the Internet (InfiniBand does so via IP over InfiniBand, IPoIB). They can even carry each other's traffic via Ethernet over InfiniBand (EoIB) and InfiniBand over Ethernet (IBoE).

2. InfiniBand (IB) is generally faster than Ethernet. When 10GbE became available, IB was already running at DDR and QDR rates (up to a 3.2x speedup). Now that 40GbE is emerging, IB is already deployed at FDR rates (2.5x). These figures assume a 4x InfiniBand connection.

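The arithmetic behind those figures is straightforward. As a minimal sketch in C (assuming the standard per-lane signaling rates, 8b/10b encoding for SDR/DDR/QDR, 64b/66b for FDR, and a 4x link width):

    #include <stdio.h>

    /* InfiniBand per-lane signaling rates (Gb/s) and line-code efficiency.
     * SDR/DDR/QDR use 8b/10b encoding (80% efficient);
     * FDR uses 64b/66b encoding (~97% efficient). */
    struct ib_rate {
        const char *name;
        double signaling_gbps;   /* per lane */
        double encoding_eff;     /* payload fraction after line coding */
    };

    int main(void)
    {
        const struct ib_rate rates[] = {
            { "SDR",  2.5,     8.0 / 10.0 },
            { "DDR",  5.0,     8.0 / 10.0 },
            { "QDR", 10.0,     8.0 / 10.0 },
            { "FDR", 14.0625, 64.0 / 66.0 },
        };
        const int lanes = 4;   /* typical 4x link */

        for (unsigned i = 0; i < sizeof rates / sizeof rates[0]; i++) {
            double data_4x = rates[i].signaling_gbps * rates[i].encoding_eff * lanes;
            printf("%s 4x: %.1f Gb/s signaling, %.1f Gb/s data\n",
                   rates[i].name, rates[i].signaling_gbps * lanes, data_4x);
        }
        return 0;
    }

A QDR 4x link, for example, works out to 32 Gb/s of data, which is where the 3.2x figure against 10GbE comes from.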

3. InfiniBand is not proprietary. Interface and switch chips are available from Mellanox and Intel, and cores for FPGAs are available from several sources. GE led the adoption of InfiniBand into OpenVPX, and other vendors are now starting to follow.

4. InfiniBand is the connection of choice on 41% of the computers on the TOP500 supercomputer list.

5. InfiniBand is not hard to use. It can present a standard socket interface, or it can underpin common middleware APIs such as MPI and DDS. Sockets Direct Protocol (SDP) allows an application that uses stream sockets to take advantage of RDMA performance without modification.
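
To illustrate, the sketch below is an ordinary stream-socket client with nothing InfiniBand-specific in it. Under an OFED stack such a program can typically be redirected to SDP without recompilation by preloading the SDP library (for example LD_PRELOAD=libsdp.so); the exact library name and configuration depend on the installation, and the host and port here are placeholders.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* A plain stream-socket client. Run unchanged, it uses TCP (over IPoIB
     * if the address sits on an IB network); preloaded with the SDP library
     * it can instead ride RDMA via SDP, with no source changes. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5000);                        /* placeholder port */
        inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);   /* placeholder host */

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char msg[] = "hello over IB\n";
        if (write(fd, msg, sizeof msg - 1) < 0)
            perror("write");
        close(fd);
        return 0;
    }

Launching it as LD_PRELOAD=libsdp.so ./client, with the SDP configuration file steering the connection onto SDP, is one common way to get RDMA transport underneath an unmodified sockets application.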

6. InfiniBand is optimized for fat tree networks with layered switches, and its advanced routing schemes take advantage of the topology. Ethernet, especially in unmanaged and Layer 2-managed implementations, must run spanning-tree protocols to identify and disable multiple paths and loops. Data Center Bridging (DCB) technology will alleviate this, but it has been slow to come to the embedded space.
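
To give a feel for why layered fat trees matter, here is a rough sketch of the non-blocking host counts such folded-Clos topologies support for a given switch radix; the two- and three-tier formulas (radix²/2 and radix³/4) are the standard folded-Clos results, and the 36-port radix is just an example.

    #include <stdio.h>

    /* Non-blocking host counts for folded-Clos ("fat tree") topologies
     * built from identical switches of a given radix:
     *   two tiers  : radix^2 / 2
     *   three tiers: radix^3 / 4
     * The 36-port radix is only an example of a common IB switch size. */
    int main(void)
    {
        const int radix = 36;
        long two_tier   = (long)radix * radix / 2;
        long three_tier = (long)radix * radix * radix / 4;

        printf("radix %d: %ld hosts (2-tier), %ld hosts (3-tier)\n",
               radix, two_tier, three_tier);
        return 0;
    }

The point is that InfiniBand's subnet manager can route such a topology with every link active, whereas a Layer 2 Ethernet fabric has to block its redundant paths.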

7. InfiniBand is not exclusive. For instance, Mellanox chips can be initialized to be InfiniBand, Ethernet or a mix of the two. The switch must match the fabric chosen or must be capable of handling both protocols.
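
As one illustration of that flexibility, the sketch below switches the first port of a ConnectX-family adapter to Ethernet through the mlx4 driver's sysfs interface. The PCI address is a placeholder, and the exact path and accepted values ("ib", "eth", "auto") depend on the adapter and driver version, so treat this as an assumption to verify against your OFED documentation.

    #include <stdio.h>
    #include <string.h>

    /* Set port 1 of a ConnectX (mlx4) adapter to Ethernet via sysfs.
     * The PCI address below is a placeholder; the file accepts "ib",
     * "eth" or "auto" on drivers that expose this interface. */
    int main(void)
    {
        const char *path =
            "/sys/bus/pci/devices/0000:05:00.0/mlx4_port1"; /* placeholder */
        const char *mode = "eth\n";

        FILE *f = fopen(path, "w");
        if (!f) { perror("fopen"); return 1; }
        if (fwrite(mode, 1, strlen(mode), f) != strlen(mode))
            perror("fwrite");
        fclose(f);
        return 0;
    }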

8. InfiniBand is highly scalable. As nodes are added to a system in pursuit of higher performance on a parallelizable application, benchmarks across a range of common HPC codes show that Ethernet/TCP implementations stop scaling beyond a certain point and deliver diminishing returns, whereas InfiniBand continues to scale well.
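
The kind of measurement behind that observation can be sketched in a few lines of MPI. The microbenchmark below times a repeated MPI_Allreduce, a collective whose cost is dominated by the interconnect as the node count grows; the message size and iteration count are arbitrary choices.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Time a repeated MPI_Allreduce: a simple probe of how collective
     * performance holds up as ranks (and nodes) are added. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int count = 4096;      /* doubles per reduction (arbitrary) */
        const int iters = 1000;      /* repetitions (arbitrary) */
        double *in  = malloc(count * sizeof *in);
        double *out = malloc(count * sizeof *out);
        for (int i = 0; i < count; i++) in[i] = rank + i;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++)
            MPI_Allreduce(in, out, count, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("%d ranks: %.1f us per allreduce\n",
                   size, 1e6 * (t1 - t0) / iters);

        free(in); free(out);
        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched over a growing number of nodes, the per-operation time over InfiniBand typically stays flat far longer than the same run over Ethernet/TCP.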

9. InfiniBand is not new. It has been used in HPEC systems for almost 10 years. It has certainly evolved along the way, from SDR to FDR rates, but the technology and the software support are mature.

10. InfiniBand is here, deployed and worthy of consideration for any high-performance system.