Mellanox ConnectX-5 VPI Adapter Card for Open Compute Project (OCP)

Item #: 5GF077   |   Model #: 7290108486513


Brand Name: Mellanox
Depth: 4.3"
Device Supported: Server/Switch
Environmental Certification: cTUVus
Environmentally Friendly: Yes
Expansion Slot Type: QSFP28
Form Factor: Plug-in Card
Height: 3.1"
Host Interface: PCI Express 3.0 x16
Manufacturer: Mellanox Technologies Ltd
Manufacturer Part Number: MCX545A-ECAN
Manufacturer Website Address:
Marketing Information: Intelligent RDMA-enabled network adapter card with advanced application offload and Multi-Host capabilities for High-Performance Computing, Web2.0, Cloud, and Storage platforms

ConnectX-5 with Virtual Protocol Interconnect® supports 100Gb/s InfiniBand and Ethernet connectivity, sub-600ns latency and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing a high-performance and flexible solution for Open Compute Project servers and storage appliances while supporting demanding applications and markets: Machine Learning, Data Analytics, and more.
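To put the two headline figures above in perspective, a quick bandwidth-delay calculation shows how much data is "on the wire" at any moment at 100Gb/s with 600ns of latency. This is a back-of-the-envelope sketch using only the numbers quoted above, not vendor-published throughput data:

```python
# Back-of-the-envelope bandwidth-delay product using the figures
# quoted above: 100 Gb/s line rate and sub-600 ns latency.
LINE_RATE_BPS = 100e9   # 100 Gb/s link speed
LATENCY_S = 600e-9      # 600 ns latency bound

# Bits (and bytes) in flight on the link during one latency interval.
bits_in_flight = LINE_RATE_BPS * LATENCY_S
bytes_in_flight = bits_in_flight / 8

print(f"{bits_in_flight:.0f} bits ({bytes_in_flight:.0f} bytes) in flight")
```

Roughly 7.5KB fits on the wire during a single latency interval, which is why deep pipelining and a high message rate matter as much as raw bandwidth at this scale.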


ConnectX-5 delivers high bandwidth, low latency, and high computation efficiency for high-performance, data-intensive, and scalable compute and storage platforms. ConnectX-5 enhances HPC infrastructures by providing MPI, SHMEM/PGAS, and Rendezvous Tag Matching offloads, hardware support for out-of-order RDMA Write and Read operations, and additional Network Atomic and PCIe Atomic operations support.

ConnectX-5 VPI for Open Compute Project (OCP) utilizes both IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technologies, delivering low latency and high performance. ConnectX-5 enhances RDMA network capabilities by completing the Switch Adaptive-Routing capabilities and supporting data delivered out of order while maintaining ordered completion semantics, providing multipath reliability and efficient support for all network topologies, including DragonFly and DragonFly+.

ConnectX-5 also supports GPUDirect® and Burst Buffer offload for background checkpointing without interfering with main CPU operations, as well as the Dynamic Connected Transport (DCT) service to ensure extreme scalability for compute and storage systems.


NVMe storage devices are gaining popularity, offering very fast storage access. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency. As with earlier generations of ConnectX adapters, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, by which different servers can interconnect directly without involving the Top of the Rack (ToR) switch. Alternatively, the Multi-Host technology that was first introduced with ConnectX-4 can be used. Mellanox Multi-Host™ technology, when enabled, allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces. With the various new rack design alternatives, ConnectX-5 lowers the total cost of ownership (TCO) in the data center by reducing CAPEX (cables, NICs, and switch port expenses), and by reducing OPEX by cutting down on switch port management and overall power usage.
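The CAPEX claim above follows from simple counting: when Multi-Host splits one adapter's PCIe interface across several hosts, those hosts share a single NIC, cable, and ToR switch port. The sketch below illustrates that arithmetic; all unit prices and counts are made-up placeholders, not vendor figures:

```python
# Illustrative (hypothetical) CAPEX comparison for Mellanox Multi-Host.
# Every dollar amount below is a placeholder chosen for illustration only.
NIC_COST = 700          # hypothetical $ per adapter card
CABLE_COST = 100        # hypothetical $ per cable
SWITCH_PORT_COST = 300  # hypothetical $ per ToR switch port

def rack_capex(num_hosts: int, hosts_per_adapter: int) -> int:
    """Network CAPEX (NICs + cables + switch ports) for num_hosts servers."""
    adapters = -(-num_hosts // hosts_per_adapter)  # ceiling division
    return adapters * (NIC_COST + CABLE_COST + SWITCH_PORT_COST)

conventional = rack_capex(32, hosts_per_adapter=1)  # one NIC/cable/port each
multi_host = rack_capex(32, hosts_per_adapter=4)    # four hosts per adapter
print(f"conventional: ${conventional}, multi-host: ${multi_host}")
```

With four hosts per adapter, the per-rack NIC, cable, and switch-port count (and the associated switch-port management overhead driving OPEX) drops by the same factor of four.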


Cloud and Web2.0 customers developing their platforms in Software Defined Network (SDN) environments leverage their servers' operating-system virtual-switching capabilities to enable maximum flexibility.
Media Type Supported: Optical Fiber
Network Technology: 100GBase-X
Product Line: ConnectX-5
Product Name: ConnectX-5 VPI Adapter Card for Open Compute Project (OCP)
Product Type: 100Gigabit Ethernet Card
Total Number of Ports: 1