The Mellanox ConnectX-2 VPI Single-port and Dual-port QDR InfiniBand host channel adapters have the following features:
InfiniBand
ConnectX-2 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading routine activities from the CPU, which makes more processor power available for the application. Network protocol processing and data movement overhead, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. Graphics processing unit (GPU) communication acceleration provides additional efficiencies by eliminating unnecessary internal data copies, which significantly reduces application runtime. The ConnectX-2 advanced acceleration technology enables higher cluster efficiency and scalability of up to tens of thousands of nodes.
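For illustration, the following minimal C sketch uses the Linux verbs API (libibverbs, part of the OpenFabrics stack that these adapters support) to open an RDMA device and register a buffer with it. Once a buffer is registered, Send/Receive and RDMA Read/Write work requests that reference its keys are executed by the adapter itself, without CPU copies. The buffer size is arbitrary, the example assumes libibverbs is installed, and it is only a sketch of the offload model, not Mellanox-specific code (build with -libverbs).

#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first adapter and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "failed to open device\n");
        return 1;
    }

    /* Register a buffer with the adapter. After registration the HCA can
     * DMA directly to and from this memory, so Send/Receive and RDMA
     * Read/Write work requests carrying the keys below complete without
     * CPU copies. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }
    printf("%s: registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           ibv_get_device_name(devs[0]), len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}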
RDMA over converged Ethernet
ConnectX-2 utilizes the InfiniBand Trade Association's RDMA over Converged Ethernet (RoCE) technology to deliver similarly low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. The RoCE software stack maintains existing and future compatibility with bandwidth- and latency-sensitive applications. With link-level interoperability in the existing Ethernet infrastructure, network administrators can use existing data center fabric management solutions.
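Because the same verbs interface is used whether a port runs InfiniBand or RoCE, software can query each port's link layer to see which transport it is carrying. The sketch below is a minimal example assuming the OpenFabrics stack is installed; it enumerates the adapters and reports the link layer of every port (build with -libverbs).

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    for (int i = 0; i < n; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_device_attr dattr;

        if (!ctx)
            continue;
        if (ibv_query_device(ctx, &dattr)) {
            ibv_close_device(ctx);
            continue;
        }

        /* Port numbers are 1-based in the verbs API. */
        for (int p = 1; p <= dattr.phys_port_cnt; p++) {
            struct ibv_port_attr pattr;

            if (ibv_query_port(ctx, p, &pattr))
                continue;
            printf("%s port %d: %s link, state %s\n",
                   ibv_get_device_name(devs[i]), p,
                   pattr.link_layer == IBV_LINK_LAYER_ETHERNET ?
                       "Ethernet (RoCE)" : "InfiniBand",
                   ibv_port_state_str(pattr.state));
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}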
TCP/UDP/IP acceleration
Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10 GbE adapters. The hardware-based stateless offload engines in ConnectX-2 reduce the CPU overhead of IP packet transport, leaving more processor cycles for the application.
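These stateless offloads appear on Linux as ordinary NIC features of the 10 GbE interface and are normally inspected with ethtool -k <interface>. As a rough sketch of what that does under the hood, the program below queries a few of the offload flags through the legacy SIOCETHTOOL ioctl; the default interface name eth0 is only a placeholder.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static void query(int fd, struct ifreq *ifr, __u32 cmd, const char *name)
{
    struct ethtool_value ev = { .cmd = cmd };

    ifr->ifr_data = (char *)&ev;
    if (ioctl(fd, SIOCETHTOOL, ifr) == 0)
        printf("%-28s %s\n", name, ev.data ? "on" : "off");
    else
        printf("%-28s (not reported)\n", name);
}

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "eth0";   /* placeholder name */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);

    /* Each flag corresponds to work the adapter does instead of the CPU. */
    query(fd, &ifr, ETHTOOL_GRXCSUM, "rx checksum offload");
    query(fd, &ifr, ETHTOOL_GTXCSUM, "tx checksum offload");
    query(fd, &ifr, ETHTOOL_GTSO,    "TCP segmentation offload");
    query(fd, &ifr, ETHTOOL_GGSO,    "generic segmentation offload");
    query(fd, &ifr, ETHTOOL_GGRO,    "generic receive offload");

    close(fd);
    return 0;
}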
I/O virtualization
ConnectX-2 with Virtual Intelligent Queuing (Virtual-IQ) technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. I/O virtualization with ConnectX-2 gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.
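On Linux hosts, per-virtual-machine adapter resources are commonly exposed as SR-IOV virtual functions that can be passed through to guests. The sketch below illustrates that general mechanism rather than Virtual-IQ itself: it reads how many virtual functions the device advertises in sysfs and enables a few of them. The PCI address and VF count are placeholders, and SR-IOV availability depends on firmware, driver, and BIOS settings.

#include <stdio.h>

int main(int argc, char **argv)
{
    /* PCI address of the adapter is a placeholder; find yours with lspci. */
    const char *dev = argc > 1 ? argv[1] : "0000:04:00.0";
    char path[256];
    int total = 0, want = 4;

    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/sriov_totalvfs", dev);
    FILE *f = fopen(path, "r");
    if (!f || fscanf(f, "%d", &total) != 1) {
        fprintf(stderr, "SR-IOV not reported for %s\n", dev);
        return 1;
    }
    fclose(f);
    printf("%s supports up to %d virtual functions\n", dev, total);

    /* Enabling VFs requires root; each VF appears as its own PCI function
     * that can be assigned to a virtual machine. */
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/sriov_numvfs", dev);
    f = fopen(path, "w");
    if (!f) {
        perror(path);
        return 1;
    }
    fprintf(f, "%d\n", want < total ? want : total);
    fclose(f);
    return 0;
}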
Storage accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can use InfiniBand RDMA for high-performance storage access. T11-compliant encapsulation (FCoIB or FCoE) with full hardware offload simplifies the storage network while keeping existing Fibre Channel targets.
Software support
All Mellanox adapter cards are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. ConnectX-2 VPI adapters support OpenFabrics-based RDMA protocols and software. Stateless offload is fully interoperable with standard TCP/UDP/IP stacks. ConnectX-2 VPI adapters are compatible with configuration and management tools from OEMs and operating system vendors.
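As a quick check that the OpenFabrics RDMA stack is usable on a host, the hedged sketch below creates and immediately tears down an RDMA connection-manager endpoint with librdmacm; a real application would continue with rdma_resolve_addr(), rdma_connect(), and verbs work requests (build with -lrdmacm).

#include <stdio.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    if (!ch) {
        perror("rdma_create_event_channel");
        return 1;
    }

    struct rdma_cm_id *id;
    if (rdma_create_id(ch, &id, NULL, RDMA_PS_TCP)) {
        perror("rdma_create_id");
        rdma_destroy_event_channel(ch);
        return 1;
    }
    printf("RDMA CM endpoint created; the OpenFabrics stack is loaded\n");

    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}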
The adapters have the following specifications:
InfiniBand specifications:
Enhanced InfiniBand specifications:
Ethernet specifications:
Hardware-based I/O virtualization:
Additional CPU offloads:
Storage support:
Management and tools:
InfiniBand:
Ethernet:
Protocol support:
The adapters have the following physical specifications (without the bracket):
The adapters are supported in the following environment:
Operating temperature: 0°C to 55°C
Air flow: 200 LFM at 55°C
Power consumption (typical):
Power consumption (maximum):