
• Network card: Nvidia Mellanox MHQH19B-XTR ConnectX-2 VPI Adapter Card

      [ Part number: MHQH19B-XTR ]

Price: Contact us


    The Mellanox ConnectX-2 VPI Single-port and Dual-port QDR InfiniBand host channel adapters have the following features:

    InfiniBand

ConnectX-2 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading routine activities from the CPU, which makes more processor power available for the application. Network protocol processing and data movement overhead, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. Graphics processing unit (GPU) communication acceleration provides additional efficiency by eliminating unnecessary internal data copies, which significantly reduces application runtime. The ConnectX-2 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
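On Linux hosts this offload model is exposed to applications through the RDMA verbs API. The following is a minimal sketch, assuming libibverbs is installed and a ConnectX-2 port is up: it registers a buffer with the adapter and creates the completion queue and queue pair that a real application would post work requests to. Device selection, connection setup, and most error handling are simplified for illustration.

/* Minimal verbs sketch: open the first RDMA device, register a buffer,
 * and create the queues a real application would post work requests to.
 * Assumes libibverbs (link with -libverbs); queue-pair connection setup
 * and most error handling are omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* e.g. mlx4_0 */
    if (!ctx)
        return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */

    /* Register memory so the adapter can DMA directly to and from it. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Work is posted to the queue pair and the adapter reports results
     * to the completion queue; the CPU is not in the data path. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr qpa = {
        .send_cq = cq, .recv_cq = cq,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,                            /* reliable connected */
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qpa);
    printf("registered %zu bytes, lkey=0x%x, qp_num=0x%x\n",
           len, mr->lkey, qp->qp_num);

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

Once the queue pair is connected to a peer, sends, receives, and RDMA reads or writes against the registered buffer are executed by the adapter without further CPU copies.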

    RDMA over converged Ethernet

ConnectX-2 utilizes the InfiniBand Trade Association's RDMA over Converged Ethernet (RoCE) technology to deliver similarly low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. The RoCE software stack maintains existing and future compatibility with bandwidth- and latency-sensitive applications. With link-level interoperability in the existing Ethernet infrastructure, network administrators can use existing data center fabric management solutions.
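Because RoCE presents the same verbs interface over Ethernet, applications typically locate a peer through the RDMA connection manager (librdmacm) using an ordinary IP address. The sketch below is illustrative only: the address 192.0.2.10 and port 7471 are placeholders, and it assumes librdmacm and libibverbs are installed. It resolves the address to an RDMA-capable device and route; everything past that point is identical for RoCE and native InfiniBand.

/* RoCE/InfiniBand-agnostic connection setup with the RDMA connection
 * manager: the peer is identified by an IP address, so the same code
 * runs over a RoCE-capable Ethernet port or over IPoIB.
 * 192.0.2.10 and port 7471 are placeholders, not a real service.
 * Link with -lrdmacm -libverbs. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id;
    if (!ch || rdma_create_id(ch, &id, NULL, RDMA_PS_TCP)) {
        perror("rdma_create_id");
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(7471);                         /* placeholder port */
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);    /* placeholder peer */

    /* Resolve the destination to an RDMA device, then to a route.
     * Each step completes asynchronously through the event channel. */
    struct rdma_cm_event *ev;
    if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst, 2000) == 0 &&
        rdma_get_cm_event(ch, &ev) == 0 &&
        ev->event == RDMA_CM_EVENT_ADDR_RESOLVED) {
        rdma_ack_cm_event(ev);
        if (rdma_resolve_route(id, 2000) == 0 &&
            rdma_get_cm_event(ch, &ev) == 0 &&
            ev->event == RDMA_CM_EVENT_ROUTE_RESOLVED) {
            rdma_ack_cm_event(ev);
            printf("route resolved on device %s\n",
                   ibv_get_device_name(id->verbs->device));
            /* From here a queue pair would be created and rdma_connect()
             * called, exactly as over native InfiniBand. */
        }
    }

    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}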

    TCP/UDP/IP acceleration

Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10 Gb Ethernet adapters. The hardware-based stateless offload engines in ConnectX-2 reduce the CPU overhead of IP packet transport, allowing more processor cycles to work on the application.
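On Linux, whether these stateless offloads are enabled on a given port can be checked from user space; this is the same information that ethtool -k reports. The sketch below reads a few of the legacy per-feature flags through the SIOCETHTOOL ioctl; the interface name eth0 is a placeholder, and the exact feature set reported depends on the driver in use.

/* Query TCP segmentation and checksum offload state for one interface
 * through the legacy SIOCETHTOOL ioctl (the same data ethtool -k shows).
 * "eth0" is a placeholder; substitute the ConnectX port's netdev name. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int get_offload(int fd, const char *ifname, __u32 cmd, __u32 *val)
{
    struct ethtool_value ev = { .cmd = cmd };
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&ev;
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
        return -1;
    *val = ev.data;
    return 0;
}

int main(void)
{
    const char *ifname = "eth0";            /* placeholder interface name */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    __u32 tso = 0, txcsum = 0, rxcsum = 0;

    if (get_offload(fd, ifname, ETHTOOL_GTSO, &tso) == 0 &&
        get_offload(fd, ifname, ETHTOOL_GTXCSUM, &txcsum) == 0 &&
        get_offload(fd, ifname, ETHTOOL_GRXCSUM, &rxcsum) == 0)
        printf("%s: TSO=%u TX-checksum=%u RX-checksum=%u\n",
               ifname, tso, txcsum, rxcsum);
    else
        perror("SIOCETHTOOL");

    close(fd);
    return 0;
}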

    I/O virtualization

    ConnectX-2 with Virtual Intelligent Queuing (Virtual-IQ) technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. I/O virtualization with ConnectX-2 gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.

    Storage accelerated

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can use InfiniBand RDMA for high-performance storage access. T11-compliant encapsulation (FCoIB or FCoE) with full hardware offload simplifies the storage network while keeping existing Fibre Channel targets.

    Software support

All Mellanox adapter cards are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. ConnectX-2 VPI adapters support OpenFabrics-based RDMA protocols and software. Stateless offload is fully interoperable with standard TCP/UDP/IP stacks. ConnectX-2 VPI adapters are compatible with configuration and management tools from OEMs and operating system vendors.

    Specifications

    The adapters have the following specifications:

    • Low-profile adapter form factor
    • Ports: One or two 40 Gbps InfiniBand interfaces (40/20/10 Gbps auto-negotiation) with QSFP connectors
    • ASIC: Mellanox ConnectX-2
• Host interface: PCI Express 2.0 x8 (5.0 GT/s)
• Interoperable with InfiniBand or 10 Gb Ethernet switches

InfiniBand specifications (a run-time query sketch follows this list):

    • IBTA Specification 1.2.1 compliant
    • RDMA, Send/Receive semantics
    • Hardware-based congestion control
    • 16 million I/O channels
• 256 B to 4 KB MTU, 1 GB messages
    • Nine virtual lanes: Eight data and one management
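Several of the limits above, such as the number of queue pairs, the active MTU, and the number of virtual lanes, are reported by the adapter at run time and can be read with the verbs query calls. A minimal sketch, assuming libibverbs and at least one installed device:

/* Read back device and port limits (queue pairs, MTU, virtual lanes)
 * corresponding to the specification list above.
 * Assumes libibverbs; port 1 is used for simplicity. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0)
        return 1;

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_device_attr dev_attr;
    struct ibv_port_attr port_attr;

    if (ctx &&
        ibv_query_device(ctx, &dev_attr) == 0 &&
        ibv_query_port(ctx, 1, &port_attr) == 0) {
        printf("%s: max_qp=%d max_mr_size=%llu\n",
               ibv_get_device_name(devs[0]), dev_attr.max_qp,
               (unsigned long long)dev_attr.max_mr_size);
        printf("port 1: active_mtu=%d (enum ibv_mtu) max_vl_num=%d state=%d\n",
               (int)port_attr.active_mtu, (int)port_attr.max_vl_num,
               (int)port_attr.state);
    }

    if (ctx)
        ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}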

    Enhanced InfiniBand specifications:

    • Hardware-based reliable transport
    • Hardware-based reliable multicast
    • Extended Reliable Connected transport
    • Enhanced Atomic operations
• Fine-grained end-to-end quality of service (QoS)

    Ethernet specifications:

• IEEE 802.3ae 10 Gb Ethernet
    • IEEE 802.3ad Link Aggregation and Failover
    • IEEE 802.1Q, 1p VLAN tags and priority
    • IEEE P802.1au D2.0 Congestion Notification
    • IEEE P802.1az D0.2 ETS
    • IEEE P802.1bb D1.0 Priority-based Flow Control
    • Multicast
    • Jumbo frame support (10 KB)
    • 128 MAC/VLAN addresses per port

    Hardware-based I/O virtualization:

    • Address translation and protection
    • Multiple queues per virtual machine
    • VMware NetQueue support

    Additional CPU offloads:

    • TCP/UDP/IP stateless offload
    • Intelligent interrupt coalescence
    • Compliant with Microsoft RSS and NetDMA

    Storage support:

    • Fibre Channel over InfiniBand ready
    • Fibre Channel over Ethernet ready

    Management and tools:

    InfiniBand:

    • OpenSM
    • Interoperable with third-party subnet managers
    • Firmware and debug tools (MFT and IBDIAG)

    Ethernet:

    • MIB, MIB-II, MIB-II Extensions, RMON, and RMON 2
    • Configuration and diagnostic tools

    Protocol support:

    • Open MPI, OSU MVAPICH, Intel MPI, MS MPI, and Platform MPI
    • TCP/UDP, EoIB, IPoIB, SDP, and RDS
    • SRP, iSER, NFS RDMA, FCoIB, and FCoE
    • uDAPL

    Physical specifications

    The adapters have the following physical specifications (without the bracket):

• Single port: 2.1 in. x 5.6 in. (54 mm x 142 mm)
• Dual port: 2.7 in. x 6.6 in. (69 mm x 168 mm)

    Operating environment

    The adapters are supported in the following environment:

• Operating temperature: 0°C to 55°C
• Airflow: 200 LFM at 55°C

    Power consumption (typical):

    • Single-port adapter: 7.0 W typical
    • Dual-port adapter: 8.8 W typical (both ports active)

    Power consumption (maximum):

• Single-port adapter: 7.7 W maximum for passive cables only; 9.7 W maximum for active optical modules
• Dual-port adapter: 9.4 W maximum for passive cables only; 13.4 W maximum for active optical modules
