
    • HP InfiniBand FDR/EN 544FLR-QSFP

      [ Part number: 649282-B21 ]

      HP InfiniBand FDR/EN 10/40Gb Dual Port 544FLR-QSFP Adapter

      Price: Contact us


    HP InfiniBand Options for HP ProLiant and Integrity Servers

     

    HP supports 56 Gbps Fourteen Data Rate (FDR) and 40 Gbps 4X Quad Data Rate (QDR) InfiniBand products, including Host Channel Adapters (HCAs), HP FlexLOM adaptors, switches, and cables for HP ProLiant and HP Integrity servers.


    For details on the InfiniBand support for HP BladeSystem c-Class and server blades, please refer to the HP InfiniBand for HP BladeSystem c-Class QuickSpecs at: http://h18000.www1.hp.com/products/quickspecs/12586_div/12586_div.html.

    HP supports InfiniBand products from InfiniBand technology partners Mellanox and QLogic.

    The following InfiniBand adaptor products based on Mellanox technologies are available from HP:

    • HP IB FDR/EN 10/40Gb 2P 544QSFP Adaptor (Dual IP/IB)
    • HP IB FDR/EN 10/40Gb 2P 544FLR-QSFP Adaptor (Dual IP/IB)
    • HP IB QDR/EN 10Gb 2P 544FLR-QSFP Adaptor (Dual IP/IB)
    • HP IB 4X QDR CX-2 PCI-e G2 Dual-port HCA

    The HP IB FDR/EN 10/40Gb 2P 544QSFP Adaptor, HP IB FDR/EN 10/40Gb 2P 544FLR-QSFP Adaptor, and HP IB QDR/EN 10Gb 2P 544FLR-QSFP Adaptor are based on Mellanox ConnectX-3 IB technology. The HP IB 4X QDR CX-2 PCI-e G2 Dual-port HCA is based on Mellanox ConnectX-2 IB technology. The FDR IB HCA delivers low latency and up to 56 Gbps (FDR) bandwidth for performance-driven server and storage clustering applications in High-Performance Computing (HPC) and enterprise data centers. The HP IB FDR/EN 10/40Gb 2P Adaptors can also operate as dual 40 or 10 Gb Ethernet ports. The HP IB FDR/EN 10/40Gb 2P 544 HCA card is designed for PCI Express 3.0 x8 connectors on HP Gen8 servers.

    InfiniBand host stack software (driver) is required to run on servers connected to the InfiniBand fabric. For HCAs based on Mellanox technologies, HP supports the Mellanox OFED driver stack on Linux 64-bit operating systems and the Mellanox WinOF driver stack on Microsoft Windows HPC Server 2008.
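
    As a quick sanity check after installing an OFED stack, the adapter and its link state can be read back through the standard verbs API. The C sketch below is only an illustration and is not part of the HP or Mellanox documentation; it assumes libibverbs from the OFED distribution is installed and that port 1 is the port of interest, and it builds with gcc ibcheck.c -o ibcheck -libverbs.

      /* ibcheck.c - minimal sketch: enumerate InfiniBand HCAs and report port 1 state.
       * Assumes an OFED stack with libibverbs is installed; link with -libverbs. */
      #include <stdio.h>
      #include <infiniband/verbs.h>

      int main(void)
      {
          int num_devices;
          struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
          if (!dev_list || num_devices == 0) {
              fprintf(stderr, "No InfiniBand devices found - is the OFED driver loaded?\n");
              return 1;
          }

          for (int i = 0; i < num_devices; i++) {
              struct ibv_context *ctx = ibv_open_device(dev_list[i]);
              if (!ctx)
                  continue;

              struct ibv_port_attr port;
              /* Port 1 is assumed here; dual-port HCAs also expose port 2. */
              if (ibv_query_port(ctx, 1, &port) == 0) {
                  printf("%-16s port 1: state=%s  LID=%u  SM LID=%u\n",
                         ibv_get_device_name(dev_list[i]),
                         port.state == IBV_PORT_ACTIVE ? "ACTIVE" : "not active",
                         port.lid, port.sm_lid);
              }
              ibv_close_device(ctx);
          }
          ibv_free_device_list(dev_list);
          return 0;
      }

    A non-zero SM LID and an ACTIVE port state indicate that the HCA has been configured by a subnet manager on the fabric.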

    An InfiniBand fabric is constructed with one or more InfiniBand switches connected via inter-switch links. The most commonly deployed fabric topology is a fat tree or its variations. A subnet manager is required to manage an InfiniBand fabric. OpenSM is a host-based subnet manager that runs on a server connected to the InfiniBand fabric. Mellanox OFED software stack includes OpenSM for Linux, and Mellanox WinOF includes OpenSM for Windows. For comprehensive management and monitoring capabilities, Mellanox FabricIT™ is recommended for managing the InfiniBand fabric based on Mellanox InfiniBand products. 
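
    As an illustrative sizing note (an example calculation, not additional HP data): in a two-level fat tree built from 36-port switch elements, each edge switch can dedicate 18 ports to hosts and 18 ports to uplinks, so a fully non-blocking fabric tops out at 36 edge switches x 18 host ports = 648 host ports. Larger clusters either move to director-class switches such as the 324/648-port models listed below, which implement such a fat tree internally, or accept some degree of oversubscription.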

    The following InfiniBand switch products based on Mellanox technologies are available from HP:

    • Mellanox IB FDR 36-port Managed switch (front-to-rear cooling)
    • Mellanox IB FDR 36-port Managed switch with reversed airflow fan unit (rear-to-front cooling)
    • Mellanox IB FDR 36-port switch (front-to-rear cooling)
    • Mellanox IB FDR 36-port switch with reversed airflow fan unit (rear-to-front cooling)
    • Voltaire IB QDR 36-port switch (front-to-rear cooling)
    • Voltaire IB QDR 36-port switch with reversed airflow fan unit (rear-to-front cooling)
    • Voltaire IB QDR 162-port (144 ports fully non-blocking) director switch
    • Voltaire IB QDR 324-port director switch
    • Mellanox IB QDR/FDR 648, 324 and 216 port Modular Switches

    The front-to-rear cooling switch has air flow from the front (power side) to the rear (ports side), and the rear-to-front cooling switch has air flow from the rear (ports side) to the front.

    For HCAs based on Mellanox technologies, HP also supports Mellanox OFED driver stacks on Linux 64-bit operating system. A subnet manager is required to manage an InfiniBand fabric. For comprehensive management and monitoring capabilities, Mellanox Unified Fabric Manager™ (UFM) is recommended for managing the InfiniBand fabric based on Voltaire InfiniBand switch products and Mellanox HCA products with Mellanox OFED stack.

    The following Mellanox software for InfiniBand switches and adaptors is available from HP:

    • Mellanox Unified Fabric Manager (UFM)
    • Mellanox Unified Fabric Manager Advanced (UFM Advanced)
    • Mellanox Acceleration Software (VMA)

    Mellanox Unified Fabric Manager™ (UFM™) is a powerful platform for managing scale-out computing environments. UFM enables data center operators to efficiently provision, monitor and operate the modern data center fabric. 

    UFM runs on a server and is used to monitor and analyze the health and performance of Mellanox fabrics. UFM can also be used to automate provisioning and device-management tasks; for example, UFM can communicate with devices to reset or shut down ports or devices, perform firmware and software upgrades, and so on. UFM's extensive API enables it to integrate easily with existing management tools for a unified cluster view.
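
    The exact API surface depends on the UFM release, but the usual integration pattern is a simple HTTP pull of fabric data into an existing monitoring tool. The C sketch below only illustrates that pattern using libcurl; the host name, credentials, and the endpoint path are placeholders assumed for the example, not values documented here. Build with gcc ufm_poll.c -o ufm_poll -lcurl.

      /* ufm_poll.c - illustrative sketch: pull fabric inventory from a UFM server over HTTP.
       * The host, credentials and endpoint path are placeholder assumptions for this example. */
      #include <stdio.h>
      #include <curl/curl.h>

      /* Write callback: dump the JSON response body to stdout. */
      static size_t print_body(void *data, size_t size, size_t nmemb, void *userp)
      {
          (void)userp;
          fwrite(data, size, nmemb, stdout);
          return size * nmemb;
      }

      int main(void)
      {
          curl_global_init(CURL_GLOBAL_DEFAULT);
          CURL *curl = curl_easy_init();
          if (!curl)
              return 1;

          /* Hypothetical endpoint listing the systems (switches/hosts) UFM has discovered. */
          curl_easy_setopt(curl, CURLOPT_URL, "https://ufm.example.local/ufmRest/resources/systems");
          curl_easy_setopt(curl, CURLOPT_USERPWD, "admin:password");   /* placeholder credentials */
          curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, print_body);

          CURLcode rc = curl_easy_perform(curl);
          if (rc != CURLE_OK)
              fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

          curl_easy_cleanup(curl);
          curl_global_cleanup();
          return rc == CURLE_OK ? 0 : 1;
      }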

    Mellanox UFM Advanced adds a number of enterprise features to UFM. These include the ability to save historical information, to send alerts to external systems, and to activate user-defined scripts based on system events.

    Mellanox Acceleration Software consists of Mellanox Messaging Accelerator (VMA) technology for minimum latency communication. VMA performs multicast, unicast and TCP acceleration using OS bypass, and is implemented via a BSD-socket compliant dynamically linked library so that no application changes are required.
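
    Because VMA is delivered as a preloaded shared library that intercepts standard socket calls, an unmodified application such as the minimal UDP multicast sender below can be accelerated simply by launching it with the VMA library preloaded (for example LD_PRELOAD=libvma.so ./udp_send on Linux). This sketch is an illustration only; the multicast group and port are arbitrary example values.

      /* udp_send.c - ordinary BSD-socket UDP multicast sender.  Nothing here is VMA-specific:
       * running it with the VMA library preloaded (e.g. LD_PRELOAD=libvma.so) is what moves
       * the traffic onto the accelerated OS-bypass path.
       * The group 239.1.1.1 and port 5000 are arbitrary example values. */
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <arpa/inet.h>
      #include <sys/socket.h>

      int main(void)
      {
          int sock = socket(AF_INET, SOCK_DGRAM, 0);
          if (sock < 0) {
              perror("socket");
              return 1;
          }

          struct sockaddr_in group;
          memset(&group, 0, sizeof(group));
          group.sin_family = AF_INET;
          group.sin_port = htons(5000);
          inet_pton(AF_INET, "239.1.1.1", &group.sin_addr);

          const char msg[] = "market data tick";
          for (int i = 0; i < 10; i++) {
              if (sendto(sock, msg, sizeof(msg), 0,
                         (struct sockaddr *)&group, sizeof(group)) < 0)
                  perror("sendto");
              usleep(1000);              /* 1 ms pacing between packets */
          }

          close(sock);
          return 0;
      }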

    The following InfiniBand products based on QLogic technologies are available from HP:

    • QLogic IB 4X QDR PCI-e G2 Dual-port HCA
    • QLogic IB 4X QDR 36-port Switch
    • QLogic IB 4X QDR 324-Port Switch
    • QLogic IB 4X QDR 648-Port Switch

    The QLogic IB 4X QDR HCA is part of the QLogic family of InfiniBand Host Channel Adapters based on the TrueScale ASIC architecture. It has a unique hardware architecture that delivers unprecedented levels of performance, reliability, and scalability, making it an ideal solution for highly scaled High Performance Computing (HPC) and high throughput, low-latency enterprise applications.

    InfiniBand host stack software (driver) is required to run on servers connected to the InfiniBand fabric. For HCAs based on QLogic technology, HP supports QLogic OFED driver stacks on Linux 64-bit operating systems. 

    QLogic IB 4X QDR 36 Port Switch, QLogic IB 4X QDR 324 Port Switch and QLogic IB 4X QDR 648 Port Switch are based on QLogic TrueScale™ technology. The QLogic 36-port switch supports QLogic BLc 4X QDR IB Management Module (505959-B21). QLogic 324- and 648-port switch chassis are configurable: with the QLogic 18-port high performance leaf modules, they can be configured to support up to 324 and 648 ports, respectively; with the QLogic 24-port high density leaf modules, they can be configured to support up to 432 and 864 ports, respectively, with 2:1 oversubscribed bandwidth on the backplane.
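
    To illustrate the arithmetic behind those figures (an interpretation offered here, not additional HP data): the leaf modules are presumably built around 36-port TrueScale switch ASICs, so an 18-port leaf uses 18 ports externally and 18 internally for 1:1 bandwidth, while a 24-port leaf exposes 24 ports externally and keeps only 12 internal links, which is where the 2:1 oversubscription comes from. With 18 leaf slots that yields 18 x 18 = 324 or 18 x 24 = 432 ports, and with 36 leaf slots 36 x 18 = 648 or 36 x 24 = 864 ports.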

    An InfiniBand fabric is constructed with one or more InfiniBand switches connected via inter-switch links. The most commonly deployed fabric topology is a fat tree or its variations. A subnet manager is required to manage an InfiniBand fabric. OpenSM is a host-based subnet manager that runs on a server connected to the InfiniBand fabric. QLogic OFED software stack includes OpenSM for Linux. For comprehensive management and monitoring capability, QLogic InfiniBand Fabric Suite (IFS) is recommended for managing the InfiniBand fabric based on QLogic InfiniBand products. 

    HP supports InfiniBand copper and fiber optic cables with QSFP to QSFP connectors.

    • QSFP to QSFP FDR copper cables range from 0.5M to 3M for HCA to switch, or inter-switch links at FDR speed.
    • QSFP to QSFP FDR fiber optic cables range from 3M to 30M for HCA to switch, or inter-switch links at FDR speed.
    • QSFP to QSFP QDR copper cables range from 0.5M to 7M for HCA to switch, or inter-switch links at either DDR or QDR speed (please note that QLogic QDR switches only support up to 5 meters at QDR speed), and up to 10M at DDR speed.
    • QSFP to QSFP QDR fiber optic cables range from 3M to 30M for HCA to switch, or inter-switch links at either DDR or QDR speed.

    What's New
    • Mellanox 36 Port QDR/FDR10 InfiniBand Switches (four models)

    At A Glance
    • 19U, 19" rack mountable chassis with up to 9 fabric boards, up to 18 18-port line boards, up to 2 redundant management modules, up to 6 redundant power supplies, 3 fan units
    • Up to 324 4X QDR QSFP ports with 324P fabric boards, or up to 648 4X QDR QSFP ports with 648P fabric boards with 108 CPX cables, support 4X QDR InfiniBand copper and fiber optic cables
    • Dual Function Ethernet/InfiniBand HCA cards based on Mellanox CX-3 technologies 
      • HP IB FDR/EN 10/40Gb 2P 544QSFP Adaptor
        • Dual QSFP ports Dual function PCI-e gen 3 card based on ConnectX-3 technology
        • Support PCI Express 3.0 x8
        • Single or dual port FDR InfiniBand
        • Single or dual port 40 or 10 Gbps Ethernet
        • Single port FDR IB and single port Ethernet
        • Support the following ProLiant servers: DL160 Gen8, DL320e Gen8, DL360e Gen8, DL360 G7, DL360p Gen8, DL380e Gen8, DL380 G7, DL380p Gen8, DL385 Gen8, DL560 Gen8, ML350p Gen8, SL140s Gen8, SL230 Gen8, SL250 Gen8, SL270 Gen8
      • HP IB FDR/EN 10/40Gb 2P 544FLR-QSFP Adaptor
        • Dual QSFP ports Dual function HP Flexible LOM Adaptor based on ConnectX-3 technology
        • Support PCI Express 3.0 x8
        • Single or dual port FDR InfiniBand
        • Single or dual port 40 or 10 Gbps Ethernet
        • Single port FDR IB and single port Ethernet
        • Support the following ProLiant servers: DL160 Gen8, DL360p Gen8, DL380p Gen8, DL385 Gen8, DL560 Gen8, SL230 Gen8, SL250 Gen8, SL270 Gen8
      • HP IB QDR/EN 10Gb 2P 544FLR-QSFP Adaptor
        • Dual QSFP ports Dual function HP Flexible LOM Adaptor based on ConnectX-3 technology
        • Support PCI Express 3.0 x8
        • Single or dual port QDR InfiniBand
        • Single or dual port 10 Gbps Ethernet
        • Single port QDR IB and single port Ethernet
        • Support the following ProLiant servers: DL160 Gen8, DL360p Gen8, DL380p Gen8, DL385 Gen8, DL560 Gen8, SL230 Gen8, SL250 Gen8, SL270 Gen8
    • InfiniBand HCA cards based on Mellanox technologies
      • HP IB 4X QDR CX-2 PCI-e G2 Dual-port HCA
        • Dual QSFP ports 4X QDR InfiniBand card based on ConnectX-2 technology
        • Support PCI Express 2.0 x8
        • Support the following ProLiant servers:
          • DL160 G6, DL160 Gen8, DL160se G6 (DISC), DL165 G7, DL170e G6, DL170h G6 (DISC), DL180 G6, DL180se G6, DL360 G7, DL360e Gen8, DL360p Gen8, DL380e Gen8, DL380 G7, DL380 G7SE, DL380 G7 X5698, DL380p Gen8, DL385 G7, DL385 Gen8, DL580 G7, DL585 G7, DL980 G7, ML350p Gen8, ML/DL370 G6, SE1170sG6, SE2170sG6, SL160s G6, SL160z G6, SL165s G7, SL165z G7, SL170s G6, SL170z G6, SL230 Gen8, SL250 Gen8, SL2x170z G6, SL390s G7, SWD X9300 G2, SWD X9300 G3
        • Support Mellanox OFED Linux driver stacks, and Mellanox WinOF on Microsoft Windows HPC server 2008
    • Mellanox Software
      • Mellanox Unified Fabric Manager (UFM)
        • Can run on one server, or on two servers for high availability
        • Physical Fabric topology view
          • Automatic fabric discovery
          • Drill down to each device and port
          • Racking and grouping
        • Central dashboard
          • Health and performance snapshot
          • Congestion analysis map
          • Per logical entity/service bandwidth utilization graph
        • Advanced Monitoring Engine
          • Monitor each device / data counter combination
          • Both health and performance data
          • Aggregate and correlate to physical racks and to logical entities
        • Event Management
          • Threshold based alerts
          • Fully configurable - threshold, criticality, action
          • Correlated to physical and to business entities
        • Fabric Health Reports
          • Fabric Health Tab for quick deployment and maintenance analysis
          • Increased fabric robustness
          • Formatted reports for process/improvement management
        • Logical / Service Oriented fabric management
          • Group devices into service oriented entities such as applications or departments
          • Deploy fabric policy such as partitioning and QoS based on application/service needs
          • Aggregate and correlate health and performance data to the service oriented layer
        • Perform tasks on single or multiple devices
          • Establish an SSH session to the device
          • Resetting or shutting down the port or device
          • Manage alarms
          • Perform firmware and software upgrades
      • Mellanox Unified Fabric Manager Advanced (UFM Advanced)
        • Includes all features of Mellanox Unified Fabric Manager
        • Monitoring history
          • Monitor history sessions from the GUI
          • Periodic fabric snapshots
        • Monitoring templates for saving user/task monitoring scenarios
        • Advanced event management
          • Sends SNMP traps to central monitoring systems
          • Activates user-defined scripts based on system events
        • Advanced multicast optimizations through multicast tree management
        • User management with control and authorization groups
        • Multicast tree routing optimizations
        • User authorization management
      • Mellanox Acceleration Software for Mellanox network adaptors (VMA)
        • Requires ConnectX-3 based 544 Flexible Network Adapters
        • 56G InfiniBand and 10/40Gb Ethernet support
        • PCI Express 3.0 support
        • Comprehensive support for UDP/TCP socket API
        • TCP and UDP unicast and multicast offload
    • InfiniBand HCA cards based on QLogic technology
      • QLogic IB 4X QDR PCI-e G2 Dual-port HCA
        • Dual QSFP ports 4X QDR InfiniBand card based on QLogic TrueScale InfiniBand
        • Support PCI Express 2.0 x8
        • Support the following ProLiant servers:
          • DL160 G6, DL165 G7, DL170e G6, DL170h G6 (DISC), DL180 G6, DL180se G6, DL360 G7, DL380 G7, DL380 G7SE, DL380 G7 X5698, DL385 G7, DL580 G7, DL585 G7, ML/DL370 G6, SE1170sG6, SE2170sG6, SL160s G6, SL160z G6, SL165s G7, SL165z G7, SL170s G6, SL170z G6, SL2x170z G6, SL390s G7
        • Support QLogic OFED Linux driver stack
    • InfiniBand switches based on Mellanox SwitchX technology
      • Mellanox IB FDR 36-port unmanaged switch
        • 36 FDR QSFP ports, support FDR InfiniBand copper and fiber optic cables, with the front-to-rear cooling fan that has air flow from the front to the rear (ports side).
        • Dual power supplies for redundancy.
      • Mellanox IB FDR 36-port unmanaged switch with reversed airflow fan unit
        • 36 FDR QSFP ports, support FDR InfiniBand copper and fiber optic cables, with the rear-to-front cooling fan that has air flow from the rear (ports side) to the front.
        • Dual power supplies for redundancy.
      • Mellanox IB FDR 36-port managed switch
        • 36 FDR QSFP ports, support FDR InfiniBand copper and fiber optic cables
        • Integrated management module for Fabric Management
        • Dual power supplies for redundancy.
      • Mellanox IB FDR 36-port managed switch with reversed airflow fan unit
        • 36 FDR QSFP ports, support FDR InfiniBand copper and fiber optic cables, with the rear-to-front cooling fan that has air flow from the rear (ports side) to the front.
        • Integrated management module for Fabric Management
        • Dual power supplies for redundancy.
    • InfiniBand switches based on Mellanox SwitchX-2 technology
      • Mellanox IB QDR/FDR10 36-port unmanaged switch
        • 36 QSFP ports, support QDR/FDR10 InfiniBand copper and fiber optic cables, with the front-to-rear cooling fan that has air flow from the front to the rear (ports side). 
        • Dual power supplies for redundancy.
      • Mellanox IB QDR/FDR10 36-port unmanaged switch with reversed airflow fan unit
        • 36 QSFP ports, support QDR/FDR10 InfiniBand copper and fiber optic cables, with the rear-to-front cooling fan that has air flow from the rear (ports side) to the front. 
        • Dual power supplies for redundancy.
      • Mellanox IB QDR/FDR10 36-port managed switch
        • 36 QSFP ports, support QDR/FDR10 InfiniBand copper and fiber optic cables, with the front-to-rear cooling fan that has air flow from the front to the rear (ports side).
        • Integrated management module for Fabric Management
        • Dual power supplies for redundancy.
      • Mellanox IB QDR/FDR10 36-port managed switch with reversed airflow fan unit
        • 36 QSFP ports, support QDR/FDR10 InfiniBand copper and fiber optic cables, with the rear-to-front cooling fan that has air flow from the rear (ports side) to the front. 
        • Integrated management module for Fabric Management
        • Dual power supplies for redundancy.
    • InfiniBand switches based on Voltaire technology
      • Voltaire IB 4X QDR 36-port internally managed switch
        • 36 4X QDR QSFP ports, support 4X QDR InfiniBand copper and fiber optic cables
        • Dual power supplies for redundancy.
      • Voltaire IB 4X QDR 36-port internally managed switch with reversed airflow fan unit
        • 36 4X QDR QSFP ports, support 4X QDR InfiniBand copper and fiber optic cables, with the rear-to-front cooling fan that has air flow from the rear (ports side) to the front.
        • Dual power supplies for redundancy.
      • Voltaire IB 4X QDR 162-port (144 ports fully non-blocking) director switch
        • 11U, 19" rack mountable chassis with up to 4 fabric boards, up to 9 18-port line boards, up to 2 redundant management modules, up to 4 redundant power supplies, and 2 fan units
        • Up to 162 4X QDR QSFP ports (with up to 144 ports fully non-blocking when up to 16 ports are used per line board), support 4X QDR InfiniBand copper and fiber optic cables. 
      • Voltaire IB 4X QDR 324-port director switch
    • InfiniBand / Ethernet Gateway switches based on Voltaire technology
      • Voltaire 34P QDR IB 2P 10G internally managed switch
        • 34 4X QDR QSFP ports, support 4X QDR InfiniBand copper and fiber optic cables
        • 2 10 GbE SFP+ ports, support SFP+ copper cables and SFP+ optical transceivers.
      • Voltaire 34P QDR IB 2P 10G internally managed switch with reversed airflow fan unit
        • 34 4X QDR QSFP ports, support 4X QDR InfiniBand copper and fiber optic cables,
        • 2 10 GbE SFP+ ports, support SFP+ copper cables and SFP+ optical transceivers.
        • With the rear-to-front cooling fan that has air flow from the rear (ports side) to the front.
      • Voltaire 34P QDR IB 2P 10G internally managed switch HPC version
        • 34 4X QDR QSFP ports, support 4X QDR InfiniBand copper and fiber optic cables
        • 2 10 GbE SFP+ ports, support SFP+ copper cables and SFP+ optical transceivers.
        • Lower memory model for HPC clusters
      • Voltaire 34P QDR IB 2P 10G internally managed switch HPC Version with reversed airflow fan unit
        • 34 4X QDR QSFP ports, support 4X QDR InfiniBand copper and fiber optic cables,
        • 2 10 GbE SFP+ ports, support SFP+ copper cables and SFP+ optical transceivers.
        • With the rear-to-front cooling fan that has air flow from the rear (ports side) to the front.
        • Lower memory model for HPC clusters
    • InfiniBand switch based on QLogic technology
      • QLogic IB 4X QDR 36 Port Switch
        • 36 4X QDR QSFP ports, support 4X QDR InfiniBand copper and fiber optic cables.
          NOTE: This switch supports QLogic BLc 4X QDR IB Management Module option.
      • QLogic IB 4X QDR 324 Port Switch
        • QLogic 12800-180 switch chassis with up to 2 redundant management modules, 6 power supplies
        • Up to 324 4X QDR QSFP ports with 18 18-port leaf modules in a 1:1 non-oversubscribed bandwidth configuration, or up to 432 4X QDR QSFP ports with 18 24-port leaf modules in a 2:1 oversubscribed bandwidth configuration.
      • QLogic IB 4X QDR 648 Port Switch
        • QLogic 12800-360 switch chassis with up to 2 redundant management modules, 12 power supplies.
        • Up to 648 4X QDR QSFP ports with 36 18-port leaf modules in a 1:1 non-oversubscribed bandwidth configuration, or up to 864 4X QDR QSFP ports with 36 24-port leaf modules in a 2:1 oversubscribed bandwidth configuration.
           
    • InfiniBand cables
      • Copper cables from 0.5 meter to 3 meters at FDR speed
      • Fiber optic cables from 3 meters up to 30 meters at FDR speed
      • Copper cables from 0.5 meter to 7 meters at QDR speed
      • Fiber optic cables from 3 meters up to 30 meters at QDR speed
         
    • Ethernet cables
      • The Dual Function cards support most HP Ethernet cables
      • 10 Gb Ethernet requires the use of a QSFP to SFP+ adaptor 655874-B21
      • Copper cables from 0.5 meter to 7 meters at 10 GbE speed
      • Fiber optic transceivers SR and LR at 10 GbE speed
