Ports & interfaces

| Specification | Value |
| --- | --- |
| Ethernet LAN (RJ-45) ports | 2 |
| Power consumption (typical) | 10 W |
| Storage temperature (min to max) | -40 to 70 °C |
| Operating temperature (min to max) | 0 to 55 °C |
| Networking standards | IEEE 802.1p, IEEE 802.1Q, IEEE 802.3ad, IEEE 802.3x |
| Flow control support | Yes |
| LAN controller | Intel 82599 |
| Ethernet LAN data rates | 10 / 100 / 1000 / 10000 Mbit/s |
| Colour of product | Green |
| Intel Virtual Machine Device Queues (VMDq) | Yes |
To connect your PRIMERGY Blade Server to outside networks and storage, Fujitsu offers a variety of
options that support familiar standards such as Ethernet, Fibre Channel, SAS and InfiniBand.
They connect directly to the high-performance midplane and guarantee lossless, highly efficient
data transfer to and from the Connection Blades. To meet your needs optimally, Fujitsu
provides a wide range of Mezzanine Cards, from 1 and 10 Gbit/s Ethernet and 8 and 16
Gbit/s Fibre Channel to 40 and 56 Gbit/s InfiniBand and 6 Gbit/s SAS with RAID functionality.
Converged Network Adapters (CNAs), which provide multiple network connections as well as Fibre Channel
over Ethernet (FCoE) and iSCSI, are also part of the portfolio.
Dual port 10 Gbit/s Ethernet Mezzanine Card

The most flexible and scalable Ethernet Mezzanine Card for today's demanding datacenter environment, accelerating the LAN traffic of PRIMERGY BX900 server blades and the mission-critical applications running on them. Especially in virtualized environments with a growing number of VMs per physical server, this controller combines a rich feature set with reliable performance. With up to 4 physical 10 Gbit/s ports (via two Mezzanine Cards) in one dual-socket server blade, the overall performance gain is substantial.
Operating in emulation mode in conjunction with the Virtual Switch of a Virtual Machine Manager, the integrated VMDq technology offloads data sorting and copying from the Virtual Switch to the LAN controller. This is the optimal solution for a large number of VMs running standard applications with limited bandwidth and latency requirements. Larger, mission-critical applications, on the other hand, require dedicated I/O for maximum network performance; they operate best using the VMDc feature, which allows data to bypass the Virtual Switch for near-native performance.
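The division of labour that VMDq establishes can be illustrated with a small conceptual sketch. This is a simplified toy model, not the Intel 82599 programming interface: the point is only that the NIC, rather than the hypervisor's Virtual Switch, classifies incoming frames by destination MAC into per-VM receive queues, so the Virtual Switch merely hands each pre-sorted queue to its VM.

```python
# Conceptual sketch of VMDq-style receive sorting (assumption: simplified
# model for illustration only; real hardware programs queues via registers).
from collections import defaultdict


class VmdqNic:
    """Toy model of a VMDq-capable NIC with per-VM receive queues."""

    def __init__(self):
        self.queues = defaultdict(list)  # VM name -> receive queue
        self.mac_table = {}              # destination MAC -> VM name

    def assign_queue(self, mac, vm_name):
        # The hypervisor programs one receive queue per virtual NIC.
        self.mac_table[mac] = vm_name

    def receive(self, frame):
        # Sorting happens in the NIC: classify by destination MAC and
        # append to that VM's queue, sparing the Virtual Switch the
        # per-packet sort-and-copy work.
        vm = self.mac_table.get(frame["dst_mac"], "default")
        self.queues[vm].append(frame)


nic = VmdqNic()
nic.assign_queue("02:00:00:00:00:01", "vm1")
nic.assign_queue("02:00:00:00:00:02", "vm2")
for dst in ("02:00:00:00:00:01", "02:00:00:00:00:02", "02:00:00:00:00:01"):
    nic.receive({"dst_mac": dst, "payload": b"..."})

print(len(nic.queues["vm1"]))  # 2 frames landed in vm1's queue
```

With VMDc, by contrast, a VM is given direct access to a dedicated hardware queue, so even this hand-off through the Virtual Switch disappears from the data path.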