Corporate Style and Branding Guide
July 2011
CORPORATE Style Guide
Corporate Logo Usage
CORPORATE LOGO/SIGNATURE
SIGNATURE
The Mellanox Technologies signature is used to
represent the organization in all its activities. This
signature is the foundation to our identity and exists on
a wide variety of media and marketing materials.
The Signature consists of two elements:
1. the symbol
2. the logotype
Because the signature is a registered trademark, the
relationship between these elements should never be
altered. This ensures legal protection, builds recognition
and reinforces our positioning.
CORPORATE 2-COLOR LOGO/SIGNATURE
PMS 274 Blue
60% Black
GRAYSCALE LOGO
100% Black
60% Black
Mellanox Technologies | July 2011
LOGO VARIATIONS
These acceptable variations of the corporate logo
should only be used when space or application dictates
their use.
The horizontal version is used as an EXCEPTION.
CMYK CONVERSION
COLOR   CYAN   MAGENTA   YELLOW   BLACK
Blue    100    100       0        28
Gray    0      0         0        60

RGB CONVERSION
COLOR   RED   GREEN   BLUE
Blue    34    31      114
Gray    130   130     130

WEB CONVERSION
COLOR   HEX
Blue    221F72
Gray    828282
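The RGB and web values are two encodings of the same colors; each web value is simply the three RGB components written as two-digit hexadecimal numbers. As an illustrative check only (the helper name is hypothetical, not part of the brand guide), a short script can confirm the tables agree:

```python
def rgb_to_hex(r, g, b):
    """Convert 8-bit RGB components to the web hex form used above."""
    return f"{r:02X}{g:02X}{b:02X}"

# Corporate blue and gray from the RGB conversion table
print(rgb_to_hex(34, 31, 114))    # -> 221F72
print(rgb_to_hex(130, 130, 130))  # -> 828282
```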
Logo Clear Space and Size
LOGO CLEAR SPACE
The signature should always be surrounded by an adequate amount of
clear space in order to set it off from other elements.
The gray area (see illustration at left) indicates the minimum amount of
clear space that must surround the signature in all applications. No other
elements should infringe on the clear space.
Exceptions require approval prior to use.
Minimum clear space is specified in units of “X.” X equals the height of
the “x” in Mellanox.
LOGO MINIMUM SIZE
Stacked Logo
No smaller than 1”
Horizontal Logo
No smaller than 1.5”
Logo Usage
LOGO USAGE GUIDELINES
This is the official corporate logo.
The logo should be used as a single unit. The logotype should
never be used alone; it should always appear with the logo mark
(bridge).
The symbol or logotype should NEVER be used separately.
Do not rearrange or stack the logo mark and
logotype.
Do not change the colors of the logo or logotype.
Do not tilt or skew logo.
Do not enlarge or shrink the logo or the logotype
separately.
Do not place the logo on backgrounds
with conflicting colors.
White is the recommended background.
Secondary and Product Logos
Mellanox product and secondary logos must follow the
same requirements as the corporate logo.
PRODUCT/TECHNOLOGY LOGOS
• They should always be surrounded by an adequate amount of
clear space in order to set them off from other elements.
• The logo should be used as a single unit.
• Do not rearrange or stack the logo mark and logotype.
• Do not change the colors of the logo or logotype.
• Do not tilt or skew the logo.
• Do not enlarge or shrink the logo or the logotype separately.
• Do not place the logo on backgrounds with conflicting colors.
• White is the recommended background.
OTHER MELLANOX LOGOS
Typography
FONTS
Univers LT (TrueType version) is the primary font used on all corporate materials.
Arial or Helvetica may be substituted for Univers in online applications and PowerPoint.
Univers LT 39 Thin Ultra Condensed
Univers LT 45 Light
Univers LT 45 Light Oblique
Univers LT 47 Condensed Light
Univers LT 47 Condensed Light Oblique
Univers LT 53 Extended
Univers LT 53 Extended Oblique
Univers LT 55 Roman
Univers LT 55 Oblique
Univers LT 57 Condensed
Univers LT 57 Condensed Oblique
Univers LT 59 Ultra Condensed
Univers LT 63 Bold Extended
Univers LT 63 Bold Extended Oblique
Univers LT 65 Bold
Univers LT 65 Bold Oblique
Univers LT 67 Bold Condensed
Univers LT 67 Bold Condensed Oblique
Univers LT 73 Black Extended
Univers LT 73 Black Extended Oblique
Univers LT 75 Black
Univers LT 75 Black Oblique
Univers LT 85 Extra Black
Univers LT 85 Extra Black Oblique
Univers LT 93 Extra Black Extended
Univers LT 95 Extra Black Extended Oblique
Bell Gothic Std Light
Bell Gothic Std Medium
The Bell Gothic Std typeface is primarily used for specialty
headlines and subheads larger than 11 pt.
Bell Gothic should not be used for body copy or type
smaller than 12 pt unless set in ALL CAPS.
Use -30 horizontal tracking/kerning.
LEGACY FONTS
Bank Gothic BT Light
Bank Gothic BT Medium
The Bank Gothic BT typeface can be used for specialty
headlines and subheads larger than 14 pt. This is a legacy
font used prior to 2010; it has been replaced by Bell
Gothic Std but is still acceptable to use.
Bank Gothic should never be used for body copy because
of its ALL CAPS nature.
Color Palette
SECONDARY CORPORATE COLORS

CMYK CONVERSION
COLOR          CYAN   MAGENTA   YELLOW   BLACK
PMS 138 Gold   0      30        100      0
30% Black      0      0         0        30

RGB CONVERSION
COLOR          RED   GREEN   BLUE
PMS 138 Gold   253   185     19
30% Black      190   190     190

OTHER AVAILABLE COLORS

CMYK CONVERSION
COLOR                CYAN   MAGENTA   YELLOW   BLACK
PMS 282 Blue         100    68        0        55
PMS 282 Blue - 60%   60     40        0        32
PMS 371 Green        43     0         100      56
PMS 159 Orange       0      66        100      7
PMS 284 Light Blue   55     19        0        0
PMS 7545 Blue Gray   23     2         0        63

RGB CONVERSION
COLOR                RED   GREEN   BLUE
PMS 282 Blue         0     45      98
PMS 282 Blue - 60%   80    102     147
PMS 371 Green        79    111     25
PMS 159 Orange       227   111     30
PMS 284 Light Blue   108   173     223
PMS 7545 Blue Gray   92    111     124
SUGGESTED COLOR COMBINATIONS - 3 color
[Swatch illustrations showing suggested three-color combinations drawn from PMS 274, PMS 282, PMS 138, PMS 159, PMS 284, PMS 371, 60% tints and 60% black]

SUGGESTED COLOR COMBINATIONS - 4 color
[Swatch illustrations showing suggested four-color combinations drawn from PMS 274, PMS 282, PMS 371, PMS 138, PMS 159, PMS 284, PMS 7545, 60% tints and 60% black]
Trademarks, Registered Trademarks and Copyrights
A trademark is a word, phrase, logo, symbol
or design, or a combination of these elements,
used to identify or distinguish the goods and
services of one company or individual from
others.
Using the trademark properly is necessary
in order to demonstrate that a mark is
used in commerce, which is a fundamental
requirement for trademark ownership in the
United States.
Always use the mark as an adjective
It is very important to always use the marks as adjectives and
always with the generic terms they modify. A trademark
should never be used as a noun or a verb.
INCORRECT:
ConnectX® is the best adapter on the market.
CORRECT USE:
ConnectX® adapters are the best on the market.
Avoid plural or possessive forms of the mark
Never use a mark in the plural form or the possessive form.

Never hyphenate trademarks
INCORRECT:
ConnectX®-compatible adapters are the best on the market.
CORRECT USE:
ConnectX® compatible adapters are the best on the market.

TRADEMARKS
FabricIT
MLNX-OS
SwitchX

REGISTERED TRADEMARKS
Mellanox
BridgeX
ConnectX
CORE-Direct
InfiniBridge
InfiniHost
InfiniScale
PhyX
Virtual Protocol Interconnect
Voltaire
Appropriate placement of trademarks within text
Generally, the trademark symbol must be used with the first or most
prominent appearance of a mark in a publication or document, but need
not be used with each subsequent appearance. As a safeguard,
use additional markings rather than fewer within a document.
Trademark Attribution Statement
For publications containing third party trademarks, it is typical
practice to provide a trademark attribution statement in small print
at the end of the specific article. Below is the trademark statement
that should be on all Mellanox marketing collateral.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost,
InfiniScale, PhyX, Virtual Protocol Interconnect and Voltaire are
registered trademarks of Mellanox Technologies, Ltd. FabricIT,
MLNX-OS and SwitchX are trademarks of Mellanox Technologies,
Ltd. All other trademarks are property of their respective owners.
Copyright Notice
The notice should always contain:
1. The symbol © (the letter C in a circle), or the word “Copyright”;
2. The year of first publication of the work; and
3. The name of the owner of copyright in the work.
EXAMPLE: © 2011 Mellanox Technologies.
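Since the notice is always built from the same three elements, it can be assembled mechanically. The helper below is a hypothetical illustration of that composition, not an official Mellanox tool:

```python
def copyright_notice(year, owner):
    """Compose a copyright notice from its three required elements:
    the © symbol, the year of first publication, and the owner's name."""
    return f"\u00a9 {year} {owner}."

print(copyright_notice(2011, "Mellanox Technologies"))
# -> © 2011 Mellanox Technologies.
```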
Documents and Marketing Collateral
PRINT COLLATERAL
Mellanox has many types of marketing collateral, signage, multimedia
and technical documents. All collateral should follow corporate branding
guidelines, colors, fonts and established templates when available.

The following documents have pre-established templates:
> Product Briefs
> Solution Briefs
> Case Studies
> White Papers
> Datasheets
> Technology Sheets
> Reference Guides
> Product Brochures

Detailed specifications for the Product Briefs and Case Studies can be
found starting on page 13. All documents follow similar layouts, fonts
and specifications. Samples without detailed specifications only indicate
attributes that are different or unique for that document.
Some of the standard document elements are in an InDesign library.

All standard documents should be created and maintained in Adobe InDesign
unless otherwise indicated.

Document Numbers
The following documents should always contain a document number on the
last page in the lower right corner. They should follow this format:
Product Briefs = XXXXPB Rev 1.0
Solution Briefs = XXXXSB Rev 1.0
White Papers = XXXXWP Rev 1.0
Case Studies = XXXXCS Rev 1.0
Data Sheets = XXXXDS Rev 1.0
Illustrations and Multimedia
Mellanox has developed illustrations and icons to be used in PPT and other
collateral. In order to maintain consistency in imagery it is important to use the
established illustrations and icons whenever possible.
Illustration and icon examples can be found starting on page 22.
Mellanox also utilizes Flash, animation and video for marketing purposes.
It is recommended that the corporate branding, colors and fonts be used
in these as much as possible.
Color Designations
Some of the Mellanox documentation and marketing collateral follows a color convention for specific product families or
technologies. Below are the color designations and the product areas associated with each color.
Mellanox Product Briefs and Product Brochures follow this color convention.
PMS 282 - Switches, Gateways, Switch Silicon, Gateway Silicon, Phy Silicon
PMS 371 - Ethernet and InfiniBand Adapter Cards, Ethernet Adapter Silicon
[Sample Product Briefs: a SWITCH SILICON brief, “SwitchX™ Virtual Protocol Interconnect® Converged Switch-Silicon IC Device — 36-port Switch-Router Device Provides Leading FDR 56Gb/s InfiniBand, 40 Gigabit Ethernet and Fibre Channel Throughput,” using PMS 282; and an ETHERNET ADAPTER CARDS brief, “ConnectX®-2 EN 40G — 40 Gigabit Ethernet Adapter with PCI Express 2.0,” using PMS 371]
PMS 159 - Software
PMS 7545 - Cables, Modules
[Sample Product Briefs: a SOFTWARE brief, “MLNX-OS™ — Integrated Switch Management Solution,” using PMS 159; and a CABLES brief, “Active Optical Modules,” using PMS 7545]
PRODUCT BRIEF SPECIFICATIONS - First Page
All rounded corners should be .125” radius. All image boxes should be .5 pt, 100% black.
Each callout gives: Style Sheet Name in Template; Font; Font Size / Leading; Alignment. Some items should be on the master page.

Style sheets:
01 Key Features/Apps - Univers LT 47 CondensedLt, 9.5 pt/11.5, Align Left
01 Folio2 - Bell Gothic Std Bold, ALL CAPS, 11 pt, Centered, White
01 Folio - Bell Gothic Std Bold, ALL CAPS, 14 pt, Centered, White
01 Main Title - Univers LT 59 UltraCond, 32 pt/28, Align Left, Black; dotted rule below @ 3.25”
01 Main Title 2 - Univers LT 45 Light, 14 pt/15, Align Left, Black
02 Intro Body - Univers LT 47 CondensedLt, 14 pt/18, Align Left, 85% Black
02 Body Text - Univers LT 47 CondensedLt, 10 pt/12.75, Align Left, Black
02 Body Heading1 - Univers LT 59 CondensedLt Bold, 10.5 pt/12, .02” space before, Align Left, Color
05 Copyright - Univers LT 47, Align Left, Black
01 Key Feature Head p1 - Bell Gothic Std Bold, ALL CAPS, 12 pt, Centered, White
01 Key Features/Apps, 01 Key Features/Apps L2, 01 Key Feature Head 2 - Univers LT 47 CondensedLt, 9.5 pt/11.5, Align Left, Black

Layout annotations:
Top margins: .5” and .75”
Logo placement: X = 6.75”, Y = .5”
Main title area: 2” from top
Dotted line 3.25” from top, width = 2.25”; the product image area can overlap the dotted line
Dotted line style: 1 pt, black, dotted, .125” corners
Main content area is 2 columns starting at .5” and extending to 6.5”; gutter is .25”
Highlights box starts at 6.75” and bleeds to the right edge; .1875” inset; 100% PMS 282

[Annotated sample: “IS5030” SWITCH SYSTEM Product Brief, first page — “36-port Non-blocking Managed 40Gb/s InfiniBand Switch System”]
PRODUCT BRIEF SPECIFICATIONS - Last Page
All rounded corners should be .125” radius. All image boxes should be .5 pt, 100% black
01 Key Features/Apps
Univers LT 47 CondensedLt
9.5 pt /11.5
Align Left
Style Sheet Name in Template
Font
Font Size / Leading
Alignment
Product Name
Univers LT 55 Bold
8.25 pt/9.5
.875”
.5”
Virtual Protocol Interconnect (VPI)
SwitchX VPI devices enable industry standard
networking, clustering, storage, and management protocols to seamlessly operate over a
single “one-wire” converged network. With
auto-sense capability, each SwitchX port can
identify and operate InfiniBand, Ethernet,
Data Center Bridging (DCB) or Fibre Channel
protocol. Combined with Mellanox’s ConnectX
family of VPI adapters, on-the-fly fabric repurposing can be enabled for Cloud, Web2.0, EDC
and Embedded environments providing “future
proofing” of fabrics indpendent of protocol.
02 Body Heading1
Univers LT 59 CondensedLt
Bold
10.5 pt / 12
.02” Space Before
Align Left, Color
Configurations
SwitchX allows OEMs to deliver:
– 36 Port 1U FDR IB switch
– 36 Port 40GigE or 64 Port 10GigE L2, L2+
and L3 switch
– 48 Port 10GigE to 12 Port 40GigE Top-ofRack switch
– Blade switches for converged fabrics
– (16 - 40GigE to servers, 12 - 10GigE to
LAN, 8 - 8G FC to SAN and 2 - 40GigE
stacking ports)
– Modular switch chassis up to 648 56G
IB/40GigE ports
Switch Product Development Platforms
The SwitchX Evaluation Board (EVB) and
Software Development Kit (SDK) are available to accelerate OEMs’ time to market
and for running benchmark tests. These rack
mountable systems are available with a mix
of QSFP and SFP+ connector for verifying
InfiniBand, 10GigE, 40GigE and Fiber Channel
functionality. In addition, SMA connectors are
available for SerDes characterization.
page 2
HaRDWaRE
INFINIBAND
– IBTA Specification 1.2.1 compliant
– 10, 20, 40 or 56Gb/s per 4x port
– Integrated SMA/GSA
– Hardware-based congestion control
– 256 to 4Kbyte MTU
– 9 virtual lanes: 8 data + 1 management
ENHANCED INFINIBAND
– IB to IB Routing
– Hardware-based adaptive routing
– Up to 8 subnets/switch partitions
– Fine grained end-to-end QoS
– Port mirroring
– Supports Jumbo frameup to 10KB
EThERNET
– 1, 10, 20 and 40Gb/s
– DCB (PFC, ETS, DCBX)
– FCoE
Ordering Part Number
COmPaTIBILIITy
CPU
– PowerPC, IntelX86, AMDX86 and MIPS
PCI EXPRESS INTERFACE
– PCIe Base 2.0 compliant, 1.1 compatible
– 2.5GT/s or 5GT/s link rate x4
CONNECTIVITY
– Interoperates with InfiniBand, Ethernet and
Fiber channel adapters and switches
– Drives active/passive copper cables, fiber
optics, PCB or backplanes
MANAGEMENT AND TOOLS
– Supports Mellanox UFM and IBTA compliant
Subnet Managers
– Diagnostic and debug tools
I/O SPECIFICaTIONS
– 36 4x SDR/DDR/QDR/FDR InfiniBand ports,
36 40GigE ports, 64 10/20GigE ports, 24
2/4/8Gig FC ports or a combination of port
types
– PCI Express 2.0 x4 5GT/s (1.1 compatible)
– SPI Flash interface, I2C
– IEEE 1149.1 boundary-scan JTAG
– Link status LED indicators
– General purpose I/O
– 45 x 45mm FCBGA
InfiniBand 4X Port Speed
Typical Power
MT51236A0-FCCR-F
SwitchX, 36 Port FDR Switch IC
72W
MT51336A0-FCCR-F
SwitchX 36 Port VPI FDR/40GigE Switch IC
72W
MT51336A0-FCCR-Q
SwitchX 36 Port VPI QDR/10GigE Switch IC
55W
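A quick per-port arithmetic check on the ordering table above (my own calculation, not part of the brief — the typical-power figures and the 36-port count come from the table):

```python
# Per-port typical power for the SwitchX ordering part numbers above.
# Wattages and port counts are from the table; the division is
# illustrative arithmetic only.
opns = {
    "MT51236A0-FCCR-F": (72, 36),  # 36-port FDR switch IC, 72W typical
    "MT51336A0-FCCR-F": (72, 36),  # 36-port VPI FDR/40GigE, 72W typical
    "MT51336A0-FCCR-Q": (55, 36),  # 36-port VPI QDR/10GigE, 55W typical
}
for opn, (watts, ports) in opns.items():
    print(f"{opn}: {watts / ports:.2f} W per port")
```

The FDR parts work out to 2 W per port; the QDR part to roughly 1.5 W per port.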
Mellanox Advantage
Mellanox is the leading supplier of industry
standard InfiniBand and Ethernet HCA, NIC
and switch silicon. Our products have been
deployed in clusters scaling to thousands of
nodes and are being deployed end-to-end in
data centers and Top500 systems around the
world.
Dotted Line
1 pt.
Black, dotted
.125” corners
01 Key Feature Head p1
Bell Gothic Std - Bold
ALL CAPS,12 pt
Centered, White
01 Key Features/Apps
01 Key Features/Apps L2
01 Key Feature Head 2
01 Key Feature Head 2b
Univers LT 47 CondensedLt
Univers LT 47 CondensedLt Bold
9.5 pt /11.5
Align Left, Black
04 Table Heading
Univers LT 47 CondensedLt Bold
8.5 pt /9
Centered, Black
Table Rules
.5 pt
60%
No rule on left and
right outside columns
04 Table Text - FL
04 Table Text - Ctr
Univers LT 47 CondensedLt Bold
8.25 pt /9.25
80% Black
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com
Corporate Address
Univers LT 57 Condensed
9 pt/12
© Copyright 2011. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, PhyX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd.
FabricIT is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
XXXXPB Rev 1.0
.5”
1.5”
|
03 Page No.
Univers LT 47 CondensedLt
8 pt/9.5
01 Second Title
Univers LT 55
8.25 pt/9.5
SwitchX™ Virtual Protocol Interconnect® Converged Switch-Silicon IC Device
02 Body Text
Univers LT 47 CondensedLt
10 pt / 12.75
Align Left, Black
14
Item should be on Master page
Mellanox Technologies | July 2011
03 Doc No.
Univers LT 47 CondensedLt
5 pt /5
Align Left, 80% Black
03 Copyright
Univers LT 47 CondensedLt
6 pt /7
Align Left, Black
CORPORATE Style Guide
Documents and Marketing Collateral
CASE STUDIES
.5”
.75”
.5”
Width = 2”
.5”
CASE STUDY
2”
01 Main Title
Univers LT 59 UltraCondensed
Regular
INDUSTRY ICON
28 pt/30
02 Intro
Univers LT 47 CondensedLt
Regular
13 pt/17
PMS 280
1.25”
Down-to-Earth Values, ROI Up in the Clouds
Featuring Mellanox Interconnect Products for the Enterprise Data Center
Mellanox server and storage interconnect products deliver down
to earth values that relieve the most critical pain points in the data
center today.
To be precise:
1. If you are virtualizing your servers, Mellanox
simply delivers you more virtual machines
per server
2. If you are connecting your servers to
the SAN and LAN, Mellanox reduces
the number of I/O adapters, cables and
switches
3. If you are running data warehousing, OLTP,
or financial services applications, Mellanox
delivers more transactions per second
using fewer servers, helping you make
faster and better decisions
4. If you are a cloud appliance or services
provider, Mellanox reduces equipment
acquisition costs dramatically while
improving your ability to exceed your SLAs
with customers by a handsome margin
5. Mellanox delivers all of the above benefits
while reducing your power consumption
for IO by 30% to 50%
02 Body Copy
Univers LT 47 CondensedLt
9.5 pt/12
01 Main Title 2
Univers LT 45 Light
14 pt/16 pt
Tracking: -10 pt
These are a few examples of how Mellanox
is helping change the face of the data center
with high performance, scalable and efficient
server and storage connectivity products.
Below are five proof points relating how
Fortune 500 customers are realizing these
benefits to save money and build more efficient
and greener datacenters, thereby improving
their competitive edge through better service
delivery to their end users and customers.
Case Study 1: Managed Hosting Services
Provider
60% more VMs per server --> 60%+ ROI benefits
The customer is a large managed hosting
services provider servicing web based travel
related transactions for their customers. The
volume of transactions per day is greater than
that of Amazon.com, requiring the customer
to manage its data center efficiently and scale
in a cost effective way. One of its datacenter
components serviced customers through the
use of applications in 256 virtual machines,
which prior to using Mellanox-based
solutions, required the use of 4 racks of 4U
servers, 16 edge switches, and 192 I/O cards.
The capital cost to build out that infrastructure
was $744,000. With Mellanox-based
products, without requiring any changes to
their applications or affecting services to its
customers, the new component build-out
required only $347,000, saving $397,000 in
capital expenses. The cost savings came
from the use of 1 rack of 1U servers, 2 I/O
Directors and 32 I/O cards that could still
support the 256 virtual machines. Operating
expenses came down through reduced floor
space and power requirements, bringing total
savings to more than $500,000 (amortized
over 3 years).
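The savings quoted in this study can be verified with straightforward arithmetic; the sketch below only restates the dollar figures and equipment counts from the text:

```python
# Capital-expense arithmetic for the managed-hosting case study.
before_capex = 744_000  # 4 racks of 4U servers, 16 edge switches, 192 I/O cards
after_capex = 347_000   # 1 rack of 1U servers, 2 I/O Directors, 32 I/O cards
print(before_capex - after_capex)  # capital savings: 397000, as stated

# The same 256 virtual machines ran on far less I/O hardware.
cards_before, cards_after = 192, 32
print(f"{1 - cards_after / cards_before:.0%} fewer I/O cards")  # 83% fewer
```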
OVERVIEW
Mellanox is helping change
the face of the data center
with high performance,
scalable and efficient server
and storage connectivity
products.
03 Callout
Univers LT 45 Light
Italic
10 pt/16 pt
PMS 280
©2011 Mellanox Technologies. All rights reserved.
2.125”
6.0” - 2 column format, .1875” gutter
Case Study 2: Airline Company
Case Study 3: Financial Services Company
Fewer I/O adapters, cables and switches – 80% TCO
savings
82% more transactions/second at 70% lower costs and 3
times less power
This major airline company was using Gigabit Ethernet (GigE) and Fibre Channel (FC) I/O adapter cards and
switches in the datacenter, which comprised 12 HP Blade Enclosures with 96 HP Blade Servers. Each
chassis included 4 FC ports and 18 GigE ports. They were connected using 48 Brocade DCS-based ports
and 216 Cisco Catalyst 6509-based ports respectively. HP Virtual Connect modules and licenses were used to
deliver virtualized I/O services. When the FC and GigE ports were replaced by Mellanox-based unified I/O
ports, each HP Blade chassis could now do with just 4 Mellanox-based InfiniBand I/O ports without requiring
any change to user applications and experience. Now, only 4 Brocade DCS-based ports and 16 Cisco Catalyst
6509-based ports are needed. This resulted in capital expenditure savings of $1.2M and another $200,000 in
operational expense savings from floor space and power reductions (amortized over 3 years).
Rapid growth of data on capital markets and increasing use of automated transactions such as algorithmic
trading are increasing the pressure on financial institutions worldwide to upgrade their IT infrastructures to stay
competitive. An example of such an application used by the large financial institution in this case study is the
Reuters Market Data System (RMDS), which serves as a platform for market data distribution. In each of the
critical metrics that have a direct bearing on the performance of the financial institution, Mellanox products
deliver excellent value – 82% higher updates per second, 62% lower mean latency at 70% lower costs and 3
times lower power consumption.
Case Study 4: Less than half the number of servers deliver the same transaction rate – $2.6M TCO savings
02 Body Heading 2
Univers LT 47 CondensedLt,
Bold
10 pt/11.5
See Product Brief Specs
63% more transactions/second with half the servers –
$2.6M TCO savings
This very large package shipping company applied Mellanox InfiniBand-based server-to-server interconnect
products (adapters and switches) to processing Oracle RAC database-based invoice processing workloads. By
doing so, the company was able to achieve 63% more invoice processing transactions per second, allowing
it to service its customers faster and better. The TCO savings, comprising server, I/O, software license and
maintenance expenses over a 4-year amortization period, amounted to a whopping $2.6M.
Frame
1 pt
75% Black
Japanese Dots (InDesign)
Corners: .125” radius
[Design note garbled in source]
02 Body Heading 1
Univers LT 47 CondensedLt,
Bold
11.5 pt/13
01 Key Feature Head p1
Bell Gothic Std - Bold
ALL CAPS,12 pt
Centered, White
Case Study 5: Private Cloud Appliance Provider
Better VM replication and DB scaling with 75% lower
hardware acquisitions costs
This case study involves a private cloud appliance provider that delivers the same benefits of Software
as a Service (SaaS) while allowing cloud customers to maintain compliance and data security in their
own datacenters. This vertical solution includes integrated solutions for Infrastructure as a Service
(IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), with Mellanox interconnect delivering
an efficient, scalable and high-performing interconnect in the IaaS layer. In typical deployments, such as with
a large Japanese conglomerate, network hardware acquisition costs were reduced by up to 50% and
network management costs by up to 30%. At the same time, Mellanox products dramatically reduced
the performance impact of real-time virtual machine replication for failover purposes, and through use
of unified memory space across servers, database applications were scaled up efficiently through use of
fewer servers. Overall, hardware acquisition costs were reduced by 75%.
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
©Copyright 2011. Mellanox Technologies. All rights reserved.
.875”
Mellanox Technologies | July 2011
|
15
CORPORATE Style Guide
Documents and Marketing Collateral
Datasheets
See Product Briefs and
Case Study Specs
CABLES
DATASHEET
01 Main Title
Univers LT 59 UltraCondensed
32 pt/34
Tracking: -10
Aligned Left
04 Table Title
Univers LT 55
10 pt/12
Aligned Left
03 Figure Caption
Univers LT 47 CondensedLt, Italic
9 pt/10.75
80% Black
Aligned Left
40Gig Ethernet, Passive Copper Cables
Mellanox’s 40GigE passive copper cables provide robust connections to switches and network adapters,
complying with the 40GBASE-CR4 specification. The 40GigE passive copper cables are compliant with the
IEEE 802.3ba and SFF-8436 specifications and provide connectivity between devices using QSFP ports.
40GigE passive copper cables fill the need for short, cost-effective connectivity in the data center. Mellanox’s high-quality passive copper cable solutions provide a power-efficient replacement for active-powered
connectivity such as fiber optic cables.
02 Body Text
Univers LT 47 CondensedLt
10 pt /12.75
Tracking: -3 pt
Optimizing systems to operate with Mellanox’s 40GigE passive copper cables significantly reduces power
consumption and EMI emissions, eliminating the use of EDC hosts.
Table 1 - Cable Specifications and Ordering Information

Maximum Reach [m] | Gauge | Max Cable Diameter [mm] | Min Bend Radius [mm]1 | Assembly Min Bend Radius [mm]1 | Weight [grams/meter]2 | Part Number
1 | 30 | 6.2 | 30 | 61 | 55 | MC2210130-001
2 | 30 | 6.2 | 30 | 61 | 55 | MC2210130-002
3 | 30 | 6.2 | 30 | 61 | 55 | MC2210130-003
4 | 28 | 7.8 | 40 | 71 | 79 | MC2210130-004
5 | 28 | 7.8 | 40 | 71 | 79 | MC2210130-005
1. See figure 1 for bend radius description
2. Weight is per raw cable meter, excluding connectors
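Since footnote 2 gives weight per raw cable meter, an approximate raw-cable weight for each reach follows directly (a sketch using the table's figures; connector weight is excluded, as the footnote notes):

```python
# Approximate raw-cable weight per 40GigE passive copper assembly,
# using the grams-per-meter column of Table 1 (connectors excluded).
cables = {
    "MC2210130-001": (1, 55),  # (reach in m, g/m) for 30 AWG
    "MC2210130-002": (2, 55),
    "MC2210130-003": (3, 55),
    "MC2210130-004": (4, 79),  # 28 AWG is heavier per meter
    "MC2210130-005": (5, 79),
}
for pn, (length_m, g_per_m) in cables.items():
    print(f"{pn}: ~{length_m * g_per_m} g of raw cable")
```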
Table 2 - Absolute Maximum Ratings

Parameter | Range
Operating Case Temperature | 0°C to 70°C
Storage Ambient Temperature | -10°C to 75°C

HIGHLIGHTS
– Compliant with SFF-8436
– Up to 5m at 40GigE data rates
– Ultra low crosstalk for improved performance
– Low insertion loss
– BER better than 10^-15
– Serial numbers printed on each end
– Tested in an end-to-end 40GigE system
– RoHS 6 Compliant

See Product Briefs Specs

Table 3 - Operating Conditions

Parameter | Minimum | Typical | Maximum | Units
Data Rate per Lane | 1 | – | 10 | Gb/s
Input Voltage | -0.3 | – | 3.6 | V
Humidity (RH) | 5 | – | 85 | %
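To put the 10^-15 BER highlight in perspective, a back-of-the-envelope conversion (my arithmetic, not the datasheet's) gives the mean time between bit errors at full 40Gb/s line rate:

```python
# Mean time between bit errors at a given BER and line rate.
# The 1e-15 BER and 40Gb/s rate are from the datasheet highlights;
# the conversion itself is illustrative.
ber = 1e-15
line_rate_bps = 40e9
errors_per_second = ber * line_rate_bps          # 4e-5 errors/s
seconds_between_errors = 1 / errors_per_second   # 25,000 s
print(f"~{seconds_between_errors / 3600:.1f} hours between bit errors")
```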
Table Specs
See Product Briefs
Table Rules
.5 pt, 60%
No rule on left and right outside columns
©2011 Mellanox Technologies. All rights reserved.
5.75” - 1 column format
2.25”
Datasheet
Warranty Information
Mellanox cables come with a limited hardware warranty, which covers repair or replacement.
Additional Information
For more information about Mellanox cable solutions, please contact your Mellanox Technologies sales representative or visit: http://www.mellanox.com
03 Figure Caption
Font Family: Univers LT 47 CondensedLt
Italic
9 pt/10.8
Figure X. Bold Italic
Figure 1.
Image Rule
Size: .5 pt
Color: Black
Figure 2.
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
See Product Brief Specs
©Copyright 2011. Mellanox Technologies. All rights reserved.
16
|
Mellanox Technologies | July 2011
CORPORATE Style Guide
Documents and Marketing Collateral
Solution Briefs
See Product Briefs and
Case Study Specs
SOLUTION BRIEF
Manufacturing
Bringing New Levels of Performance to CAE Applications
The Manufacturer’s Challenge
Bringing products to market fast while meeting
quality requirements and adhering to safety standards has become a daunting challenge to manufacturers. To remain competitive, manufacturers
must deliver products as fast as possible. But if
quality suffers, customers won’t return. If safety
levels decline, significant recalls, lawsuits or
harmful publicity could ensue.
This is why manufacturing companies rely so
heavily on Computer Aided Engineering (CAE),
which helps simulate production and product
performance ahead of time. CAE allows problems
to be corrected before products reach the production stage and end up in the hands of customers.
The challenge that has emerged today is how to
run commercially-available CAE software faster
and with more accuracy. Many software vendors
offer capable products, but bottlenecks commonly
occur in the hardware that runs the simulations
and analyzes the production processes.
Without sufficient computing power, these tests
sometimes take days and weeks to run. And because the tests take longer to run, product-development teams are often forced to run fewer tests
in order to meet tight timeframes and to remain
competitive. This can lead to inaccurate testing
that often goes undetected until the products
©2011 Mellanox Technologies. All rights reserved.
2.125”
hit the assembly line—when the cost to make
changes to the design of the products grows
exponentially.
Solving this application run-time challenge will
allow manufacturers to analyze their processes
in the fastest time possible and conduct more
granular testing. This in turn will allow for quicker
and more efficient production.
SOLUTION BRIEF
A key element to running tests more rapidly is choosing the right hardware to run the CAE applications.
Today’s Solution
Large symmetric multi-processing machines (SMPs) used to be the answer for generating compute power in the data center. However, these proprietary, expensive systems gave way to cluster and grid architectures consisting of low-cost commodity elements that offer comparable performance. Today, many manufacturers that run CAE systems take advantage of InfiniBand-based interconnect support incorporated into CAE software. Mellanox InfiniBand interconnects eliminate I/O bottlenecks, allowing applications to run faster and more efficiently.
As today’s price and performance leader in the industry, Mellanox builds its solutions using standards-based InfiniBand technology. InfiniBand is an industry-standard interconnect for high-performance computing (HPC) and enterprise applications. The combination of high bandwidth, low latency and scalability makes InfiniBand the interconnect of choice to power many of the world’s largest and fastest computer systems and commercial data centers. Mellanox solutions support most major server vendors, operating systems, storage solutions and chip manufacturers.

Table 1. Price/performance advantages for InfiniBand

 | 1 Gb Ethernet | 10 Gb Ethernet | Myrinet | InfiniBand
Bandwidth | 1 Gb/sec | 10 Gb/sec | 2.5 Gb/sec | 10, 20 & 40 Gb/sec
Latency | – | ~10 us | 2.5 - 5.5 us | <2 us
Average Efficiency | 53% | No Entries | 68% | 74%
Price Per Gig/Port | ~$350.00 | >~$700.00 | ~$225.00 | <$100.00

Some Solution Briefs contain a Key Advantages box.

A Better Way
Because of the ready availability of Ethernet, many of today’s clusters are built with Ethernet as the interconnect. While Gigabit Ethernet-based clustering is cheaper than SMP-based architectures, it can be very inefficient. For applications that rely on bandwidth or memory sharing, the efficiency (percentage of a server CPU dedicated to communications overhead) can be a concern.
In addition to offering InfiniBand switch technology, Mellanox works directly with CAE software vendors to create the most efficient, fastest, and lowest-latency solutions in the industry. By combining the leading CAE software with Mellanox InfiniBand-based solutions, manufacturing organizations can now analyze products faster and more efficiently to gain a clear competitive advantage.

Mellanox InfiniBand solutions help CAE applications run faster. Mellanox offers high-performance (10, 20 and 40 Gbps), low-latency (<2 microseconds) interconnect solutions for CAE applications. Benchmark testing has found that Mellanox interconnect solutions reduce CAE runtime by as much as 50 percent.

Mellanox offers complete end-to-end server interconnect solutions for speeding CAE applications. The two major elements of the solution include:
• Fast storage connectivity
• High-speed, low latency InfiniBand switches

Building CAE Clusters
Some companies try to scale CAE clusters by adding more servers or moving to servers with multiple cores. This approach can work for smaller, simpler simulations, but the more complex the analysis, the more likely the need to run simulations across multiple servers where latency is a major factor in determining performance.
The answer to speeding analysis and maximizing return on CAE investments is not simply buying more or bigger servers, but rather eliminating bottlenecks to performance by employing a high performance interconnect.

Figure 1. Mellanox InfiniBand-based solution improves performance by 50%. [Chart: Parallel Speedup vs. # Cores (0-70); InfiniBand reaches 46.59 (75% Efficiency) vs. GbE at 33.00 (51% Efficiency), against a Linear Scaling reference line.]
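Parallel efficiency in a scaling chart like Figure 1 is simply speedup divided by core count. A minimal sketch (the 46.59 and 33.00 speedups are from the figure; the 64-core endpoint is an assumption that roughly reproduces the labeled 75%/51% efficiencies):

```python
# Parallel efficiency = speedup / number_of_cores.
# The 64-core endpoint is an assumption, not stated in the brief.
def parallel_efficiency(speedup: float, cores: int) -> float:
    return speedup / cores

for fabric, speedup in [("InfiniBand", 46.59), ("GbE", 33.00)]:
    eff = parallel_efficiency(speedup, 64)
    print(f"{fabric}: {eff:.0%} efficiency at 64 cores")
```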
Solution Partner
Sidebar Space
Sidebar can be used for
logos or product images
Computer Aided Engineering (CAE) is used to
help manufacturers bring products to market
faster while maintaining a high level of quality. The
faster companies can conduct tests and perform
product analysis, the bigger the benefits of using
CAE. Advances in software and server hardware
have set the stage for faster results, but manufacturers should not overlook a major performance-robbing bottleneck: the server interconnect. To
gain dramatic improvements in CAE performance,
manufacturing firms are turning to Mellanox’s
InfiniBand-based solutions to speed the movement of data between clustered servers.
High-performance InfiniBand Switches
Mellanox’s InfiniBand-based solutions deliver high performance and scalability to compute clusters. Mellanox offers
a complete portfolio of products including a scalable line of
InfiniBand switches, high performance I/O gateways (for
seamless connectivity to Ethernet and Fibre Channel networks) and fabric management software. Mellanox solutions use the OpenFabrics Alliance's OFED drivers and the
Open MPI (Message Passing Interface) libraries to optimize
application performance for both MPI-based and socket-based applications.
For small-to-medium sized clusters, Mellanox offers the
Mellanox Grid Director™ 9024. It is a 1U device with
twenty-four 10 Gbps (SDR) or 20 Gbps (DDR) InfiniBand
ports. The switch is a high performance, low latency, fully
non-blocking edge or leaf-switch with a throughput of 480
Gbps.
Figure 2. Mellanox Grid Director 9024 for small-to-medium sized
clusters ranging from 16 to 24 nodes.
It is well-suited for small InfiniBand fabrics with up to 24
nodes because it includes all of the necessary management capabilities to function as a stand-alone switch. The
Grid Director 9024 is internally managed and offers comprehensive device and fabric management capabilities.
Designed for high-availability (high MTBF) and easy maintenance, the switch is simple to install and features straightforward initialization. The solution is scalable as additional
switches can be added to support additional nodes.
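The 480 Gbps throughput figure for the Grid Director 9024 is consistent with simple port arithmetic (my check; reading the quoted throughput as 24 ports at the 20 Gbps DDR rate is an interpretation, not a statement from the brief):

```python
# Aggregate throughput check for a 24-port DDR edge switch.
# Port count and the 20 Gbps DDR rate are from the text.
ports = 24
ddr_rate_gbps = 20
print(ports * ddr_rate_gbps)  # 480, matching the quoted 480 Gbps
```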
For larger clusters ranging from 25-96 compute nodes,
Mellanox offers the Grid Director™ 2004 multi-service
switch — the industry’s highest performing multi-service
switch for medium-to-large clusters and grids. The switch
enables high performance non-blocking configurations and
features an enterprise-level, high availability design. The
Grid Director 2004 supports up to 96 InfiniBand 4X ports
(20 Gbps) and is scalable through the use of additional, hot-swappable modules. The Grid Director 2004 also features
10 GbE and Fibre Channel ports so the solution can provide
high-performance, integrated SAN and LAN connectivity.
.875”
1.125”
This element can be found in the InDesign library file.
01 Key Features p1
Bell Gothic Std, Bold
10pt
CAPS
White
Aligned Left
01 Key Features/Apps
Univers LT 47 CondensedLT
9 pt/11
Aligned Left
KEY ADVANTAGES
–– The world’s fastest interconnect, supporting
up to 40Gb/s per adapter
–– Latency as low as 1 microsecond
–– Full CPU offload with the flexibility of RDMA
capabilities to reduce traditional network
protocol processing from the CPU and increase
the processor efficiency.
–– I/O Capex reduction – one 40Gb/s Mellanox
adapter carries more traffic with higher
reliability than four 10 Gigabit Ethernet
adapters.
Header
.125” top corner radius
Inset .625 pt
10% Black Box
Inset .08 pt
Mellanox Technologies | July 2011
|
17
CORPORATE Style Guide
Documents and Marketing Collateral
Reference Guides
3.375”
REFERENCE GUIDE
ConnectX®-2 Adapter Cards
Body Text
Univers LT 47 CondensedLt
8.5 pt/10 pt
Tracking: -5 pt
Why Mellanox?
ConnectX-2 adapter cards with Virtual Protocol Interconnect (VPI) provide the highest performing and
most flexible interconnect solution for Enterprise Data Centers, High-Performance Computing, and
Embedded environments. ConnectX-2 with VPI also simplifies network deployment by consolidating
cables and enhancing performance in virtualized server environments. In addition, ConnectX-2 supports
CORE-Direct application acceleration, Virtual-IQ technology for vNIC and vHBA support, and congestion
management to maximize the network efficiency making it ideal for HPC or converged data centers
operating a wide range of applications.
Key Features
– 1us MPI latency
– 10, 20, or 40Gb/s InfiniBand ports
– PCI Express 2.0 (up to 5GT/s)
– CPU offload of transport operations
– End-to-End QoS & congestion control
– Hardware-based I/O virtualization
– TCP/UDP/IP stateless offload
Key Advantages
– High-performance networking and storage
access
– Guaranteed bandwidth & low-latency
services
– Reliable transport
– End-to-End storage integrity
– I/O consolidation
– Virtualization acceleration
– Scales to tens-of-thousands of nodes
InfiniBand Switches
Mellanox 20 and 40Gb/s InfiniBand switches deliver the highest performance
and density with a complete fabric management solution to enable compute
clusters and converged data centers to operate at any scale while reducing
operational costs and infrastructure complexity. IS5000's scalable switch
building blocks from 36 to 648 ports in a single enclosure give IT managers the
flexibility to build networks up to tens of thousands of nodes.
Key Features
– 51.8Tb/s switching capacity
– 100ns to 300ns switching latency
– Hardware-based routing
– Congestion control
– Quality of Service enforcement
– Up to 6 separate subnets
– Temperature sensors and voltage
monitors
Key Advantages
– High-performance fabric for
parallel computation or I/O
convergence
– Wirespeed InfiniBand switch
platform up to 40Gb/s per port
– High-bandwidth, low-latency
fabric for compute-intensive
applications
ConnectX EN Ethernet Network Interface Cards (NIC)
Delivers the industry’s lowest 1.3 us latency performance with performance-leading 10 and 40 Gigabit
Ethernet connectivity with stateless offloads for converged fabrics in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, web infrastructure, and IP
video servers are just a few example applications that will achieve significant throughput and latency
improvements resulting in faster access, real-time response and increased number of users per server.
ConnectX-2 EN improves network performance by increasing available bandwidth to the CPU and
providing enhanced performance, especially in virtualized server environments.
Key Features
– Industry-leading throughput and latency performance
– Enabling I/O consolidation by supporting TCP/IP, FC over Ethernet and RDMA over Ethernet
transport protocols on a single adapter
– Improved productivity and efficiency by delivering VM scaling and superior server utilization
– Supports industry-standard SR-IOV virtualization technology and delivers VM protection and
granular levels of I/O services to applications
– High-availability and high-performance for data center networking
– Software compatible with standard TCP/UDP/IP and iSCSI stacks
– High level silicon integration and no external memory design provides low power, low cost and
high reliability
– High-bandwidth, low-latency fabric for compute-intensive applications
– Leading I/O consolidation for servers and storage!
– 40Gb/s end-to-end solutions (HCAs, switches, cables)
– Proven scalability and reliability
– Field proven – 1000‘s of clusters in production, over 5M ports deployed!
– Advanced features for highest application productivity
– High performance computing and enterprise and cloud data centers, virtualized and unified systems
– 40Gb/s InfiniBand and 10 Gigabit Ethernet in the same device!
– World-class support
– Best-in-class production testing and product quality
Why 40Gb/s InfiniBand?
Enables the highest performance and lowest latency
– Proven scalability for 10s of thousands of nodes
– Maximum return on investment
Highest Efficiency / Maintains balanced system ensuring
highest productivity
– No artificial bottlenecks, performance match for PCIe Gen2
– Proven to fulfill multi-process networking requirements
– Guaranteeing no performance degradation
Performance driven architecture
– MPI latency 1us, 6.6GB/s with 40Gb/s InfiniBand (bi-directional)
– MPI message rate of >40 Million/sec
Superior application performance
– From 30% to over 100% HPC applications performance increase
– Doubles the storage throughput, reducing backup time in half
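The 6.6GB/s bi-directional MPI figure above can be sanity-checked against the link's theoretical data rate. A sketch assuming 8b/10b encoding on a 40Gb/s QDR link (the encoding overhead and efficiency calculation are my assumptions, not claims from the guide):

```python
# Theoretical vs. quoted bi-directional bandwidth for 40Gb/s InfiniBand.
signal_rate_gbps = 40
data_rate_gbps = signal_rate_gbps * 8 / 10   # 32 Gb/s after 8b/10b encoding
one_way_gBps = data_rate_gbps / 8            # 4 GB/s per direction
bidir_gBps = 2 * one_way_gBps                # 8 GB/s theoretical
print(f"measured fraction: {6.6 / bidir_gBps:.0%}")  # ~82% of theoretical
```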
Heading
Univers LT 55, Bold
11.5 pt
PMS 274 C
InfiniBand Market Applications
InfiniBand is increasingly becoming the interconnect of choice not
just in high performance computing environments, but also in mainstream enterprise grids, data center virtualization solutions, storage,
and embedded environments. The low latency and high performance
of InfiniBand coupled with the economic benefits of its consolidation and virtualization capabilities provides end-customers the ideal
combination as they build out their applications.
Data Center
Today’s data centers need an agile infrastructure that incorporates
ongoing improvements in computer, storage, networking, and
application technologies, and empowers IT to support changing
business processes. InfiniBand fabric solutions enable IT organizations to turn computing and storage resources from monolithic
systems to service-centric shared pool of resources consisting of
standardized components that can be dynamically provisioned and
accessed through an intelligent network.
High Performance Computing
With InfiniBand’s proven scalability and efficiency, small and
large clusters easily scale up to thousands of nodes. With 40Gb/s
node-to-node and 120Gb/s switch-to-switch solutions available,
and a roadmap to 240Gb/s, InfiniBand has proven its performance
in personal supercomputing, workgroup, departmental, divisional,
and enterprise supercomputer solutions.
Storage
InfiniBand’s high bandwidth, low latency, dedicated I/O channels,
QoS and RDMA features can lower capital expenses and operating
costs making it the right choice for storage.
REFERENCE GUIDE
Table Text
Univers LT 47 CondensedLt
8.15 pt/9 pt
Tracking: -3 pt
Switch Systems Products
Mellanox Part # / MFG Orderable # / Description
[The switch-systems ordering rows are garbled in the source; recoverable fragments:]
InfiniScale® IV QDR InfiniBand Switch, QSFP ports, power supply, connector-side airflow, standard-depth rail kit; includes Chassis Management — MIS5030Q-1SFC
InfiniScale® IV QDR InfiniBand Chassis Switch with Spine Blades, Management Module, power supplies and installation rails; includes Chassis Management; upgradeable to FabricIT Enterprise Fabric Manager with purchase of FabricIT license — MIS5100Q-3DNC
Same configuration at larger port counts — MIS5300Q-6DNC, MIS5600Q-10DNC
PPC460 Management module for the MIS5xxx Series Chassis Switch, RoHS R5 — MIS5600MDC
InfiniScale® IV QSFP 40Gb/s InfiniBand Leaf Blade for the MIS5xxx Series Chassis Switch — MIS5001QC
[Additional rows cover unmanaged and managed InfiniScale® IV QDR/40Gb/s switches, leaf blades and spares for the MIS5xxx and MTS3610 Series Chassis Switches; their part numbers and port counts are garbled in the source.]
IS5100, IS5200, IS5300, and
IS5600 108, 216, 324 and 648-port
40Gb/s InfiniBand Modular Switch Systems
Adapter Cards
Mellanox Part # / MFG Orderable # / Description
InfiniBand/VPI Adapters:
Single Port QDR ConnectX®-2 VPI adapter card, QSFP, 40Gb/s InfiniBand and 10GigE, PCIe 2.0 x8, tall bracket, RoHS R5
Dual-port QDR ConnectX®-2 VPI adapter card, QSFP, 40Gb/s InfiniBand and 10GigE, PCIe 2.0 x8, tall bracket, RoHS R5
Blade/Mezzanine Adapter Card: 40Gb/s InfiniBand Dual Port, PCIe Base 2.0 compliant, 1.1 compatible, 2.5GT/s or 5GT/s link rate x8
Ethernet Adapters: Custom — 46M6001
[Mellanox part numbers for these rows are garbled in the source.]
-%14 /$8("$,0$
$** ,-58 /18
MFG /#$/ !*$8
*$08 , &$/8-,1 "1
Contact Information
Chuck Tybur
Director of Sales
Tel: …
Email: chuck@mellanox.com
ConnectX®-2 VPI adapter card, single-port QSFP
Business Development Contact
Steve Williams
OEM Business Development Mgr
Cell: …
Email: steve@mellanox.com
* See Contacts below
Field Marketing Contact
Darrin Chen
Sr. Director, OEM & Channel Field Mktg
Cell: …
Email: darrin@mellanox.com
A3000900
SKU #
Software Licenses
SKU #
All include subnet manager and administrator, cluster diagnostics and performance manager. Also includes 1-year software maintenance agreement for minor feature updates and bug fixes.
FabricIT License (up to 36 ports)
FabricIT Annual Maintenance (up to 36 ports)
FabricIT License (up to … ports)
FabricIT Annual Maintenance (up to … ports)
FabricIT License (up to … ports)
FabricIT Annual Maintenance (up to … ports)
FabricIT License (up to … ports)
FabricIT Annual Maintenance (up to … ports)
FabricIT License (up to 144 ports)
FabricIT Annual Maintenance (up to 144 ports)
FabricIT License (up to … nodes)
FabricIT Annual Maintenance (up to … ports)
SKU #
FabricIT License (up to … ports)
FabricIT Annual Maintenance (up to … ports)
FabricIT License (up to … ports)
FabricIT Annual Maintenance (up to … ports)
ConnectX®-2 VPI adapter card, dual-port QSFP
FabricIT License (up to … ports)
ConnectX®-2 VPI adapter card, …Gb/s QSFP and …GigE SFP+, PCIe x…, … GT/s, tall bracket, RoHS R…
…-Year Extended HW Warranty, … Adapters
…Gb/s InfiniBand Mezz card, factory installed
…Gb/s InfiniBand Mezz Card, Cust Kit
FabricIT Annual Maintenance (up to … ports)
430-3431
FabricIT Annual Maintenance (up to … ports)
SKU #
…-Port InfiniScale® IV QSFP …Gb/s Switch
Ext Warranty, Switch, … Year
…Gb/s InfiniBand Chassis Switch, factory installed
…Gb/s InfiniBand Chassis Switch, Cust Kit
ConnectX-2 VPI Dual Port QSFP
InfiniBand and SFP+ 10GigE Adapter
SKU #
…Gb/s and …GigE, PCIe x…, … GT/s, tall bracket, RoHS R…
ConnectX®-2 VPI adapter card, single-port QSFP
ConnectX®-2 VPI adapter card, dual-port QSFP
ConnectX®-2 VPI adapter card, dual-port X…
…-Year Extended HW Warranty, … Adapter Cards
…Gb/s InfiniBand Mezz card, factory installed
…Gb/s InfiniBand Mezz Card, Cust Kit
FabricIT
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com
…Gb/s and …GigE, PCIe x…, … GT/s, tall bracket, RoHS R…
Mellanox Fabric IT Mgr Per Node RTU
18
MFG Orderable
PCIe x…, … GT/s, tall bracket, RoHS R…
ConnectX®-2 EN network interface card, dual-port SFP+
ConnectX®-2 EN network interface card, dual-port SFP+, PCIe x…, … ports
ConnectX®-2 EN …GigE SFP+, PCIe x…, … ports
Mellanox SFP+ optical module (…LR)
Mellanox SFP+ optical module (…SR)
ConnectX-2 EN Adapter Card with SFP+
Technical Support Contact
Glenn Church
Field Applications Eng
Cell: …
Email: glenn@mellanox.com
© Copyright 2011. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, PhyX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. FabricIT is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
FabricIT License (up to … ports)
FabricIT License (up to … ports)
FabricIT Annual Maintenance (up to … ports)
Cables
SKU #
1M Passive Copper
…M Passive Copper
3M Passive Copper
4M Passive Copper
5M Passive Copper
6M Passive Copper
…M Passive Copper
…M Active Copper
10M Active Copper
…M Active Copper
10M Fiber
15M Fiber
…M Fiber
30M Fiber
Tables
Tables can be arranged
in one large column or
two columns with an
even width as shown.
CORPORATE Style Guide
Documents and Marketing Collateral
Technology Briefs
TECHNOLOGY BRIEF
April 2010
ConnectX®-2 with RoCE
Main Title
Univers LT 59 UltraCondensed
26 pt / 29
(ConnectX-2 VPI and ConnectX-2 EN)
Body Heading
Univers LT 47 CondensedLt, Bold
12 pt/13 pt
PMS 274 75%
Set as Inline Heading anchored to
beginning of paragraph
1.0 Opportunities with
Evolution of Ethernet
The two commonly known RDMA (remote DMA) technologies are InfiniBand and iWARP (Internet
Wide Area RDMA Protocol). InfiniBand has enjoyed significant success to date in HPC applications.
iWARP solutions over Ethernet have seen limited success because of implementation and deployment
challenges. Recent enhancements to the Ethernet data link layer under the umbrella of IEEE Data Center
Bridging (DCB) open significant opportunities to proliferate the use of RDMA technology into mainstream
data center applications by taking a fresh and yet evolutionary look at how such services can be more
easily and efficiently delivered over Ethernet. The proposed DCB standards include: IEEE 802.1Qbb –
Priority-based Flow Control, 802.1Qau – Congestion Notification, and 802.1Qaz – Enhanced Transmission
Selection (ETS) and DCB Capability Exchange. The lossless delivery features in DCB, enabled by Priority-based Flow Control (PFC), are analogous to those in the InfiniBand data link layer. As such, the natural
choice for building RDMA services over PFC-based DCB Ethernet is to apply InfiniBand-based native
RDMA transport services. The IBTA (InfiniBand Trade Association) has recently released a specification
called RDMA over Converged Ethernet (RoCE, pronounced as “Rocky”) that applies the InfiniBand-based
native RDMA transport services over Ethernet. ConnectX-2 with RoCE (RDMA over Ethernet) implements
the RoCE standard to deliver InfiniBand-like ultra low latency and high scalability over Ethernet fabrics.
1.1 How ConnectX-2
RoCE Works
ConnectX-2 with RoCE is born out of combining InfiniBand native RDMA transport with Ethernet per the
IBTA RoCE specification. The data link InfiniBand-based layer 2 is replaced by Ethernet layer 2, as shown
in the figure below. The InfiniBand transport is applied over a PFC-based lossless Ethernet data link.
LRH
(L2 Hdr)
GRH
IB Transport
Headers
IB Payload
ICRC
VCRC
Main Title 2
Univers LT 47 CondensedLt
15 pt /15
Application
OFA verbs
IB Software transport
interface
InfiniBand
IB-based transport
Eth L2
Header
GRH
IB Transport
Headers
IB Payload
ICRC
FCS
Network
RoCE
Figure: Low Latency Ethernet packet
format and protocol stack
Ethernet w/PFC (Data Link)
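The substitution the figure describes can be sketched in a few lines. This is a toy model of the header field sequences only, taken from the figure's labels rather than from any wire-format specification:

```python
# Header field sequence of a native InfiniBand packet, per the figure above.
INFINIBAND_FIELDS = ["LRH", "GRH", "IB Transport Headers",
                     "IB Payload", "ICRC", "VCRC"]

def to_roce(ib_fields):
    """Model the RoCE framing shown in the figure: the InfiniBand local
    routing header (LRH) is swapped for an Ethernet L2 header and the
    trailing VCRC for the Ethernet FCS, while everything from the GRH
    through the ICRC is carried unchanged."""
    return ["Eth L2 Header"] + ib_fields[1:-1] + ["FCS"]
```

Calling `to_roce(INFINIBAND_FIELDS)` reproduces the RoCE packet sequence in the figure: Eth L2 Header, GRH, IB Transport Headers, IB Payload, ICRC, FCS.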
Software Interface: ConnectX-2 with RoCE is compliant with the OpenFabrics Alliance (OFA) OFED
verbs definition and is interoperable with the OFA software stacks (similar to InfiniBand and iWARP).
ConnectX-2 with RoCE uses the proven and feature-rich InfiniBand verbs interface available in the OFA
stacks. OFED v1.5.1 supports RoCE and ConnectX-2 with RoCE.
Transport Layer: ConnectX-2 with RoCE uses the InfiniBand transport layer, as defined in the IBTA RoCE
specification. The adaptation from the InfiniBand data link to the Ethernet data link is straightforward because
the InfiniBand transport layer was designed from the ground up to be data-link-layer agnostic. The InfiniBand
transport layer expects certain services from the data link layer related to lossless delivery of packets, and
these are delivered by a PFC-enabled Ethernet data link layer. ConnectX-2 with RoCE inherits a rich set
of transport services beyond those required to support OFA verbs including connected and unconnected
modes, reliable and unreliable services. Built on top of these services is a full set of verbs-defined
operations including kernel bypass, send/receive, RDMA read/write, and atomic operations.
©2011 Mellanox Technologies. All rights reserved.
2.625”
.5”
.75”
TECHNOLOGY BRIEF: ConnectX®-2 with RoCE
Management Feature Required by IB Transport Layer & Apps using IB Transport Layer | How InfiniBand delivers them in the InfiniBand subnet | How Ethernet (and CEE) delivers them using standard Ethernet management practices
L2 address assignment | Subnet Manager L2 address assignment | Fixed assigned L2 address or other Ethernet mechanisms
L2 topology discovery and switch FDB configuration | Subnet Manager topology discovery using direct-routed subnet management packets (SMP); Subnet Manager path computation and path distribution | Spanning Tree and Learning mechanisms; also, IETF Transparent Interconnection of Lots of Links (TRILL) when available, and other Ethernet practices
Address Resolution | SA-based path resolution | Address Resolution Protocol (ARP) or direct mapping
QoS | QoS Manager extension to Subnet Manager | Standard Ethernet QoS management practices; local API to access fabric policy settings
Congestion management | Congestion Manager for IB | 802.1Qau congestion management features
Performance management | IB Performance Manager | SNMP/RMON MIBs
Device/baseboard management | IB Baseboard Manager | SNMP/RMON MIBs
ConnectX-2 with RoCE adapters based on the RoCE specification are available today from Mellanox
Technologies and have been demonstrated to deliver end-to-end application-level latencies as low as
… microseconds. Mellanox and other industry leaders are collaborating on growing the ecosystem of
RoCE-based adapters and independent software vendor applications that capitalize on the benefits of
ConnectX-2 with RoCE. Some examples of target applications are financial services, business intelligence,
data warehousing, cloud computing, and Web 2.0.
1.2 ConnectX-2 with RoCE Advantages
Based on the discussion above, it is obvious that ConnectX-2 with RoCE comes with many advantages
and holds the promise to enable widespread deployment of … technologies in mainstream data
center applications:
1. ConnectX-2 with RoCE utilizes advances in Ethernet … to enable efficient and low-cost implementations of RDMA over Ethernet.
2. ConnectX-2 … traffic can be classified at the data link layer, which is faster and requires less … overhead.
3. ConnectX-2 with RoCE delivers … µsec application-to-application latency, which is …th of other industry-standard implementations over Ethernet. Benchmarking with popular financial services applications shows more than …% lower latency, applicable to capital market data processing and trade executions.
4. ConnectX-2 with RoCE supports the entire breadth of … and low-latency features. This includes reliable connected service, datagram service, … and send/receive semantics, atomic operations, user-level multicast, user-level … access, kernel bypass, and zero copy.
5. The … verbs used by ConnectX-2 with RoCE are based on InfiniBand and have been proven in large-scale deployments and with multiple … applications, both in the … and … sectors. Such applications can now be seamlessly offered over ConnectX-2 with RoCE without any porting effort required.
6. ConnectX-2 with RoCE-based network management is the same as that for any Ethernet and …-based network management, eliminating the need for IT managers to learn new technologies.
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
© Copyright 2011. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, PhyX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. FabricIT is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
CORPORATE Style Guide
Documents and Marketing Collateral
White Papers
WHITE PAPER
2010
A Best of Breed Low Latency Solution for Trading
Environments from 29West & Mellanox
29West Latency Busters® Messaging (LBM) and Messaging
Accelerator™ (VMA) Over Mellanox 40Gb/s (QDR) InfiniBand Fabric
Executive Summary ......................................................................................................................1
Reference Transport Latency .......................................................................................................2
01 TOC Listing
Univers LT 47 CondensedLt
10 pt/13
Generated automatically using
01 Body Heading1 tags.
Latency Versus Message Size .....................................................................................................3
Conclusion ....................................................................................................................................5
Methodology of Measurement ....................................................................................................5
Benchmark Set Up Details ...........................................................................................................6
About 29West, Inc. ......................................................................................................................8
Executive Summary
The security trading market is experiencing rapid growth in volume and complexity, with a greater reliance
on trading software supported by sophisticated algorithms. As this market grows, so do the
trading volumes, bringing existing IT infrastructure systems to their limits.
Middleware and networking solutions must not only deliver lower latency despite higher volumes,
but also sustain performance through market data traffic spikes while minimizing latency jitter and
unpredictable messaging outliers.
To address this challenge, two market leaders have come together to provide a field-proven
infrastructure solution that combines best-of-breed enterprise messaging middleware and trading
transport infrastructure. The solution slashes end-to-end trading latency by analyzing each layer of
the infrastructure—server, storage, server stack and middleware, as well as switching and bridging
devices—and optimizing each part for minimum latency and maximum throughput, while keeping an eye
on overall cost.
29West has emerged as the standard in next generation messaging software and delivers an unbending
commitment to offering the lowest possible latency across the widest possible range of messaging
use cases. Similarly, Mellanox has evolved into the market leader in IT scale-out fabric infrastructure,
excelling in low latency software and switching solutions.
This report demonstrates the benefits of a 29West middleware solution integrated with Mellanox’s low-latency solution for high-frequency trading customers. Specifically, this paper describes a benchmark of
the integrated solution that delivers application messaging at:
Latency under medium load
Message rates under 125K msg/sec, message size under 256 bytes
• Average latency under 9 µsec
• 99.9th percentile latency 45 µsec or lower
Latency under heavy load (high throughput)
Message rate up to 1,300,000 msg/sec, message size under 64 bytes
• Average latency under 18 µsec
• 99.9th percentile latency 70 µsec or lower
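For readers who want to reproduce this style of reporting, average and 99.9th-percentile figures can be derived from raw per-message latencies. The sketch below uses the nearest-rank percentile definition and invented sample data, not the benchmark's actual measurements:

```python
import math

def latency_summary(samples_usec):
    """Return (average, 99.9th percentile) for a collection of latencies in µsec."""
    ordered = sorted(samples_usec)
    avg = sum(ordered) / len(ordered)
    # Nearest-rank method: the smallest observation such that at least
    # 99.9% of all observations are at or below it.
    rank = max(1, math.ceil(0.999 * len(ordered)))
    return avg, ordered[rank - 1]

# Hypothetical sample: 1000 latencies of 1..1000 µsec.
avg, p999 = latency_summary(range(1, 1001))  # avg == 500.5, p999 == 999
```

Reporting the 99.9th percentile alongside the average, as the benchmark above does, exposes the messaging outliers and jitter that an average alone would hide.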
©2011 Mellanox Technologies. All rights reserved.
CORPORATE Style Guide
Documents and Marketing Collateral
PRODUCT BROCHURES
Mellanox Product Brochures are 10” x 6” horizontal format. They can be set up in a 2- or 3-panel configuration
and should follow the color designations described on page 12.
2-page configuration: 4, 8, 12, or 16 pages
Saddle Stitched
Feature Summary
HARDWARE
– 1/10Gb/s or 40Gb/s per port
– Full bisectional bandwidth to all ports
– 1.21 compliant
– All port connectors supporting passive and
active cables
– Redundant auto-sensing 110/220VAC
power supplies
– Per port status LED Link, Activity
– System, Fans and PS status LEDs
– Hot-swappable replaceable fan trays
COMPLIANCE
MANAGEMENT
– Comprehensive fabric management
– Secure, remote configuration and
management
– Performance/provisioning manager
– Quality of Service based on traffic type and
service levels
– Cluster diagnostics tools for single node,
peer-to-peer and network verification
– Switch chassis management
– Error, event and status notifications
SAFETY
– USA/Canada: cTUVus
– EU: IEC60950
– International: CB Scheme
EMC (EMISSIONS)
– USA: FCC, Class A
– Canada: ICES, Class A
– EU: EN55022, Class A
– EU: EN55024, Class A
– EU: EN61000-3-2, Class A
– EU: EN61000-3-3, Class A
– Japan: VCCI, Class A
1 Gb/s, 10Gb/s and 40Gb/s Ethernet
Switch System Family
ENVIRONMENTAL
– EU: IEC 60068-2-64: Random Vibration
– EU: IEC 60068-2-29: Shocks, Type I / II
– EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS
– Operating 0°C to 45°C,
Non-Operating -40°C to 70°C
– Humidity: Operating 5% to 95%
– Altitude: Operating -60 to 2000m
Highest Levels of Scalability,
Simplified Network Manageability,
Maximum System Productivity
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085
Tel: 408-970-3400
Fax: 408-970-3403
www.mellanox.com
© Copyright 2011. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, PhyX, Virtual Protocol Interconnect and Voltaire are registered
trademarks of Mellanox Technologies, Ltd. FabricIT is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
Mellanox continues its leadership by providing
40Gb/s Ethernet Switch Systems – the highest performing
fabric solution for Web 2.0, Enterprise Data Centers,
Cloud Computing and High Performance Computing.
BENEFITS
Specification table: 6024, SX1035 and SX1036 – 40GigE and 10GigE port counts (24 to 64 ports), 1U height, 480Gb/s to 2.88Tb/s non-blocking switch capacity, device/fabric management, installation kit.
Mellanox makes fabric management as easy as it can by providing
the lowest latency and highest bandwidth. This allows IT managers
to deal with serving the company’s business needs, while solving
typical networking issues such as congestion and the inefficiencies
generated by adding unnecessary rules and limitations when the
network resources are sufficient.
■ Efficiency
– Simple configuration, no need for QoS (40GigE vs. 10GigE)
■ Easy Scale
– UFM can maintain from 1 to 1000s of nodes and switches
– Configure and manage the data center from a single location
FRUs: fans and power supplies (PS and Fans); PSU and fan redundancy across the family.
The Ethernet Switch Family delivers the highest performance
and port density with a complete chassis and fabric management
solution enabling converged data centers to operate at any scale
while reducing operational costs and infrastructure complexity.
This family includes a broad portfolio of fixed and modular switches
that range from 24 to 288 ports, and support 1/10 or 40Gb/s per
port. These switches allow IT managers to build cost-effective and
scalable switch fabrics for small to large clusters up to 10’s-of-thousands of nodes.
■ Elasticity
– Low latency on any node
■ Arranged and Organized Data Center
– 40GigE high-density switch means 4x fewer cables
– Easy deployment
– Easy maintenance
■ Unprecedented Performance
– Storage and server applications run faster
8500: 40GigE Ports – 0; 10GigE Ports – 288; Height – 15U; Switching Capacity – 5.76Tb/s; Spine Modules – 4; Leaf Modules – 12
3-page configuration: 6 pages
Folded
Feature Summary
Mellanox continues its leadership
providing InfiniBand Host Channel Adapters (HCA) —
the highest performance interconnect solution for Enterprise Data Centers,
Web 2.0, Cloud Computing, High-Performance Computing,
and embedded environments.
VALUE PROPOSITIONS
■ High Performance Computing needs high bandwidth, low latency, and CPU offloads to get the highest server efficiency and application productivity. Mellanox HCAs deliver the highest bandwidth and lowest latency of any standard interconnect, enabling CPU efficiencies of greater than 95%.
INFINIBAND
– IBTA Specification 1.2.1 compliant
– 10, 20, 40, or 56Gb/s per port
– RDMA, Send/Receive semantics
– Hardware-based congestion control
– 16 million I/O channels
– 9 virtual lanes: 8 data + 1 management
ENHANCED INFINIBAND
– Hardware-based reliable transport
– Collective operations offloads
– GPU communication acceleration
– Hardware-based reliable multicast
– Extended Reliable Connected transport
– Enhanced Atomic operations
HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Multiple queues per virtual machine
– VMware NetQueue support
ADDITIONAL CPU OFFLOADS
– RDMA over Converged Ethernet
– TCP/UDP/IP stateless offload
– Intelligent interrupt coalescence
STORAGE SUPPORT
– Fibre Channel over InfiniBand or Ethernet
FLEXBOOT™ TECHNOLOGY
– Remote boot over InfiniBand
– Remote boot over Ethernet
– Remote boot over iSCSI
■ Data centers and cloud computing require I/O services such as bandwidth, consolidation and unification, and flexibility. Mellanox’s HCAs support LAN and SAN traffic consolidation and provide hardware acceleration for server virtualization.
COMPLIANCE
COMPATIBILITY
SAFETY
– USA/Canada: cTUVus UL
– EU: IEC60950
– Germany: TUV/GS
– International: CB Scheme
EMC (EMISSIONS)
– USA: FCC, Class A
– Canada: ICES, Class A
– EU: EN55022, Class A
– EU: EN55024, Class A
– EU: EN61000-3-2, Class A
– EU: EN61000-3-3, Class A
– Japan: VCCI, Class A
– Australia: C-Tick
– Korea: KCC
ENVIRONMENTAL
– EU: IEC 60068-2-64: Random Vibration
– EU: IEC 60068-2-29: Shocks, Type I / II
– EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS
– Operating temperature: 0 to 55° C
– Air flow: 100LFM @ 55° C
– Requires 3.3V, 12V supplies
■ Virtual Protocol Interconnect™ (VPI) flexibility offers InfiniBand, Ethernet, Data Center Bridging, EoIB, FCoIB and FCoE connectivity.
World-Class Performance
Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra low-latency and efficient
computing for performance-driven server and storage clustering applications. Network protocol processing
and data movement overhead such as RDMA and Send/Receive semantics are completed in the adapter
without CPU intervention. Application acceleration and GPU communication acceleration brings further
levels of performance improvement. Mellanox InfiniBand adapters’ advanced acceleration technology
enables higher cluster efficiency and large scalability to tens of thousands of nodes.
I/O Virtualization
Mellanox adapters utilizing Virtual Intelligent Queuing (Virtual-IQ) technology with SR-IOV provide
dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the
server. I/O virtualization on InfiniBand gives data center managers better server utilization and LAN and
SAN unification while reducing cost, power, and cable complexity.
Storage Accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over
multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Mellanox adapters support SCSI, iSCSI, NFS and FCoIB protocols.
Software Support
All Mellanox adapters are supported by a full suite of drivers for Microsoft Windows, Linux distributions,
VMware, and Citrix XENServer. The adapters support OpenFabrics-based RDMA protocols and software,
and the stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. The adapters are
compatible with configuration and management tools from OEMs and operating system vendors.
Virtual Protocol Interconnect
VPI® flexibility enables any standard networking, clustering, storage, and management protocol to
seamlessly operate over any converged network leveraging a consolidated software stack. Each port
can operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and supports Ethernet over
InfiniBand (EoIB) and Fibre Channel over InfiniBand (FCoIB) as well as Fibre Channel over Ethernet (FCoE)
and RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT
managers to deploy infrastructure that meets the challenges of a dynamic data center.
ConnectX-3
Mellanox’s industry-leading ConnectX-3 InfiniBand adapters provide the highest performing and most
flexible interconnect solution. ConnectX-3 delivers up to 56Gb/s throughput across the PCI Express 3.0
host bus, enables the fastest transaction latency, less than 1µsec, and can deliver more than 90M MPI
messages per second, making it the most scalable and suitable solution for current and future transaction-demanding applications. ConnectX-3 maximizes network efficiency, making it ideal for HPC or
converged data centers operating a wide range of applications.
Complete End-to-End 56Gb/s InfiniBand Networking
ConnectX-3 adapters are part of Mellanox’s full FDR 56Gb/s InfiniBand end-to-end portfolio for data centers
and high-performance computing systems, which includes switches, application acceleration packages,
and cables. Mellanox’s SwitchX family of FDR InfiniBand switches and Unified Fabric Management
software incorporate advanced tools that simplify networking management and installation, and provide
the needed capabilities for the highest scalability and future growth. Mellanox’s collectives, messaging,
and storage acceleration packages deliver additional capabilities for the ultimate server performance, and
the line of FDR copper and fiber cables ensures the highest interconnect performance. With Mellanox end to
end, IT managers can be assured of the highest-performance, most efficient network fabric.
Mellanox InfiniBand Adapters Provide
Advanced Levels of Data Center IT Performance,
Efficiency and Scalability
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085
Tel: 408-970-3400
Fax: 408-970-3403
www.mellanox.com
© Copyright 2011. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. FabricIT, MLNX-OS and SwitchX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property
of their respective owners.
Mellanox InfiniBand Host
Channel Adapters (HCA) provide
the highest performing interconnect
solution for Enterprise Data
Centers, Web 2.0, Cloud Computing,
High-Performance Computing,
and embedded environments.
Clustered data bases, parallelized
applications, transactional services
and high-performance embedded I/O
applications will achieve significant
performance improvements resulting
in reduced completion time and lower
cost per operation.
Performance Accelerated
CONNECTIVITY
– Interoperable with InfiniBand or 10GigE
switches
– microGiGaCN or QSFP connectors
– 20m+ (10Gb/s), 10m+ (20Gb/s), 7m+ (40Gb/s)
or 5m+ (56Gb/s) of passive copper cable
– External optical media adapter and active
cable support
– Quad to Serial Adapter (QSA) module,
connectivity from QSFP to SFP+
OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SLES, Red Hat Enterprise Linux (RHEL),
Fedora, and other Linux distributions
– Microsoft Windows Server 2008/CCS 2003,
HPC Server 2008
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF)
– VMware ESX Server 3.5, vSphere 4.0/4.1
PROTOCOL SUPPORT
– Open MPI, OSU MVAPICH, Intel MPI, MS MPI,
Platform MPI
– TCP/UDP, EoIB, IPoIB, SDP, RDS
– SRP, iSER, NFS RDMA, FCoIB, FCoE
– uDAPL
BENEFITS
■ World-class cluster performance
■ High-performance networking and storage access
■ Efficient use of compute resources
■ Guaranteed bandwidth and low-latency services
■ Reliable transport
■ I/O unification
■ Virtualization acceleration
■ Scales to tens-of-thousands of nodes
Adapter selection: Ports – 1x20Gb/s, 2x20Gb/s, 1x40Gb/s, 2x40Gb/s, or 1x40Gb/s + 1x10Gb/s; Connector – microGiGaCN, QSFP, or QSFP + SFP+; Host Bus – PCI Express 2.0; Features – VPI, Hardware-based Transport and Application Offloads, RDMA, GPU Communication Acceleration, I/O Virtualization, QoS and Congestion Control, IP Stateless Offload; OS Support – RHEL, SLES, Windows, ESX; Ordering Numbers – MHGH19B-XTR, MHGH29B-XTR, MHRH19B-XTR, MHQH19B-XTR, MHRH29B-XTR, MHQH29C-XTR, MHZH29B-XTR
TARGET APPLICATIONS
■ High-performance parallelized computing
■ Data center virtualization
■ Latency-sensitive applications such as financial analysis and trading
■ Web 2.0, cloud and grid computing data centers
■ Performance storage applications such as backup, restore, mirroring, etc.
■ Clustered database applications, parallel RDBMS queries, high-throughput data warehousing
Adapter selection: Ports – 1x40Gb/s, 1x56Gb/s, 2x40Gb/s, or 2x56Gb/s; Connector – QSFP or QSFP + SFP+; Host Bus – PCI Express 3.0; Features – VPI, Hardware-based Transport and Application Offloads, RDMA, GPU Communication Acceleration, I/O Virtualization, QoS and Congestion Control, IP Stateless Offload, Precision Time Protocol; OS Support – RHEL, SLES, Windows, ESX; Ordering Numbers – MHGH19B-XTR, MHGH29B-XTR, MHRH19B-XTR, MHQH19B-XTR
Brochure pages set up on 6 column grid
starting at .5” from all page borders
CORPORATE Style Guide
Mellanox Icons
These icons are to be used on collateral to designate the industry focus of the document. Industry icons are used on
Case Studies and Solution Briefs.
Bioscience
Cloud Computing
Data Storage
Database
Data Center
Defense
EDC CFD
Enterprise Storage
Financial
Government
Green Computing
Healthcare
Hosting
HPC
Manufacturing
Multimedia
Oil & Gas
Research Education
Risk Analysis
Space Exploration
Transportation
Virtualization
Weather
Web 2.0
Storage
CORPORATE Style Guide
Product and Illustration Icons
These illustration/icons are to be used on collateral, signage, or wherever appropriate. The elements are
designed for left, center and right facing orientation diagrams.
An action for Illustrator is available for creating the isometric views.
Here are a few samples.
CORPORATE Style Guide
Product and Illustration Icons
Because of Mellanox’s varied product line, colors are used to designate different types of product capabilities or
technologies. These are used in conjunction with the product and illustration icons in diagrams and application
illustrations. Below is a PPT slide that shows how the color designations are applied.
Green = Ethernet
Blue = InfiniBand
Orange = Fibre Channel
Gray or uncolored = Generic or unspecified technology
Technology Specific
Ethernet
InfiniBand
Fibre Channel
© 2011 MELLANOX TECHNOLOGIES
CORPORATE Style Guide
Product and Illustration Icons
Below is a sample of a diagram using the illustration icons.
An action for Illustrator is available for creating the isometric views.
InfiniBand Storage
40Gb/s InfiniBand
10GigE LAN
IS5000 40Gb/s Switch
Fibre Channel Storage
Ethernet Storage
For more information on Mellanox Branding standards or
questions on typefaces and style considerations for a particular
application, contact:
Mellanox Marketing Contact:
Brian Sparks, Director of Marketing Communications
Tel: 408-916-0008
Email: [email protected]
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com