In the April EMC Support Matrix, EMC will be posting support for NX-OS 5.0(2)N2(1) for the Nexus 5010, 5020, and 5548. We will also now support the creation of Virtual E_Ports (VE_Ports) on these products. This post discusses what VE_Ports are, the protocol used to instantiate them, and the exact configuration steps required to create them.
VE_Ports allow for the formation of FCoE ISLs and the creation of an all-FCoE fabric, or more precisely a multihop FCoE fabric. As you will see, once a link between two FCFs has been established using FIP, VE_Ports / FCoE ISLs are initialized using the same protocol that is used to initialize E_Ports / native FC ISLs. In case you are interested, the FIP exchange used to establish FCF-to-FCF connections is described in my previous post, “FIP, FIP Snooping Bridges, and FCFs (Part 1 – “FIP”, the FCoE Initialization Protocol)”.
From a logical connectivity point of view, an all-FCoE fabric is identical to an FC fabric and supports all of the same functionality, such as zoning, a distributed name server, and RSCNs. As a result, the same types of scalability limits apply to both FC and FCoE fabrics, such as the maximum number of hops, VN_Ports, and Domains. The FC scalability limits have been posted in the EMC Support Matrix for years and are currently no more than 5 hops, no more than 6000 N_Ports, and no more than 55 Domains. With an all-FCoE fabric, I feel comfortable with 3 hops (because I’ve tested it), and I have no reason to believe that the number of VN_Ports or Domains will need to be limited to less than what is supported in native FC.
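The limits above can be expressed as a simple design check. This is a hedged sketch: the limit values are the ones quoted above (5 hops, 6000 N_Ports, 55 Domains for native FC; 3 tested hops for all-FCoE), and the function and dictionary names are illustrative, not from any EMC tool.

```python
# Published limits quoted in the text; field and function names are illustrative.
FC_LIMITS   = {"hops": 5, "n_ports": 6000, "domains": 55}
FCOE_LIMITS = {"hops": 3, "n_ports": 6000, "domains": 55}  # 3 hops = tested FCoE limit

def within_limits(fabric, limits):
    """Return the list of parameters that exceed the published limits."""
    return [name for name, cap in limits.items() if fabric.get(name, 0) > cap]

# A 4-hop design is fine for native FC but exceeds the tested all-FCoE hop count.
design = {"hops": 4, "n_ports": 1200, "domains": 12}
print(within_limits(design, FC_LIMITS))    # []
print(within_limits(design, FCOE_LIMITS))  # ['hops']
```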
From a physical connectivity point of view, the connectivity options currently supported for an all-FCoE fabric are shown in the following diagram. Each of the connectivity options is explained in detail below the diagram.
For the sake of clarity, the diagram shows the physical connectivity options that can be used within a row of equipment racks. The diagram is not intended to indicate a limitation in the types of topologies that are supported; rather it is just intended to help highlight all of the different possibilities. Each Rack is described in detail below:
End of Row – The End of Row (EoR) cabinet contains the aggregation layer switches. In order to maintain the highly available characteristics of FC, two 5548s are shown, and they have not been connected together via FCoE ISLs. This allows the two fabrics to remain logically isolated. The 5548s may be part of the same vPC Domain.
The 5548s in the End of Row cabinet will support:
- VE_Port (FCoE ISL) connections from other Nexus 5000s
- VF_Port connections from FCoE initiators and targets (ENodes)
- native FC E_Port (ISL) connections to other FC-capable Cisco switches
- F_Port connections from FC initiators and targets
- N_Port connections to FC-capable Cisco/Brocade/QLogic FC switches
- uplinks from an FSB
- Fabric Ports to connect to the Nexus 2232 FEX
Rack 7 & 8 Storage – Storage ports (either FC or FCoE) can be connected to a Top of Rack (ToR) switch or directly back to the 5548 in the End of Row (EoR) rack. Connecting a storage port directly to the ToR switch in a given rack makes sense if the storage port is accessed primarily by servers residing in that rack. If a storage port is going to be accessed by many servers in different racks, the optimal placement may be on the 5548 in the EoR rack.
Rack 5 & 6 FIP Snooping Bridge (FSB) – FIP Snooping Bridges such as the Cisco/IBM 4001i can be connected either to the Nexus 5000 at the top of the rack or to the Nexus 5548 at the EoR.
Rack 4 Nexus 5000 (VE_Ports) – The Nexus 5000 can be connected to the 5548 at the EoR via VE_Ports (FCoE) when running in FC-SW mode, or via native FC when running in either NPV or FC-SW mode. Currently, you cannot use FCoE for uplinks to the 5548 at the EoR while running in NPV mode.
Rack 2 & 3 Nexus 5000 (VE_Ports)/Nexus 2232 (Fabric Ports) – The Nexus 2232 can be connected to a Nexus 5000 via Fabric Ports, and the Nexus 5000 can then be connected to the Nexus 5548 at the EoR via VE_Ports.
Rack 1 Nexus 2232 (Fabric ports) - The Nexus 2232 at the ToR can be connected to the Nexus 5548 at the EoR via Fabric ports.
A logical representation of the physical topology is shown below.
Although it isn’t shown in the logical topology above, the FCoE ISLs must be on physically separate links rather than trunked onto the Ethernet uplinks used to carry non-FCoE traffic. This is not a requirement of the FCoE protocol itself, but it is required by the current version of NX-OS. This requirement may eventually be removed.
An important point to note in the above diagram is that, from an FC logical point of view, a Fabric A / Fabric B topology has been maintained. If the customer desires, the ToR switches could be connected together to allow for the use of vPC and active/active NIC teaming/bonding. Although this breaks the “air gap” separation requirement (more on this later), the FC portion of the network will remain logically isolated.
FCoE VE_Port Virtual Link Instantiation
Note: This section describes the FCoE VE_Port Virtual Link Instantiation process currently being used between Cisco Nexus products. The reason for this is that Cisco is the only FCF vendor currently providing this functionality. When other vendors provide VE_Port functionality, this section will be updated to show the differences (if any).
The Virtual Links that support the instantiation of Virtual E_Ports (VE_Ports) are created in a manner similar to the Virtual Links that support Virtual F_Ports (VF_Ports), as described in my previous post “FIP, FIP Snooping Bridges, and FCFs (Part 1 – “FIP”, the FCoE Initialization Protocol)”. One major difference is that FIP VLAN Discovery is not used. For the sake of this example, the following topology will be used.
FIP Discovery Solicitation
The process starts with both sides of the link transmitting DCBX frames. Once the DCBX parameters have been exchanged, both sides transmit Solicitations on every VLAN on which FCoE is allowed. In this example, we assume that the Ethernet / vFC interface on the Nexus has been configured to allow all VLANs / VSANs. In the diagram below, only the details of the Solicitation frames for VLAN/VSAN 100 are shown. The Solicitations for VLAN/VSAN 200 and 300 would contain similar information.
The Solicitations are transmitted to the multicast Destination Address (DA) of ALL-FCF-MACs. The SA of these frames is the Chassis MAC of the Nexus. The 802.1Q tag will contain the VLAN that the Solicitation is being performed on.
The Available for ELP bit will need to be set to one in order for FIP to proceed to the next phase (FIP Advertisement).
The FCF bit indicates that the Frame was transmitted by an FCF.
The FC-MAP is checked by both sides to ensure that the values match. If they do not, FIP will not proceed to the next phase and the virtual link will not be instantiated. The FC-MAP value prevents unintentional FC fabric merges; it should be administratively set to a value other than the default if you have multiple FCoE fabrics in the same data center and you do not want an accidental connection between two FCFs to result in a fabric merge.
The Max FCoE Size field is set to the maximum size FCoE Frame supported by each side.
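The Solicitation checks described above can be sketched in a few lines. This is a hedged illustration only: the field names follow the text (FCF bit, Available for ELP bit, FC-MAP, Max FCoE Size), and the well-known ALL-FCF-MACs address and default FC-MAP come from the FIP standard, but the dictionary layout is illustrative and is not the on-the-wire FIP descriptor format.

```python
# Illustrative model of a FIP Discovery Solicitation between two FCFs.
ALL_FCF_MACS = "01:10:18:01:00:02"   # well-known multicast DA for Solicitations
DEFAULT_FC_MAP = 0x0EFC00            # default FC-MAP value

def make_solicitation(chassis_mac, vlan, fc_map=DEFAULT_FC_MAP,
                      available_for_elp=True, max_fcoe_size=2158):
    """Build the fields described in the text; layout is illustrative."""
    return {"da": ALL_FCF_MACS, "sa": chassis_mac, "vlan": vlan,
            "fcf": True, "available_for_elp": available_for_elp,
            "fc_map": fc_map, "max_fcoe_size": max_fcoe_size}

def accept_solicitation(rx, local_fc_map=DEFAULT_FC_MAP):
    """Proceed to the Advertisement phase only if the sender is an FCF,
    is available for ELP, and its FC-MAP matches our own."""
    return rx["fcf"] and rx["available_for_elp"] and rx["fc_map"] == local_fc_map

sol = make_solicitation("00:05:73:af:13:68", vlan=100)
print(accept_solicitation(sol))  # True

# A mismatched FC-MAP stops FIP here, preventing an accidental fabric merge.
other = make_solicitation("00:05:73:af:13:69", vlan=100, fc_map=0x0EFC01)
print(accept_solicitation(other))  # False
```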
FIP Discovery Advertisement
If the Available for ELP bit is set and the FC-MAP values match on both sides of the link, the FIP process continues with both sides transmitting a Discovery Advertisement.
The information contained within the FIP Discovery Advertisement is similar to what is contained in the Solicitation. One difference worth mentioning is that the Advertisement is padded so that the frame size equals the Max FCoE Size. This padding allows both ends to validate that the network infrastructure between the two VE_Ports is capable of supporting mini-jumbo frames.
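The padding trick can be sketched as follows. This is a hedged illustration, assuming the 2158-byte Max FCoE Size value used in the Solicitation example above; the function names are my own, not part of any FIP implementation.

```python
MAX_FCOE_SIZE = 2158  # Max FCoE Size advertised in the Solicitation (assumed)

def build_advertisement(payload: bytes, max_fcoe_size: int = MAX_FCOE_SIZE) -> bytes:
    """Pad the Advertisement so the whole frame equals Max FCoE Size."""
    if len(payload) > max_fcoe_size:
        raise ValueError("payload larger than negotiated Max FCoE Size")
    return payload + b"\x00" * (max_fcoe_size - len(payload))

def path_supports_mini_jumbo(received: bytes, max_fcoe_size: int = MAX_FCOE_SIZE) -> bool:
    """If the padded frame arrives intact, the Ethernet path between the two
    VE_Ports handled a full-size (mini-jumbo) FCoE frame."""
    return len(received) == max_fcoe_size

adv = build_advertisement(b"advertisement descriptors")
print(len(adv))                       # 2158
print(path_supports_mini_jumbo(adv))  # True
```

A switch in the path with a standard 1500-byte MTU would drop or truncate the padded frame, so the Advertisement would never validate and the virtual link would not come up, which is exactly the failure this check is designed to surface early.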
Once both sides have received and validated the information contained within the Advertisements, the virtual link can be instantiated by both sides transmitting ELP on each VLAN/VSAN that supports FCoE. The frames exchanged to complete the VE_Port instantiation are practically identical to those used to initialize an FC E_Port and will not be repeated here. Refer to the Networked Storage Concepts and Protocols TechBook for more information.
Configuring Cisco VE_Ports
VE_Ports allow two Cisco Nexus 5000 products to be connected together and merged into the same fabric using FCoE links. VE_Ports can be created either on individual links or, as shown in the following example, on a port-channel. The topology used in this example is shown below.
On Nexus 5000 A:
1. Create the appropriate VSANs
NEXUS5000A# config t
Enter configuration commands, one per line. End with CNTL/Z.
NEXUS5000A(config)# vsan database
NEXUS5000A(config-vsan-db)# vsan 200
NEXUS5000A(config-vsan-db)# vsan 300
NEXUS5000A(config-vsan-db)# vsan 400
2. Create the appropriate VLANs and associate them with the appropriate VSAN.
NEXUS5000A(config)# vlan 200
NEXUS5000A(config-vlan)# fcoe vsan 200
NEXUS5000A(config-vlan)# vlan 300
NEXUS5000A(config-vlan)# fcoe vsan 300
NEXUS5000A(config-vlan)# vlan 400
NEXUS5000A(config-vlan)# fcoe vsan 400
3. Create the port-channel that will be used to connect the two switches.
NEXUS5000A(config)# interface port-channel 777
NEXUS5000A(config-if)# switchport mode trunk
4. Create the vfc interface that will be associated with the port-channel and specify it as an E_Port (all VSANs are allowed on the vfc trunk by default).
NEXUS5000A(config)# interface vfc777
NEXUS5000A(config-if)# bind interface port-channel777
NEXUS5000A(config-if)# switchport mode E
NEXUS5000A(config-if)# no shutdown
5. Add the ethernet interfaces to the port-channel
NEXUS5000A(config-if)# int e1/1-2
NEXUS5000A(config-if-range)# switchport mode trunk
NEXUS5000A(config-if-range)# channel-group 777 mode active
6. Repeat steps 1 – 5 on Nexus 5000 B.
7. At this point the VE_Port port-channel should initialize and can be viewed using the show fcoe database command.
NEXUS5000A(config)# show fcoe database
INTERFACE      MAC ADDRESS          VSAN
vfc777         00:05:73:af:13:68    200
vfc777         00:05:73:af:13:68    300
vfc777         00:05:73:af:13:68    400
Last Sunday, while I was writing the technical content of this post, a friend of mine from Cisco sent me an asynchronous notification that J Metz had just posted a blog regarding FCoE multihop. My initial response was “Darn!” (or some variant thereof), followed by “He beat me to it!” Well, here we are a week later, and I’m finally getting around to posting my FCoE multihop post. For the sake of transparency, let me state that because I wanted to avoid contaminating my thought process, this morning was the first time I actually read through the entirety of J’s post and the comments that follow. For the most part, I found myself agreeing with J and Brad, but I understand where Brook is coming from, and I think he raises some valid points that need to be seriously considered. One point that I want to stress is this: we don’t care which protocol or topology you use to connect to our storage! We care very much that whatever protocol or topology you do use, you are able to design, implement, and maintain it with as little effort (or drama) as possible. As long as this is possible, we will strive to support whatever protocols and topologies our customers are asking for...
Thanks for reading!