
10/16/2012

Comments


Simon Gordon

Do I get in trouble for saying Erik is, as always, making some good points? Anyway, there is a lot in here and I am certain I will make more than one comment/response. Here I want to comment on a few high-level topics.

1) I of course agree with, and indeed would further extend, what Erik says in terms of choice. I am a firm believer that, whilst there will be a long tail of physical FC, over time Ethernet will dominate in the DC and that users have choices: FCoE (BB5 and BB6), iSCSI, NFS, pNFS and SMB. Indeed, virtualization and orchestration make it easier than ever for the DC to change protocols or use multiple protocols for different purposes. We see this mobility and multiplicity as increasingly common in our deployments.

2) I also completely agree that, as with most things in this space, implementation will take time: starting with low-end devices and niche use cases and moving up the line over time, and starting with switches and servers and moving to storage over time. This is good, and of course allows people to gain confidence and wrinkles to be ironed out.

3) Convergence has many unintended consequences. One is that we have to think about things differently; we may need to solve problems in different ways as a result of convergence. This does not mean the old way was wrong, nor does it mean the new way is complex or that we should not change. Another is that we may need to learn or relearn things we know, or think we know, about our own domain, let alone learn more about other domains. I remember that with FCIP, iFCP, iSCSI and WAN acceleration we found that many Ethernet and IP experts did not understand things they needed to understand about their own protocols.

Whether it is FCoE with BB5, FCoE with FDFs in BB6, FCoE with VN2VN in BB6, iSCSI, NFS, pNFS, or SMB, network convergence is a journey on which we need to learn. Also, there may be problems both in implementations of existing protocols and in new protocols that will need to be fixed over time.

However, Ethernet works; if it did not, we would see many cases of corruption in the datacenter. Ethernet is used for LAN backup, iSCSI, NAS, clusters, and connectivity between the layers and components in a complex multi-server database deployment. As such, I struggle to believe that some of the problems are as bad as they seem, and I believe that even if they exist, the use of good network products and robust best practices will ensure that life in the Ethernet world is as safe and as scalable as in the FC world. I'm sure Erik would agree that FC is not perfect and works in part through good best practice.

I recently had to correct myself: I sometimes say that if you have infinite bandwidth and zero latency in the network, you don't need CoS/QoS. Over the last few months we actually found this was not true, and that you still need a very good network to avoid the new problems we are finding in modern datacentres. Erik's concerns are real, but some may be misplaced, others may be readily solvable with the right deployment model, and others may, as he notes, need protocol or product enhancements.

Erik

Hey Simon, no complaints from me! Thanks for the mention...

Just curious, which concerns are real and which may be misplaced?

Erwin van Londen

Hi Erik, Simon,

All good points. Looking at the intention of VN2VN, the development seems to be mainly focused on cost control and ease of deployment. Although I'm all for this noble goal, I do think we're being entangled in the ever-encompassing triangle of cost control vs. availability vs. performance. You can't have all three, and leaning towards cost control will always come at the expense of another.
Especially when looking at very simple architectures like the ones you've depicted above, you must agree that "playing" with customer data like this needs a very significant boost in RAS development to prevent such issues from ever occurring.
I still see some significant issues with FCoE in general, and not only from a technical standpoint. Then again, Rome wasn't built in a day, so we're likely to be in business for a long time. :-)

Regards,
Erwin

Erwin van Londen

Simon,

One more comment. You mention that Ethernet is ubiquitous in the datacentre and that it "just works"; however, you also know that in order to bring reliability to such a lossy protocol it needs a stability layer, which has been bolted on in the form of TCP/IP. It's not the Ethernet side that provides all the examples you mentioned (iSCSI, NAS, NFS clusters, etc.); it is TCP/IP that has made all this possible. Around 95% of the world's wires do not run Ethernet (Frame Relay and other "telco" protocols are very much alive around the world, connecting continents, countries and cities; no Ethernet there), but they all do run TCP/IP.

If the development of FC had included two additional efforts, whereby multicast and broadcast were enhanced to allow for greater scalability, and all FC vendors had brought the cost of FC down to the level of Ethernet, then FC would have had every option to run a total consolidation of all protocols in the datacentre. As you know, the FC-4 mapping is the most flexible and easiest way to adopt an upper-layer protocol, as it has already done for SCSI, IP, HIPPI, IPI, SBCCS, ATM, etc.

Anyway, just my $0.02

Kind regards,
Erwin

Manoj


Hi Erik,

Very good post and discussion.

Based on the discussion and the last post from Erwin, I started to think that the behaviors in the FCoE case and the FC case are not fundamentally different.
1. Both forward bad packets when in cut-through mode.
2. Both handle the error condition either at a store-and-forward switch on the path or at the end node (receiver): the bad packet will be discarded (assuming the CRC catches the error).

Do you agree? Or am I missing something?

(There is a difference in the handling of unresolved broadcasts in the case of Ethernet, but I am assuming the same behavior as in #1 and #2 will take place on all the unresolved broadcast paths.)


- Manoj

Erik

Hi Manoj, I agree with the similarities you explicitly point out, but there are a couple of important differences between the FC and FCoE cases.

With FC there is no concept of a unicast flood. As a result, corrupted FC frames will not be forwarded to every single N_Port that exists on some default VSAN. Also with FC, there are zoning mechanisms in place that could prevent forwarding data to unintended recipients.

With FCoE, in the unlikely event that the wrong bit gets flipped, you could end up with SCSI data being unicast-flooded to every Ethernet end station sitting on the default VLAN. The problem is that this data could then be visible to anyone via something like tcpdump or Wireshark.
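To make that concrete, here is a rough sketch of the Ethernet forwarding decision being discussed. It is a deliberately simplified toy model in Python, not any vendor's actual switching code, and the class, function and variable names are invented purely for illustration. It just shows the two behaviors in question: store-and-forward can drop a frame with a bad FCS, cut-through has already committed to forwarding it, and an unknown (for example, corrupted) destination MAC results in an unknown-unicast flood to the whole VLAN.

    import zlib

    FLOOD = "flood to every port in the VLAN (except the ingress port)"

    class ToyEthernetSwitch:
        """Toy model of a single-VLAN Ethernet forwarding decision."""

        def __init__(self, cut_through=False):
            self.cut_through = cut_through
            self.mac_table = {}  # learned destination MAC -> egress port

        def fcs_ok(self, frame_bytes, received_fcs):
            # Stand-in for the FCS check; real 802.3 FCS handling differs,
            # but a plain CRC-32 comparison is enough to illustrate the point.
            return (zlib.crc32(frame_bytes) & 0xFFFFFFFF) == received_fcs

        def forward(self, frame_bytes, received_fcs, dst_mac):
            # Store-and-forward: the whole frame is buffered, so a bad FCS
            # can be caught here and the frame dropped.
            # Cut-through: the egress decision is made from the header alone,
            # before the FCS arrives, so a corrupted frame goes out anyway.
            if not self.cut_through and not self.fcs_ok(frame_bytes, received_fcs):
                return "drop"

            egress = self.mac_table.get(dst_mac)
            if egress is None:
                # Unknown unicast: if the corruption hit the destination MAC,
                # the lookup misses and the frame (FCoE/SCSI payload and all)
                # is flooded to every end station on that VLAN.
                return FLOOD
            return egress

In the FC case, the analogous lookup failure simply results in the frame being discarded rather than flooded, and zoning further restricts who could ever receive it, which is the difference I'm getting at above.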

Victor Lama

Erik:

Outstanding post, as usual. Informative, lucid and rational. No technical bigotry or emotional gobbledygook. Just the facts. :-)

Quick question (maybe not so quick, sorry):

In an architecture that includes appliances that only support BB5, is it possible to connect an FCoE initiator (server CNA) and an FCoE target (storage array) to an FCF in NPV/Access Gateway mode that is NOT connected to an FC switch for FLOGI and PLOGI services, and actually have this work? In other words, picture a Dell M8428-k FCoE blade switch (FCF) in Access Gateway mode that is NOT connected to an FC switch. Or perhaps a UCS Fabric Interconnect with an FCoE target plugged directly into it while in NPV mode and NOT connected to an FC switch.

My thought is that, with an FCF in NPV/Access Gateway mode that is also NOT connected to an FC appliance that can provide FIP FLOGI and PLOGI services, I am not sure how VN_Port-to-VN_Port communication can take place. The FIP FLOGI and PLOGI semantics do NOT go away. In other words, upon initialization, a VN_Port must discover the FCF (maybe the FCoE VLAN, too), log into the fabric, receive an FC-ID and FPMA, and then register with the name server and perform peer discovery as part of the PLOGI process. Without either an FCF in full fabric mode or an FCF in NPV/Access Gateway mode that IS connected to an FC switch, how the devil can you get this to work?!

Seems logical that you would need to have the FCF provide full fabric services OR have it connected to an FC switch. HOWEVER, I have been told by some UCS experts, for example, that one CAN connect an FCoE array directly to a UCS Fabric Interconnect in End-host/NPV mode and actually be able to provision storage. Again, the FI is supposedly NOT connected to an upstream FC switch.

I have also been told that the Dell (Brocade, really) M8428-k in AG mode, for example, would also support VN_Port to VN_Port communication, even if it, too, is not connected to an FC switch!

What the devil is going on?? :-)

Victor Lama

Erik

Hi Victor, comments inline, marked with "ES -":

Erik: Outstanding post, as usual. Informative, lucid and rational. No technical bigotry or emotional gobbledygook. Just the facts. :-)

ES - Thanks! This is exactly the type of information that I am trying to provide…

Quick question (maybe not so quick, sorry): In an architecture that includes appliances that only support BB5, is it possible to connect an FCoE initiator (server CNA) and an FCoE target (storage array) to an FCF in NPV/Access Gateway mode that is NOT connected to an FC switch for FLOGI and PLOGI services, and actually have this work? In other words, picture a Dell M8428-k FCoE blade switch (FCF) in Access Gateway mode that is NOT connected to an FC switch. Or perhaps a UCS Fabric Interconnect with an FCoE target plugged directly into it while in NPV mode and NOT connected to an FC switch. My thought is that, with an FCF in NPV/Access Gateway mode that is also NOT connected to an FC appliance that can provide FIP FLOGI and PLOGI services,

ES - The answer is no. With FC-BB-5 ENodes, an FCF must be present and it either needs to handle the FIP FLOGI itself (such as when the FCF is running in FC-SW mode) or it needs to utilize the services of another device to service the FIP FLOGIs (such as when the FCF is running in NPV mode).

I am not sure how VN_Port-to-VN_Port communication can take place. The FIP FLOGI and PLOGI semantics do NOT go away. In other words, upon initialization, a VN_Port must discover the FCF (maybe the FCoE VLAN, too), log into the fabric, receive an FC-ID and FPMA, and then register with the name server and perform peer discovery as part of the PLOGI process. Without either an FCF in full fabric mode or an FCF in NPV/Access Gateway mode that IS connected to an FC switch, how the devil can you get this to work?!

ES - The short answer is this won’t work until VN2VN is supported by both the host and the storage.

Seems logical that you would need to have the FCF provide full fabric services OR have it connected to an FC switch. HOWEVER, I have been told by some UCS experts, for example, that one CAN connect an FCoE array directly to a UCS Fabric Interconnect in End-host/NPV mode and actually be able to provision storage. Again, the FI is supposedly NOT connected to an upstream FC switch. I have also been told that the Dell (Brocade, really) M8428-k in AG mode, for example, would also support VN_Port to VN_Port communication, even if it, too, is not connected to an FC switch! What the devil is going on?? :-) Victor Lama

ES - In regards to the M8428-k, if it’s running in AG mode, then you are absolutely correct. You need to have a core switch to provide the FC services. In regards to the UCS, there are two different End-host modes, one for Ethernet and one for FC. I’m pretty sure it’s possible to run Ethernet in end-host mode while running FC in FC-SW mode. Perhaps this could explain the confusion?
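To sum up the FC-BB-5 dependency in one place, here is a very rough sketch in Python. The phase names and the bb5_login() function are invented purely for illustration; this is not part of any standard or product, it only captures the decision logic described above.

    from enum import Enum, auto

    class LoginPhase(Enum):
        VLAN_DISCOVERY = auto()   # optionally discover the FCoE VLAN
        FCF_DISCOVERY = auto()    # FIP solicitation / FCF advertisements
        FIP_FLOGI = auto()        # fabric login; returns the FC-ID and FPMA
        PLOGI = auto()            # name server registration and peer login

    def bb5_login(fcf_present, fcf_mode, uplink_to_fc_switch):
        """Toy model of how far an FC-BB-5 ENode can get.

        fcf_mode is "fc-sw" (FCF provides full fabric services) or "npv"
        (FCF proxies logins to an upstream switch).  Returns the phase at
        which the login process stalls, or PLOGI if it can complete.
        """
        if not fcf_present:
            # Nothing answers the FIP solicitation, so the ENode never
            # gets an FC-ID or FPMA and never reaches the name server.
            return LoginPhase.FCF_DISCOVERY

        if fcf_mode == "npv" and not uplink_to_fc_switch:
            # An NPV / Access Gateway device only forwards the FIP FLOGI;
            # with no core switch behind it, nothing can answer the login.
            return LoginPhase.FIP_FLOGI

        # Full-fabric FCF, or an NPV device with a core switch behind it:
        # the FIP FLOGI succeeds and the VN_Port can go on to PLOGI.
        return LoginPhase.PLOGI

With FC-BB-6 VN2VN, the FCF dependency goes away because the ENodes discover each other directly, which is why my short answer above comes back to VN2VN support on both the host and the storage.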

