
02/13/2012

Comments


Etherealmind

Two things about iSCSI that might be worth considering.

1. Many more people understand TCP/IP, or think they do, than Fibre Channel. Fibre Channel is a complex technology to learn and be confident about when compared to a technology you use every day.

2. The storage industry originally embraced iSCSI, only to turn to FC, and has been acting like snobs about iSCSI ever since. It's hard to have a rational discussion about FC over-provisioning, unnecessary costs, and pointless over-specification.

Pointing out that iSCSI has many steps sort of fails to account for many of the complexities that went into establishing the FC network in the first place, whereas an Ethernet network is much simpler to build and use, and costs less than an FC network.

But, nice article. It does highlight some of the advantages of FC, especially for certain applications where simplicity is important and fewer errors on the configuration side matter.

Erik Smith

Hi Greg, thanks for the comment! I agree with your point about the perceived complexity of FC / simplicity of Ethernet. This is one of the reasons that I've been pushing TDZ.

In regards to your second point, I'd place myself firmly in the "recovering FC snob" category of users... I think my prejudice against iSCSI was due mainly to:
1. customer adoption rates,
2. the bandwidth constraints of 1GbE; and
3. the lack of iSCSI management tools.
However, since:
1. customers are now voting with their pocketbooks in favor of iSCSI,
2. 10GbE is available (if required by the application); and
3. the environments that are using iSCSI don't need the same class of management tools as customers managing larger FC SANs.

clearly I needed to revisit my bias or fall out of touch with reality…

Besides, unless someone defines how to use DCB in an SDN environment, iSCSI is clearly going to be the only choice when there’s a need to perform block I/O in these environments.

Regards, Erik

Anton Kolomyeytsev

"if you require a separate high performance (up to 16GB/s) and fault tolerate SAN, then choose FC or FCoE"

Sorry, but this is simply not true... You can easily build an iSCSI SAN using MPIO and 10GbE connections (as many as you wish). The resulting solution would run circles around any FC SAN implementation at a fraction of the cost. You may wish to study NetApp and VMware FC vs. iSCSI results like the one provided below:

https://communities.netapp.com/community/netapp-blogs/virtualstorageguy/blog/2010/01/06/new-vmware-and-netapp-protocol-performance-report
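
To give a rough idea, the host side of such a setup on Linux with open-iscsi and dm-multipath might look something like the sketch below (the portal addresses and target IQN are placeholders, not from any real configuration):

    # discover the target through each 10GbE portal (addresses are examples only)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.10
    iscsiadm -m discovery -t sendtargets -p 192.168.20.10
    # log in to the same target over both portals to create two independent paths
    iscsiadm -m node -T iqn.2012-02.com.example:target0 -p 192.168.10.10 --login
    iscsiadm -m node -T iqn.2012-02.com.example:target0 -p 192.168.20.10 --login
    # let dm-multipath aggregate the paths and verify both are active
    multipath -ll

Add more NICs and portals and the paths simply stack up.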

Good luck!

Anton

StarWind Software

Erik Smith

Anton, thanks for taking the time to comment, but I disagree. Your assertion that "the resulting solution would run circles around any FC SAN implementation at a fraction of the cost" has a few problems:

1. I specifically mentioned 16Gb/s FC; can you explain how 10GbE is going to run circles around 16Gb/s FC?
2. FC has no overhead from TCP. While this adds a non-trivial requirement to the network (i.e., it must be lossless), the end result for the host is more CPU cycles available for doing real work. I agree that modern CPUs have sufficient capacity to do the extra TCP-related work, so achieving line-rate transfer with TCP-based protocols is not a problem. But they are not the same, and FC comes out looking slightly better here. That said, I believe most "real world" performance measurements comparing FC/FCoE and TCP-based protocols will show that there's practically very little difference between them today.
3. In regards to cost, I specifically said "if you require a separate, high-performance (up to 16Gb/s) and fault-tolerant SAN". The two key pieces of information that you need to reconsider are "separate" and "fault tolerant". For those environments that require these attributes, there would be no cost savings, since the networking components are, by definition, separate. In addition, fault tolerant indicates that you'll have at least two of these separate networks, and they'll most likely be constructed from "Director Class" products. With these requirements in mind, how are you proposing to provide an iSCSI solution at a (significant) fraction of the cost?

Regards, Erik

Stuart

Interesting article. The FC setup seems a little oversimplified, but it's still my clear favourite. I have always found iSCSI quite simplistic: being IP based, no understanding of fabrics is required. Interesting comments around 10Gb networking! Was there any testing around VMware and NFS?

Erik Smith

Hi Stuart, thanks! In regards to FC, did I miss any configuration steps?

Regards, Erik

The_socialist


This is REALLY good.

I mean I'm sure each group would pick on little things.

If I had to pick on something, I would pick on the lack of NFS (which others already did =)

but overall the two ideas:

1.) Look at each protocol from a "how many physical things, from start to finish, does it take to get it working" point of view.

2.) Network Centric vs Node Centric

are REALLY good items to cover and very interesting topics.

I also think item #2 begs for another look from a different angle.

How much time is really spent on the CNA/HCA/HBA -> TOR/EOR and/or Director versus how much time is spent managing the storage targets themselves?

I think you are on to something here. I really think more analysis is needed on "care & feeding" and then a look at how much impact that really has on buying/usage decisions.

Again, nice work guys.

Erik Smith

Thanks Jon, I appreciate the feedback.

The post was intended to compare the provisioning steps for block I/O protocols and as a result NAS wasn't considered.

In addition, I've only started REALLY learning about NAS in the past few weeks. Putting aside the multipathing, performance and security considerations for a moment, I'll say that I've spent some time in the lab working with NAS over the past week or so and I really like it. I was able to configure an NFS server and 4 clients in less than an hour and I started the process without having a clue.
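
To give a sense of how little was involved, the core of it was roughly the following (the export path, options, and server name are just examples, not my actual lab setup, and this assumes the NFS server service is already running):

    # on the NFS server: export a directory (path and options are examples)
    echo '/exports/vmdata *(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -a
    # on each of the four clients: mount the export (server name is an example)
    mkdir -p /mnt/vmdata
    mount -t nfs nfs-server:/exports/vmdata /mnt/vmdata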

I'm not sure I'll revisit the topic, because I still don't feel a file-to-block provisioning comparison makes sense in the general (non-VMware) case.

The CNA/HBA and Network provisioning steps were all that were required in my topology, but your point about more complex topologies requiring additional work is a good one.

Erik

The_socialist

I figured as much on the NFS; I was just looking for something to pick on.

I wasn't clear; I didn't mean complexity due to topology.

I meant that when I was a SAN Admin I found that I spent VERY little time dealing with HBAs/switches/directors. The majority of my time was spent managing the arrays. Symms and Lightnings took most of my time: managing, upgrading, building LUNs, optimization, replication, backups, etc.

And so, for me, I found that adding NFS or iSCSI access to my arrays didn't really change my job much. It just meant more customers were using my storage.

And in some ways the Ethernet-based access was easier. For the FC stuff I was on task all the way down to the HBA. For NFS, I got to send the pissed-off user to the Network team =)

Chris Greer

Good article. I'm not sure it is an apples-to-apples configuration. For example, you have VSANs in FCoE, but your FC example assumes no VSANs. If you are using VSANs, you have some additional steps in FC of configuring VSAN membership for ports.
Also, I would argue you probably should be creating 2 separate zones in 2 different zonesets (assuming you have separate fabrics).
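
To illustrate, on a Cisco MDS the extra per-port VSAN membership step (plus the zoning itself) looks roughly like this in config mode; the VSAN number, interface, zone names, and pWWNs below are made-up examples:

    ! place the port into a VSAN before zoning it (all values are examples)
    vsan database
      vsan 10
      vsan 10 interface fc1/1
    ! the zone and zoneset are then created per VSAN
    zone name HOST1_ARRAY1 vsan 10
      member pwwn 21:00:00:e0:8b:11:22:33
      member pwwn 50:06:01:60:44:55:66:77
    zoneset name FABRIC_A vsan 10
      member HOST1_ARRAY1
    zoneset activate name FABRIC_A vsan 10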

Also, any modern Linux variant and Windows 2008 have the iSCSI initiator built-in, so that cuts out something like 3 steps in Windows and Linux. Even so, you mention that the install guide for the HBA/CNA is a 50-page document, so maybe that should be in bold.

Having used all three, I personally think iSCSI is a lot less complex operationally. You eliminate the zoning aspect and thus a whole separate tool to learn and use.
I've also experienced more network issues with iSCSI as compared to FC, but they are generally much easier to troubleshoot with existing tools (like ping, traceroute, etc) versus having to break out one or more FC analyzers and pray you catch the issues. So FC breaks a lot less but when it does it's a lot worse.
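
For example, a first pass with nothing more than the standard tools (the portal address below is just a placeholder) usually tells you whether the portal is reachable and whether the iSCSI port is open:

    ping 192.168.10.10          # basic reachability to the iSCSI portal
    traceroute 192.168.10.10    # where along the path it dies, if it does
    nc -z 192.168.10.10 3260    # is the iSCSI TCP port actually listening?
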
I believe the move to iSCSI is probably driven more by cost than anything else at this point.

Erwin van Londen

Hi Erik,

I applaud your article, and it underlines what I've always preached.

FC comes out of the IPI enterprise space, leveraging things from FDDI. It was built from the ground up to be able to transport channel protocols like SCSI, HIPPI, ESCON, etc. The fact that it could also transport TCP/IP was a bonus, but not really spectacular; the IP FC-4 mapping was written within a week. Besides that, the different classes of service provide a very adaptable way of transporting upper layer protocols like SCSI, SBB, IP, etc. (I think FC is the only protocol which provides a multicast service with guaranteed delivery.)

There is a huge difference between network centric and channel centric transport.

In my presentations and blogs I have also pointed out that network people have a more horizontal view of the world, whereas storage people have a vertical view. What I mean by this is that network people in general don't really care about I/O profiles, block sizes, packet drops, in-order delivery, etc., but these are extremely important factors to storage people. That's the reason FC is the only protocol which is able to reliably steer these massive amounts of data back and forth. It is simply impossible to move the same amount of data with the same accuracy and reliability over iSCSI as you can with FC.

iSCSI was invented for server folks who needed a simple means of externally attached storage, with little or no investment, to overcome the limitations of DAS.

As for FCoE, I think my standpoint is pretty clear. I don't really see the benefits over native FC, besides the fact that you need somewhat fewer adapters and cables, BUT from a support and troubleshooting perspective, bolting one protocol on top of another is asking for more trouble and a significantly longer recovery time, especially when two "sorts" of people are involved (networking and storage).

I've seen many examples of this in my daily job when FCIP is involved and IP networks are used for remote replication. I predict similar issues will happen with FCoE.

Again, Erik, I applaud this post, and it proves that from a daily operations and reliability perspective, FC (even more so with TDZ) is a preferred way of managing storage.

Also, let me make clear that I'm not saying one protocol or technology is better than the other, but they do serve different purposes. Let's keep it that way.

Regards,
Erwin van Londen


Erik Smith

Hi Chris, thanks for taking the time to provide the detailed feedback.

I tried to keep the configurations as simple as possible and only included steps that are required. Since VSANs are not even available on FC switch implementations other than Cisco's, I didn't feel they fell into the required category and left those steps out. The same goes for the zoning configuration I used.

In regards to iSCSI with Linux and Windows, these steps were captured a while ago and it's possible that additional support for the adapters I used has been included since then. However, with Linux in particular, I installed SLES 11 SP1 a few weeks ago and noted that I still had to manually select the iSCSI components in order for them to be installed.
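
(For anyone who hits the same thing, adding the software initiator after the fact is just a package install and a service start; on SLES it's something along these lines:)

    # install the software initiator if it wasn't selected during the OS install
    zypper install open-iscsi
    # start the daemon now and have it come up on boot
    /etc/init.d/open-iscsi start
    chkconfig open-iscsi on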

In regards to ease of use, iSCSI definitely has the single management interface advantage; this is a big part of why I've been pushing TDZ on FC/FCoE.

Finally, in regards to troubleshooting IP versus FC/FCoE, I have exactly the opposite opinion! I suspect this has much more to do with background/experience/knowledge rather than anything inherent in the protocol, so I'm willing to agree to disagree on that point.

Regards, Erik

Erik Smith

Hi Erwin, I agree with you on the protocol evolution aspects of your comments. Actually, I'll have to take your word on some of those details because they were before my time.

I think it's fair to say that people who work with block I/O tend to be more concerned with "...IO profiles, block sizes, packet drops, in-order delivery etc..." but there are big pockets of interest in this area in the traditional networking space. One such area is WAN optimization but I'm sure someone like Greg Ferro (see comment above) would be able to articulate it much better than I.

Finally, in regards to your concerns about troubleshooting in an FCoE environment: I'll agree that it can be more difficult at first, but once you get a handle on the basics, it's like anything else.

Skywalker

Need to add "install & configure multipathing software" to the FC host deployment list
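
On a typical Linux host that usually boils down to something like the following with native device-mapper multipathing (package and service names vary by distro, so treat this as a sketch):

    # install and enable native multipathing (package name differs by distro)
    zypper install multipath-tools     # on RHEL: yum install device-mapper-multipath
    /etc/init.d/multipathd start
    chkconfig multipathd on
    # verify that every LUN shows up with two (or more) FC paths
    multipath -ll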

Erik Smith

Hi Luke?
In any case, with regards to multipathing... Although it's not very realistic, all three of the protocols can work without multipathing, so I didn't include those steps for any of the protocols.


Disclaimer

  • This is not an official EMC blog.
    The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC nor does it constitute any official communication of EMC.