If you work with Fibre Channel on a regular basis and have come to depend on its excellent performance attributes, you might be surprised to learn that many of the cases handled by our SAN team in Customer Support are related to performance problems. Typically, as we dig into these cases, we discover that the majority of them are not actually caused by the FC switches themselves, but rather by end devices that are not performing as expected and are causing fabric-wide congestion. These misbehaving end devices are referred to as “slow drains,” and they represent a well-known class of problem that is inherent to all lossless transports, including FC, DCB Ethernet (e.g., FCoE, RoCE), and even wastewater drain systems (e.g., “Plumbing”).
For example, let’s assume that we have a pipe that is capable of transporting 16Gps (Gallons per second) but an outlet that only allows 8Gps. As long as the amount of water flowing through the pipe is less than or equal to 8Gps, the water will flow out of the pipe at approximately the same rate as it flows into it. As you would probably expect, since there is 16Gps of capacity in the pipe, the rate at which water flows into the pipe could exceed 8Gps for short periods of time and the pipe could act as a kind of buffer. You would also probably expect that if you were to consistently exceed an average of 8Gps, you would quickly notice the water level in the pipe starting to rise.
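The arithmetic behind the analogy is easy to sketch. The following is a minimal, illustrative model (the function name and numbers are mine, not from any FC specification): each second, water above the 8Gps outlet rate accumulates in the pipe, and a short burst drains back out while a sustained overload grows without bound.

```python
# A minimal sketch of the pipe analogy: variable inflow vs. a fixed 8 Gps outlet.
# All names and numbers here are illustrative, not from any FC specification.

OUTLET_RATE = 8  # gallons per second the outlet can drain

def backlog_over_time(inflow_per_second):
    """Track how much water accumulates in the pipe after each second."""
    backlog = 0
    history = []
    for inflow in inflow_per_second:
        # Anything above the outlet rate stays in the pipe; the backlog
        # can drain down again but never goes below empty.
        backlog = max(0, backlog + inflow - OUTLET_RATE)
        history.append(backlog)
    return history

# A short burst above 8 Gps is absorbed by the pipe, then drains back out:
print(backlog_over_time([10, 10, 6, 6]))    # [2, 4, 2, 0]
# ...but a sustained 10 Gps inflow makes the backlog grow without bound:
print(backlog_over_time([10, 10, 10, 10]))  # [2, 4, 6, 8]
```

The same shape of curve is what you see when a lossless network buffers a rate mismatch: short bursts are fine, sustained overload is not.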
The same sort of problem can happen within a FC SAN for a number of reasons, including:
- A mismatch between the maximum link speeds supported by the Initiator and the Target
- The use of Fan in / Fan out
- Using an oversubscribed ISL
- An end device not accepting data at the same rate at which it’s arriving (e.g., acting as a slow drain)
It’s important to note that in all of the above scenarios, Buffer-to-Buffer flow control is used to ensure that there is sufficient buffering available on the receiver to store all frames put onto the wire by the transmitter. As a result, the transmitter experiences congestion whenever it is unable to transmit a frame due to a lack of buffers at the other end of the link. For example, as shown below, congestion has spread from the Target back towards the Initiator due to a mismatch in data rates.
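The credit mechanism itself can be sketched in a few lines. This is a deliberately simplified toy model (real FC ports negotiate BB_Credit at login and return an R_RDY primitive as each buffer is freed); it just shows why the transmitter stalls rather than dropping frames when the receiver stops draining:

```python
# A toy model of Buffer-to-Buffer credit flow control (illustrative only;
# real FC ports negotiate BB_Credit at login and signal freed buffers
# with R_RDY primitives on the wire).
from collections import deque

class Link:
    def __init__(self, bb_credit):
        self.credits = bb_credit   # transmit credits remaining
        self.rx_queue = deque()    # receiver's buffer pool

    def transmit(self, frame):
        """Send a frame only if a receive buffer is known to be free."""
        if self.credits == 0:
            return False           # congested: must hold the frame and wait
        self.credits -= 1
        self.rx_queue.append(frame)
        return True

    def receiver_drains_one(self):
        """Receiver frees a buffer and returns a credit to the sender."""
        if self.rx_queue:
            self.rx_queue.popleft()
            self.credits += 1

link = Link(bb_credit=2)
print(link.transmit("f1"), link.transmit("f2"))  # True True
print(link.transmit("f3"))                       # False: out of credits
link.receiver_drains_one()                       # a buffer frees, credit returns
print(link.transmit("f3"))                       # True
```

Note that nothing is ever lost; the cost of losslessness is that a receiver which stops returning credits stops the transmitter cold, and the backlog moves one hop upstream.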
The impact that the problem being shown above will have on the rest of the fabric might not be obvious at first glance, so let’s see what happens with a slightly more complicated topology. I’ll start by adding another Initiator and Target pair (e.g., Initiator 2 and Target 2). Note that the receive queue on Switch A contains three frames: one for Target 1 and two for Target 2. Since there is ample space in the receive queue on Switch A, the Initiators attached to Switch B are probably not experiencing congestion very often.
However, if I slow down the rate at which Target 1 is pulling frames from its queue, you’ll notice that something very interesting happens, as shown below.
Note that Target 1 is now receiving frames faster than it can process them, and as a result its local queue fills. This in turn causes the receive queue on Switch A to fill with frames destined for Target 1, since the frames destined for Target 2 are transmitted soon after they arrive. In extreme cases, such as the one shown above, this congestion can spread all the way back to the Initiators and prevent them from transmitting frames. When things get to this point, we typically see two basic scenarios:
- Target 1 is truly stuck and not releasing ANY credits. This condition can persist for up to 2 seconds, at which point the switch will initiate the Link Reset protocol.
- Target 1 is not completely stuck and, as illustrated in the first diagram, is instead only capable of processing frames at a rate slower than the rate at which they are arriving. In this case, the impact to the rest of the environment can be both significant and long-lived, at least until the condition is detected and resolved.
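The second scenario is essentially head-of-line blocking, and it can be sketched with a shared FIFO queue. This is a hypothetical model of my own (real switch queueing is considerably more sophisticated), but it shows how a single slow receiver can strand frames bound for a perfectly healthy one:

```python
# A sketch of head-of-line blocking: one slow receiver (Target 1) stalls a
# shared FIFO receive queue that also holds frames for a healthy Target 2.
# Hypothetical model; real switch queueing is more sophisticated.
from collections import deque

def drain(shared_queue, target1_ready):
    """Forward frames from the switch's shared receive queue in FIFO order.
    target1_ready[t] says whether Target 1 can accept a frame at tick t."""
    delivered = []
    for ready in target1_ready:
        if not shared_queue:
            break
        head = shared_queue[0]
        if head == "T2":          # Target 2 always accepts its frames
            delivered.append(shared_queue.popleft())
        elif ready:               # Target 1 accepts only when it has space
            delivered.append(shared_queue.popleft())
        # else: the head-of-line frame for slow Target 1 blocks everything
        # queued behind it, including frames for the healthy Target 2
    return delivered, list(shared_queue)

q = deque(["T1", "T2", "T2"])
# Target 1 stalls for three ticks: nothing moves, not even Target 2's frames.
print(drain(q, [False, False, False]))  # ([], ['T1', 'T2', 'T2'])
# Once Target 1 accepts its frame, the whole queue drains.
print(drain(q, [True, True, True]))     # (['T1', 'T2', 'T2'], [])
```

This is why a single slow drain hurts devices that never exchange a frame with it: they merely share queues and links along the way.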
While the above scenarios may seem a bit extreme, they can and do happen in our customers’ environments. As you can imagine, when these events occur they can be very painful, because they have such a wide impact and can be somewhat hard to troubleshoot. In addition, the reasons that particular end devices (both Initiators and Targets) act as slow drains are not very well understood at this point in time. As a result, we decided to come up with a process to help identify, troubleshoot, resolve, and prevent slow drain devices; an overview of this process can be found in EMC KB 211918 and is shown below for reference.
Our hope is that our customers will use this process to detect potential slow drain devices before they become a significant problem and impact their environments. The KB also contains a ton of background information and links to deep dives on the different concepts involved, in case you need it.
Special thanks to Dennis Makishima and Howard Johnson from Brocade for all of the work they have done to get to the bottom of the slow drain issues being observed and to enable us to start addressing them.
Thanks for reading!