Have you heard about 40 gig Fibre Channel?

By J Michel Metz, FCIA Board of Directors, Cisco Systems.

It may sound strange to think of the Fibre Channel Industry Association discussing Ethernet technologies in a Fibre Channel (FC) solutions guide. After all, when people think of FC, something more than just the protocol comes to mind - the entire ecosystem, management, and design philosophies are part and parcel of what storage administrators think of when we discuss “FC networks.”

There is a reason for this. Over the years, FC has proven itself to be the benchmark standard for storage networks - providing well-defined rules, unmatched performance and scalability, as well as rock solid reliability. In fact, it’s a testament to the foresight and planning of the T11 technical committee that the FC protocol is robust enough to be used in a variety of ways, and over a variety of media.
Did you know, for instance, that the T11 committee has defined a number of ways to transport FC frames? In addition to the native FC physical layer, you can also run FC over (to name a non-exhaustive list):
• Data Center Ethernet
• TCP/IP
• Multiprotocol Label Switching (MPLS)
• Transparent Generic Framing Procedure (GFPT)
• Asynchronous Transfer Mode (ATM)
• Synchronous Optical Networking/Synchronous Digital Hierarchy (SONET/SDH)
Because of this versatility, FC systems can have a broad application for a variety of uses that can take advantage of the benefits of each particular medium upon which the protocol resides.
The 10G inflection point
While using FC on other protocols is interesting, perhaps no technology has intrigued people like the ability to use FC over Layer 2 lossless Ethernet. In this way, FC can leverage the raw speed and capacity of Ethernet for the deployments that are looking to run multiprotocol traffic over a ubiquitous infrastructure inside their data center.
Realistically, 10G Ethernet (10GbE) was the first technology that allowed administrators to efficiently use increasing capacity for multiprotocol traffic. It was the first time that we could:
• Have enough bandwidth to accommodate storage requirements alongside traditional Ethernet traffic
• Have lossless and lossy traffic running at the same time on the same wire
• Independently manage design requirements for both non-deterministic LAN and deterministic SAN traffic at the same time on the same wire
• Provide more efficient, dynamic allocation of bandwidth for that LAN and SAN traffic without starving each other
• Reduce or even eliminate potential bandwidth waste
How did this work? 10GbE provided a number of elements to achieve this. First, 10GbE allowed us to segment traffic according to Classes of Service (CoS), within which we could independently allocate deterministic and non-deterministic traffic without interference. Second, it gave us the ability to pool the capacity and dynamically allocate bandwidth according to that CoS.
Third, consolidating traffic onto higher-throughput 10GbE media reduced the likelihood of underutilized links. How? Take a simple example. Suppose you have 8GFC links but are currently using only 4G of throughput. You have plenty of room for growth when you need it, but for the most part half of the bandwidth is wasted.
Consolidating that I/O with LAN traffic and creating policies for bandwidth usage means that the FC throughput is still guaranteed, but the additional bandwidth is also available for LAN traffic. Moreover, if there is bandwidth left over, bursty FC traffic can use the remaining capacity too.
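To make the mechanics concrete, here is a minimal Python sketch of how this kind of Enhanced Transmission Selection (ETS)-style sharing behaves on a converged link. The class names, percentages, and the allocate() helper are illustrative assumptions, not a real switch API:

```python
# Minimal sketch (not a switch implementation): each traffic class is
# promised a minimum share of the link, and classes with unmet demand
# may borrow whatever the other classes leave unused.

LINK_GBPS = 10.0  # a single converged 10GbE link

# Guaranteed minimum share per traffic class (fractions of the link).
GUARANTEES = {"san": 0.50, "lan": 0.50}

def allocate(offered):
    """Map offered load (Gb/s per class) to an allocation (Gb/s per class)."""
    # Step 1: each class gets the smaller of its demand and its guarantee.
    alloc = {c: min(offered[c], GUARANTEES[c] * LINK_GBPS) for c in offered}
    spare = LINK_GBPS - sum(alloc.values())
    # Step 2: hand spare capacity to classes that still want more.
    hungry = {c: offered[c] - alloc[c] for c in offered if offered[c] > alloc[c]}
    total_hunger = sum(hungry.values())
    for c, want in hungry.items():
        alloc[c] += min(want, spare * want / total_hunger)
    return alloc

# Typical moment: storage only needs 4 Gb/s, so LAN can use the other 6.
print(allocate({"san": 4.0, "lan": 8.0}))   # {'san': 4.0, 'lan': 6.0}

# Storage burst: SAN traffic reclaims bandwidth the LAN isn't using.
print(allocate({"san": 9.0, "lan": 1.0}))   # {'san': 9.0, 'lan': 1.0}
```

The point of the sketch is the second step: neither class can starve the other below its guarantee, yet neither guarantee strands bandwidth when the other class is idle.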
Because LAN and SAN traffic is neither constant nor static, despite what benchmark tests might have us believe, this dynamic approach to running multiple traffic types becomes even more compelling as bandwidth increases beyond 10G to 40G, and even 100G.
The 40G milestone
There is an old adage, “You can never have too much bandwidth.” If that’s true, then Data Centers are spoiled for choice.
In order to understand just how much throughput we’re talking about, we need to understand that it’s more complex than the ‘apparent’ speed alone. Throughput depends on both the interface clocking (how fast the interface transmits) and how efficient its encoding is (i.e., how much overhead there is). The bandwidth threshold is being pushed by technologies that are either available today or just around the corner.
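As a rough illustration of that clocking-versus-efficiency point, the short Python sketch below multiplies the commonly published line rates by their encoding efficiency (8b/10b for 8GFC, 64b/66b for 10GbE and 40GbE); vendor throughput tables round these figures down a little further to account for framing overhead:

```python
# Back-of-the-envelope: throughput = line rate x encoding efficiency.
links = {
    # name: (line rate in Gbaud, data bits per encoded bit)
    "8GFC":  (8.5,         8 / 10),   # 8b/10b encoding
    "10GbE": (10.3125,     64 / 66),  # 64b/66b encoding
    "40GbE": (4 * 10.3125, 64 / 66),  # four 10.3125 Gbaud lanes
}

for name, (gbaud, efficiency) in links.items():
    data_gbps = gbaud * efficiency      # usable bits per second
    mb_per_s = data_gbps * 1000 / 8     # per direction, in MB/s
    print(f"{name:>6}: {data_gbps:5.1f} Gb/s "
          f"~ {mb_per_s:5.0f} MB/s per direction, "
          f"{2 * mb_per_s:6.0f} MB/s full duplex")
```

Run as written, the 40GbE line works out to roughly 5,000 MB/s per direction, which is where the 10,000 MB/s bidirectional figure quoted in the summary below comes from.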
The ability to increase throughput in this way has some significant consequences.
What to do with all that bandwidth?
There are more ways to answer that question than there are Data Centers. Could you dedicate all that bandwidth to one protocol, whether it’s FC or something else? Absolutely. Could you segment out the bandwidth to suit your data center needs and share the bandwidth accordingly? Quite likely.
This is where the true magic of 40GbE (and higher) lies. In much the same way that SANs allowed data centers to make pools of storage more efficient than siloed disk arrays, converged networks allow storage networks to eliminate bandwidth silos as well. The same principles apply to the network as they did to the storage itself.
There are three key facets that are worth noting:
Flexibility
The resiliency of the FC protocol is exemplified by the fact that it carries over from 10G to 40G to 100G Ethernet without further modification, meaning there is a continuous, forward-moving path. That is, the protocol doesn’t change as we move to faster speeds and higher throughput. The same design principles and configuration parameters remain consistent, just as you would expect from FC.
But not only that, you have a great degree of choice in how your data centers are configured. Accidentally under-plan for your throughput needs because of an unexpected application requirement? No problem. A simple reconfiguration can tweak the minimum bandwidth requirements for storage traffic.
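As a rough sketch of what such a reconfiguration amounts to, here is a hypothetical ETS-style policy table in Python; the class names and the raise_storage_minimum() helper are made up for illustration, and the actual commands are switch- and vendor-specific:

```python
# Illustrative only: a policy of guaranteed minimum percentages per
# traffic class, plus a helper that raises the storage guarantee and
# shrinks the other classes proportionally so the total stays at 100%.

def raise_storage_minimum(policy, new_pct, storage_class="fcoe"):
    others = {c: p for c, p in policy.items() if c != storage_class}
    scale = (100 - new_pct) / sum(others.values())
    new_policy = {c: round(p * scale) for c, p in others.items()}
    new_policy[storage_class] = 100 - sum(new_policy.values())  # absorb rounding
    return new_policy

# Original plan: 40% guaranteed to storage, 60% split across LAN classes.
policy = {"fcoe": 40, "lan_bulk": 40, "lan_latency": 20}

# An unexpected application needs more storage headroom: raise it to 60%.
print(raise_storage_minimum(policy, 60))
# {'lan_bulk': 27, 'lan_latency': 13, 'fcoe': 60}
```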
Have space limitations, or a separate cable for each type of traffic you need? No problem. Run any type of traffic you need - for storage or LAN - using the same equipment and, often, the same wire. Nothing beats not having to buy extra equipment when you can run any type of traffic, anytime, anywhere in your network, over the same wire.
Growth
Data Centers are not stagnant, despite what we may see on topology diagrams or floor plan schematics. They can expand, and sometimes they can even contract. One thing they do not do, however, is remain static over time.
New servers, new ASICs, new software and hardware - all of these affect the growth patterns of the Data Center. When this happens, the network infrastructure is expected to accommodate the changes. For this reason we often see administrators “expect the unexpected” by over-provisioning the data center’s networking capacity, just in case. No one can be expected to predict the future, and yet this is what we ask of our storage and network architects every day.
Because of this, even the most carefully designed Data Center can be taken by surprise three, five, or more years down the road. Equipment that was not expected to live beyond its projected time frame is called upon to work overtime to accommodate increased capacity requirements. Meanwhile, equipment that was “absolutely necessary” remains underutilized (or not used at all) because the expected use cases never materialized.
Multiprotocol, higher-capacity networks solve both of these problems. No longer do administrators have to play “bandwidth leapfrog,” with too much capacity on one network and not enough on the other (and never the twain shall meet!). Neither do they need to regret installing a stub network that winds up becoming a permanent fixture to be accommodated in future growth because what was once temporary has become ‘mission critical.’
Budget
What happens when these needs cannot be met simply because of the bad timing of budget cycles? How often have data center teams had to hold off (or do without) because the needs of the storage network were inconveniently outside the storage budget cycle?
In a perfect world, storage administrators would be able to add capacity and equipment whenever needed, not just when budgetary timing dictates. When capacity is pooled on a ubiquitous infrastructure, however, there no longer has to be a choice about whether LAN/Ethernet capacity should trump storage capacity. Not every organization has this limitation, of course, but eliminating competition for valuable resources (not “either/or” but rather “and”) not only simplifies the procurement process but also maximizes the money spent on total capacity (not to mention the warm fuzzies created between SAN and LAN teams!).
Summary
FC continues to be the gold standard for storage networking, regardless of the underlying transport medium. Now, more than ever before, storage administrators have the most flexibility to deploy reliable, deterministic storage networks with unprecedented choice and agility. With up to 10,000 MB/s bidirectional bandwidth to play with, storage networks can use 40G FCoE to take all of their FC applications beyond what was conceivable only a few years ago.
To learn more about the Fibre Channel Industry Association, please visit www.fibrechannel.org
 
