
Understanding the Performance and Management Implications of FICON/FCP Protocol Intermix Mode (PIM)
CMG Canada, 14 April 2009
Dr. Steve Guendert, Brocade Communications
[email protected]

Abstract

FICON/FCP protocol intermix mode (PIM) in a common storage network has been supported by IBM since early 2003, yet has not seen widespread adoption among end users, for a variety of reasons. Recent developments such as the new IBM System z10, Node Port Identifier Virtualization (NPIV), virtual fabrics, and advances in storage networking management make PIM a more compelling technological strategy for the end user, enabling better utilization of capacity and operational cost savings.

Introduction: Agenda
- PIM basic concepts
- Why intermix? (and why not?)
- Integrating System z and open systems servers
- Integrating System z using z/OS and zLinux
- FCP channels on the mainframe
- NPIV
- Virtual fabrics
- Best practice recommendations
- Conclusion

Key References
- S. Guendert. "Understanding the Performance and Management Implications of FICON/FCP Protocol Intermix Mode (PIM)." Proceedings of the 2008 CMG, December 2008.
- I. Adlung, G. Banzhaf, et al. "FCP for the IBM eServer zSeries Systems: Access to Distributed Storage." IBM Journal of Research and Development 46, No. 4/5, 487-502 (2002).
- American National Standards Institute. "Information Technology - Fibre Channel Framing and Signaling (FC-FS)." ANSI INCITS 373-2003.
- G. Banzhaf, R. Friedrich, et al. "Host Based Access Control for zSeries FCP Channels." z/Journal 3, No. 4, 99-103 (2005).
- S. Guendert. "Next Generation Directors, DASD Arrays, & Multi-Service, Multi-Protocol Storage Networks." z/Journal, February 2005, 26-29.
- S. Guendert. "The IBM System z9, FICON/FCP Intermix, and Node Port ID Virtualization (NPIV)." NASPA Technical Support, July 2006, 13-16.
- G. Schulz. Resilient Storage Networks, pp. 78-83. Elsevier Digital Press, Burlington, MA, 2004.
- S. Kipp, H. Johnson, and S. Guendert. "Consolidation Drives Virtualization in Storage Networks." z/Journal, December 2006, 40-44.
- S. Kipp, H. Johnson, and S. Guendert. "New Virtualization Techniques in Storage Networking: Fibre Channel Improves Utilization and Scalability." z/Journal, February 2007, 40-46.
- J. Srikrishnan, S. Amann, et al. "Sharing FCP Adapters Through Virtualization." IBM Journal of Research and Development 51, No. 1/2, 103-117 (2007).

PIM Basic Concepts

What is FICON/FCP Intermix?
- Historically, it has meant intermix at the connectivity layer, i.e., on the same directors, switches, and fibre cable infrastructure.
- It has not traditionally referred to intermix of mainframe and open systems disk storage on the same array. That has now changed.
  - A subject unto itself, beyond the scope of this presentation.

The Fibre Channel Protocol and Architecture
The Fibre Channel architecture is an integrated set of rules (FC-0 through FC-4) for serial data transfer between computers, devices, and peripherals, developed by INCITS (ANSI):
- FC-4 Protocol Mapping Layer: upper level protocols (ULPs) such as FCP, FICON (FC-SB-2/3), HIPPI, multi-media, etc.
- FC-3 Common Services: login server, name server, alias server.
- FC-2 Framing Protocol / Flow Control: data packaging, classes of service, port login/logout, flow control, frame transfer (up to 2048-byte payload).
- FC-1 Transmission Protocol: 8b/10b data encode/decode over a serial interface (one bit after another).
- FC-0 Interface/Media: the physical characteristics; cables, connectors, transmitters, and receivers.
FCP and FICON are just part of the upper layer (FC-4) protocols and are compatible with the existing lower layers of the protocol stack. The FC-SB-2 standard is used for single-byte FICON; the FC-SB-3 standard is used for FICON cascading.

Why Intermix? (and why not?)

Why? Reason 1
- ESCON is still out there, but for how long?
  - A May 2008 z/Journal survey found that 42% of the FORTUNE 1000 still have an installed base of ESCON-attached mainframe storage.
  - Dec 31, 2004/2009
- Extensive experience with open systems Fibre Channel SANs
  - Use for testing

Why? Reason 2
- Specialized non-production environments:
  - Require more flexibility
  - Require resource sharing
- Examples:
  - Quality assurance
  - Test/development
  - Dedicated DR facility

Why? Reason 3
- What if we could merge both networks? The hardware is the same.
- We can use:
  - Common infrastructure
  - Common equipment
  - Common management
  - Common IT staff
- The result is a lower total cost of ownership.

Why? Reason 4: The System z10
- IBM is encouraging System z10 customers to consolidate open systems servers onto their z10 via zLinux (IBM Project Big Green).
- The z10 and zLinux with NPIV make a compelling case for PIM.

Why not PIM? The Two-Party System
- Politics enters into everything: mainframe vs. open systems.
  - A clash of cultures
  - Large companies tend to keep everything separate
  - Others may have open systems storage and mainframe storage under the same management

Open Systems and Mainframe Culture Clash
Open systems: EASE OF DEPLOYMENT is king.
- Its history has been built on "how fast can I reboot?"
- Plans are made for regular scheduled outages.
- The systems administrator typically is not very concerned with how frames are routed.
- The solution has to work, but predictability of performance is not a mantra.

Open Systems and Mainframe Culture Clash
Mainframe: PREDICTABILITY is king.
- NEVER wants to suffer an unscheduled outage.
- MINIMIZES or eliminates scheduled outages.
- The systems programmer will control EVERYTHING, including frame routing.
- Wants predictability and stability when a workload is moved from one set of resources to another, and wants to measure what is currently going on (RMF, SMF, etc.).
- Probably won't make much use of other FC layers to route frames anytime soon, for fear of losing predictability.
- Needs to be able to influence network connectivity, so ISL usage is a big concern to these professionals.

Examples of end user implementations of PIM
- Small FICON and open systems environments using a common storage network
- z/OS servers accessing remote FICON storage via FICON cascading
- Linux on zSeries running alongside z/OS to access local storage
- Hardware-based remote DASD mirroring between sites using FCP as the transport
- Open systems servers accessing storage on a FICON director using FCP
- Linux on zSeries using FCP to access storage

Integrating System z hosts and open systems servers

Considerations for Mixing FICON and FCP
Because both FICON and open systems Fibre Channel (FCP) are FC-4 protocols, the differences are not relevant until the user wants to control the scope of switching through zoning or connectivity control. For example, name server zoning, used by FCP devices, provides fabric-wide connection control, while the Prohibit Dynamic Connectivity Mask (PDCM) connectivity control, used by FICON devices, provides switch-wide control.

Mainframe: Definition Oriented
- Definition-oriented, address-centric, host-assigned
- Planning is everything
- Change control
- If all the elements of the link have not been defined in IOCP, the connection simply does not exist.

Open Systems: Discovery Oriented
- Discovery-oriented, fabric-assigned, name-centric
- Uses the Fibre Channel name server to determine device communication
- No pre-definition (IOCP) needed for open operating systems
- The OS "walks through" addresses and looks for devices
- Uses zoning, and different levels of binding, for security (a discovery sketch follows this list):
  - Fabric binding
  - Switch binding
  - Port binding
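To make the contrast with IOCP predefinition concrete, here is a minimal Python sketch of discovery-oriented access: the initiator learns its targets from the fabric name server, filtered by zoning. The WWPNs, zone name, and data structures are invented for illustration; this is not any vendor's API.

```python
# Toy fabric name server: WWPN -> (port type, FC-4 protocol).
# All WWPNs below are fabricated for the example.
name_server = {
    "50:05:07:63:00:c0:12:34": ("N_Port", "FCP"),    # open systems disk
    "50:05:07:64:01:a2:45:67": ("N_Port", "FICON"),  # mainframe DASD
    "21:00:00:e0:8b:05:05:04": ("N_Port", "FCP"),    # open systems host
}

# A zone limits which WWPNs an initiator may discover.
zones = {"open_zone": {"21:00:00:e0:8b:05:05:04",
                       "50:05:07:63:00:c0:12:34"}}

def discover(initiator_wwpn, zone):
    """Return the targets an initiator sees: its zone's members,
    minus itself, restricted to ports registered in the name server."""
    members = zones[zone]
    if initiator_wwpn not in members:
        return []          # not zoned: the fabric offers nothing
    return sorted(w for w in members - {initiator_wwpn}
                  if w in name_server)

print(discover("21:00:00:e0:8b:05:05:04", "open_zone"))
# -> ['50:05:07:63:00:c0:12:34']   (the FICON DASD is never offered)
```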

Mixing FICON and FCP: Four Factors to Consider
1. Switch management: determine how the switch is managed.
2. Management limitations: determine the limitations and interactions of the management techniques used for each protocol type.
3. Addressing differences: understand the implications of port addressing in FICON versus port numbering in FCP. FICON, like ESCON, abstracts the concept of the port by creating an object known as the port address, a concept that is foreign to FCP.
4. Zoning: consider whether to keep FICON in one zone and FCP in another.

Mixing FICON and FCP: Factors to Consider (continued)
Once these four steps are completed, the user is ready to create an intermix environment based on the SAN requirements. The key decisions are:
- Determining the access needs for the fabric
- Determining the scope of FICON support required
- Determining which devices require an intermix of FICON and FCP

Zoning and PDCM Considerations
- The FICON Prohibit Dynamic Connectivity Mask (PDCM) controls whether communication between a pair of ports in the switch is prohibited or allowed.
- If there are any differences between the restrictions set up with zoning and those set up with PDCM, the most restrictive rules are automatically applied.

PDCM: Block versus Prohibit
- Blocking causes the firmware to send a continuous "offline" sequence to the port.
  - Useful to report the link as inactive after varying a device off on the mainframe.
- Prohibiting causes the firmware to prevent connectivity between the ports.
  - Useful to force FICON traffic over specific ISLs.
When zoning and PDCM restrictions overlap, the most restrictive rule wins, as the sketch below illustrates.
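The "most restrictive wins" interaction between zoning and the PDCM reduces to a simple predicate. This Python sketch is illustrative only; the port numbers and set representation are invented, not switch firmware.

```python
def may_communicate(port_a, port_b, zoned_pairs, prohibited_pairs):
    """Ports may talk only if zoning allows the pair AND the PDCM
    does not prohibit it: the stricter rule always applies."""
    pair = frozenset((port_a, port_b))
    return pair in zoned_pairs and pair not in prohibited_pairs

zoned      = {frozenset((0x04, 0x05)), frozenset((0x04, 0x06))}
prohibited = {frozenset((0x04, 0x06))}   # PDCM prohibits 04 <-> 06

print(may_communicate(0x04, 0x05, zoned, prohibited))  # True
print(may_communicate(0x04, 0x06, zoned, prohibited))  # False: PDCM wins
```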

PDCM Configuration
(Screenshot: choose ports to block or prohibit, then activate to save the changes.)

FCP channels on the mainframe

FICON and FCP Mode
- A FICON channel in Fibre Channel Protocol mode (CHPID type FCP) can access FCP devices through a single Fibre Channel switch, or multiple switches, to a SCSI device.
- FCP support enables z/VM, z/VSE, and Linux on System z to access industry-standard SCSI devices. For disk applications, these FCP storage devices use Fixed Block (512-byte) sectors instead of the Extended Count Key Data (ECKD) format.
- FICON Express4, FICON Express2, and FICON Express channels in FCP mode provide full fabric attachment of SCSI devices to the operating system images using the Fibre Channel Protocol, as well as point-to-point attachment of SCSI devices.

FICON and FCP Mode (continued)
- The FCP channel's full fabric support allows switches and directors between the System z server and the SCSI device, which means many "hops" through a storage area network (SAN) are possible.
- FICON channels in FCP mode use the Queued Direct Input/Output (QDIO) architecture for communication with the operating system.
- HCD/IOCP is used to define the FCP channel type and QDIO data devices. Because of QDIO, there is no requirement to define the Fibre Channel storage controllers and devices, or fabric elements such as switches and directors, in IOCP.

Integrating System z using z/OS, zLinux and Node Port ID Virtualization (NPIV)

Linux on System z
- Linux on System z is ten years old in 2009.
- Virtualization is a key component in addressing IT's requirement to control costs while meeting business needs with flexible systems.
- The System z Integrated Facility for Linux (IFL) leverages existing assets and is dedicated to running Linux workloads while containing software costs.
- Linux on System z lets you leverage your highly available, reliable, and scalable infrastructure along with all of the powerful mainframe capabilities.
- Your Linux administrators now simply administer Linux on a "big server".

zSeries/System z Server Virtualization
- zSeries/System z support of zLinux:
  - The mainframe expanded to address open systems applications
  - Linux promoted as an alternative to Unix
  - Mainframe operating system virtualization benefits: availability, serviceability, scalability, flexibility
- Initial zSeries limits:
  - FCP requests are serialized by the operating system: the FCP header does not provide an image address, while the FICON SB-2 header provides additional addressing
  - Channel ports are underutilized
  - The resulting cost/performance benefit is not competitive

The Road to NPIV
- LUN access control:
  - Gives the end user the ability to define individual access rights to a particular device or storage controller
  - Can significantly reduce the number of FCP channels needed to provide controlled access to data on FCP SCSI devices
  - Not the ideal solution: it did not solve the 1:1 ratio of images to channels
- Alternatives IBM looked at included:
  - FC process associators
  - Hunt groups/multicasting
  - Emulating sub-fabrics
- IBM finally settled on NPIV.

A Simplified Schematic: Linux using FCP on a System z10 without NPIV
(Diagram: Linux images A through D in a System z10 Linux partition, each attached through its own FCP channel to a line card port on a B48000 or DCX chassis, 200-800 MBps per port.)
- One FCP channel per Linux image
- No parallelism, so it is very difficult to drive I/O for lots of Linux images
- Probably very little I/O bandwidth utilization

Server Consolidation: N_Port ID Virtualization (NPIV)
- In the mainframe world, NPIV is unique to the System z9 and System z10:
  - zLinux on System z9/10 in an LPAR
  - As a guest of z/VM 4.4, 5.1, and later
- The N_Port becomes virtualized, supporting multiple images behind a single N_Port:
  - The N_Port requests more than one FCID: FLOGI (fabric login) provides the first Fibre Channel address, and FDISC (fabric discovery) provides additional addresses
  - All FCIDs are associated with one physical port
A toy model of this allocation appears below.
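This Python sketch models the login sequence, assuming a simple sequential allocator; real switches assign FCIDs according to their own policies, and the domain and area values here are invented.

```python
class FabricLoginServer:
    """Hands out FCIDs for one physical N_Port: FLOGI gets the first
    address, each FDISC gets an additional NPIV address on the same port."""

    def __init__(self, domain, area):
        self.domain, self.area = domain, area
        self.next_low_byte = 0x00          # toy allocation policy

    def _next_fcid(self):
        fcid = (self.domain << 16) | (self.area << 8) | self.next_low_byte
        self.next_low_byte += 1
        return fcid

    def flogi(self):
        return self._next_fcid()           # first address: the channel itself

    def fdisc(self):
        return self._next_fcid()           # additional NPIV addresses

port = FabricLoginServer(domain=0x61, area=0x04)
print(hex(port.flogi()))                   # 0x610400
for _ in range(3):                         # three zLinux images
    print(hex(port.fdisc()))               # 0x610401, 0x610402, 0x610403
```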

System z N_Port ID Virtualization: FC-FS 24-bit Fabric Addressing
The 24-bit destination ID (D_ID) in the frame header is made up of three one-byte fields:
- Domain (1 byte): identifies the switch; up to 239 switch numbers
- Area (1 byte): identifies the switch port; up to 240 ports per domain
- AL_PA / Port (1 byte, 00-FF): assigned during loop initialization (LIP), with a low AL_PA meaning high priority; in FICON terms this is the control unit link address, and under NPIV it distinguishes the virtual addresses behind one port
FICON Express2 and Express4 adapters now support NPIV.
A small decoder for this layout appears below.
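The three-byte split can be shown in a few lines of Python; the sample D_ID is invented for the example.

```python
def split_d_id(d_id):
    """Break a 24-bit Fibre Channel D_ID into its three one-byte fields."""
    domain = (d_id >> 16) & 0xFF   # identifies the switch
    area   = (d_id >> 8)  & 0xFF   # identifies the switch port
    al_pa  =  d_id        & 0xFF   # loop / NPIV virtual-address byte
    return domain, area, al_pa

domain, area, al_pa = split_d_id(0x610401)
print(f"domain={domain:#04x} area={area:#04x} al_pa={al_pa:#04x}")
# -> domain=0x61 area=0x04 al_pa=0x01
```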

A Simplified Schematic: Linux using FCP on a System z10 with NPIV
(Diagram: the same Linux images A through D in a System z10 Linux partition, now sharing a single FCP channel to the B48000 or DCX chassis.)
- One FCP channel for many Linux images
- Lots of parallelism
- Much better I/O bandwidth utilization per path

NPIV Summary
- NPIV allows multiple zLinux "servers" to share a single Fibre Channel port, maximizing asset utilization.
- The open systems server rule of thumb (ROT) is 10 MB/second, so a 4 Gbps link should support about 40 zLinux servers from a bandwidth perspective; the arithmetic is spelled out below.
- NPIV is an industry standard.
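The arithmetic behind that rule of thumb, assuming roughly 400 MB/s of usable bandwidth on a 4 Gbps link after 8b/10b encoding:

```python
link_usable_mb_s = 400   # ~4 Gbps Fibre Channel after 8b/10b encoding
per_server_mb_s  = 10    # rule-of-thumb open systems server demand
print(link_usable_mb_s // per_server_mb_s)   # -> 40 zLinux images per link
```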

Virtual Fabrics: An Example Scenario to Explain the Technology

Data Center Fabric Consolidation: Motivating Factors
- Unorganized SAN growth:
  - Organic growth of SANs is creating large physical SAN infrastructures
  - The need to merge data centers produces larger SANs
  - Acquisition of data centers forces SAN expansion
- Controlling the growth motivates virtualization:
  - Simplified management
  - Local administration
  - Access to centralized services

Data Center Network: Independent Cascaded Fabrics
(Diagram: Site A and Site B each host OS servers, OS storage, DASD, System z, and tape, connected by three independent cascaded fabrics: Fabric #1 through switches 11 and 21, Fabric #2 through switches 12 and 22, and a backup fabric through switches 13 and 23.)

Before Consolidation: Port Count and Utilization Rate
(Chart: port counts and utilization rates before consolidation.)

Consolidated Servers: Merge Open Systems Servers onto zSeries
(Diagram: the same two sites, but the OS applications now run on System z alongside the mainframe applications; the three fabrics connect System z to OS storage, DASD, and tape as before.)

Server Consolidation: N_Port ID Virtualization
(Diagram: a single System z10 running mainframe applications and OS applications, attached to Fabric #1, Fabric #2, and the backup fabric.)

Fabric Consolidation Technology
- Virtual fabric configuration: logical fabrics and logical switches
- Utilizes frame tagging to create virtual fabrics and virtual links
(Diagram: a physical chassis partitioned into several logical switches, each presenting its own CUP.)

Virtualized Network: Next-Generation Logical Fabrics
(Diagram: a System z10 running mainframe applications and OS applications, attached to Virtual Switch 1 in Logical Fabric #1, Virtual Switch 2 in Logical Fabric #2, and Virtual Switch Backup in Logical Fabric Backup.)

Consolidated Network: Logical Fabrics
(Diagram: Site A and Site B each run mainframe and OS applications on System z, connected by Logical Fabric 1 through switches 11 and 21, Logical Fabric 2 through switches 12 and 22, and Logical Fabric Backup through switches 13 and 23, serving OS storage, DASD, and tape.)

Link Consolidation Technology
- Virtual Fabric Identifier (VFID): the fabric is virtualized
  - Supports multiple common domains on the same switch
- Additional addressing identifies the virtual fabric
  - Supports shared fabric traffic on a single link

Expanded Fibre Channel Addressing
A tagged frame carries, in order:
- Start of Frame
- Virtual Fabric Tagging (VFT) header, with a 12-bit VF_ID: 4,096 virtual fabric identifiers
- FC-IFR encapsulation header: identical in format to an FC header
- Inter-Fabric Routing header, with a 12-bit source fabric ID and destination fabric ID: 4,096 fabric identifiers
- Fibre Channel header, with a 3-byte D_ID
- Data field
- End of Frame
The sketch below packs and unpacks the 12-bit VF_ID.
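A Python sketch of tagging and untagging a frame with the 12-bit VF_ID. The packing below is simplified for illustration and does not reproduce the exact bit layout of the FC-FS-2 VFT header.

```python
def tag_frame(vf_id, d_id):
    """Prepend a virtual fabric identifier to a frame's routing fields."""
    assert 0 <= vf_id < 4096          # 12 bits -> 4,096 virtual fabrics
    assert 0 <= d_id < (1 << 24)      # 3-byte destination ID
    return (vf_id << 24) | d_id

def untag_frame(tagged):
    """Recover (vf_id, d_id) so the egress switch can route locally."""
    return tagged >> 24, tagged & 0xFFFFFF

vf, d_id = untag_frame(tag_frame(vf_id=2, d_id=0x610401))
print(vf, hex(d_id))                  # -> 2 0x610401
```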

Virtual Fabric Tagging
(Diagram: Site A in Denver hosts Virtual Switch 1, Virtual Switch 2, and Virtual Switch Backup; Site B in Englewood hosts Virtual Switch 3, Virtual Switch 4, and Virtual Switch B2. At each site, tagging logic sits between the virtual switches and the physical ports, and long distance ISLs with virtual fabric tagging connect the two sites.)

Consolidated Data Center Network: Tagging ISLs
(Diagram: switches 11, 12, and 13 at Site A and switches 21, 22, and 23 at Site B carry Logical Fabric #1, Logical Fabric #2, and Logical Fabric Backup over shared tagging ISLs (XISLs) between the sites' tagging logic.)

After Consolidation: Port Count and Utilization Rate
(Chart: port counts and utilization rates after consolidation.)

Best Practice Recommendations and Conclusion

Best Practice Recommendations for PIM
- Upgrade the software and firmware of directors/switches to a common release level
- Organize by identifying ports as FICON or FCP
- Follow good zoning practices
- Use the PDCM
- Use fabric, switch, or port binding
- Consolidate with virtual fabrics and/or NPIV

Conclusions
- Recent System z enhancements make PIM much more viable, attractive, and realistic.
- Recently developed and supported standards make consolidation and virtualization simpler.
- PIM, NPIV, and virtual fabrics all play into grid computing, cloud computing, and computing on demand.
- "Green" consolidation and green cost savings.

References


Standards and NPIV
- FC-LS:
  - Describes FDISC use to allocate additional N_Port IDs in Section 4.2.32
  - Service parameters for FDISC are described in Section 6.6
  - NV_Ports are treated like any other port, except that they use FDISC instead of FLOGI
  - Documents the responses to NV_Port-related ELSs (FDISC, FLOGI, and FLOGO) in Section 6.4.5
  - http://www.t11.org/t11/docreg.nsf/ufile/06-393v6
- FC-DA:
  - Profiles the process of acquiring additional N_Port IDs in Clause 4.9
  - http://www.t11.org/t11/docreg.nsf/ufile/04-202v2
- FC-MI-2:
  - Profiles how the fabric handles NPIV requests; new service parameters are defined in Section 6.3, and name server objects in 7.3.2.2 and 7.3.2.3
  - http://www.t11.org/t11/docreg.nsf/ufile/04-109v4
- FC-GS-5:
  - Describes name server queries in 5.2.5: the Permanent Port Name and the Get Permanent Port Name command, based on the N_Port ID (GPPN_ID)

Standards for Virtual Fabrics and Inter-Fabric Routing
- Virtual Fabrics:
  - FC-FS-2: overview of Virtual Fabrics and the Virtual Fabric Tag header in 10.2; http://www.t11.org/t11/docreg.nsf/ufile/06-085v3
  - FC-LS: Virtual Fabrics bit in the common login parameters in 6.6.2; Exchange Virtual Fabric Parameters in 4.2.43; http://www.t11.org/t11/docreg.nsf/ufile/06-393v6
- Inter-Fabric Routing:
  - FC-FS-2: Inter-Fabric Routing extended header in 10.3; http://www.t11.org/t11/docreg.nsf/ufile/06-085v3
  - FC-IFR: the complete definition of the protocols to initiate and manage IFR is in progress, but several pre-standard implementations are already in use; this draft is subject to change; http://www.t11.org/t11/docreg.nsf/ufile/07-051v0
  - FC-SW-4: overview and processing in Clause 12; Exchange Virtual Fabric Parameters SW_ILS in 6.1.26; http://www.t11.org/t11/docreg.nsf/ufile/05-033v5
