
GARUDA: National Grid Computing Initiative
N. Mohan Ram, Chief Investigator, C-DAC
9th February 2006, Kolkata

Presentation Outline
- Overview
- Technologies and Research Initiatives
- Communication Fabric
- Resources
- Partners
- Applications

Project Overview
- Precursor to the National Grid Computing Initiative
  - Test bed for grid technology/concepts and applications
  - Provide inputs for the main grid proposal
- Major deliverables
  - Technologies, architectures, standards & research initiatives
  - Nation-wide high-speed communication fabric
  - Aggregation of grid resources
  - Deployment of select applications of national importance
  - Grid Strategic User Group

Technologies, Architectures, Standards and Research Initiatives

Deliverables
- Technologies
  - Garuda component architecture & deployment
  - Access portal
  - Problem solving environments
  - Collaborative environments
  - Program development environments
  - Management and monitoring
  - Middleware and security
  - Resource management and scheduling
  - Data management
  - Clustering technologies
- Research initiatives
  - Integrated development environments
  - Resource brokers & meta-schedulers
  - Mobile agent framework
  - Semantic grid services (MIT Chennai)
  - Network simulation

Garuda Component Architecture

GARUDA Components
(Architecture diagram built over GLOBUS 2.x/4.x; components colour-coded by sourcing: C-DAC Development & Deployment, Research Initiatives, Commercial, Collaborations, Open Source)
- Categories: Grid Access Methods, Collaborative Environment, Problem Solving Environments, PDE, Middleware & Security, Storage & Visualization, Monitoring & Management, Benchmarks & Applications, Integration & Engineering
- Components include: C-DAC Grid Portal, IDE, Workflow, AccessGRID, Profilers, Video Conferencing over IP, Storage Resource Broker, Visualization Software, Semantic Grid Services, Loadleveler, Cactus, MDS, MPICH-G2, Grid Schedulers, SUN Grid Engine, Resource Broker, Certificate Authority, Grid Security, Ganglia, DIViA for Grid, NMS, Grid Probes, Grid Applications, C-DAC GridMon

Garuda Resource Deployment (at C-DAC centres)
(Deployment diagram)
- End users in Bangalore, Pune and elsewhere access the grid through the Garuda Access Portal
- Bangalore: AIX cluster and Linux cluster
- Pune: Solaris cluster and Linux cluster
- Chennai and Hyderabad: Linux clusters
- Resource Manager for Grids at the centres, with high availability, shared user space and shared data space

Garuda Access Portal
- Addresses the usability challenges of the Grid
- Supports submission of parallel and sequential jobs
- Support for accounting
- Integration with the Grid Scheduler in progress
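As a rough illustration of the job-submission path behind such a portal, the sketch below builds a GT2-style RSL (Resource Specification Language) job description of the kind a grid portal hands to a GRAM gatekeeper. This is not Garuda portal code; the function name and defaults are assumptions for illustration.

```python
# Illustrative sketch (not actual Garuda portal code): composing a
# Globus GT2 RSL job description for a parallel or sequential job.
def make_rsl(executable, count=1, job_type="single", arguments=()):
    """Return an RSL string like &(executable=...)(count=...)(jobType=...)."""
    attrs = [("executable", executable),
             ("count", str(count)),
             ("jobType", job_type)]
    if arguments:
        attrs.append(("arguments", " ".join(arguments)))
    return "&" + "".join("(%s=%s)" % kv for kv in attrs)

# A 4-way MPI job, as a portal might submit it on the user's behalf:
rsl = make_rsl("/home/user/a.out", count=4, job_type="mpi")
print(rsl)  # &(executable=/home/user/a.out)(count=4)(jobType=mpi)
```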

Collaborative Environments
- Enables a collaborative environment for Grid developers, users and partners
- Will facilitate development team meetings and collaborative project design/progress reviews
- IP-based video conferencing over the high-speed communication fabric
- Initial target: enable all C-DAC centres participating in the Garuda development & deployment to collaborate through video conferencing
- Also exploring the Access Grid environment

Program Development Environment
- Enables users to carry out the entire program development life cycle for the Grid
- DIViA for the Grid, features:
  - Supports MPICH-G2 debugging
  - Communication and computational statistics in different graphical formats
  - Identification of potential bottlenecks
  - Unique tracing method that yields richer information with a reduced log file size
- Debugger in design phase

Management and Monitoring
- Monitors status & utilization of the Grid components: compute, network, software, etc.
- Used by system administrators and end users
- Being deployed at the Grid Monitoring and Management Centre (GMMC)
- User-friendly interface

Middleware & Security
- Deployed using Globus Toolkit, commercial, and C-DAC-developed components
  - GT2 for operational requirements
  - GT4 for research projects
- Resource management and scheduling
  - Moab from Cluster Resources for grid scheduling
  - Local scheduling using LoadLeveler for AIX clusters and Torque for Solaris and Linux clusters
- Data management
  - Storage Resource Broker from Nirvana for Data Grid functionalities

Resource Management and Scheduling
- Grid Scheduler from Cluster Resources
  - Industry-leading scheduler
  - Components include Moab Workload Manager, Moab Grid Scheduler and Moab Cluster Manager
  - Integrates with Globus:
    - Data management through GASS and GridFTP
    - Job staging with GRAM/Gatekeeper services
    - User management through Globus user mapping files
    - Security through X509-based client authentication
- Grid Scheduler features
  - Intelligent data staging
  - Co-allocation & multi-sourcing
  - Service monitoring and management
  - Sovereignty (local vs. central management policies)
  - Virtual Private Cluster and Virtual Private Grid
- Local resource managers
  - LoadLeveler on AIX
  - Torque on Solaris/Linux clusters
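The "user management through Globus user mapping files" point refers to the grid-mapfile mechanism, where each line maps a quoted X509 certificate subject (DN) to a local account. The sketch below parses that format; the function name is ours, not a Globus API.

```python
# Illustrative parser for the Globus grid-mapfile format, which maps
# X509 certificate subjects to local user accounts, e.g.:
#   "/C=IN/O=C-DAC/CN=Grid User" guser
def parse_gridmap(text):
    """Return a dict mapping certificate DN -> local username."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        dn, _, user = line.rpartition('" ')
        mapping[dn.lstrip('"')] = user
    return mapping

gridmap = '"/C=IN/O=C-DAC/CN=Grid User" guser\n'
print(parse_gridmap(gridmap)["/C=IN/O=C-DAC/CN=Grid User"])  # guser
```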

Grid Resource Manager (Wide Area Grid)
(Deployment diagram)
- The Grid Resource Manager interacts with GridFTP to stage data to each of the clusters
- Leverages the security and access control provided in Globus
- End users (in multiple user spaces) submit jobs via the Garuda Grid Access Portal
- Administrator: sets policies and manages via the Cluster Manager for his or her own cluster, and via the Grid Resource Manager for grid policies

Local Area Grid (C-DAC Bangalore)
(Deployment diagram: Garuda Access Portal, Moab Cluster Manager and Moab Workload Manager in front of LoadLeveler on the AIX cluster head node and Torque on the Solaris and Linux cluster head nodes, over a unified data space)
- End users (in a single user space) submit jobs via a web-form interface
- Moab Cluster Manager: acts as the interface, using wizards and forms to improve ease of use and to unify the interface to the workload and resource managers
- Moab Workload Manager: enforces policies, monitors workload and controls submissions through the resource manager
- Administrators: set policies and manage via the Moab Cluster Manager
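The division of labour above, where a workload manager enforces policy and then hands the job to the local resource manager for the target cluster, can be sketched as a toy model. The per-user job limit and the routing table are illustrative assumptions, not actual Moab configuration.

```python
# Toy model of policy enforcement plus routing to a local resource
# manager, in the style of the Moab setup described above.
RESOURCE_MANAGERS = {"aix": "LoadLeveler", "solaris": "Torque", "linux": "Torque"}
MAX_JOBS_PER_USER = 2  # illustrative policy, not a real Moab default

def submit(queue, user, cluster_os):
    """Enforce the per-user limit, then route to the local RM; None if rejected."""
    if sum(1 for u, _ in queue if u == user) >= MAX_JOBS_PER_USER:
        return None  # policy rejection by the workload manager
    rm = RESOURCE_MANAGERS[cluster_os]
    queue.append((user, rm))
    return rm

q = []
print(submit(q, "alice", "aix"))      # LoadLeveler
print(submit(q, "alice", "linux"))    # Torque
print(submit(q, "alice", "solaris"))  # None (per-user limit reached)
```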

Data Management
- Enables data-oriented applications via an integrated but distributed storage and data management infrastructure
- Requirements
  - Heterogeneous data access across multiple locations
  - Data security
  - Reliability and consistency of data
  - Support for unified namespace and multiple file systems
  - Optimal turn-around for data access:
    - Parallel I/O
    - Bulk operations
    - Intelligent resource selection and data routing
    - Latency minimization
  - Vertical and horizontal scalability
- Garuda Data Grid: Storage Resource Broker from Nirvana
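The "intelligent resource selection and data routing" requirement can be illustrated with a minimal replica-selection rule: pick the storage site with the smallest estimated transfer time, modelled as latency plus size over bandwidth. The sites and numbers below are made up for illustration.

```python
# Minimal replica selection: choose the site with the lowest estimated
# transfer time (latency_s + size_mb / bandwidth_mb_per_s).
def pick_replica(replicas, size_mb):
    """replicas: {site: (latency_s, bandwidth_mb_per_s)} -> best site name."""
    return min(replicas,
               key=lambda s: replicas[s][0] + size_mb / replicas[s][1])

replicas = {
    "Bangalore": (0.005, 100.0),  # low latency, fast link (illustrative)
    "Pune":      (0.020, 40.0),
    "Chennai":   (0.015, 60.0),
}
print(pick_replica(replicas, 500))  # Bangalore
```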

Clustering Technologies
- Software (available for AIX, Solaris and Linux clusters)
  - High-performance compilers
  - Message passing libraries
  - Performance and debugging tools
  - I/O libraries, parallel file system
  - Cluster management software
- Hardware
  - 5 Gbps SAN technologies completed
  - Reconfigurable computing systems for bioinformatics & cryptanalysis in progress

Research Initiatives
- Resource Broker
  - Match user requirements with the available resources
  - Address co-allocation of computation and communication
  - Forecast the availability of resources
- Grid IDE
  - Writing and enabling applications to exploit the Grid
  - Compiling/cross-compiling across different platforms
  - Seamless integration of complex functionalities
  - Support for multiple programming interfaces
- Mobile Agent Framework
  - Monitoring of resources in the Grid
  - Grid software deployment and maintenance
- Semantic Grid Services (MIT, Chennai)
  - Publishing grid services
  - Intelligent discovery of grid services
  - Integration with the Garuda Portal
  - Standards are yet to be formulated
- Network Simulation
  - Inputs for the next-phase fabric architecture
  - Study the impact of changes in traffic profile on performance
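The resource-broker task of matching user requirements with available resources can be sketched as a simple filter over resource attributes. The attribute schema and resource names below are illustrative assumptions, not the broker's actual data model.

```python
# Toy resource broker: return the grid resources whose attributes
# satisfy a user's requirements (CPU count, optionally OS).
def match(resources, cpus, os=None):
    """resources: {name: {"cpus": int, "os": str}} -> matching names."""
    return [name for name, r in resources.items()
            if r["cpus"] >= cpus and (os is None or r["os"] == os)]

resources = {
    "bangalore-aix": {"cpus": 128, "os": "aix"},
    "pune-solaris":  {"cpus": 64,  "os": "solaris"},
    "chennai-linux": {"cpus": 16,  "os": "linux"},
}
print(sorted(match(resources, 32)))  # ['bangalore-aix', 'pune-solaris']
```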

Garuda Communication Fabric

Objectives & Deliverables
- Objectives
  - Provide an ultra-high-speed multi-service communication fabric connecting user organizations across 17 cities in the country
  - Provide seamless & high-speed access to the compute, data & other resources on the Grid
  - In collaboration with ERNET
- Deliverables
  - High-speed communication fabric connecting 17 cities
  - Grid Management & Monitoring Centre
  - IP-based collaborative environment among select centres

Fabric Connectivity

Features
- Ethernet based
- High bandwidth capacity
- Scalable over the entire geographic area
- High levels of reliability
- Fault tolerance and redundancy
- Interference resilience
- High security
- Effective network management

Grid Management & Monitoring Centre (GMMC)
- Provides an integrated grid resource management & monitoring framework
- Network traffic analysis and congestion management
- Change and configuration management

Grid Resources

Objective and Deliverables
- Objectives
  - Provide heterogeneous resources in the Grid including compute, data, software and scientific instruments
  - Deploy test facilities for grid-related research and development activities
- Deliverables
  - Grid enablement of C-DAC resources at Bangalore and Pune
  - Aggregation of partner resources
  - Setting up of PoC test bed and grid labs at Bangalore, Pune, Hyderabad and Chennai

Resources
- HPC clusters & storage from C-DAC
  - Bangalore: 128 CPU AIX cluster, 5 TB storage
  - Pune: 64 CPU Solaris cluster; 16 CPU Linux cluster, 4 TB storage
  - Chennai: 16 CPU Linux cluster, 2 TB storage
  - Hyderabad: 16 CPU Linux cluster, 2 TB storage
  - The proposed 5 TF system to be part of the Grid
- Satellite terminals from SAC, Ahmedabad
- 2 TF computing cycles from IGIB, Delhi
- 32-way SMP from the University of Hyderabad
- 64 CPU cluster from MIT, Chennai
- 64 CPU cluster from PRL, Ahmedabad

Grid Partners

Motivation and Status
- Motivation
  - Set up a user group to collaborate on research and engineering of technologies, architectures, standards and applications in HPC and grid computing
  - Contribute to the aggregation of resources in the Grid
- Current status
  - 37 research & academic institutions in the 17 cities have agreed in principle to participate
  - ERNET-HQ in Delhi
  - 7 centres of C-DAC
  - Total of 45 institutions

Partner Participation
- Institute of Plasma Research, Ahmedabad
- Physical Research Laboratory, Ahmedabad
- Space Applications Centre, Ahmedabad
- Harish Chandra Research Institute, Allahabad
- Motilal Nehru National Institute of Technology, Allahabad
- Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore
- Indian Institute of Astrophysics, Bangalore
- Indian Institute of Science, Bangalore
- Institute of Microbial Technology, Chandigarh
- Punjab Engineering College, Chandigarh
- Madras Institute of Technology, Chennai
- Indian Institute of Technology, Chennai
- Institute of Mathematical Sciences, Chennai

Partner Participation (contd.)
- Indian Institute of Technology, Delhi
- Jawaharlal Nehru University, Delhi
- Institute for Genomics and Integrative Biology, Delhi
- Indian Institute of Technology, Guwahati
- Guwahati University, Guwahati
- University of Hyderabad, Hyderabad
- Centre for DNA Fingerprinting and Diagnostics, Hyderabad
- Jawaharlal Nehru Technological University, Hyderabad
- Indian Institute of Technology, Kanpur
- Indian Institute of Technology, Kharagpur
- Saha Institute of Nuclear Physics, Kolkata
- Central Drug Research Institute, Lucknow
- Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow

Partner Participation (contd.)
- Bhabha Atomic Research Centre, Mumbai
- Indian Institute of Technology, Mumbai
- Tata Institute of Fundamental Research, Mumbai
- IUCAA, Pune
- National Centre for Radio Astrophysics, Pune
- National Chemical Laboratory, Pune
- Pune University, Pune
- Indian Institute of Technology, Roorkee
- Regional Cancer Centre, Thiruvananthapuram
- Vikram Sarabhai Space Centre, Thiruvananthapuram
- Institute of Technology, Banaras Hindu University, Varanasi

Applications of Importance for PoC Garuda

Objectives and Deliverables
- Objectives
  - Enable applications of national importance requiring aggregation of geographically distributed resources
- Deliverables
  - Grid enablement of illustrative applications and some demonstrations, such as:
    - Bioinformatics
    - Disaster management

Bioinformatics
- Bioinformatics Resources & Applications Facility (BRAF) on PARAM Padma
- Supports highly optimized bioinformatics codes on the PARAM Padma
- Web computing portal providing the computational facilities to solve related problems

Disaster Management
(Diagram: flight data is transmitted from a nearby airport over a high-speed link into the Grid communication fabric, which connects the PARAM Padma at Bangalore, a grid partner resource at Pune, and the user agencies)

Disaster Management (contd.)
- Requirements
  - Timely dissemination of disaster information to user agencies
  - Organize logistics around an automated and secure workflow and database
- Challenges
  - Widely spread application resources and types: disaster sensors, compute, application experts
  - Turn-around time for the workflow

Thank you!
