DOI: 10.1145/1188455.1188545
Article

A software based approach for providing network fault tolerance in clusters with uDAPL interface: MPI level design and performance evaluation

Published: 11 November 2006

ABSTRACT

In the arena of cluster computing, MPI has emerged as the de facto standard for writing parallel applications. At the same time, the introduction of high-speed RDMA-enabled interconnects like InfiniBand, Myrinet, Quadrics, and RDMA-enabled Ethernet has escalated the trend toward cluster computing. Network APIs like uDAPL (user Direct Access Provider Library) are being proposed to provide a network-independent interface to different RDMA-enabled interconnects. Clusters with combination(s) of these interconnects are being deployed to leverage their unique features and to provide network failover in the wake of transmission errors. In this paper, we design a network fault tolerant MPI using the uDAPL interface, making this design portable to existing and upcoming interconnects. Our design provides failover to available paths, asynchronous recovery of previously failed paths, and recovery from network partitions without application restart. In addition, the design is able to handle network heterogeneity, making it suitable for current state-of-the-art clusters. We implement our design and evaluate it with micro-benchmarks and applications. Our performance evaluation shows that the proposed design provides significant performance benefits to both homogeneous and heterogeneous clusters. Using a heterogeneous combination of IBA and Ammasso-GigE, we are able to improve performance by 10-15% for different NAS Parallel Benchmarks on an 8x1 configuration. For simple micro-benchmarks on a homogeneous configuration, we are able to achieve an improvement of 15-20% in throughput. In addition, experiments with simple MPI micro-benchmarks and NAS applications reveal that the network fault tolerance modules incur negligible overhead and provide optimal performance in the wake of network partitions.
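The failover and asynchronous recovery behavior described above can be pictured with a small sketch. The C fragment below is only an illustration under assumed names: the path table, send_on_path(), schedule_recovery(), and the fault-injection variable are hypothetical stand-ins, not the paper's MPI/uDAPL implementation, which manages actual uDAPL endpoints and completion events.

/*
 * Hypothetical sketch of path failover: each peer has several paths
 * (e.g., one per uDAPL-capable NIC). A send is retried on the next
 * healthy path when the current one reports a transmission error,
 * and the failed path is handed off for asynchronous recovery.
 * None of these names come from the paper; they are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define MAX_PATHS 4

typedef enum { PATH_UP, PATH_FAILED, PATH_RECOVERING } path_state_t;

typedef struct {
    const char  *name;     /* e.g. "IBA" or "Ammasso-GigE" */
    path_state_t state;
} path_t;

typedef struct {
    path_t paths[MAX_PATHS];
    int    npaths;
    int    current;        /* index of the path currently in use */
} peer_t;

/* Simulated fault: pretend transmissions on this path fail. */
static const char *inject_fault_on = "IBA";

/* Stand-in for posting a message on one interconnect. */
static bool send_on_path(path_t *p, const void *buf, size_t len)
{
    (void)buf; (void)len;
    if (inject_fault_on && strcmp(p->name, inject_fault_on) == 0)
        return false;              /* simulated transmission error */
    return true;
}

/* Stand-in for handing the path to a background recovery thread. */
static void schedule_recovery(path_t *p)
{
    p->state = PATH_RECOVERING;
    printf("scheduling asynchronous recovery of %s\n", p->name);
}

/* Try the current path first, then fail over to any other healthy path. */
static bool ft_send(peer_t *peer, const void *buf, size_t len)
{
    for (int tried = 0; tried < peer->npaths; tried++) {
        int idx = (peer->current + tried) % peer->npaths;
        path_t *p = &peer->paths[idx];
        if (p->state != PATH_UP)
            continue;
        if (send_on_path(p, buf, len)) {
            peer->current = idx;   /* stick with the working path */
            return true;
        }
        p->state = PATH_FAILED;    /* transmission error: fail over */
        schedule_recovery(p);
    }
    return false;                  /* all paths down: network partition */
}

int main(void)
{
    peer_t peer = {
        .paths   = { { "IBA", PATH_UP }, { "Ammasso-GigE", PATH_UP } },
        .npaths  = 2,
        .current = 0,
    };
    const char msg[] = "hello";

    if (ft_send(&peer, msg, sizeof msg))
        printf("message delivered via %s\n", peer.paths[peer.current].name);
    else
        printf("no path available; wait for recovery\n");
    return 0;
}

The point of the pattern is that the caller only sees the send succeed or fail; path selection, error detection, and background recovery are hidden beneath it, which is how failover can happen without restarting the application.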

References

  1. Ammasso Incorporation. The Ammasso 1100 High Performance Ethernet Adapter User Guide. February 2005.
  2. D. H. Bailey, E. Barszcz, J. T. Barton, D. S. Browning, R. L. Carter, D. Dagum, R. A. Fatoohi, P. O. Frederickson, T. A. Lasinski, R. S. Schreiber, H. D. Simon, V. Venkatakrishnan, and S. K. Weeratunga. The NAS Parallel Benchmarks. Volume 5, pages 63-73, Fall 1991.
  3. Mohammad Banikazemi, Rama K. Govindaraju, Robert Blackmore, and Dhabaleswar K. Panda. MPI-LAPI: An Efficient Implementation of MPI for IBM RS/6000 SP Systems. IEEE Transactions on Parallel and Distributed Systems, pages 1081-1093, October 2001.
  4. Darius Buntinas, Guillaume Mercier, and William Gropp. The Design and Evaluation of Nemesis, a Scalable Low-Latency Message-Passing Communication Subsystem. Number ANL/MCS-TM-292, 2005.
  5. L. Chai, R. Noronha, P. Gupta, G. Brown, and D. K. Panda. Designing a Portable MPI-2 over Modern Interconnects Using uDAPL Interface. In EuroPVM/MPI, 2005.
  6. DAT Collaborative. uDAPL: User Direct Access Programming Library, Version 1.2. http://www.datcollaborative.org/udapl.html, July 2004.
  7. Edgar Gabriel, Graham E. Fagg, George Bosilca, Thara Angskun, Jack Dongarra, Jeffrey M. Squyres, Vishal Sahay, Prabhanjan Kambadur, Brian Barrett, Andrew Lumsdaine, Ralph H. Castain, David J. Daniel, Richard L. Graham, and Timothy S. Woodall. Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation. In EuroPVM/MPI, pages 97-104, 2004.
  8. Richard L. Graham, Sung-Eun Choi, David J. Daniel, Nehal N. Desai, Ronald G. Minnich, Craig E. Rasmussen, L. Dean Risinger, and Mitchel W. Sukalski. A Network-Failure-Tolerant Message-Passing System for Terascale Clusters. Volume 31, pages 285-303, Norwell, MA, USA, 2003. Kluwer Academic Publishers.
  9. InfiniBand Trade Association. InfiniBand Architecture Specification, Release 1.2. October 2004.
  10. Lawrence Livermore National Laboratory. MVICH: MPI for Virtual Interface Architecture. August 2001.
  11. J. Liu, A. Vishnu, and D. K. Panda. Building Multirail InfiniBand Clusters: MPI-Level Design and Performance Evaluation. In SuperComputing Conference, 2004.
  12. J. Liu, J. Wu, S. P. Kini, P. Wyckoff, and D. K. Panda. High Performance RDMA-Based MPI Implementation over InfiniBand. In 17th Annual ACM International Conference on Supercomputing, June 2003.
  13. Network-Based Computing Laboratory. MVAPICH/MVAPICH2: MPI-1/MPI-2 for InfiniBand on VAPI/Gen2 Layer. http://nowlab.cse.ohio-state.edu/projects/mpi-iba/index.html.
  14. OpenIB.org. http://www.openib.org/.
  15. Scott Pakin and Avneesh Pant. VMI 2.0: A Dynamically Reconfigurable Messaging Layer for Availability, Usability, and Management. In The 8th International Symposium on High Performance Computer Architecture (HPCA-8), Workshop on Novel Uses of System Area Networks (SAN-1), Cambridge, Massachusetts, February 2, 2002.
  16. A. Vishnu, G. Santhanaraman, W. Huang, H.-W. Jin, and D. K. Panda. Supporting MPI-2 One Sided Communication on Multi-Rail InfiniBand Clusters: Design Challenges and Performance Benefits. In International Conference on High Performance Computing (HiPC), 2005.


Published in

                SC '06: Proceedings of the 2006 ACM/IEEE conference on Supercomputing
                November 2006
                746 pages
ISBN: 0769527000
DOI: 10.1145/1188455

                Copyright © 2006 ACM

                Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

                Publisher

                Association for Computing Machinery

                New York, NY, United States

                Publication History

                • Published: 11 November 2006


                Acceptance Rates

SC '06 Paper Acceptance Rate: 54 of 239 submissions, 23%. Overall Acceptance Rate: 1,516 of 6,373 submissions, 24%.
