ABSTRACT
In the arena of cluster computing, MPI has emerged as the de facto standard for writing parallel applications. At the same time, the introduction of high-speed RDMA-enabled interconnects such as InfiniBand, Myrinet, Quadrics, and RDMA-enabled Ethernet has accelerated this trend. Network APIs such as uDAPL (user Direct Access Provider Library) have been proposed to provide a network-independent interface to these RDMA-enabled interconnects. Clusters combining such interconnects are being deployed both to leverage their unique features and to enable network failover in the wake of transmission errors. In this paper, we design a network-fault-tolerant MPI using the uDAPL interface, making the design portable across existing and upcoming interconnects. Our design provides failover to available paths, asynchronous recovery of previously failed paths, and recovery from network partitions without application restart. In addition, the design handles network heterogeneity, making it suitable for current state-of-the-art clusters. We implement our design and evaluate it with micro-benchmarks and applications. Our performance evaluation shows that the proposed design provides significant performance benefits to both homogeneous and heterogeneous clusters. Using a heterogeneous combination of IBA and Ammasso GigE, we improve the performance of different NAS Parallel Benchmarks by 10-15% on an 8x1 configuration. For simple micro-benchmarks on a homogeneous configuration, we achieve an improvement of 15-20% in throughput. In addition, experiments with simple MPI micro-benchmarks and NAS applications reveal that the network fault tolerance modules incur negligible overhead and provide optimal performance in the wake of network partitions.
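The failover and asynchronous-recovery behavior summarized above can be illustrated with a short sketch. The following C fragment is a minimal illustration under stated assumptions, not the paper's implementation: the helpers path_post_send() and path_try_connect() are hypothetical placeholders standing in for real uDAPL endpoint operations such as dat_ep_post_send() and dat_ep_connect(). It shows a sender failing over to the next healthy path and a background routine retrying previously failed paths without blocking the application.

    /*
     * Sketch of MPI-level failover over multiple uDAPL-style paths.
     * path_post_send()/path_try_connect() are hypothetical stubs for
     * real uDAPL calls (dat_ep_post_send, dat_ep_connect).
     */
    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_PATHS 4

    typedef enum { PATH_UP, PATH_FAILED } path_state_t;

    typedef struct {
        path_state_t state;  /* current health of this path          */
        int          ep;     /* stand-in for a uDAPL endpoint handle */
    } path_t;

    static path_t paths[MAX_PATHS];

    /* Stub transport hooks; a real implementation would wrap uDAPL
     * endpoint calls and inspect their completion/error events.     */
    static bool path_post_send(path_t *p, const void *buf, size_t len)
    {
        (void)p; (void)buf; (void)len;
        return true;  /* pretend the post succeeded */
    }

    static bool path_try_connect(path_t *p)
    {
        (void)p;
        return true;  /* pretend reconnection succeeded */
    }

    /* Send on the first healthy path; fail over on error. */
    static bool ft_send(const void *buf, size_t len)
    {
        for (int i = 0; i < MAX_PATHS; i++) {
            if (paths[i].state != PATH_UP)
                continue;
            if (path_post_send(&paths[i], buf, len))
                return true;              /* message queued */
            paths[i].state = PATH_FAILED; /* try the next path */
        }
        return false;  /* network partition: no path available */
    }

    /* Called periodically (e.g., from a progress engine) to recover
     * failed paths asynchronously, without application restart.     */
    static void ft_recover_paths(void)
    {
        for (int i = 0; i < MAX_PATHS; i++) {
            if (paths[i].state == PATH_FAILED &&
                path_try_connect(&paths[i]))
                paths[i].state = PATH_UP;  /* rejoins the send set */
        }
    }

    int main(void)
    {
        char msg[] = "hello";
        for (int i = 0; i < MAX_PATHS; i++)
            paths[i].state = PATH_UP;
        ft_send(msg, sizeof msg);  /* goes out on path 0 */
        ft_recover_paths();        /* no-op while all paths are up */
        return 0;
    }

In an actual design, recovery would more likely be driven by uDAPL connection events than by polling, and path selection would have to respect MPI's message-ordering guarantees.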