ABSTRACT
MPI is a popular programming paradigm on today's parallel machines. MPI libraries sometimes use O(N) data structures to implement MPI functionality. The IBM Blue Gene/Q machine has 16 GB of memory per node; if each node runs 32 MPI processes, only 512 MB is available per process, requiring the MPI library to be space efficient. This problem will become more severe on future Exascale machines with tens of millions of cores and MPI endpoints. We explore techniques to compress the dense O(N) mapping data structures that translate a logical process ID in a communicator to its global rank. Our techniques minimize the mapping state of topological communicators by replacing table lookups with a mapping function. We also explore caching schemes that store recent translations to reduce the overhead of the mapping functions, and present performance results for multiple MPI micro-benchmarks as well as the 3D FFT and Algebraic Multigrid (AMG) application benchmarks.
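To make the idea concrete, the sketch below illustrates (in plain C, and not as the paper's actual implementation) how a dense O(N) logical-to-global rank table can be replaced by an O(1) mapping function plus a tiny cache of recent translations. The stride-based layout, the `strided_comm_map` type, and the direct-mapped cache are illustrative assumptions; real topological communicators on Blue Gene/Q require richer mapping functions.

```c
/* Minimal sketch, assuming a stride-based sub-communicator layout:
 * logical rank r maps to global rank base + r * stride.
 * This replaces a dense O(N) lookup table with arithmetic plus a
 * small direct-mapped cache of recently translated ranks. */
#include <stdio.h>

#define CACHE_SIZE 4                 /* cache of recent translations */

typedef struct {
    int base;                        /* global rank of logical rank 0        */
    int stride;                      /* distance between consecutive members */
    int size;                        /* number of processes in communicator  */
    int cache_key[CACHE_SIZE];       /* recently translated logical ranks    */
    int cache_val[CACHE_SIZE];       /* corresponding global ranks           */
} strided_comm_map;                  /* hypothetical illustrative type       */

static void map_init(strided_comm_map *m, int base, int stride, int size)
{
    m->base = base;
    m->stride = stride;
    m->size = size;
    for (int i = 0; i < CACHE_SIZE; i++)
        m->cache_key[i] = -1;        /* mark all cache slots empty */
}

/* O(1) translation: no O(N) table, just the mapping function and cache. */
static int logical_to_global(strided_comm_map *m, int logical)
{
    int slot = logical % CACHE_SIZE;
    if (m->cache_key[slot] == logical)          /* cache hit */
        return m->cache_val[slot];
    int global = m->base + logical * m->stride; /* mapping function */
    m->cache_key[slot] = logical;               /* remember this translation */
    m->cache_val[slot] = global;
    return global;
}

int main(void)
{
    strided_comm_map m;
    map_init(&m, /*base=*/2, /*stride=*/4, /*size=*/8);
    for (int r = 0; r < 8; r++)
        printf("logical %d -> global %d\n", r, logical_to_global(&m, r));
    return 0;
}
```

The memory cost of this scheme is a few integers per communicator instead of one table entry per process, which is the space saving the abstract describes; the cache exists only to amortize the cost of repeated translations of the same ranks.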