ABSTRACT
MPI places all processes in the global communicator MPI_COMM_WORLD; this is untenable for reasons of scale, resiliency, and overhead. This paper offers a new approach that extends MPI with a concept called Sessions, which makes two key contributions: a tighter integration with the underlying runtime system, and a scalable route to the construction of communication groups. This is a fundamental change in how MPI processes are organised and addressed, one that removes well-known scalability barriers by no longer requiring the global communicator MPI_COMM_WORLD.
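To make the idea concrete, the following is a minimal C sketch of a Sessions-style program that builds a communicator without ever touching MPI_COMM_WORLD. The function names, signatures, and the "mpi://WORLD" process-set name follow the Sessions interface as later standardized in MPI 4.0, which may differ in detail from the proposal described in this paper; the string tag "org.example.demo" is an arbitrary illustrative value.

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        MPI_Session session;
        MPI_Group   group;
        MPI_Comm    comm;
        int         rank, size;

        /* Initialise a session instead of calling MPI_Init; no global
         * communicator is created. */
        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

        /* Derive a group from a process set named by the runtime. */
        MPI_Group_from_session_pset(session, "mpi://WORLD", &group);

        /* Build a communicator from the group; the string tag
         * disambiguates concurrent communicator creations. */
        MPI_Comm_create_from_group(group, "org.example.demo",
                                   MPI_INFO_NULL, MPI_ERRORS_RETURN,
                                   &comm);

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        printf("rank %d of %d (no MPI_COMM_WORLD used)\n", rank, size);

        MPI_Comm_free(&comm);
        MPI_Group_free(&group);
        MPI_Session_finalize(&session);
        return 0;
    }

Because the group is derived from a runtime-provided process set rather than from MPI_COMM_WORLD, the library never needs to materialise global state for all processes, which is the scalability point the abstract makes.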