Research article (Public Access)
DOI: 10.1145/2966884.2966915

MPI Sessions: Leveraging Runtime Infrastructure to Increase Scalability of Applications at Exascale

Published: 25 September 2016

ABSTRACT

MPI includes all processes in MPI_COMM_WORLD; this is untenable for reasons of scale, resiliency, and overhead. This paper offers a new approach, extending MPI with a new concept called Sessions, which makes two key contributions: a tighter integration with the underlying runtime system; and a scalable route to communication groups. This is a fundamental change in how we organise and address MPI processes that removes well-known scalability barriers by no longer requiring the global communicator MPI_COMM_WORLD.
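
To make the proposed model concrete, the sketch below shows how an application might bootstrap communication without ever touching MPI_COMM_WORLD. It uses the Sessions interface as later standardized in MPI 4.0 (MPI_Session_init, MPI_Group_from_session_pset, MPI_Comm_create_from_group), which grew out of this proposal; the exact function names, the "mpi://WORLD" process-set name, and the string tag are therefore illustrative of the approach rather than the paper's original API.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal sketch: derive a communicator from a runtime-provided,
     * named process set via a Session, without MPI_COMM_WORLD.
     * Names follow the MPI 4.0 Sessions API that evolved from this
     * proposal and may differ from the interface described in the paper. */
    int main(void)
    {
        MPI_Session session;
        MPI_Group   group;
        MPI_Comm    comm;

        /* Create a session handle; no global initialization is implied. */
        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

        /* Query a named process set from the runtime and form a group. */
        MPI_Group_from_session_pset(session, "mpi://WORLD", &group);

        /* Build a communicator from that group; the string tag
         * disambiguates concurrent communicator creations. */
        MPI_Comm_create_from_group(group, "org.example.sessions-demo",
                                   MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);

        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        printf("rank %d of %d (no MPI_COMM_WORLD used)\n", rank, size);

        MPI_Group_free(&group);
        MPI_Comm_free(&comm);
        MPI_Session_finalize(&session);
        return 0;
    }

Compiled with an MPI implementation that supports the Sessions interface (for example, a recent MPICH or Open MPI via mpicc), each process obtains its rank and size from a communicator derived solely from the session and the named process set.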


Published in

    EuroMPI '16: Proceedings of the 23rd European MPI Users' Group Meeting
    September 2016, 225 pages
    ISBN: 9781450342346
    DOI: 10.1145/2966884

    Copyright © 2016 ACM

    Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Acceptance Rates

    Overall acceptance rate: 66 of 139 submissions, 47%
