DOI: 10.1145/2966884.2966911

Space Performance Tradeoffs in Compressing MPI Group Data Structures

Published: 25 September 2016

ABSTRACT

MPI is a popular programming paradigm on today's parallel machines. MPI libraries sometimes use O(N) data structures to implement MPI functionality. The IBM Blue Gene/Q machine has 16 GB of memory per node; if each node runs 32 MPI processes, only 512 MB is available per process, which requires the MPI library to be space efficient. This constraint will become more severe on a future Exascale machine with tens of millions of cores and MPI endpoints. We explore techniques to compress the dense O(N) mapping data structures that translate the logical process ID in a communicator to the global rank. Our techniques minimize the mapping state of topological communicators by replacing table lookups with a mapping function. We also explore caching schemes that reduce the overhead of these mapping functions for recently translated ranks, and present performance results for multiple MPI micro-benchmarks and for the 3D FFT and Algebraic Multigrid (AMG) application benchmarks.
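As a minimal illustrative sketch (not the authors' implementation), the idea of replacing a dense rank-translation table with a mapping function plus a small cache of recent translations can be expressed as follows. The sketch assumes a communicator formed from a regular Cartesian slice of the world geometry; the type and function names (cart_slice_t, slice_map, logical_to_global), the 3-D geometry, and the cache size are all hypothetical.

#include <stdint.h>

#define NDIMS      3     /* illustrative 3-D Cartesian geometry          */
#define CACHE_SIZE 16    /* illustrative cache size (power of two)       */

typedef struct {
    int32_t dims[NDIMS];        /* extent of the slice in each dimension */
    int32_t start[NDIMS];       /* origin of the slice in world coords   */
    int32_t world_dims[NDIMS];  /* extent of the full world geometry     */
    int32_t key[CACHE_SIZE];    /* recently translated logical ranks     */
    int32_t val[CACHE_SIZE];    /* corresponding global ranks            */
} cart_slice_t;

/* Mark all cache slots empty; valid ranks are non-negative. */
static void slice_cache_init(cart_slice_t *s)
{
    for (int i = 0; i < CACHE_SIZE; i++)
        s->key[i] = -1;
}

/* Mapping function: unflatten the logical rank into slice coordinates,
   shift by the slice origin, and re-flatten against the world geometry. */
static int32_t slice_map(const cart_slice_t *s, int32_t lrank)
{
    int32_t coords[NDIMS], rem = lrank;
    for (int d = NDIMS - 1; d >= 0; d--) {
        coords[d] = rem % s->dims[d] + s->start[d];
        rem      /= s->dims[d];
    }
    int32_t grank = 0;
    for (int d = 0; d < NDIMS; d++)
        grank = grank * s->world_dims[d] + coords[d];
    return grank;
}

/* Table lookup replaced by slice_map(), with a direct-mapped cache of
   recent translations so hot ranks are not recomputed every time.      */
static int32_t logical_to_global(cart_slice_t *s, int32_t lrank)
{
    int slot = lrank & (CACHE_SIZE - 1);
    if (s->key[slot] != lrank) {
        s->key[slot] = lrank;
        s->val[slot] = slice_map(s, lrank);
    }
    return s->val[slot];
}

Per communicator, such a representation needs only the slice geometry plus a fixed-size cache, i.e. O(1) state instead of an O(N) table, while the cache bounds how often the arithmetic in slice_map is re-executed for repeatedly translated ranks.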


Published in

EuroMPI '16: Proceedings of the 23rd European MPI Users' Group Meeting
September 2016, 225 pages
ISBN: 9781450342346
DOI: 10.1145/2966884
Copyright © 2016 ACM

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall acceptance rate: 66 of 139 submissions, 47%