Research article · Public Access
DOI: 10.1145/3219104.3219134

What You Should Know About NAMD and Charm++ But Were Hoping to Ignore

Published: 22 July 2018

ABSTRACT

The biomolecular simulation program NAMD is used heavily at many HPC centers. Supporting NAMD users requires knowledge of the Charm++ parallel runtime system on which NAMD is built. Introduced in 1993, Charm++ supports message-driven, task-based, and other programming models and has demonstrated its portability across generations of architectures, interconnects, and operating systems. While Charm++ can use MPI as a portable communication layer, specialized high-performance layers are preferred for Cray, IBM, and InfiniBand networks, and a new OFI layer supports Omni-Path. NAMD binaries built on some specialized layers can be launched directly with mpiexec or its equivalent, or mpiexec can be called by the charmrun program to leverage system job-launch mechanisms. Charm++ supports multi-threaded parallelism within each process, with a single thread dedicated to communication and the rest to computation. The optimal balance between thread and process parallelism depends on the size of the simulation, the features used, memory limitations, node count, and the core count and NUMA structure of each node. It is also important to enable the Charm++ built-in CPU affinity settings so that worker and communication threads are bound appropriately to processor cores. Appropriate execution configuration and CPU affinity settings are particularly non-intuitive on Intel KNL processors due to their high core counts and flat NUMA hierarchy. Rules and heuristics for default settings can provide good performance in most cases and dramatically reduce the search space when optimizing for a specific simulation on a particular machine. Upcoming Charm++ and NAMD releases will simplify and automate launch configuration and affinity settings.
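
To make the thread/process/affinity discussion concrete, below is a minimal sketch of the kind of launch-configuration heuristic the abstract alludes to. The Charm++/NAMD options it emits (++n, ++ppn, +setcpuaffinity, +pemap, +commap) are real, but the heuristic itself (one SMP process per NUMA domain, one core per process reserved for the communication thread), the helper names, the example node shape, and the input file name are illustrative assumptions, not settings taken from the paper.

#!/usr/bin/env python3
"""Sketch of a default launch-configuration heuristic for SMP NAMD builds.

Assumptions (not from the paper): one Charm++ SMP process per NUMA domain,
one core per process reserved for the communication thread, and the
remaining cores of each domain bound to worker threads via +pemap/+commap.
The flags used (++n, ++ppn, +setcpuaffinity, +pemap, +commap) are real
Charm++/NAMD options; the numbers produced are only illustrative defaults.
charmrun is assumed to reach remote nodes via a nodelist or ++mpiexec.
"""

def launch_config(nodes, cores_per_node, numa_domains_per_node):
    """Return a charmrun command line for an SMP NAMD binary."""
    procs_per_node = numa_domains_per_node            # one process per NUMA domain
    cores_per_proc = cores_per_node // procs_per_node
    ppn = cores_per_proc - 1                          # worker threads; last core kept for comm thread

    pemap, commap = [], []
    for p in range(procs_per_node):
        first = p * cores_per_proc
        # Workers get cores first..first+ppn-1; the final core of the
        # domain is dedicated to that process's communication thread.
        pemap.append(f"{first}-{first + ppn - 1}")
        commap.append(str(first + ppn))

    total_procs = nodes * procs_per_node
    return (f"charmrun ++n {total_procs} ++ppn {ppn} namd2 "
            f"+setcpuaffinity +pemap {','.join(pemap)} "
            f"+commap {','.join(commap)} stmv.namd")

if __name__ == "__main__":
    # Example: 4 dual-socket nodes with 2 x 16 cores each.
    print(launch_config(nodes=4, cores_per_node=32, numa_domains_per_node=2))
    # -> charmrun ++n 8 ++ppn 15 namd2 +setcpuaffinity
    #    +pemap 0-14,16-30 +commap 15,31 stmv.namd

The design choice reflected here matches the abstract's description: each process carries one communication thread plus a pool of worker threads, and explicit +pemap/+commap lists keep the two kinds of threads on distinct cores. On many-core processors such as Intel KNL the same idea applies, but with far more cores and hardware threads per node the maps are long enough that generating them from a rule like this, rather than writing them by hand, is the practical approach.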

Published in

PEARC '18: Proceedings of the Practice and Experience on Advanced Research Computing
July 2018, 652 pages
ISBN: 9781450364461
DOI: 10.1145/3219104

© 2018 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the United States Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher: Association for Computing Machinery, New York, NY, United States

Qualifiers

• research-article
• Research
• Refereed limited

Acceptance Rates

PEARC '18 Paper Acceptance Rate: 79 of 123 submissions, 64%
Overall Acceptance Rate: 133 of 202 submissions, 66%