ABSTRACT
In prior work, we have proposed techniques to extend the ease of shared-memory parallel programming to distributed-memory platforms by automatically translating OpenMP programs to MPI. For irregular applications, the performance of this translation scheme is limited by the fact that accesses to shared data cannot be accurately resolved at compile time. Furthermore, irregular applications with high communication-to-computation ratios pose challenges even for direct implementation on message-passing systems. In this paper, we present combined compile-time/run-time techniques for optimizing irregular shared-memory applications on message-passing systems in the context of automatic translation from OpenMP to MPI. Our transformations enable computation-communication overlap by restructuring irregular parallel loops. The compiler creates inspectors that analyze the actual data access patterns of irregular accesses at run time. This analysis is combined with compile-time analysis of regular data accesses to determine which iterations of irregular loops access non-local data. The iterations are then reordered to enable computation-communication overlap. Where an irregular access occurs inside nested loops, the loop nest is restructured. We evaluate our techniques by translating OpenMP versions of three benchmarks from two important classes of irregular applications: sparse matrix computations and molecular dynamics. We find that for these applications, on sixteen nodes, versions employing computation-communication overlap are almost twice as fast as baseline OpenMP-to-MPI versions, almost 30% faster than inspector-only versions, almost 25% faster than hand-coded versions on two applications, and about 9% slower on the third.
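To make the restructuring concrete, the following is a minimal C/MPI sketch of the inspector/executor pattern with iteration reordering that the abstract describes. It assumes a one-dimensional block distribution in which each rank owns elements [lo, hi) of the shared array; all identifiers (irregular_loop, x, y, idx, lo, hi) are hypothetical illustrations, not the names the translator actually generates, and the communication schedule itself is elided.

```c
/* Hypothetical sketch of an inspector/executor loop with
 * computation-communication overlap; not the authors' generated code. */
#include <mpi.h>
#include <stdlib.h>

void irregular_loop(double *y, const double *x, const int *idx,
                    int n, int lo, int hi)
{
    /* Inspector: scan the index array at run time and classify each
     * iteration by whether its irregular read x[idx[i]] touches
     * locally owned data. */
    int *local  = malloc(n * sizeof(int));
    int *remote = malloc(n * sizeof(int));
    int nlocal = 0, nremote = 0;
    for (int i = 0; i < n; i++) {
        if (idx[i] >= lo && idx[i] < hi)
            local[nlocal++] = i;
        else
            remote[nremote++] = i;
    }

    /* Executor, phase 1: post non-blocking communication (e.g.
     * MPI_Isend/MPI_Irecv pairs) for the remote elements, following
     * the communication schedule built by the inspector (elided). */

    /* Executor, phase 2: overlap - run the iterations that need only
     * local data while the messages are in flight. */
    for (int k = 0; k < nlocal; k++) {
        int i = local[k];
        y[i] += x[idx[i]];      /* stand-in for the real loop body */
    }

    /* Executor, phase 3: complete communication (e.g. MPI_Waitall on
     * the posted requests), then run the deferred iterations that
     * read the now-available remote values. */
    for (int k = 0; k < nremote; k++) {
        int i = remote[k];
        y[i] += x[idx[i]];      /* remote values assumed received */
    }

    free(local);
    free(remote);
}
```

For the nested-loop case mentioned above, the same idea would apply at the level of the loop nest: iterations of the outer loop whose inner irregular accesses are purely local run first, and those touching non-local data are deferred until communication completes.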