Abstract
The advent of new parallel architectures has increased the need for parallel optimizing compilers to assist developers in creating efficient code. OpenUH is a state-of-the-art optimizing compiler, but it performs only a limited set of optimizations for OpenMP programs due to its conservative assumptions about shared memory programming. These limitations may prevent some OpenMP applications from being optimized to the extent of their sequential counterparts. This paper describes our design and implementation of a parallel data flow framework in OpenUH, consisting of a Parallel Control Flow Graph (PCFG) and a Parallel SSA (PSSA) representation, to model data flow for OpenMP programs. This framework enables the OpenUH compiler to perform all classical scalar optimizations on OpenMP programs, in addition to conducting OpenMP-specific optimizations.
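To illustrate the idea behind a Parallel Control Flow Graph, here is a minimal, hypothetical Python sketch (not OpenUH's actual implementation, which is written inside the compiler's WHIRL infrastructure): blocks that may execute concurrently inside a parallel region are linked by "conflict" edges, and a reaching-definitions analysis stays sound for shared variables by also propagating definitions across those edges. All names (`Block`, `reaching_defs`, `conflicts`) are invented for this sketch.

```python
# Hypothetical PCFG sketch. Blocks inside an OpenMP parallel region carry
# "conflict" edges to blocks that may run concurrently; the data flow
# analysis conservatively lets definitions flow across those edges.

class Block:
    def __init__(self, name, defs=()):
        self.name = name
        self.defs = set(defs)   # variables defined in this block
        self.succs = []         # ordinary control-flow successors
        self.conflicts = []     # blocks that may execute concurrently

def reaching_defs(blocks):
    """Fixed-point iteration: a definition (block, var) reaches a block
    through normal control-flow edges, or through conflict edges to model
    interleaved writes to shared variables by other threads."""
    in_sets = {b: set() for b in blocks}
    out_sets = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            new_in = set()
            for p in blocks:                 # normal predecessors
                if b in p.succs:
                    new_in |= out_sets[p]
            for c in b.conflicts:            # concurrent defs may reach
                new_in |= {(c.name, v) for v in c.defs}
            new_out = new_in | {(b.name, v) for v in b.defs}
            if new_in != in_sets[b] or new_out != out_sets[b]:
                in_sets[b], out_sets[b] = new_in, new_out
                changed = True
    return in_sets

# Two blocks in the same parallel region, each defining shared variable x:
a = Block("a", defs={"x"})
b = Block("b", defs={"x"})
a.conflicts.append(b)
b.conflicts.append(a)
ins = reaching_defs([a, b])
# b's concurrent definition of x reaches a despite no control-flow edge:
assert ("b", "x") in ins[a]
```

A purely sequential analysis would report that no definition reaches `a`; the conflict edges are what make classical analyses safe to reuse on parallel code.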
Exploiting global optimizations for OpenMP programs in the OpenUH compiler. In PPoPP '09: Proceedings of the 14th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming.