ABSTRACT
Static Single Assignment (SSA) has become the intermediate program representation of choice in most modern compilers because it enables efficient data flow analysis of scalars and thus leads to better scalar optimizations. Unfortunately, little comparable progress has been made in applying the same techniques to array data flow analysis, an important and potentially powerful technology. In this paper we propose to improve the applicability of previous efforts in array SSA through the use of a symbolic memory access descriptor that can aggregate the accesses to the elements of an array over large, interprocedural program contexts. We then demonstrate the power of our new representation by using it to implement a basic data flow algorithm, reaching definitions. Finally, we apply this analysis to array constant propagation and array privatization, and show performance improvements (speedups) for benchmark codes.
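The abstract refers to reaching definitions, the classic forward data-flow problem that the paper lifts to array regions. As a point of reference only (this sketch is not from the paper, and uses plain scalar definitions rather than the paper's symbolic array-region descriptors), the scalar version can be computed with a standard iterative fixpoint over a control-flow graph:

```python
# Illustrative sketch (assumed, not the paper's algorithm): classic scalar
# reaching-definitions analysis via iterative fixpoint over a CFG.
# Each definition is modeled as a (block, variable) pair; the paper's
# contribution is extending this style of analysis to array elements
# using aggregated symbolic memory access descriptors.

def reaching_definitions(blocks, succs, defs):
    """blocks: list of block ids; succs: block id -> successor ids;
    defs: block id -> set of variables defined in that block."""
    all_defs = {(b, v) for b in blocks for v in defs[b]}
    # GEN: definitions created in the block itself.
    gen = {b: {(b, v) for v in defs[b]} for b in blocks}
    # KILL: definitions of the same variables made in other blocks.
    kill = {b: {(b2, v) for (b2, v) in all_defs
                if v in defs[b] and b2 != b} for b in blocks}
    in_ = {b: set() for b in blocks}
    out = {b: set() for b in blocks}
    changed = True
    while changed:  # iterate until the IN/OUT sets stabilize
        changed = False
        for b in blocks:
            preds = [p for p in blocks if b in succs[p]]
            new_in = set().union(*(out[p] for p in preds)) if preds else set()
            new_out = gen[b] | (new_in - kill[b])
            if new_in != in_[b] or new_out != out[b]:
                in_[b], out[b] = new_in, new_out
                changed = True
    return in_, out

# Tiny diamond CFG: B0 defines x; B1 redefines x; B2 defines y; B3 joins.
blocks = [0, 1, 2, 3]
succs = {0: [1, 2], 1: [3], 2: [3], 3: []}
defs = {0: {"x"}, 1: {"x"}, 2: {"y"}, 3: set()}
in_, out = reaching_definitions(blocks, succs, defs)
# The x-defs from B0 (via B2) and B1, and the y-def from B2, all reach B3.
print(sorted(in_[3]))
```

For arrays, the per-variable GEN/KILL sets above become per-region sets of accessed elements, which is where a symbolic descriptor that aggregates accesses across loops and call boundaries becomes essential.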
Index Terms
- Region array SSA