- Can89 D. C. Cann. Compilation Techniques for High Performance Applicative Computation. PhD thesis, Colorado State University, Computer Science Department, Fort Collins, CO, 1989.
- CHH89 Ron Cytron, Michael Hind, and Wilson Hsieh. Automatic generation of DAG parallelism. Proceedings of the 1989 SIGPLAN Conference on Programming Language Design and Implementation, 24(7):54-68, July 1989.
- CLOS87 David C. Cann, Ching-Cheng Lee, R. R. Oldehoeft, and S. K. Skedzielewski. SISAL Multiprocessing Support. Technical Report UCID-21115, Lawrence Livermore National Laboratory, Livermore, CA, 1987.
- FOW87 J. Ferrante, K. Ottenstein, and J. Warren. The program dependence graph and its use in optimization. ACM Transactions on Programming Languages and Systems, 9(3):319-349, July 1987.
- Ker71 Brian W. Kernighan. Optimal sequential partitions of graphs. JACM, 18(1):34-40, January 1971.
- KKP*81 D. J. Kuck, R. H. Kuhn, D. A. Padua, B. Leasure, and M. Wolfe. Dependence graphs and compiler optimizations. Conference Record of the 8th ACM Symposium on Principles of Programming Languages, 1981.
- MSA*85 J. McGraw, S. Skedzielewski, S. Allan, R. Oldehoeft, J. Glauert, C. Kirkham, B. Noyce, and R. Thomas. SISAL: Streams and Iteration in a Single Assignment Language, Reference Manual Version 1.2. Manual M-146, Rev. 1, Lawrence Livermore National Laboratory, Livermore, CA, March 1985.
- OC88 R. R. Oldehoeft and D. C. Cann. Applicative parallelism on a shared memory multiprocessor. IEEE Software, 5(1):62-70, January 1988.
- Sar89a Vivek Sarkar. Determining average program execution times and their variance. Proceedings of the 1989 SIGPLAN Conference on Programming Language Design and Implementation, 24(7):298-312, July 1989.
- Sar89b Vivek Sarkar. Partitioning and Scheduling Parallel Programs for Multiprocessors. Research Monographs in Parallel and Distributed Computing, Pitman, London, and The MIT Press, Cambridge, MA, 1989.
- SC90 Vivek Sarkar and David Cann. POSC: a partitioning and optimizing SISAL compiler. To appear in the Proceedings of the ACM 1990 International Conference on Supercomputing, Amsterdam, the Netherlands, June 1990.
- SG85 S. Skedzielewski and J. Glauert. IF1: An Intermediate Form for Applicative Languages. Manual M-170, Lawrence Livermore National Laboratory, Livermore, CA, July 1985.
- SH86 V. Sarkar and J. Hennessy. Partitioning parallel programs for macro-dataflow. In Proceedings of the ACM Conference on Lisp and Functional Programming, pages 202-211, August 1986.
- SW85 S. K. Skedzielewski and M. L. Welcome. Data flow graph optimization in IF1. In Jean-Pierre Jouannaud, editor, Functional Programming Languages and Computer Architecture, pages 17-34, Springer-Verlag, New York, NY, September 1985.
- SYO87 S. K. Skedzielewski, R. K. Yates, and R. R. Oldehoeft. DI: an interactive debugging interpreter for applicative languages. In Proceedings of the ACM SIGPLAN 87 Symposium on Interpreters and Interpretive Techniques, pages 102-109, June 1987.
Index Terms
- Instruction reordering for fork-join parallelism