Abstract
This paper describes a parallel programming language and a multi-level parallelism computer architecture that were developed through an integrated design process. The ASTOR language is an imperative intermediate language that combines parallel control constructs with a concise module concept. The ASTOR architecture targets reliable execution of programs written in the ASTOR language, exploiting five levels of parallelism expressed by the language constructs; structure preservation served as a major design principle. The architecture can be characterized as a message-passing multiprocessor whose nodes consist of decoupled program flow control and data object processing parts, executing by token passing in the manner of large-grain data-flow architectures.
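The token-passing execution style mentioned above can be illustrated with a minimal sketch. This is not the ASTOR implementation; it is a hypothetical large-grain data-flow node (all names are invented for illustration) that fires its grain of computation only once a token has arrived on every input arc, then emits its result as a token to successor nodes:

```python
# Illustrative sketch only (not the ASTOR architecture): a coarse-grained
# data-flow node that fires when all of its input tokens are present.

class DataflowNode:
    """A node that executes its grain of work when every input holds a token."""

    def __init__(self, name, num_inputs, compute):
        self.name = name
        self.tokens = [None] * num_inputs    # one token slot per input arc
        self.arrived = [False] * num_inputs  # which slots currently hold a token
        self.compute = compute               # the node's grain of computation
        self.successors = []                 # (node, input_index) pairs

    def connect(self, successor, input_index):
        # Route this node's result token to one input arc of a successor.
        self.successors.append((successor, input_index))

    def receive(self, input_index, value):
        # Store the incoming token; fire as soon as all input arcs are full.
        self.tokens[input_index] = value
        self.arrived[input_index] = True
        if all(self.arrived):
            result = self.compute(*self.tokens)
            self.arrived = [False] * len(self.arrived)  # consume the tokens
            for node, idx in self.successors:
                node.receive(idx, result)               # emit the result token
            return result


# Example graph computing (a + b) * c as two coarse-grained nodes.
results = []
mul = DataflowNode("mul", 2, lambda x, y: results.append(x * y) or x * y)
add = DataflowNode("add", 2, lambda x, y: x + y)
add.connect(mul, 0)

add.receive(0, 2)
add.receive(1, 3)   # "add" fires; its token (5) flows to "mul"
mul.receive(1, 4)   # "mul" fires once both tokens are present: 5 * 4
```

In a large-grain setting each `compute` would be a substantial code block rather than a single operator; the firing rule, however, is the same.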