ABSTRACT
Communication overhead is one of the dominant factors affecting performance in high-performance computing systems. To reduce its negative impact, programmers overlap communication and computation by using asynchronous communication primitives. This increases code complexity, requiring more development effort and making programs less readable. This paper presents the hybrid use of MPI and SMPSs (SMP Superscalar, a task-based shared-memory programming model), which allows the programmer to easily introduce the asynchrony necessary to overlap communication and computation. We demonstrate the hybrid MPI/SMPSs approach with the High-Performance LINPACK benchmark (HPL) and compare it to the pure MPI implementation, which uses the look-ahead technique to overlap communication and computation. The hybrid MPI/SMPSs version significantly outperforms the pure MPI version: it approaches the asymptotic performance at medium problem sizes and still obtains significant benefits at small and large problem sizes.
Index Terms
- Effective communication and computation overlap with hybrid MPI/SMPSs