Research Article
DOI: 10.1145/2608020.2608026

A performance and energy analysis of I/O management approaches for exascale systems

Published: 23 June 2014

Abstract

The advent of fast, unprecedentedly scalable, yet energy-hungry exascale supercomputers poses a major challenge: sustaining a high performance-per-watt ratio. While much recent work has explored new approaches to I/O management, aiming to reduce the I/O performance bottleneck exhibited by HPC applications (and hence to improve application performance), comparatively little work has investigated the impact of I/O management approaches on energy consumption.
In this work, we explore how much energy a supercomputer consumes while running scientific simulations under various I/O management approaches. We closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. We implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results, obtained on the French Grid'5000 platform, highlight the differences between these three approaches and illustrate how various configurations of the application and of the system can affect performance and energy consumption.
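To make the dedicated-cores scheme concrete, the sketch below shows the general idea in plain MPI: each node sets aside one core that receives simulation data and performs I/O asynchronously while the remaining cores keep computing. This is only an illustrative sketch, not the Damaris API; the rank layout (ranks_per_node), the role assignment, and the data hand-off are hypothetical simplifications.

/* Minimal sketch of the dedicated-cores idea (illustrative only; not the
 * Damaris API). One core per node is reserved for I/O; the others run the
 * simulation. ranks_per_node and the role assignment are assumptions made
 * for this example. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int ranks_per_node = 4;   /* assumed node layout */
    int is_io_core = (rank % ranks_per_node) == ranks_per_node - 1;

    /* Split the global communicator into a compute group and an I/O group. */
    MPI_Comm role_comm;
    MPI_Comm_split(MPI_COMM_WORLD, is_io_core ? 1 : 0, rank, &role_comm);

    if (is_io_core) {
        /* Dedicated I/O core: would receive data from the node-local compute
         * cores and write it to the parallel file system, overlapping I/O
         * with the next simulation iteration. */
        printf("rank %d acts as a dedicated I/O core\n", rank);
    } else {
        /* Compute core: runs the simulation and hands its output to the
         * node-local I/O core (e.g., through shared memory or MPI). */
        printf("rank %d acts as a compute core\n", rank);
    }

    MPI_Comm_free(&role_comm);
    MPI_Finalize();
    return 0;
}

Under the same pattern, time partitioning roughly corresponds to keeping all cores in the compute group and writing synchronously at the end of each output phase, while dedicated nodes move the I/O ranks onto separate machines that only stage and write data.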





    Published In

    DIDC '14: Proceedings of the sixth international workshop on Data intensive distributed computing
    June 2014
    62 pages
    ISBN:9781450329132
    DOI:10.1145/2608020

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. damaris
    2. dedicated cores
    3. dedicated nodes
    4. energy
    5. exascale
    6. i/o
    7. time partitioning

    Qualifiers

    • Research-article

    Conference

    HPDC'14

    Acceptance Rates

    DIDC '14 Paper Acceptance Rate 7 of 12 submissions, 58%;
    Overall Acceptance Rate 7 of 12 submissions, 58%


    Cited By

    • (2023) A Preliminary Empirical Study of the Power Efficiency of Matrix Multiplication. Electronics, 12(7):1599. DOI: 10.3390/electronics12071599. Online publication date: 29-Mar-2023.
    • (2020) SeeSAw: Optimizing Performance of In-Situ Analytics Applications under Power Constraints. 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 789-798. DOI: 10.1109/IPDPS47924.2020.00086. Online publication date: May-2020.
    • (2019) Investigating power efficiency of mergesort. The Journal of Supercomputing. DOI: 10.1007/s11227-019-02850-5. Online publication date: 11-Apr-2019.
    • (2017) On the power characteristics of mergesort: An empirical study. 2017 Intl Conf on Advanced Control Circuits Systems (ACCS) & 2017 Intl Conf on New Paradigms in Electronics & Information Technology (PEIT), pages 172-178. DOI: 10.1109/ACCS-PEIT.2017.8303038. Online publication date: Nov-2017.
    • (2016) On the energy footprint of I/O management in Exascale HPC systems. Future Generation Computer Systems, 62(C):17-28. DOI: 10.1016/j.future.2016.03.006. Online publication date: 1-Sep-2016.
    • (2015) Lessons Learned from Building In Situ Coupling Frameworks. Proceedings of the First Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, pages 19-24. DOI: 10.1145/2828612.2828622. Online publication date: 15-Nov-2015.
