Tutorial

Automatic generation of benchmark and test workloads

Published: 28 January 2010 · DOI: 10.1145/1712605.1712654

ABSTRACT

In this tutorial, we describe techniques for the automatic generation of benchmark and test workloads. The generated programs have adjustable parameters that select the program size and structure, as well as the relative frequencies of the basic operations (or program modules) that characterize the workload.
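To make the idea concrete, below is a minimal sketch (hypothetical, not the tutorial's actual generator) of a workload driver whose adjustable parameters are the problem size and the relative frequencies of three basic operations:

```python
# Illustrative sketch only: a tiny synthetic-workload driver whose
# parameters (size, frequencies) control the operation mix.
import random

def int_kernel(n):
    """Basic operation 1: integer arithmetic."""
    s = 0
    for i in range(n):
        s += i * i
    return s

def float_kernel(n):
    """Basic operation 2: floating-point arithmetic."""
    s = 0.0
    for i in range(1, n + 1):
        s += 1.0 / i
    return s

def mem_kernel(n):
    """Basic operation 3: memory traversal."""
    data = list(range(n))
    s = 0
    for x in data:
        s += x
    return s

def run_workload(size, iterations, frequencies, seed=42):
    """Execute a mix of kernels whose relative frequencies match a
    target workload profile (e.g. 50% int, 30% float, 20% memory)."""
    kernels = [int_kernel, float_kernel, mem_kernel]
    rng = random.Random(seed)  # fixed seed makes the mix reproducible
    for _ in range(iterations):
        kernel = rng.choices(kernels, weights=frequencies)[0]
        kernel(size)

if __name__ == "__main__":
    # size and frequencies are the adjustable benchmark parameters
    run_workload(size=10_000, iterations=1_000, frequencies=[0.5, 0.3, 0.2])
```

A full generator along the lines described in the abstract would instead emit source programs (e.g., in C or C++) whose static structure realizes the requested size and operation mix, so that the resulting benchmark can be compiled and measured on the target system.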

