
How to Build a Benchmark

Published: 31 January 2015

ABSTRACT

Standardized benchmarks have become widely accepted tools for comparing products and evaluating methodologies. These benchmarks are created by consortia such as SPEC and TPC under confidentiality agreements, which give outside observers little insight into the processes and concerns that shape benchmark development. This paper introduces the primary concerns of benchmark development from the perspective of the SPEC and TPC committees. We provide a definition of the term benchmark, outline the types of benchmarks, and explain the characteristics of a good benchmark, focusing on the characteristics that matter for standardized benchmarks as created by the SPEC and TPC consortia. To this end, we specify the primary criteria to be employed in benchmark design and workload selection, and we use multiple standardized benchmarks as examples to demonstrate how these criteria are met.
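
The abstract does not walk through a concrete measurement procedure, but as an illustration of the kind of concerns it alludes to (a well-defined metric, warm-up before measurement, reproducible steady-state results), a minimal micro-benchmark harness might look like the sketch below. This code is not taken from the paper or from any SPEC/TPC benchmark; the class, the workload, and the run durations are hypothetical, and Java is chosen only because several SPEC benchmarks target the JVM.

    import java.util.concurrent.TimeUnit;

    /**
     * Minimal micro-benchmark harness sketch: a warm-up phase followed by a
     * timed measurement phase that reports steady-state throughput.
     */
    public class MiniBenchmark {

        /** Stand-in CPU-bound workload; a real benchmark would use a
         *  representative application kernel instead. */
        static long workload() {
            long acc = 0;
            for (int i = 0; i < 10_000; i++) {
                acc += (long) Math.sqrt(i) ^ i;
            }
            return acc;
        }

        /** Runs the workload repeatedly for the given duration and returns
         *  the achieved throughput in operations per second. */
        static double measureThroughput(long durationNanos) {
            long ops = 0;
            long sink = 0;                        // accumulate results so the JIT cannot elide the work
            long end = System.nanoTime() + durationNanos;
            while (System.nanoTime() < end) {
                sink += workload();
                ops++;
            }
            if (sink == 42) System.out.println(); // use the sink value once
            return ops / (durationNanos / 1e9);
        }

        public static void main(String[] args) {
            long warmup  = TimeUnit.SECONDS.toNanos(5);   // hypothetical warm-up length
            long measure = TimeUnit.SECONDS.toNanos(10);  // hypothetical measurement length
            measureThroughput(warmup);                    // warm-up: JIT compilation, caches
            double opsPerSec = measureThroughput(measure);
            System.out.printf("Steady-state throughput: %.1f ops/s%n", opsPerSec);
        }
    }

Real standardized benchmarks add far more machinery (run rules, result validation, full-disclosure reports), but even this sketch shows why warm-up and a fixed measurement interval matter for reproducible results.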


      Published in

      ICPE '15: Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering
      January 2015, 366 pages
      ISBN: 9781450332484
      DOI: 10.1145/2668930

      Copyright © 2015 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      • Published: 31 January 2015


      Qualifiers

      • research-article

      Acceptance Rates

      ICPE '15 paper acceptance rate: 23 of 74 submissions (31%). Overall acceptance rate: 252 of 851 submissions (30%).
