
Automatic logging of operating system effects to guide application-level architecture simulation

Published: 26 June 2006

ABSTRACT

Modern architecture research relies heavily on application-level detailed pipeline simulation. A time-consuming part of building a simulator is correctly emulating operating system effects, which is required even if the goal is to simulate only the application code, in order to achieve functional correctness of the application's execution. Existing application-level simulators require hand-coding the emulation of every possible system effect (e.g., system call, interrupt, DMA transfer) that can impact the application's execution. Developing such an emulator for a given operating system is a tedious exercise, and maintaining it to support newer versions of that operating system can be costly. Furthermore, porting the emulator to a completely different operating system might require rebuilding it from scratch.

In this paper, we describe a tool that can automatically log operating system effects to guide architecture simulation of application code. The benefits of our approach are: (a) we do not have to build or maintain any infrastructure for emulating operating system effects, (b) we can support simulation of more complex applications on our application-level simulator, including applications that use asynchronous interrupts, DMA transfers, etc., and (c) using the system effects logs collected by our tool, we can deterministically re-execute the application to guide architecture simulation with reproducible results.
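The core record/replay idea can be illustrated with a minimal sketch. The Python snippet below is illustrative only, with hypothetical names (EffectLog, record_read, replay_read) that are not the paper's actual interface; the real tool operates at the binary-instrumentation level. During a logging run it executes a real read() system call and captures the effects the OS produces in the application (the bytes deposited into the buffer plus the return value); during replay it skips the OS entirely and injects the logged values, making re-execution deterministic.

# Hypothetical sketch of OS-effect record/replay for application-level
# simulation. All names are illustrative, not the paper's actual tool.

import os

class EffectLog:
    """Ordered log of OS effects: (return value, bytes the OS wrote)."""
    def __init__(self):
        self.entries = []
        self.cursor = 0

    def record(self, retval, data):
        self.entries.append((retval, data))

    def next(self):
        entry = self.entries[self.cursor]
        self.cursor += 1
        return entry

def record_read(log, fd, nbytes):
    """Logging run: execute the real system call and capture its effects."""
    data = os.read(fd, nbytes)   # the OS writes into the app's buffer
    log.record(len(data), data)  # log the return value and the written bytes
    return data

def replay_read(log):
    """Simulation run: skip the OS entirely; inject the logged effects."""
    retval, data = log.next()
    return data[:retval]

if __name__ == "__main__":
    log = EffectLog()
    r, w = os.pipe()
    os.write(w, b"hello")
    first = record_read(log, r, 5)  # real execution, effects logged
    second = replay_read(log)       # deterministic re-execution from the log
    assert first == second == b"hello"

Asynchronous effects such as interrupts and DMA transfers complicate this model because they can alter application state between arbitrary instructions; handling them presumably also requires recording the point in the dynamic instruction stream at which each effect becomes visible, so that replay can inject it at the same point.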


Published in

SIGMETRICS '06/Performance '06: Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems, June 2006, 404 pages. ISBN: 1595933190. DOI: 10.1145/1140277.

Also appears in ACM SIGMETRICS Performance Evaluation Review, Volume 34, Issue 1 (June 2006), 388 pages. ISSN: 0163-5999. DOI: 10.1145/1140103.

Copyright © 2006 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
