Automatic logging of operating system effects to guide application-level architecture simulation
Performance Evaluation Review

ABSTRACT
Modern architecture research relies heavily on application-level detailed pipeline simulation. A time-consuming part of building a simulator is correctly emulating operating system effects, which is required even if the goal is to simulate just the application code, in order to achieve functional correctness of the application's execution. Existing application-level simulators require hand-coding the emulation of each and every possible system effect (e.g., system call, interrupt, DMA transfer) that can impact the application's execution. Developing such an emulator for a given operating system is a tedious exercise, and maintaining it to support newer versions of that operating system can also be costly. Furthermore, porting the emulator to a completely different operating system might involve rebuilding it from scratch. In this paper, we describe a tool that can automatically log operating system effects to guide architecture simulation of application code. The benefits of our approach are: (a) we do not have to build or maintain any infrastructure for emulating operating system effects; (b) we can support simulation of more complex applications on our application-level simulator, including applications that use asynchronous interrupts, DMA transfers, etc.; and (c) using the system-effect logs collected by our tool, we can deterministically re-execute the application to guide architecture simulation with reproducible results.
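To make the record/replay idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' actual tool or log format). It assumes each OS effect can be captured as the return-register value plus the memory bytes the kernel or a DMA transfer wrote; the class and field names (`SystemEffect`, `EffectLog`, `icount`) are invented for illustration. During replay, the application-level simulator reapplies the logged effect instead of emulating the operating system, which makes re-execution deterministic.

```python
# Illustrative sketch of logging OS effects for deterministic replay.
# All names here are hypothetical; a real tool would capture effects via
# dynamic instrumentation at syscall/interrupt boundaries.

from dataclasses import dataclass, field

@dataclass
class SystemEffect:
    icount: int        # instruction count at which the effect occurred
    retval: int        # register (return-value) side effect of the OS
    mem_writes: dict = field(default_factory=dict)  # address -> byte written by OS/DMA

class EffectLog:
    def __init__(self):
        self.effects = []

    def record(self, icount, retval, mem_writes):
        # Record phase: native run logs each system effect as it happens.
        self.effects.append(SystemEffect(icount, retval, dict(mem_writes)))

    def replay(self, icount, memory):
        # Replay phase: reapply the logged effect at the same execution point,
        # skipping OS emulation entirely.
        for e in self.effects:
            if e.icount == icount:
                memory.update(e.mem_writes)  # reproduce kernel/DMA memory writes
                return e.retval              # reproduce the return register
        raise KeyError(f"no logged effect at icount {icount}")

# Record: suppose a read() at instruction 100 returned 4 and wrote two bytes.
log = EffectLog()
log.record(100, retval=4, mem_writes={0x1000: 0x68, 0x1001: 0x69})

# Replay: the simulator consults the log instead of emulating the kernel.
mem = {}
rv = log.replay(100, mem)
print(rv, hex(mem[0x1000]))  # -> 4 0x68
```

Because the log is keyed by a point in the deterministic instruction stream, replaying it always reproduces the same application-visible state, which is what makes the simulation results reproducible.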
Index Terms
- Automatic logging of operating system effects to guide application-level architecture simulation