
Towards versatile performance models for complex, popular applications

Published: 27 March 2010

Abstract

Perhaps surprisingly, no practical performance models exist for popular (and complex) client applications such as Adobe's Designer suite, Microsoft's Office suite and Visual Studio, Mozilla, Halo 3, etc. There is currently no tool that automatically answers the simple what-if questions of program developers, IT administrators, and end-users, such as "what happens to the performance of my favorite application X if I upgrade from Windows Vista to Windows 7?" This paper describes the directions we are taking to construct practical, versatile performance models that address this problem.

Our work follows two paths. The first path involves instrumenting applications to better export their state and associated metrics. This application-specific monitoring is always on, and interesting data is collected from real, "in-the-wild" deployments. The second path involves statistical modeling techniques. The models we are experimenting with require no modifications to the OS or applications beyond the above instrumentation, and no explicit a priori model of how an OS or application should behave. We are learning from models we have constructed for several Microsoft products, including the Office suite, Visual Studio, and Media Player. This paper presents preliminary findings from a large user deployment (several hundred thousand user sessions) of these applications that show the coverage and limitations of such models.
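The abstract does not specify the instrumentation interface or the model family, so the sketch below is only a minimal illustration of the two paths under stated assumptions: the `record_session` helper, the feature set, and the synthetic data are all invented for illustration, and a CART-style regression tree (scikit-learn's `DecisionTreeRegressor`) is merely one plausible choice of learner for answering the what-if question above.

```python
import json
import time

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Path 1 (hypothetical): application-specific, always-on instrumentation.
# The application wraps an "interesting" operation and exports its state
# plus an associated performance metric as one telemetry record per session.
def record_session(state, operation):
    start = time.perf_counter()
    operation()
    record = dict(state, latency_s=time.perf_counter() - start)
    print(json.dumps(record))  # stand-in for uploading to a telemetry service
    return record

record_session({"app": "ExampleEditor", "os": "Windows 7"},
               lambda: sum(range(1_000_000)))

# Path 2 (hypothetical): statistical modeling over the collected records.
# Synthetic data stands in for the "several hundred thousand user sessions".
rng = np.random.default_rng(0)
n = 100_000
os_ver = rng.integers(0, 2, n)        # 0 = Windows Vista, 1 = Windows 7
ram_gb = rng.choice([1, 2, 4, 8], n)  # installed memory in GB
doc_mb = rng.exponential(5.0, n)      # workload size in MB
X = np.column_stack([os_ver, ram_gb, doc_mb])
latency_s = (2.0 - 0.3 * os_ver - 0.1 * ram_gb + 0.05 * doc_mb
             + rng.normal(0.0, 0.2, n))  # synthetic per-session metric

# A CART-style regression tree learned from the field data; no a priori
# model of how the OS or application behaves is supplied.
model = DecisionTreeRegressor(max_depth=8, min_samples_leaf=50).fit(X, latency_s)

# The what-if question from the abstract: same machine, same workload,
# only the OS feature changes.
vista = np.array([[0.0, 4.0, 10.0]])
win7 = vista.copy()
win7[0, 0] = 1.0
print("predicted latency on Vista:     %.2f s" % model.predict(vista)[0])
print("predicted latency on Windows 7: %.2f s" % model.predict(win7)[0])
```

In practice the feature vector would come from the exported application state rather than synthetic draws; the what-if answer is then simply two predictions that differ only in the changed field.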

Early indications from this work point towards future modeling strategies based on large amounts of data collected in the field. We present our thoughts on what this could imply for the SIGMETRICS community.


Published in: ACM SIGMETRICS Performance Evaluation Review, Volume 37, Issue 4, March 2010, 87 pages.
ISSN: 0163-5999
DOI: 10.1145/1773394
Copyright © 2010 Authors
Publisher: Association for Computing Machinery, New York, NY, United States
