DOI: 10.1145/2465529.2465763

Defragmenting the cloud using demand-based resource allocation

Published: 17 June 2013

ABSTRACT

Current public cloud offerings sell capacity in the form of pre-defined virtual machine (VM) configurations to their tenants. Typically, this means that tenants must purchase individual VM configurations sized for the peak demands of their applications, or be restricted to scale-out applications that can share a pool of VMs. This diminishes the value proposition of moving to a public cloud compared to server consolidation in a private virtualized datacenter, where one gets the benefits of statistical multiplexing between VMs belonging to the same or different applications. Ideally, one would like to enable a cloud tenant to buy capacity in bulk and benefit from statistical multiplexing among its workloads. This requires the purchased capacity to be dynamically and transparently allocated among the tenant's VMs, which may be running on different servers, even across datacenters. In this paper, we propose two novel algorithms, BPX and DBS, that provide the cloud customer with the abstraction of buying bulk capacity. These algorithms dynamically allocate the bulk capacity purchased by a customer among its VMs based on their individual demands and user-set importance. Our algorithms are highly scalable and are designed to work in a large-scale distributed environment. We implemented a prototype of BPX as part of VMware's management software and showed that BPX closely mimics the behavior of a centralized allocator in a distributed manner.
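The paper's BPX and DBS algorithms are not reproduced on this page. As a rough illustration of the underlying idea only, the sketch below shows a generic demand-capped, share-proportional divvy of a tenant's bulk capacity among its VMs: each VM receives capacity in proportion to its user-set importance (shares), but never more than its current demand, and any surplus is redistributed to still-unsatisfied VMs. All names, signatures, and numbers in the sketch are hypothetical.

```python
# Hypothetical sketch: a demand-capped, share-proportional "divvy" of a
# tenant's bulk-purchased capacity among its VMs. This is NOT the BPX or
# DBS algorithm from the paper; it only illustrates demand-based allocation
# with user-set importance (shares).

from dataclasses import dataclass


@dataclass
class VM:
    name: str
    demand: float   # current resource demand (e.g., MHz or MB)
    shares: int     # user-set importance weight (positive)


def divvy(capacity: float, vms: list[VM]) -> dict[str, float]:
    """Split `capacity` among `vms` in proportion to shares, capped by demand."""
    alloc = {vm.name: 0.0 for vm in vms}
    active = list(vms)                      # VMs whose demand is not yet met
    remaining = capacity
    while active and remaining > 1e-9:
        total_shares = sum(vm.shares for vm in active)
        satisfied = []
        for vm in active:
            fair = remaining * vm.shares / total_shares
            if vm.demand - alloc[vm.name] <= fair:
                alloc[vm.name] = vm.demand  # cap at demand; surplus is freed
                satisfied.append(vm)
        if not satisfied:                   # everyone is limited by shares:
            for vm in active:               # hand out the rest proportionally
                alloc[vm.name] += remaining * vm.shares / total_shares
            break
        remaining = capacity - sum(alloc.values())
        active = [vm for vm in active if vm not in satisfied]
    return alloc  # leftover stays unallocated if total demand < capacity


if __name__ == "__main__":
    vms = [VM("web", demand=400, shares=2000),
           VM("db", demand=1500, shares=1000),
           VM("batch", demand=3000, shares=500)]
    print(divvy(2000.0, vms))
    # -> roughly {'web': 400.0, 'db': 1066.7, 'batch': 533.3}
```

In this toy run, the low-demand "web" VM is capped at its demand of 400, and the freed capacity flows to "db" and "batch" in proportion to their shares, which is the kind of statistical-multiplexing benefit the abstract describes.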


• Published in

  SIGMETRICS '13: Proceedings of the ACM SIGMETRICS/international conference on Measurement and modeling of computer systems
  June 2013, 406 pages
  ISBN: 9781450319003
  DOI: 10.1145/2465529

  Also appears in ACM SIGMETRICS Performance Evaluation Review, Volume 41, Issue 1 (June 2013), 385 pages
  ISSN: 0163-5999
  DOI: 10.1145/2494232

          Copyright © 2013 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 17 June 2013


          Qualifiers

          • research-article

          Acceptance Rates

SIGMETRICS '13 paper acceptance rate: 54 of 196 submissions, 28%. Overall acceptance rate: 459 of 2,691 submissions, 17%.
