DOI: 10.1145/2486001.2491722
Poster · Free access

R2D2: bufferless, switchless data center networks using commodity ethernet hardware

Published: 27 August 2013

Abstract

Modern data centers commonly run distributed applications that require low-latency communication and whose performance is critical to service revenue. If as few as one machine in 10,000 is a latency outlier, around 18% of requests will experience high latency. Sacrificing latency determinism for bandwidth, however, is not inevitable. In our R2D2 architecture, we conceptually split the data center network into an unbuffered, unswitched low-latency network (LLNet) and a deeply buffered, bandwidth-centric network (BBNet). By explicitly scheduling network multiplexing in software, our prototype implementation achieves 99.995th- and 99.999th-percentile messaging latencies of 35 µs and 75 µs, respectively, for 1514-byte packets on a fully loaded network. Furthermore, we show that it is possible to merge the conceptually separate LLNet and BBNet networks onto the same physical infrastructure using commodity switched Ethernet hardware.
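The "around 18%" figure follows from simple fan-out arithmetic in the style of Dean and Barroso's "The Tail at Scale": if each request must collect responses from many servers, even a rare per-machine outlier is hit often. A minimal check, assuming independent outlier probabilities and a fan-out of roughly 2,000 servers per request (the fan-out is an illustrative assumption here, not stated in the abstract):

```python
# Probability that a fan-out request touches at least one latency outlier.
# Assumptions (illustrative): each server is an outlier independently with
# probability p, and a request fans out to n servers.
p = 1 / 10_000   # one machine in 10,000 is a latency outlier
n = 2_000        # hypothetical fan-out per request

p_slow = 1 - (1 - p) ** n   # P(at least one outlier among n servers)
print(f"{p_slow:.1%}")      # prints 18.1%
```

The result is insensitive to the exact numbers: for small p, the probability is roughly 1 − e^(−pn), so any fan-out near pn ≈ 0.2 yields a slow-request rate near 18%.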
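"Explicitly scheduling network multiplexing in software" can be pictured as a host-side transmit scheduler that always drains low-latency (LLNet) traffic before any buffered bulk (BBNet) traffic, so latency-sensitive messages never queue behind bulk transfers. The sketch below is a toy model under that assumption; the class and method names are hypothetical, not the paper's API:

```python
from collections import deque

class HostTxScheduler:
    """Toy model of software-scheduled multiplexing of two logical
    networks (LLNet / BBNet) onto one physical NIC: packets tagged
    low-latency are always put on the wire ahead of buffered bulk data."""

    def __init__(self):
        self.llnet = deque()  # low-latency queue, drained first
        self.bbnet = deque()  # deeply buffered bandwidth-centric queue

    def enqueue(self, packet, low_latency=False):
        # Classify each packet onto one of the two logical networks.
        (self.llnet if low_latency else self.bbnet).append(packet)

    def dequeue(self):
        """Return the next packet to transmit, or None if both queues
        are empty; LLNet has strict priority over BBNet."""
        if self.llnet:
            return self.llnet.popleft()
        if self.bbnet:
            return self.bbnet.popleft()
        return None
```

Strict priority like this keeps bulk transfers from delaying latency-sensitive messages, at the cost that sustained LLNet load can starve BBNet; a real scheme would also need to bound LLNet traffic so that the low-latency path stays effectively unbuffered end to end.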


Cited By

  • (2017) Reducing Response Time with Preheated Caches. Euro-Par 2016: Parallel Processing Workshops, pp. 709-720. DOI: 10.1007/978-3-319-58943-5_57. Online publication date: 28 May 2017.


      Published In

      SIGCOMM '13: Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM
      August 2013, 580 pages
      ISBN: 9781450320566
      DOI: 10.1145/2486001

      ACM SIGCOMM Computer Communication Review, Volume 43, Issue 4
      October 2013, 595 pages
      ISSN: 0146-4833
      DOI: 10.1145/2534169


      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. broadcast
      2. data centers
      3. ethernet
      4. latency
      5. scheduling

      Qualifiers

      • Poster

      Conference

      SIGCOMM '13: ACM SIGCOMM 2013 Conference
      August 12-16, 2013, Hong Kong, China

      Acceptance Rates

      SIGCOMM '13 paper acceptance rate: 38 of 246 submissions (15%)
      Overall acceptance rate: 462 of 3,389 submissions (14%)

