DOI: 10.1145/1054943.1054952
Article

Scalable cache memory design for large-scale SMT architectures

Published: 20 June 2004

ABSTRACT

The cache hierarchy in existing SMT and superscalar processors is optimized for latency, not bandwidth. The size of the L1 data cache has not scaled over the past decade; instead, larger unified L2 and L3 caches were introduced. This hierarchy carries a high overhead due to the principle of containment, and its design is complicated by the need to maintain cache coherence across all levels. Furthermore, it is not suitable for future large-scale SMT processors, which will demand high-bandwidth instruction and data caches with a large number of ports.

This paper suggests eliminating the cache hierarchy and replacing it with one-level caches for instruction and data. Multiple instruction caches can be used in parallel to scale the instruction fetch bandwidth and the overall cache capacity. A one-level data cache can be split into a number of block-interleaved cache banks to serve multiple memory requests in parallel. An interconnect connects the data cache ports to the different cache banks, which increases the data cache access time. This paper shows that large-scale SMTs can tolerate long data cache hit times. It also shows that small line buffers can enhance performance and reduce the required number of ports to the banked data cache memory.
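The block-interleaved banking scheme described above can be illustrated with a minimal sketch: consecutive cache blocks map to consecutive banks, so simultaneous requests to different blocks usually land in different banks and can be served in parallel. The parameters below (block size, bank count) are illustrative assumptions, not values from the paper.

```python
BLOCK_SIZE = 64   # bytes per cache block (assumed)
NUM_BANKS = 8     # number of cache banks (assumed; power of two)

def bank_of(addr: int) -> int:
    """Bank index: the low-order bits of the block number."""
    block = addr // BLOCK_SIZE
    return block % NUM_BANKS

def bank_local_block(addr: int) -> int:
    """Block index within the selected bank."""
    return (addr // BLOCK_SIZE) // NUM_BANKS

# Two accesses one block apart fall in different banks (parallel service);
# accesses within the same block share a bank.
assert bank_of(0x0000) != bank_of(0x0040)
assert bank_of(0x0000) == bank_of(0x0020)
```

Selecting the bank from block-number bits, rather than byte-offset bits, keeps all words of a block in one bank, which is what allows a per-bank line buffer to capture spatial locality and absorb repeat accesses, as the abstract suggests.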


Published in

WMPI '04: Proceedings of the 3rd Workshop on Memory Performance Issues, in conjunction with the 31st International Symposium on Computer Architecture
June 2004
146 pages
ISBN: 159593040X
DOI: 10.1145/1054943

Copyright © 2004 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States


