ABSTRACT
Memory speed has become a major performance bottleneck as more and more cores are integrated on a multi-core chip. The widening latency gap between high-speed cores and memory has led to the evolution of multi-level SRAM/DRAM cache hierarchies that exploit the latency benefits of smaller caches (e.g., private L1 and L2 SRAM caches) and the capacity benefits of larger caches (e.g., shared L3 SRAM and shared L4 DRAM caches). The main problem with employing large L3/L4 caches is their high tag-lookup latency. To solve this problem, we introduce the novel concept of small, low-latency SRAM/DRAM Tag-Cache structures that can quickly determine whether an access to the large L3/L4 caches will be a hit or a miss. The performance of the proposed Tag-Cache architecture depends on the Tag-Cache hit rate; to improve it, we propose a novel Tag-Cache insertion policy and a DRAM row-buffer mapping policy that reduce the latency of memory requests. For a 16-core system, this improves the average harmonic-mean instructions-per-cycle (IPC) throughput of latency-sensitive applications by 13.3% compared to the state of the art.
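The core idea behind the Tag-Cache can be illustrated with a minimal sketch: a small structure caches recently read tag entries of the large L4 DRAM cache, so a subsequent access to the same set resolves hit/miss without the slow DRAM tag read. The sizes, latencies, and variable names below are illustrative assumptions, not values from the paper.

```python
from collections import OrderedDict

TAG_CACHE_ENTRIES = 4      # small, low-latency structure (assumed size)
TAG_LOOKUP_FAST = 1        # cycles when the tags are found in the Tag-Cache (assumed)
TAG_LOOKUP_SLOW = 20       # cycles to read tags from the large DRAM cache (assumed)

class TagCache:
    """Caches recently read tag entries of L4 sets, so a later access to
    the same set resolves hit/miss without the slow DRAM tag lookup."""
    def __init__(self, capacity=TAG_CACHE_ENTRIES):
        self.capacity = capacity
        self.entries = OrderedDict()   # set_index -> list of tags in that L4 set

    def lookup(self, set_index, l4_tag_array):
        if set_index in self.entries:              # fast path: tags already cached
            self.entries.move_to_end(set_index)    # refresh LRU position
            return self.entries[set_index], TAG_LOOKUP_FAST
        tags = l4_tag_array.get(set_index, [])     # slow path: read DRAM tag array
        self.entries[set_index] = tags             # insert for future fast lookups
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)       # evict least recently used entry
        return tags, TAG_LOOKUP_SLOW

# Usage: two accesses to the same set; the second resolves at Tag-Cache speed.
l4_tags = {0: [0xA, 0xB], 1: [0xC]}   # toy L4 tag array
tc = TagCache()
tags, lat1 = tc.lookup(0, l4_tags)    # cold: slow DRAM tag read
hit = 0xA in tags                     # hit/miss decided from the cached tags
tags, lat2 = tc.lookup(0, l4_tags)    # warm: Tag-Cache hit, fast resolution
```

The Tag-Cache hit rate determines how often the fast path is taken, which is why the paper's insertion and row-buffer mapping policies target it directly.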
Reducing Latency in an SRAM/DRAM Cache Hierarchy via a Novel Tag-Cache Architecture