ABSTRACT
Machine-learning tasks are becoming pervasive in a broad range of domains and in a broad range of systems, from embedded systems to data centers. At the same time, a small set of machine-learning algorithms (especially Convolutional and Deep Neural Networks, i.e., CNNs and DNNs) is proving to be state-of-the-art across many applications. As architectures evolve towards heterogeneous multi-cores composed of a mix of cores and accelerators, a machine-learning accelerator can achieve the rare combination of efficiency (due to the small number of target algorithms) and broad application scope.
Until now, most machine-learning accelerator designs have focused on efficiently implementing the computational part of the algorithms. However, recent state-of-the-art CNNs and DNNs are characterized by their large size. In this study, we design an accelerator for large-scale CNNs and DNNs, with a special emphasis on the impact of memory on accelerator design, performance, and energy.
We show that it is possible to design an accelerator with high throughput, capable of performing 452 GOP/s (counting key NN operations such as synaptic weight multiplications and neuron output additions) in a small footprint of 3.02 mm² at 485 mW; compared to a 128-bit 2 GHz SIMD processor, the accelerator is 117.87x faster and reduces total energy by 21.08x. These characteristics are obtained after layout at 65 nm. Such high throughput in a small footprint can open up the use of state-of-the-art machine-learning algorithms in a broad set of systems and for a broad set of applications.
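To make the throughput figure concrete, the following minimal C sketch (our illustration of the operation count, not the DianNao implementation; all names are hypothetical) shows the multiply-accumulate loop of a fully-connected classifier layer. The GOP/s figure counts exactly these per-synapse operations: one synaptic weight multiplication and one neuron output addition per synapse.

```c
/* Minimal sketch, assuming a fully-connected ("classifier") layer;
 * an illustration of the counted operations, not the DianNao design. */
#include <stddef.h>

/* Computes No output neurons from Ni inputs.
 * weights is a row-major No x Ni matrix of synaptic weights. */
void classifier_layer(size_t Ni, size_t No,
                      const float *in, const float *weights, float *out)
{
    for (size_t o = 0; o < No; o++) {
        float sum = 0.0f;
        for (size_t i = 0; i < Ni; i++) {
            /* One synaptic weight multiplication plus one neuron
             * output addition: 2 of the counted operations. */
            sum += weights[o * Ni + i] * in[i];
        }
        out[o] = sum; /* non-linear activation omitted for brevity */
    }
}

/* Total counted operations = 2 * Ni * No. For Ni = No = 4096 this is
 * about 33.6 M operations, i.e., roughly 74 microseconds at the
 * reported 452 GOP/s peak. */
```

The same example illustrates why memory is emphasized alongside computation: the weight matrix of such a layer alone occupies 64 MB at 32-bit precision, far more than a small-footprint accelerator can hold on chip.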