DOI: 10.1145/2541940.2541967

DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning

Published: 24 February 2014

ABSTRACT

Machine-Learning tasks are becoming pervasive in a broad range of domains, and in a broad range of systems (from embedded systems to data centers). At the same time, a small set of machine-learning algorithms (especially Convolutional and Deep Neural Networks, i.e., CNNs and DNNs) are proving to be state-of-the-art across many applications. As architectures evolve towards heterogeneous multi-cores composed of a mix of cores and accelerators, a machine-learning accelerator can achieve the rare combination of efficiency (due to the small number of target algorithms) and broad application scope.

Until now, most machine-learning accelerator designs have focused on efficiently implementing the computational part of the algorithms. However, recent state-of-the-art CNNs and DNNs are characterized by their large size. In this study, we design an accelerator for large-scale CNNs and DNNs, with a special emphasis on the impact of memory on accelerator design, performance and energy.
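As a point of reference for what such layers compute, and why their memory footprint dominates, below is a minimal C sketch of a fully connected (classifier) layer built from the two key NN operations the paper names: synaptic weight multiplications and neuron output additions. This is an illustrative software rendering under assumed names (classifier_layer, Ni, No are hypothetical), not the accelerator's datapath.

```c
#include <stddef.h>

/* Illustrative sketch (not the DianNao hardware): each of the No output
 * neurons accumulates Ni synaptic weight multiplications. For large
 * layers, the No x Ni weight array dwarfs any on-chip storage, which is
 * why memory dominates accelerator design, performance and energy. */
void classifier_layer(size_t Ni, size_t No,
                      const float in[],      /* Ni input neuron values   */
                      const float synapse[], /* No x Ni synaptic weights */
                      float out[])           /* No output neuron values  */
{
    for (size_t n = 0; n < No; n++) {
        float sum = 0.0f;
        for (size_t i = 0; i < Ni; i++)
            sum += synapse[n * Ni + i] * in[i]; /* multiply + add */
        out[n] = sum; /* a nonlinearity (e.g., sigmoid) would follow */
    }
}
```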

We show that it is possible to design an accelerator with a high throughput, capable of performing 452 GOP/s (key NN operations such as synaptic weight multiplications and neuron output additions) in a small footprint of 3.02 mm² and 485 mW; compared to a 128-bit 2 GHz SIMD processor, the accelerator is 117.87x faster, and it can reduce the total energy by 21.08x. The accelerator characteristics are obtained after layout at 65 nm. Such a high throughput in a small footprint can open up the use of state-of-the-art machine-learning algorithms in a broad set of systems and for a broad set of applications.
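To put these figures in perspective, a back-of-the-envelope division of the reported numbers gives 485 mW / 452 GOP/s ≈ 1.1 pJ per operation after layout; at peak throughput, the roughly 2 million multiply and add operations of a 1000-input, 1000-output classifier layer (an illustrative size, not a figure from the paper) would take on the order of 4.4 µs.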


Published in

ASPLOS '14: Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems
February 2014, 780 pages
ISBN: 9781450323055
DOI: 10.1145/2541940

      Copyright © 2014 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States




Acceptance Rates

ASPLOS '14 paper acceptance rate: 49 of 217 submissions (23%). Overall acceptance rate: 535 of 2,713 submissions (20%).
