DOI: 10.1145/3299874.3319490
Research Article

Design Methodology for Embedded Approximate Artificial Neural Networks

Published: 13 May 2019

Abstract

Artificial neural networks (ANNs) have demonstrated significant promise in recognition and classification applications. Implementing pre-trained ANNs on embedded systems requires representing data and design parameters in low-precision fixed-point formats, which often necessitates retraining the network. In such implementations, the multiply-accumulate operation is the main source of the resulting high resource and energy requirements. To address these challenges, we present Rox-ANN, a design methodology for implementing ANNs on FPGAs using processing elements (PEs) built with low-precision fixed-point arithmetic and high-performance, reduced-area approximate multipliers. The trained design parameters of the ANN are analyzed and clustered to optimize the total number of approximate multipliers required in the design. With our methodology, the loss in application accuracy is insignificant. We evaluated the design using a LeNet-based implementation of the MNIST digit recognition application. The results show reductions of 65.6%, 55.1% and 18.9% in area, energy consumption and latency, respectively, for a PE using 8-bit weights and activations with approximate arithmetic units, compared to a 16-bit full-precision PE with accurate arithmetic.
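
The quantization and clustering steps described above can be illustrated with a short, self-contained sketch. The snippet below is not the Rox-ANN tool flow itself; it is a minimal Python illustration, under assumed parameters (an 8-bit fixed-point format with 6 fractional bits, a plain 1-D k-means with 16 clusters, and a LeNet-like 120x84 layer), of quantizing trained weights and activations to low-precision fixed point and clustering the weights so that only a small set of distinct operand values, and hence approximate multiplier configurations, remains.

    # Minimal sketch (assumption, not the authors' Rox-ANN flow): quantize trained
    # weights/activations to 8-bit fixed point and cluster the weights so that only
    # a few distinct operand values -- and hence multiplier configurations -- remain.
    import numpy as np

    def to_fixed_point(x, frac_bits=6, total_bits=8):
        """Round x to a signed fixed-point grid with `frac_bits` fractional bits."""
        scale = 1 << frac_bits
        lo = -(1 << (total_bits - 1))
        hi = (1 << (total_bits - 1)) - 1
        q = np.clip(np.round(x * scale), lo, hi)
        return q / scale  # dequantized values used by the mock MAC below

    def cluster_weights(w, n_clusters=16, iters=50):
        """Plain 1-D k-means: replace every weight by its nearest cluster centroid."""
        flat = w.ravel()
        centroids = np.linspace(flat.min(), flat.max(), n_clusters)
        for _ in range(iters):
            idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
            for k in range(n_clusters):
                members = flat[idx == k]
                if members.size:
                    centroids[k] = members.mean()
        return centroids[idx].reshape(w.shape)

    # Stand-ins for one trained fully connected layer (LeNet-like 120x84 shape).
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.5, size=(120, 84))
    activations = rng.uniform(0.0, 1.0, size=84)

    w_q = to_fixed_point(cluster_weights(weights))  # few distinct 8-bit weight values
    a_q = to_fixed_point(activations)
    outputs = w_q @ a_q                             # the MAC stage performed by the PEs
    print("distinct quantized weights:", np.unique(w_q).size)
    print("output vector shape:", outputs.shape)

Keeping the number of distinct weight values small is what would allow the hardware to instantiate a limited library of approximate multipliers rather than one fully general multiplier per weight; the actual cluster count and fixed-point split would be chosen from the accuracy analysis of the trained network.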




Published In

GLSVLSI '19: Proceedings of the 2019 Great Lakes Symposium on VLSI
May 2019
562 pages
ISBN:9781450362528
DOI:10.1145/3299874
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 May 2019


Author Tags

  1. approximate computing
  2. artificial neural networks (ANNs)
  3. FPGA

Qualifiers

  • Research-article

Conference

GLSVLSI '19
GLSVLSI '19: Great Lakes Symposium on VLSI 2019
May 9 - 11, 2019
Tysons Corner, VA, USA

Acceptance Rates

Overall Acceptance Rate 312 of 1,156 submissions, 27%



Cited By

  • (2023) Stress-Resiliency of AI Implementations on FPGAs. 2023 33rd International Conference on Field-Programmable Logic and Applications (FPL), 333-338. DOI: 10.1109/FPL60245.2023.00057. Online publication date: 4-Sep-2023.
  • (2023) An efficient design methodology to speed up the FPGA implementation of artificial neural networks. Engineering Science and Technology, an International Journal, 47, 101542. DOI: 10.1016/j.jestch.2023.101542. Online publication date: Nov-2023.
  • (2023) Platform-Based Design of Embedded Neuromorphic Systems. Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 337-358. DOI: 10.1007/978-3-031-19568-6_12. Online publication date: 1-Oct-2023.
  • (2022) DFSynthesizer: Dataflow-based Synthesis of Spiking Neural Networks to Neuromorphic Hardware. ACM Transactions on Embedded Computing Systems, 21(3), 1-35. DOI: 10.1145/3479156. Online publication date: 28-May-2022.
  • (2021) NeuroXplorer 1.0: An Extensible Framework for Architectural Exploration with Spiking Neural Networks. International Conference on Neuromorphic Systems 2021, 1-9. DOI: 10.1145/3477145.3477156. Online publication date: 27-Jul-2021.
  • (2021) Dynamic Reliability Management in Neuromorphic Computing. ACM Journal on Emerging Technologies in Computing Systems, 17(4), 1-27. DOI: 10.1145/3462330. Online publication date: 21-Jul-2021.
  • (2021) Special Session: Reliability Analysis for AI/ML Hardware. 2021 IEEE 39th VLSI Test Symposium (VTS), 1-10. DOI: 10.1109/VTS50974.2021.9441050. Online publication date: 25-Apr-2021.
  • (2021) Improving Inference Lifetime of Neuromorphic Systems via Intelligent Synapse Mapping. 2021 IEEE 32nd International Conference on Application-specific Systems, Architectures and Processors (ASAP), 17-24. DOI: 10.1109/ASAP52443.2021.00010. Online publication date: Jul-2021.
  • (2020) Dissecting Convolutional Neural Networks for Efficient Implementation on Constrained Platforms. 2020 33rd International Conference on VLSI Design and 2020 19th International Conference on Embedded Systems (VLSID), 149-154. DOI: 10.1109/VLSID49098.2020.00043. Online publication date: Jan-2020.
