
FeatherNet: An Accelerated Convolutional Neural Network Design for Resource-constrained FPGAs

Published: 28 March 2019

Abstract

Convolutional Neural Network (ConvNet or CNN) algorithms are characterized by a large number of model parameters and high computational complexity. These two requirements make implementations on resource-limited FPGAs challenging, and the challenges are magnified when designs target low-end FPGAs. While previous work has demonstrated successful ConvNet implementations on high-end FPGAs, this article presents a ConvNet accelerator design that enables the implementation of complex deep ConvNet architectures on resource-constrained FPGA platforms aimed at the IoT market. We call the design "FeatherNet" for its light resource utilization. The implementations are VHDL-based, providing flexibility in design optimization. As part of the design process, new methods are introduced to address several design challenges. The first is a novel stride-aware graph-based method, targeted at ConvNets, that achieves efficient signal processing with reduced resource utilization. The second addresses the challenge of determining the minimum arithmetic precision needed while preserving high accuracy; for this challenge, we propose variable-width dynamic fixed-point representations combined with a layer-by-layer design-space pruning heuristic across the different layers of the deep ConvNet model. The third aims at a modular design that can support different types of ConvNet layers while ensuring low resource utilization; for this challenge, we propose relatively small modules composed of computational filters that can be interconnected to build an entire accelerator design. These modular elements can be easily configured through HDL parameters (e.g., layer type, mask size, stride) to meet the needs of specific ConvNet implementations, and thus they can be reused to implement a wide variety of ConvNet architectures. The fourth addresses the challenge of design portability between two different FPGA vendor platforms, namely, Intel/Altera and Xilinx; for this challenge, we propose to instantiate the device-specific hardware blocks needed in each computational filter, rather than relying on the synthesis tools to infer these blocks, while keeping track of the similarities and differences between the two platforms. We believe that the solutions to these design challenges further advance knowledge, as they can benefit designers and other researchers using similar devices or facing similar challenges. Our results demonstrate that the design challenges were successfully addressed, achieving low (30%) resource utilization on the low-end FPGA platforms Zedboard and Cyclone V. The design overcomes the limitation of designs that target high-end platforms and cannot fit on low-end IoT platforms. Furthermore, our design shows superior performance (measured in [Frames/s/W] per dollar) compared to high-end optimized designs.
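
To make the modular, parameterized design concrete, below is a minimal VHDL sketch, not taken from the FeatherNet sources, of how a computational-filter module might expose its per-layer configuration through HDL generics. The entity name, port list, generic names, and the placeholder coefficient are all illustrative assumptions; only the pattern (generics for mask size, stride, and a variable-width dynamic fixed-point format, with results rescaled by an arithmetic right shift) follows the abstract's description.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical computational-filter skeleton; names are illustrative.
entity conv_filter is
  generic (
    MASK_SIZE  : positive := 3;   -- convolution mask edge length
    STRIDE     : positive := 1;   -- stride between mask applications
    DATA_WIDTH : positive := 16;  -- word width of the fixed-point format
    FRAC_BITS  : natural  := 8    -- fractional bits chosen per layer
  );
  port (
    clk       : in  std_logic;
    rst       : in  std_logic;
    valid_in  : in  std_logic;
    pixel_in  : in  signed(DATA_WIDTH - 1 downto 0);
    valid_out : out std_logic;
    pixel_out : out signed(DATA_WIDTH - 1 downto 0)
  );
end entity conv_filter;

architecture rtl of conv_filter is
  -- Placeholder weight; a full filter would stream MASK_SIZE*MASK_SIZE
  -- weights and use STRIDE in its line-buffer addressing (omitted here).
  constant COEFF : signed(DATA_WIDTH - 1 downto 0) := to_signed(64, DATA_WIDTH);
  signal acc : signed(2 * DATA_WIDTH - 1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        acc       <= (others => '0');
        valid_out <= '0';
      elsif valid_in = '1' then
        -- Multiply-accumulate in full precision ...
        acc <= acc + pixel_in * COEFF;
        -- ... then rescale to the layer's dynamic fixed-point format
        -- with an arithmetic right shift by the fractional-bit count.
        pixel_out <= resize(shift_right(acc, FRAC_BITS), DATA_WIDTH);
        valid_out <= '1';
      else
        valid_out <= '0';
      end if;
    end if;
  end process;
end architecture rtl;

Under this pattern, an accelerator would be assembled by interconnecting such filter instances, with per-layer generic values (word width and fractional bits in particular) chosen by a design-space pruning heuristic of the kind the abstract describes.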



• Published in

  ACM Transactions on Reconfigurable Technology and Systems, Volume 12, Issue 2
  June 2019, 117 pages
  ISSN: 1936-7406
  EISSN: 1936-7414
  DOI: 10.1145/3322884
  Editor: Deming Chen

          Copyright © 2019 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 28 March 2019
          • Accepted: 1 January 2019
          • Revised: 1 November 2018
          • Received: 1 November 2017
Published in TRETS Volume 12, Issue 2


          Qualifiers

          • research-article
          • Research
          • Refereed
