DOI: 10.1145/3264746.3264791
Research article

Development of vegetation mapping with deep convolutional neural network

Published: 9 October 2018

ABSTRACT

Precision Agriculture (PA) plays a crucial role in the agricultural industry by improving the decision-making process. It aims to allocate resources optimally, maintaining the sustainable productivity of farmland while reducing the use of chemical compounds [17]. However, on-site inspection of vegetation often falls to a researcher's trained eye and experience, particularly when it comes to identifying non-crop vegetation. A deep Convolutional Neural Network (CNN) can be deployed to mitigate the cost of this manual classification. Although CNNs outperform traditional classifiers such as the Support Vector Machine, it remains an open question whether a CNN is deployable in an industrial environment. In this paper, we study the feasibility of CNN-based vegetation mapping for lawn inspection for weeds and, by evaluating the generated results, we explore the possibility of extending the approach to on-site, near-real-time crop inspection.
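The paper does not reproduce its network here, so as a purely illustrative sketch, a minimal CNN classifier for lawn image patches (grass vs. weed) might look like the following. The architecture, patch size, and class names are assumptions, not the authors' model; the dropout layer follows the regularization idea of Srivastava et al. [13].

```python
import torch
import torch.nn as nn

class WeedClassifier(nn.Module):
    """Hypothetical minimal CNN for binary grass-vs-weed patch classification.

    This is NOT the architecture from the paper; it only illustrates the
    kind of model the abstract refers to.
    """

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB patch -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                             # dropout regularization [13]
            nn.Linear(32 * 16 * 16, num_classes),        # logits for {grass, weed}
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = WeedClassifier()
patch = torch.randn(1, 3, 64, 64)  # one synthetic 64x64 RGB lawn patch
logits = model(patch)
print(logits.shape)                # torch.Size([1, 2])
```

In practice such a model would be trained on labeled lawn imagery, with data augmentation [3, 9] to compensate for limited field data.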

References

  1. Rawat, W., & Wang, Z. (2017). Deep convolutional neural networks for image classification: A comprehensive review. Neural Computation, 29(9), 2352--2449.
  2. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770--778).
  3. Wang, J., & Perez, L. (2017). The effectiveness of data augmentation in image classification using deep learning. Technical report.
  4. Torralba, A., & Efros, A. A. (2011, June). Unbiased look at dataset bias. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1521--1528). IEEE.
  5. Zou, W. W., & Yuen, P. C. (2012). Very low-resolution face recognition problem. IEEE Transactions on Image Processing, 21(1), 327--340.
  6. He, K., Zhang, X., Ren, S., & Sun, J. (2016, October). Identity mappings in deep residual networks. In European Conference on Computer Vision (pp. 630--645). Springer, Cham.
  7. Chatfield, K., Simonyan, K., Vedaldi, A., & Zisserman, A. (2014). Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531.
  8. Zhu, X., Vondrick, C., Ramanan, D., & Fowlkes, C. C. (2012, September). Do we need more training data or better models for object detection? In BMVC (Vol. 3, p. 5).
  9. Taylor, L., & Nitschke, G. (2017). Improving deep learning using generic data augmentation. arXiv preprint arXiv:1708.06020.
  10. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097--1105).
  11. Chen, X. W., & Lin, X. (2014). Big data deep learning: Challenges and perspectives. IEEE Access, 2, 514--525.
  12. Sa, I., Ge, Z., Dayoub, F., Upcroft, B., Perez, T., & McCool, C. (2016). DeepFruits: A fruit detection system using deep neural networks. Sensors, 16(8), 1222.
  13. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929--1958.
  14. Shen, L. (2017). End-to-end training for whole image breast cancer diagnosis using an all convolutional design. arXiv preprint arXiv:1708.09427.
  15. Yu, F., & Koltun, V. (2015). Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122.
  16. Masko, D., & Hensman, P. (2015). The impact of imbalanced training data for convolutional neural networks.
  17. Bongiovanni, R., & Lowenberg-DeBoer, J. (2004). Precision agriculture and sustainability. Precision Agriculture, 5(4), 359--387.
  18. Suh, S. H., Kim, D. Y., Jhang, J. E., Byamukama, E., Hatfield, G., & Shin, S. Y. (2017, September). Identification of the white-mold affected soybean fields by using multispectral imageries, spatial autocorrelation and support vector machine. In Proceedings of the International Conference on Research in Adaptive and Convergent Systems (pp. 104--109). ACM.
  19. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  20. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., ... & Berg, A. C. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211--252.

Published in

RACS '18: Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems
October 2018, 355 pages
ISBN: 9781450358859
DOI: 10.1145/3264746
Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Overall acceptance rate: 393 of 1,581 submissions, 25%
