DOI: 10.1145/3236024.3264841

research-article

Salient-class location: help developers understand code change in code review

Published: 26 October 2018

ABSTRACT

Code review requires significant human effort to understand a code change, because the information needed to inspect the change may be distributed across multiple files that reviewers are unfamiliar with. Code changes are typically organized as commits for review. In this paper, we found that most commits contain a salient class: a class that is saliently modified and whose modification causes the changes to the remaining classes in the commit. Our user studies confirmed that identifying the salient class in a commit helps reviewers understand the code change. We model salient-class identification as a binary classification problem and extract a number of discriminative features from a commit to characterize the salience of a class. Initial experimental results show that the proposed approach improves the efficiency with which reviewers understand code changes in code review.
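The abstract frames salient-class identification as binary classification over per-class features extracted from a commit. The sketch below illustrates that framing only; the feature names (lines changed, methods modified, coupling to other changed classes), the toy labeling rule, and the choice of a random forest are all assumptions for illustration, not the paper's actual feature set or model.

```python
# Illustrative sketch: salient-class identification as binary classification.
# Features and labeling rule are hypothetical, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: one row per (commit, class) pair.
# Columns: [lines_changed, methods_modified, coupling_to_other_changed_classes]
n = 200
X = rng.integers(0, 50, size=(n, 3)).astype(float)
# Toy labeling rule: a class is "salient" when it is heavily modified
# and strongly coupled to the other changed classes in the commit.
y = ((X[:, 0] > 25) & (X[:, 2] > 20)).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Rank the classes of a new commit by predicted salience probability.
commit_classes = ["Parser", "Lexer", "Utils"]
features = np.array([
    [40.0, 6.0, 30.0],  # heavily modified, high coupling
    [5.0, 1.0, 2.0],
    [2.0, 1.0, 1.0],
])
probs = clf.predict_proba(features)[:, 1]
ranking = [c for _, c in sorted(zip(probs, commit_classes), reverse=True)]
print(ranking[0])
```

Presenting the top-ranked class to reviewers first is the intended use: reviewers start from the class that likely drove the rest of the change.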


Published in

ESEC/FSE 2018: Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering
October 2018, 987 pages
ISBN: 9781450355735
DOI: 10.1145/3236024

      Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 112 of 543 submissions, 21%
