DOI: 10.1145/2961111.2962596
research-article
Public Access

Detection of Requirement Errors and Faults via a Human Error Taxonomy: A Feasibility Study

Published: 08 September 2016

ABSTRACT

Background: Developing correct software requirements is important for overall software quality. Most existing quality improvement approaches focus on detecting and removing faults (i.e., problems recorded in a document) rather than identifying the underlying human errors that produced those faults. As a result, developers are likely to commit the same errors in the future and to miss other existing faults with the same origins. We therefore created a Human Error Taxonomy (HET) to help software engineers improve their software requirement specification (SRS) documents. Aims: The goal of this paper is to analyze whether the HET is useful for classifying errors and for guiding developers to find additional faults. Methods: We conducted an empirical study in a classroom setting to evaluate the usefulness and feasibility of the HET. Results: First, software developers were able to use the error categories in the HET to identify and classify the underlying sources of faults found during the inspection of SRS documents. Second, developers were able to use that information to detect additional faults that had gone unnoticed during the initial inspection. Finally, participants had a positive impression of the usefulness of the HET. Conclusions: The HET is effective for identifying and classifying requirements errors and faults, thereby helping to improve the overall quality of the SRS and the software.
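To make the workflow described in the abstract concrete, here is a minimal sketch (not the paper's implementation) of how recorded faults might be tagged with their underlying human-error classification and then grouped to suggest where a guided re-inspection could look for additional faults. The error-class and error-type names below are illustrative assumptions loosely following Reason's slip/lapse/mistake distinction; the actual HET categories are defined in the paper itself.

```python
# Illustrative sketch: classify inspection faults by an assumed error taxonomy,
# then group them to identify SRS sections that may warrant re-inspection.
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical top-level error classes with a few example error types each.
HET = {
    "slip":    ["clerical error", "overlooked detail"],
    "lapse":   ["forgotten stakeholder input", "omitted requirement"],
    "mistake": ["wrong assumption about the environment", "misunderstood domain rule"],
}

@dataclass
class Fault:
    srs_section: str   # where the fault was recorded in the SRS
    description: str   # what is wrong in the document
    error_class: str   # underlying human-error class (a key of HET)
    error_type: str    # more specific error type within that class

def reinspection_targets(faults):
    """Group recorded faults by error class and SRS section.

    Sections already associated with a given error class are candidates for
    re-inspection, since the same error may have produced additional,
    still-undetected faults.
    """
    by_class = defaultdict(set)
    for f in faults:
        if f.error_class not in HET:
            raise ValueError(f"unknown error class: {f.error_class}")
        by_class[f.error_class].add(f.srs_section)
    return dict(by_class)

if __name__ == "__main__":
    faults = [
        Fault("3.1 Functional", "response time unspecified", "lapse", "omitted requirement"),
        Fault("3.4 Interfaces", "units of sensor input missing", "lapse", "omitted requirement"),
        Fault("2.2 Assumptions", "assumes always-on network", "mistake",
              "wrong assumption about the environment"),
    ]
    for error_class, sections in reinspection_targets(faults).items():
        print(f"{error_class}: re-inspect sections {sorted(sections)}")
```

In this toy model, the two "lapse" faults would prompt a second pass over sections 3.1 and 3.4 for further omissions, mirroring the paper's idea that knowing the error behind a fault guides the search for related faults.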


  • Published in

    ESEM '16: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
    September 2016
    457 pages
    ISBN:9781450344272
    DOI:10.1145/2961111

    Copyright © 2016 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

ESEM '16 Paper Acceptance Rate: 27 of 122 submissions, 22%
Overall Acceptance Rate: 130 of 594 submissions, 22%
