ABSTRACT
Graphical user interfaces (GUIs) are among the most common user interfaces, enabling interaction with applications through mouse movements and key presses. Tools for automated testing of programs through their GUIs exist; however, they usually rely on operating-system- or framework-specific knowledge to interact with an application. Because frequent operating system updates can remove the required information, and because the wide variety of GUI frameworks use unique underlying data structures, such tools rapidly become obsolete. Consequently, supporting many frameworks and operating systems is impractical for an automated GUI test generation tool. We propose a technique that improves GUI testing by automatically identifying GUI widgets in screenshots using machine learning. As training data, we generate randomized GUIs from which widget information can be extracted automatically. The resulting model guides GUI testing tools in otherwise unsupported environments by deriving GUI widget information from screenshots alone. In our experiments, identifying GUI widgets in screenshots and using this information to guide random testing achieved significantly higher branch coverage in 18 of 20 applications, with an average increase of 42.5% compared to conventional random testing.
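The core idea above can be sketched in a few lines: a widget detector turns a screenshot into bounding boxes, and the random tester samples click coordinates from inside those boxes instead of from the whole screen. The sketch below is illustrative only; `detect_widgets` is a hypothetical stub standing in for the paper's trained object-detection model, and the box coordinates are made up.

```python
import random

def detect_widgets(screenshot):
    """Stub detector returning (x, y, width, height) boxes for widgets.
    In the described approach this would be an object-detection model
    trained on randomly generated GUIs; here it is hard-coded."""
    return [(10, 10, 80, 24), (10, 50, 80, 24), (120, 10, 200, 150)]

def pick_click(screenshot, screen_size, rng=random):
    """Pick a click target inside a detected widget if any were found,
    otherwise fall back to a uniformly random screen coordinate
    (i.e. conventional random testing)."""
    boxes = detect_widgets(screenshot)
    if boxes:
        x, y, w, h = rng.choice(boxes)
        return (x + rng.randrange(w), y + rng.randrange(h))
    width, height = screen_size
    return (rng.randrange(width), rng.randrange(height))

# Example: generate one guided click for a (fake) screenshot.
click = pick_click(screenshot=None, screen_size=(1024, 768))
print(click)
```

Because every sampled point lies inside some widget, each generated event has a chance of triggering application logic, which is what drives the coverage improvement the abstract reports.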
Improving random GUI testing with image-based widget detection