DOI: 10.1145/3243218.3243219
Research article

Configurations in Android testing: they matter

Published: 4 September 2018

ABSTRACT

Android has rocketed to the top of the mobile market thanks in large part to its open source model. Vendors use Android on their devices for free, and companies customize it to suit their needs. The result is a myriad of configurations in use today. In this paper, we show that differences in configurations, if ignored, can lead to differences in test outputs and code coverage. Consequently, researchers who develop new testing techniques and evaluate them on only one or two configurations are missing a necessary dimension in their experiments, and developers who ignore configurations may release buggy software. In a large study of 18 apps across 88 configurations, only one of the 18 apps showed no variation at all; the rest varied in code coverage, test results, or both. 15% of the more than 2,000 test cases across all apps vary, and some of the variation is subtle, i.e., not just a test crash. Our results suggest that configurations in Android testing do matter and that developers need to test using configuration-aware techniques.
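The core observation above — that a test's outcome can differ from one device configuration to another — can be sketched as a simple cross-configuration comparison. The snippet below is a minimal illustration, not the authors' tooling: it assumes a hypothetical mapping from configuration names to per-test outcomes and flags the tests whose results are not identical across all configurations (the kind of variation the study reports for 15% of test cases).

```python
# Hypothetical sketch: flag tests whose outcome varies across device
# configurations. The data structure and names are assumptions for
# illustration, not part of the paper's artifact.
from collections import defaultdict

def configuration_sensitive(results):
    """results: {config_name: {test_name: outcome}} -> set of test
    names whose outcome differs between at least two configurations."""
    outcomes = defaultdict(set)
    for config, tests in results.items():
        for test, outcome in tests.items():
            outcomes[test].add(outcome)
    # A test is configuration-sensitive if more than one distinct
    # outcome was observed for it across the configurations.
    return {t for t, seen in outcomes.items() if len(seen) > 1}

# Hypothetical example: two configurations, three tests.
results = {
    "Nexus5-API23": {"t_login": "pass", "t_rotate": "pass", "t_share": "pass"},
    "Pixel2-API27": {"t_login": "pass", "t_rotate": "fail", "t_share": "pass"},
}
print(configuration_sensitive(results))  # {'t_rotate'}
```

A configuration-aware workflow would run the same test suite on a sample of configurations and inspect exactly this kind of diff, rather than trusting results from a single emulator image.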


Published in

A-Mobile 2018: Proceedings of the 1st International Workshop on Advances in Mobile App Analysis
September 2018, 34 pages
ISBN: 9781450359733
DOI: 10.1145/3243218

Copyright © 2018 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States

