DOI: 10.1145/1518701.1518946

Comparison of three one-question, post-task usability questionnaires

Published: 4 April 2009

ABSTRACT

Post-task ratings of difficulty in a usability test have the potential to provide diagnostic information and be an additional measure of user satisfaction. But the ratings need to be reliable as well as easy to use for both respondents and researchers. Three one-question rating types were compared in a study with 26 participants who attempted the same five tasks with two software applications. The types were a Likert scale, a Usability Magnitude Estimation (UME) judgment, and a Subjective Mental Effort Question (SMEQ). All three types could distinguish between the applications with 26 participants, but the Likert and SMEQ types were more sensitive with small sample sizes. Both the Likert and SMEQ types were easy to learn and quick to execute. The online version of the SMEQ question was highly correlated with other measures and had equal sensitivity to the Likert question type.

Supplemental Material

Video


Published in

CHI '09: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
April 2009, 2426 pages
ISBN: 9781605582467
DOI: 10.1145/1518701

      Copyright © 2009 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Acceptance Rates

CHI '09 paper acceptance rate: 277 of 1,130 submissions (25%). Overall CHI acceptance rate: 6,199 of 26,314 submissions (24%).
