Wordometer Systems for Everyday Life

Abstract

In this paper we present a detailed comparison of different algorithms and devices for determining the number of words read in everyday life. We call our system the “Wordometer”. Our experiments use three kinds of eye tracking systems: mobile video-oculography (MVoG), stationary video-oculography (SVoG), and electro-oculography (EoG). By analyzing eye movements, we estimate the number of words a user has read. Because inexpensive eye trackers have recently appeared on the market, we conducted a large-scale experiment comparing three devices suitable for daily reading on a screen: the Tobii Eye X (SVoG), the JINS MEME (EoG), and the Pupil (MVoG). We found that, when used with the Wordometer, the accuracy of these everyday-life devices was similar to that of professional devices. We also analyzed the robustness of the systems against special reading behaviors, namely rereading and skipping.

With the MVoG, SVoG, and EoG systems, we obtained estimation errors of 7.2%, 13.0%, and 10.6%, respectively, in our main experiment. Across all our experiments, we collected 300 recordings from 14 participants, amounting to 109,097 read words.
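To make the estimation idea concrete, the sketch below shows one plausible way to derive a word count and its relative error from a horizontal gaze trace, in the spirit of the Wordometer literature: large right-to-left gaze sweeps are treated as line breaks, and the resulting line count is multiplied by an assumed average number of words per line. The threshold, the words-per-line constant, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's implementation): estimate the
# number of words read from a horizontal gaze trace by counting line breaks,
# i.e. large right-to-left sweeps, then multiplying by an assumed average
# number of words per text line.

from typing import Sequence

LINE_BREAK_THRESHOLD_PX = 200   # assumed minimum leftward jump marking a new line
AVG_WORDS_PER_LINE = 10         # assumed average number of words per text line


def estimate_words_read(gaze_x: Sequence[float]) -> int:
    """Count large leftward jumps in horizontal gaze position and convert
    the resulting line count into an estimated word count."""
    line_breaks = 0
    for prev, curr in zip(gaze_x, gaze_x[1:]):
        if prev - curr > LINE_BREAK_THRESHOLD_PX:   # gaze swept back to line start
            line_breaks += 1
    lines_read = line_breaks + 1 if gaze_x else 0   # first line has no preceding break
    return lines_read * AVG_WORDS_PER_LINE


def estimation_error(estimated: int, actual: int) -> float:
    """Relative estimation error in percent, as reported in the abstract."""
    return abs(estimated - actual) / actual * 100.0


if __name__ == "__main__":
    # Synthetic trace: gaze drifts rightward within each line, then jumps back.
    trace = [50, 150, 300, 450, 600, 60, 180, 320, 470, 610, 55, 170, 330]
    est = estimate_words_read(trace)
    print(f"estimated words: {est}, error vs. 27 actual words: "
          f"{estimation_error(est, 27):.1f}%")
```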

Published in

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 1, Issue 4
December 2017, 1298 pages
EISSN: 2474-9567
DOI: 10.1145/3178157

Copyright © 2018 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 8 January 2018
• Accepted: 1 October 2017
• Revised: 1 July 2017
• Received: 1 May 2017
