DOI: 10.1145/2678025.2701386

Rhema: A Real-Time In-Situ Intelligent Interface to Help People with Public Speaking

Published: 18 March 2015

ABSTRACT

A large number of people rate public speaking as their top fear. What if these individuals were given an intelligent interface that provides live feedback on their speaking skills? In this paper, we present Rhema, an intelligent user interface for Google Glass that helps people with public speaking. The interface automatically detects the speaker's volume and speaking rate in real time and provides feedback during the actual delivery of the speech. While designing the interface, we experimented with two strategies of information delivery: 1) a continuous stream of information, and 2) sparse delivery of recommendations. We evaluated our interface with 30 native English speakers. Each participant presented three speeches (avg. duration 3 minutes) with the two feedback strategies (continuous, sparse) and a baseline (no feedback) in a random order. The participants were significantly more pleased (p < 0.05) with their speech when using the sparse feedback strategy than with the continuous strategy or with no feedback.
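The feedback loop described above (measure volume and speaking rate over a short sliding window, then surface a brief cue either continuously or sparsely) can be illustrated with a minimal sketch. This is not Rhema's published implementation; all function names, thresholds, window sizes, and the word-timestamp input format below are assumptions chosen for illustration.

    # Illustrative sketch only: Rhema's implementation is not published here.
    # Thresholds, window sizes, and the word-timestamp format are assumptions.
    import math
    from typing import List

    def rms_db(samples: List[float]) -> float:
        """RMS level of one audio window in dB, assuming samples are
        normalized to [-1.0, 1.0] (full scale)."""
        if not samples:
            return float("-inf")
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20.0 * math.log10(rms) if rms > 0 else float("-inf")

    def speaking_rate_wpm(word_end_times: List[float], window_s: float = 10.0) -> float:
        """Words per minute over the most recent window_s seconds.
        word_end_times holds the end time (in seconds) of each recognized
        word, in increasing order (assumed to come from a recognizer)."""
        if not word_end_times:
            return 0.0
        now = word_end_times[-1]
        recent = sum(1 for t in word_end_times if now - t <= window_s)
        return recent * 60.0 / window_s

    def feedback_message(volume_db: float, rate_wpm: float) -> str:
        """Map the two measures to a short on-display cue (assumed thresholds)."""
        if volume_db < -30.0:
            return "LOUDER"
        if rate_wpm > 160.0:
            return "SLOWER"
        return "GOOD"

    def should_update_display(t: float, last_update: float,
                              strategy: str, period_s: float = 20.0) -> bool:
        """Continuous feedback refreshes on every call; sparse feedback
        refreshes only once per period_s seconds (an assumed interval)."""
        return strategy == "continuous" or (t - last_update) >= period_s

    # Example: one update tick under the sparse strategy (illustrative values).
    vol = rms_db([0.01, -0.02, 0.015])                 # one tiny audio window
    wpm = speaking_rate_wpm([0.5, 0.9, 1.4, 2.0, 2.6])  # five recognized words
    if should_update_display(t=30.0, last_update=5.0, strategy="sparse"):
        print(feedback_message(vol, wpm))               # "LOUDER" for these values

Under the sparse strategy, should_update_display gates how often a new message replaces the one on screen, which is the distinction the study evaluates against continuous updates and no feedback.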


Published in

      IUI '15: Proceedings of the 20th International Conference on Intelligent User Interfaces
      March 2015
      480 pages
      ISBN:9781450333061
      DOI:10.1145/2678025

      Copyright © 2015 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Qualifiers

      • research-article

      Acceptance Rates

IUI '15 paper acceptance rate: 47 of 205 submissions (23%). Overall acceptance rate: 746 of 2,811 submissions (27%).
