
As go the feet...: on the estimation of attentional focus from stance

Published: 20 October 2008

Abstract

The estimation of the direction of visual attention is critical to a large number of interactive systems. This paper investigates the cross-modal relation of the position of one's feet (or standing stance) to the focus of gaze. The intuition is that while one CAN have a range of attentional foci from a particular stance, one may be MORE LIKELY to look in specific directions given an approach vector and stance. We posit that this cross-modal relationship is constrained by biomechanics and personal style. We define a stance vector that models the approach direction before stopping and the pose of a subject's feet. We present a study in which the subjects' feet and approach vectors were tracked. The subjects read aloud the contents of note cards at 4 locations; the order of visits to the cards was randomized. Ten subjects read 40 lines of text each, yielding 400 stance vectors and gaze directions. We divided our data into 4 sets of 300 training and 100 test vectors and trained a neural net to estimate the gaze direction given the stance vector. Our results show that 31% of our gaze-orientation estimates were within 5°, 51% were within 10°, and 60% were within 15°. Given the ability to track foot position, the procedure is minimally invasive.
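To make the pipeline concrete, here is a minimal sketch of training and scoring a stance-to-gaze regressor of the kind described above. It is not the authors' implementation: the stance-vector feature layout, the synthetic placeholder data, the (sin, cos) angle encoding, and the network size are assumptions for illustration; only the 300-training/100-test split proportions and the 5°/10°/15° tolerances are taken from the paper.

```python
# Minimal sketch, NOT the authors' implementation: map a stance vector to a
# gaze direction with a small MLP and score angular accuracy.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data standing in for the paper's 400 tracked samples. Assumed
# feature layout: [approach_heading, lx, ly, l_theta, rx, ry, r_theta].
X = rng.normal(size=(400, 7))
gaze_deg = (30.0 * X[:, 0] + 10.0 * X[:, 3]
            + rng.normal(scale=5.0, size=400)) % 360.0

# Regress on (sin, cos) of the gaze angle so the 0/360-degree wrap-around
# does not distort the loss; recover the angle with atan2 afterwards.
y = np.column_stack([np.sin(np.radians(gaze_deg)),
                     np.cos(np.radians(gaze_deg))])

# Same split proportions as the paper: 300 training and 100 test vectors.
train, test = slice(0, 300), slice(300, 400)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X[train], y[train])

pred = net.predict(X[test])
pred_deg = np.degrees(np.arctan2(pred[:, 0], pred[:, 1])) % 360.0

# Angular error folded into [0, 180] degrees, then the paper's tolerances.
err = np.abs(pred_deg - gaze_deg[test])
err = np.minimum(err, 360.0 - err)
for tol in (5, 10, 15):
    print(f"within {tol} deg: {np.mean(err <= tol):.0%}")
```

On real tracker output one would repeat this over the paper's four rotating train/test splits and average the scores; the synthetic data here only exercises the code path.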



Published In

ICMI '08: Proceedings of the 10th international conference on Multimodal interfaces
October 2008, 322 pages
ISBN: 9781605581989
DOI: 10.1145/1452392

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. attention estimation
2. foot-tracking
3. human-computer interaction
4. multimodal interfaces
5. stance model

Qualifiers

• Poster

Conference

ICMI '08: International Conference on Multimodal Interfaces
October 20-22, 2008
Chania, Crete, Greece

Acceptance Rates

Overall Acceptance Rate: 453 of 1,080 submissions, 42%


Cited By

• (2018) Attention, Please! IEEE Pervasive Computing 13(1): 48-54. DOI: 10.1109/MPRV.2014.31. Online publication date: 21-Dec-2018.
• (2015) The Feet in Human--Computer Interaction. ACM Computing Surveys 48(2): 1-35. DOI: 10.1145/2816455. Online publication date: 24-Sep-2015.
• (2015) Detecting User Intention at Public Displays from Foot Positions. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems: 3899-3902. DOI: 10.1145/2702123.2702148. Online publication date: 18-Apr-2015.
• (2012) Multimedia-Assisted Breathwalk-Aware System. IEEE Transactions on Biomedical Engineering 59(12): 3276-3282. DOI: 10.1109/TBME.2012.2208747. Online publication date: Dec-2012.
