DOI: 10.1145/2666642.2666645
research-article

Spatio-Temporal Event Selection in Basic Surveillance Tasks using Eye Tracking and EEG

Published: 16 November 2014

ABSTRACT

In safety- and security-critical applications such as video surveillance, it is crucial that human operators detect task-relevant events in continuous video streams and select them for report or dissemination to other authorities. The selection operation is usually performed with a manual input device such as a mouse or joystick. Because of the visually rich, dynamic input, the high attention required, the long working hours, and the difficulty of manually selecting moving objects, relevant events can be missed. To alleviate this problem we propose adding another event selection process based on eye-brain input. Our approach combines eye tracking and EEG, providing spatio-temporal event selection without any manual intervention. We report ongoing research, building on prior work in which we showed the general feasibility of the approach. In this contribution, we extend that work by testing the approach with more advanced, less artificial experimental paradigms that simulate frequently occurring, basic types of real surveillance tasks. These paradigms are much closer to a real surveillance task in terms of the visual stimuli used, the more subtle cues indicating events, and the required viewing behavior. We perform an experiment (N=10) with non-experts. The results confirm the feasibility of the approach for event selection in the advanced tasks: we achieve spatio-temporal event selection accuracy scores of up to 77% and 60% for different stages of event indication.
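To illustrate how such eye-brain event selection can operate in principle, here is a minimal sketch, not the authors' implementation: a dispersion-threshold fixation detector supplies the spatial component (where the operator is looking), and a thresholded per-sample EEG "event" score supplies the temporal component (when an event-related response occurs). All function names, thresholds, and the EEG score representation are assumptions for illustration only.

```python
def detect_fixation(gaze, max_dispersion=30.0, min_samples=5):
    """Find the first fixation in a list of (x, y) gaze samples using a
    simple dispersion threshold: a window counts as a fixation while the
    sum of its x- and y-ranges stays below max_dispersion (pixels).
    Returns (start, end, centroid) with end exclusive, or None."""
    for start in range(len(gaze) - min_samples + 1):
        end = start + min_samples
        xs = [p[0] for p in gaze[start:end]]
        ys = [p[1] for p in gaze[start:end]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Grow the window while dispersion stays within the threshold.
            while end < len(gaze):
                xs.append(gaze[end][0])
                ys.append(gaze[end][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop()
                    ys.pop()
                    break
                end += 1
            centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
            return start, end, centroid
    return None


def select_event(gaze, eeg_scores, threshold=0.5):
    """Fuse both modalities: report a spatio-temporal selection
    (time index, screen position) when a sample inside a fixation
    carries an EEG event score above threshold; otherwise None."""
    fix = detect_fixation(gaze)
    if fix is None:
        return None
    start, end, centroid = fix
    for t in range(start, end):
        if eeg_scores[t] > threshold:
            return t, centroid  # temporal + spatial selection, no mouse
    return None
```

In this sketch the selection fires only when both modalities agree, which is the intuition behind hands-free selection: the gaze fixation localizes the candidate object, while the EEG response indicates that the fixated moment was task-relevant.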


Published in

GazeIn '14: Proceedings of the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze & Multimodality
November 2014, 50 pages
ISBN: 9781450301251
DOI: 10.1145/2666642

Copyright © 2014 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


Acceptance Rates

GazeIn '14 Paper Acceptance Rate: 8 of 8 submissions, 100%
Overall Acceptance Rate: 19 of 21 submissions, 90%
