DOI: 10.5555/1839214.1839248
Research article · Free access

Techniques for view transition in multi-camera outdoor environments

Published: 31 May 2010

Abstract

Environment monitoring using multiple observation cameras is increasingly popular. Various techniques exist for visualizing the incoming video streams, but few evaluations are available to determine the most suitable one for a given task and context. This article compares three techniques for browsing video feeds from cameras located around the user in an unstructured manner. The techniques allow mobile users to gain extra information about the surroundings, objects, and actors in the environment by observing a site from different perspectives. They relate local and remote cameras topologically, via a tunnel, or via a bird's-eye viewpoint. Their common goal is to enhance the viewer's spatial awareness without relying on a model or prior knowledge of the environment. We introduce several factors of spatial awareness inherent to multi-camera systems and present a comparative evaluation of the proposed techniques with respect to spatial understanding and workload.
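For readers unfamiliar with the term, a "view transition" animates the viewer from a local camera pose to a remote one. The sketch below is only a generic illustration of that idea, not the paper's implementation: the camera poses, the `lift` parameter, and the helper names are invented here. It interpolates position and look-at target while arcing the path upward, loosely in the spirit of a bird's-eye detour that keeps the ground layout in view:

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two 3D points given as tuples."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def transition_pose(local_cam, remote_cam, t, lift=10.0):
    """Interpolate camera position and look-at target from a local to a
    remote camera, raising the path in between so the viewer retains an
    overview during the transition. Cameras are dicts with 'pos' and
    'target' 3D tuples; t runs from 0 (local) to 1 (remote)."""
    pos = lerp(local_cam["pos"], remote_cam["pos"], t)
    # Raise the path at mid-transition: sin(pi*t) is 0 at both ends
    # and peaks at t = 0.5.
    pos = (pos[0], pos[1] + lift * math.sin(math.pi * t), pos[2])
    target = lerp(local_cam["target"], remote_cam["target"], t)
    return {"pos": pos, "target": target}

# Hypothetical poses: a pedestrian-height local view and a remote camera.
local = {"pos": (0.0, 1.7, 0.0), "target": (5.0, 1.7, 0.0)}
remote = {"pos": (20.0, 3.0, 10.0), "target": (15.0, 1.0, 10.0)}
mid = transition_pose(local, remote, 0.5)
```

Sampling t from 0 to 1 over the animation yields the intermediate poses; a real system would also interpolate camera orientation (e.g. quaternion slerp) rather than only a look-at target.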




Published In

GI '10: Proceedings of Graphics Interface 2010
May 2010
291 pages
ISBN:9781568817125
  • Program Chairs:
  • David Mould,
  • Sylvie Noël

Publisher

Canadian Information Processing Society

Canada


Author Tags

  1. human factors
  2. multi-camera systems
  3. navigation techniques
  4. situation awareness
  5. workload


Acceptance Rates

GI '10 paper acceptance rate: 35 of 88 submissions (40%)
Overall acceptance rate: 206 of 508 submissions (41%)


Cited By

  • (2018) Towards Resource-Aware Hybrid Camera Systems. Proceedings of the 12th International Conference on Distributed Smart Cameras, pp. 1-7. DOI: 10.1145/3243394.3243701. Online publication date: 3-Sep-2018
  • (2018) Immersive Trip Reports. Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, pp. 389-401. DOI: 10.1145/3242587.3242653. Online publication date: 11-Oct-2018
  • (2018) Drone-Augmented Human Vision. IEEE Transactions on Visualization and Computer Graphics, 24(4), pp. 1437-1446. DOI: 10.1109/TVCG.2018.2794058. Online publication date: 1-Apr-2018
  • (2017) On the precision of third person perspective augmented reality for target designation tasks. Multimedia Tools and Applications, 76(14), pp. 15279-15296. DOI: 10.1007/s11042-016-3817-0. Online publication date: 1-Jul-2017
  • (2015) ModulAR: Eye-Controlled Vision Augmentations for Head Mounted Displays. IEEE Transactions on Visualization and Computer Graphics, 21(11), pp. 1259-1268. DOI: 10.1109/TVCG.2015.2459852. Online publication date: 1-Nov-2015
  • (2013) Exploring distant objects with augmented reality. Proceedings of the 5th Joint Virtual Reality Conference, pp. 49-56. DOI: 10.5555/2600262.2600272. Online publication date: 11-Dec-2013
  • (2012) Exploring the design of hybrid interfaces for augmented posters in public spaces. Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design, pp. 238-246. DOI: 10.1145/2399016.2399053. Online publication date: 14-Oct-2012
  • (2012) Videoscapes. ACM Transactions on Graphics, 31(4), pp. 1-12. DOI: 10.1145/2185520.2185564. Online publication date: 1-Jul-2012
  • (2011) HYDROSYS. Proceedings of the 10th International Conference on Web and Wireless Geographical Information Systems, pp. 159-175. DOI: 10.5555/1966271.1966289. Online publication date: 3-Mar-2011
  • (2011) Directing attention and influencing memory with visual saliency modulation. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1471-1480. DOI: 10.1145/1978942.1979158. Online publication date: 7-May-2011
