DOI: 10.1145/1452392.1452414

The CAVA corpus: synchronised stereoscopic and binaural datasets with head movements

Published: 20 October 2008

Abstract

This paper describes the acquisition and content of a new multi-modal database, together with some tools for making use of its data streams. The Computational Audio-Visual Analysis (CAVA) database is a unique collection of three synchronised data streams obtained from a binaural microphone pair, a stereoscopic camera pair and a head-tracking device. All recordings are made from the perspective of a person, i.e. they capture what a human with natural head movements would see and hear in a given environment. The database is intended to facilitate research into humans' ability to optimise their multi-modal sensory input, and fills a gap by providing data for human-centred audio-visual scene analysis. It also enables 3D localisation using audio, visual, or combined audio-visual cues. A total of 50 sessions, with varying degrees of visual and auditory complexity, were recorded. These range from seeing and hearing a single speaker moving in and out of the field of view, to moving around a 'cocktail party' style situation, mingling with and joining different small groups of people chatting.
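The paper also presents tools for working with the data streams, though they are not detailed on this page. As a rough illustration of what consuming such a corpus involves, here is a minimal Python sketch that loads a binaural recording, maps a video frame index to the audio samples it overlaps, and looks up the nearest head-tracker pose. The file names, the 25 fps frame rate and the pose-array layout are assumptions made for illustration, not the corpus's actual distribution format; see http://perception.inrialpes.fr/CAVA_Dataset/ for the real formats.

```python
import wave
import numpy as np

def load_binaural(path):
    """Read a two-channel WAV file into a (n_samples, 2) float32 array in [-1, 1]."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        raw = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        audio = raw.reshape(-1, w.getnchannels()).astype(np.float32) / 32768.0
    return audio, rate

def audio_span_for_frame(frame_idx, video_fps, audio_rate):
    """Return the [start, stop) audio sample indices covering one video frame,
    assuming the audio and video streams share a common start time."""
    start = int(round(frame_idx / video_fps * audio_rate))
    stop = int(round((frame_idx + 1) / video_fps * audio_rate))
    return start, stop

def nearest_pose(t, pose_times, poses):
    """Return the 6-DoF head pose whose timestamp is closest to time t (seconds)."""
    i = int(np.argmin(np.abs(np.asarray(pose_times) - t)))
    return poses[i]

if __name__ == "__main__":
    # Hypothetical file names and a 25 fps camera, purely for illustration.
    audio, rate = load_binaural("session01_binaural.wav")
    start, stop = audio_span_for_frame(120, video_fps=25.0, audio_rate=rate)
    frame_audio = audio[start:stop]  # binaural samples heard during frame 120
    pose_times = np.load("session01_pose_times.npy")  # (n,) timestamps in seconds
    poses = np.load("session01_poses.npy")            # (n, 6) 6-DoF head poses
    head_pose = nearest_pose(120 / 25.0, pose_times, poses)
```

Nearest-neighbour pose lookup is used here only because the head tracker and the cameras need not share a sampling rate; interpolating between the two neighbouring poses would work equally well.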



Published In

ICMI '08: Proceedings of the 10th international conference on Multimodal interfaces
October 2008
322 pages
ISBN: 9781605581989
DOI: 10.1145/1452392


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. binaural hearing
  2. database
  3. stereo vision

Qualifiers

  • Poster

Conference

ICMI '08: International Conference on Multimodal Interfaces
October 20-22, 2008
Chania, Crete, Greece

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions (42%)


