ABSTRACT
One aspect of the EU-funded project SAUCE is to explore the possibilities and challenges of integrating light field capture and processing into media productions. A special light field camera was built by Saarland University [Herfet et al. 2018] and was first tested under production conditions in the test production “Unfolding” as part of the SAUCE project. Filmakademie Baden-Württemberg developed the narrative concept, executed the post-production and prepared a complete previsualization. Calibration and post-processing algorithms were developed by Trinity College Dublin and the Brno University of Technology. This document describes challenges encountered while building and shooting with the light field camera array, as well as its potential and challenges for post-production.
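The post-processing referenced above includes synthetic refocusing of the captured camera-array data (see Alain et al. 2019 on generalized shift-and-sum). As a minimal, hedged sketch of that core light field operation, the following shifts each camera view in proportion to its position on the rig plane and averages the result, so that scene points on the chosen depth plane align and appear in focus; the function name, array shapes, and integer-pixel shifts are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

def shift_and_sum_refocus(views, positions, disparity):
    """Refocus a light field captured by a planar camera array (illustrative sketch).

    views:     array of shape (N, H, W) -- one grayscale image per camera
    positions: array of shape (N, 2)   -- (x, y) camera positions on the rig plane
    disparity: scalar -- pixel shift per unit of camera offset; selects the
               depth plane that appears sharp in the output
    """
    n, h, w = views.shape
    acc = np.zeros((h, w), dtype=np.float64)
    for img, (px, py) in zip(views, positions):
        # Shift each view proportionally to its offset on the rig plane,
        # then accumulate; points at the selected depth align across views
        # and stay sharp, while other depths blur out.
        dx = int(round(disparity * px))
        dy = int(round(disparity * py))
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / n
```

A real pipeline would use sub-pixel (interpolated) shifts and calibrated camera poses rather than integer `np.roll`, but the aggregation principle is the same.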
REFERENCES
- M. Alain, W. Aenchbacher, and A. Smolic. 2019. Interactive light field tilt-shift refocus with generalized shift-and-sum. In Proc. European Light Field Imaging Workshop.
- Y. Chen, M. Alain, and A. Smolic. 2017. Fast and accurate optical flow based depth map estimation from light fields. In Proc. of the Irish Machine Vision and Image Processing Conf.
- M. Grogan and R. Dahyot. 2019. L2 Divergence for robust colour transfer. Computer Vision and Image Understanding 181 (2019), 39–49.
- K. Häming and G. Peters. 2010. The structure-from-motion reconstruction pipeline - a survey with focus on short image sequences. Kybernetika 46 (2010).
- R. I. Hartley. 1999. Theory and Practice of Projective Rectification. International Journal of Computer Vision 35, 2 (Nov 1999), 115–127.
- T. Herfet, T. Lange, and H. Priyadarshini Hariharan. 2018. Enabling Multiview- and Light Field-Video for Veridical Visual Experiences. In Computer and Communications (ICCC), 2018 4th IEEE International Conference on (ICCC ’18). IEEE, Chengdu, China.
- V. Ila, L. Polok, M. Solony, and P. Svoboda. 2017. SLAM++: A highly efficient and temporally scalable incremental SLAM framework. The International Journal of Robotics Research 36, 2 (2017), 210–230.
- C. Loop and Z. Zhang. 1999. Computing rectifying homographies for stereo vision. In Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), Vol. 1. 125–131.
- R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan. 2005. Light field photography with a hand-held plenoptic camera. Stanford Technical Report CSTR 2 (2005), 1–11.
- R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz. 2008. Towards 3D Point cloud based object maps for household environments. Robotics and Autonomous Systems 56, 11 (2008), 927–941. Semantic Knowledge in Robotics.
- N. Sabater, G. Boisson, B. Vandame, P. Kerbiriou, F. Babon, M. Hog, T. Langlois, R. Gendrot, O. Bureller, A. Schubert, and V. Allie. 2017. Dataset and Pipeline for Multi-View Light-Field Video. In CVPR Workshops.
- SAUCE. 2019. Lightfield Data. https://animationsinstitut.de/en/lf
- J. Vazquez-Corral and M. Bertalmío. 2014. Color Stabilization Along Time and Across Shots of the Same Scene, for One or Several Cameras of Unknown Specifications. IEEE Transactions on Image Processing (TIP) 23, 10 (Oct 2014), 4564–4575.
- Y. Xu, K. Maeno, H. Nagahara, and R. Taniguchi. 2015. Camera array calibration for light field acquisition. Frontiers of Computer Science 9, 5 (Oct 2015), 691–702.
- M. Ziegler, A. Engelhardt, S. Müller, J. Keinert, F. Zilly, S. Foessel, and K. Schmid. 2015. Multi-camera System for Depth Based Visual Effects and Compositing. In Proceedings of the 12th European Conference on Visual Media Production (CVMP ’15). ACM, New York, NY, USA, Article 3, 10 pages.