
Active refocusing of images and videos

Published: 29 July 2007

Abstract

We present a system for refocusing images and videos of dynamic scenes using a novel, single-view depth estimation method. Our method for obtaining depth is based on the defocus of a sparse set of dots projected onto the scene. In contrast to other active illumination techniques, the projected pattern of dots can be removed from each captured image and its brightness easily controlled in order to avoid under- or over-exposure. The depths corresponding to the projected dots and a color segmentation of the image are used to compute an approximate depth map of the scene with clean region boundaries. The depth map is used to refocus the acquired image after the dots are removed, simulating realistic depth of field effects. Experiments on a wide variety of scenes, including close-ups and live action, demonstrate the effectiveness of our method.
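The refocusing step described above (simulating depth of field from an approximate depth map) can be sketched as a layered, depth-dependent blur. The following is an illustrative toy model, not the paper's actual algorithm: the thin-lens-style `coc_radius` proxy, the aperture scale, the layer count, and the box-blur kernel are all assumptions, and it handles only grayscale 2D arrays.

```python
import numpy as np

def coc_radius(depth, focus_depth, aperture=8.0):
    # Circle-of-confusion radius in pixels, proportional to the
    # difference of inverse depths (thin-lens model, up to scale).
    return aperture * np.abs(1.0 / depth - 1.0 / focus_depth) * focus_depth

def box_blur(img, radius):
    # Separable box blur with integer radius; radius 0 returns a copy.
    r = int(round(radius))
    if r <= 0:
        return img.copy()
    k = 2 * r + 1
    pad = np.pad(img, ((r, r), (r, r)), mode="edge")
    # Horizontal pass via cumulative sums (running window mean).
    cs = np.cumsum(pad, axis=1)
    h = (cs[:, k - 1:] - np.concatenate(
        [np.zeros((pad.shape[0], 1)), cs[:, :-k]], axis=1)) / k
    # Vertical pass.
    cs = np.cumsum(h, axis=0)
    return (cs[k - 1:, :] - np.concatenate(
        [np.zeros((1, h.shape[1])), cs[:-k, :]], axis=0)) / k

def refocus(img, depth, focus_depth, n_layers=8, aperture=8.0):
    # Quantize depth into layers, blur each layer by its circle of
    # confusion, then pick, per pixel, the layer the pixel belongs to.
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    labels = np.clip(np.digitize(depth, edges) - 1, 0, n_layers - 1)
    out = np.zeros_like(img, dtype=float)
    for i, d in enumerate(centers):
        blurred = box_blur(img.astype(float),
                           coc_radius(d, focus_depth, aperture))
        out[labels == i] = blurred[labels == i]
    return out
```

A production implementation would blend across layer boundaries to avoid halo artifacts at depth discontinuities (the clean segmentation boundaries discussed in the abstract matter for exactly this reason), and the paper's pipeline additionally removes the projected dots before refocusing.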

Supplementary Material

JPG File (pps067.jpg)
MP4 File (pps067.mp4)




Published In

ACM Transactions on Graphics  Volume 26, Issue 3
July 2007
976 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/1276377

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. active illumination
  2. computational photography
  3. depth from defocus
  4. depth of field
  5. image segmentation
  6. refocusing


