
Image and depth from a conventional camera with a coded aperture

Published: 29 July 2007

Abstract

A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image.
Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.
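
The recovery procedure summarized above can be illustrated with a short sketch. The following Python fragment is only a simplified illustration of the general principle, not the authors' algorithm: it deconvolves the photograph with the coded-aperture point-spread function rescaled for each candidate depth, and keeps the depth whose deconvolution looks most like a natural image under a heavy-tailed (sparse) prior on gradients. The Wiener filter, the gradient-sparsity score, and every name and parameter here (wiener_deconvolve, scale_kernel, estimate_depth, noise_level, alpha) are assumptions introduced for illustration.

import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import zoom


def wiener_deconvolve(blurred, kernel, noise_level=1e-2):
    """Frequency-domain Wiener deconvolution of `blurred` by `kernel`."""
    K = fft2(kernel, s=blurred.shape)
    B = fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + noise_level)
    return np.real(ifft2(X))


def scale_kernel(aperture_pattern, blur_diameter):
    """Rescale the binary aperture pattern to the blur size implied by a depth."""
    factor = blur_diameter / aperture_pattern.shape[0]
    k = zoom(aperture_pattern.astype(float), factor, order=1)
    return k / k.sum()


def estimate_depth(blurred, aperture_pattern, blur_diameters, alpha=0.8):
    """Return the candidate blur diameter (a proxy for depth) whose
    deconvolution looks most like a natural image under a heavy-tailed
    (sparse) prior on image gradients."""
    best_d, best_energy = None, np.inf
    for d in blur_diameters:
        k = scale_kernel(aperture_pattern, d)
        sharp = wiener_deconvolve(blurred, k)
        gx = np.diff(sharp, axis=1)
        gy = np.diff(sharp, axis=0)
        # Deconvolving with the wrong blur scale produces ringing, which
        # inflates this sparse-gradient energy; the correct scale minimizes it.
        energy = np.sum(np.abs(gx) ** alpha) + np.sum(np.abs(gy) ** alpha)
        if energy < best_energy:
            best_d, best_energy = d, energy
    return best_d


# Hypothetical usage: `img` is a grayscale photograph taken through the coded
# aperture, `pattern` is the binary occluder mask, and the candidate blur
# diameters span the depth range of interest (all values are illustrative).
# depth_code = estimate_depth(img, pattern, blur_diameters=range(5, 31, 2))

The paper's method goes further than this sketch: depth is estimated locally across the image to produce a layered depth map, and the all-focus image is recovered with a deconvolution that uses the same sparse image prior, whereas the fragment above returns a single global blur scale.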

Supplementary Material

JPG File (pps070.jpg)
MP4 File (pps070.mp4)

Published In

ACM Transactions on Graphics, Volume 26, Issue 3
July 2007
976 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/1276377

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 29 July 2007
Published in TOG Volume 26, Issue 3


Author Tags

  1. coded imaging
  2. computational photography
  3. deblurring
  4. depth of field
  5. image statistics
  6. range estimation

Qualifiers

  • Article

Article Metrics

  • Downloads (Last 12 months): 323
  • Downloads (Last 6 weeks): 29
Reflects downloads up to 05 Mar 2025

Cited By

  • (2025) Point spread function modeling and engineering in black-box lens systems. Optics Express 33(3), 4211. DOI: 10.1364/OE.546767. Online publication date: 28-Jan-2025.
  • (2025) A Unified Framework for Event-Based Frame Interpolation With Ad-Hoc Deblurring in the Wild. IEEE Transactions on Pattern Analysis and Machine Intelligence 47(4), 2265-2279. DOI: 10.1109/TPAMI.2024.3510690. Online publication date: Apr-2025.
  • (2025) Lightweight sparse optoacoustic image reconstruction via an attention-driven multi-scale wavelet network. Photoacoustics 42, 100695. DOI: 10.1016/j.pacs.2025.100695. Online publication date: Apr-2025.
  • (2025) Noise variances and regularization learning gradient descent network for image deconvolution. Journal of Visual Communication and Image Representation 107, 104391. DOI: 10.1016/j.jvcir.2025.104391. Online publication date: Mar-2025.
  • (2025) DefocusSR2: An efficient depth-guided and distillation-based framework for defocus images super-resolution. Displays 86, 102883. DOI: 10.1016/j.displa.2024.102883. Online publication date: Jan-2025.
  • (2025) Guided image filtering-conventional to deep models: A review and evaluation study. Computer Vision and Image Understanding 252, 104278. DOI: 10.1016/j.cviu.2025.104278. Online publication date: Feb-2025.
  • (2025) Development of a plenoptic imaging system for turbulent atmospheric applications. Journal of Optics. DOI: 10.1007/s12596-024-02408-6. Online publication date: 4-Jan-2025.
  • (2025) Continual Test-Time Adaptation for Single Image Defocus Deblurring via Causal Siamese Networks. International Journal of Computer Vision. DOI: 10.1007/s11263-025-02363-0. Online publication date: 22-Feb-2025.
  • (2024) Auto Encoder Driven Hybrid Pipelines for Image Deblurring using NAFNET. International Journal of Innovative Science and Research Technology (IJISRT), 833-838. DOI: 10.38124/ijisrt/IJISRT24APR932. Online publication date: 25-Apr-2024.
  • (2024) Light field characterization and resolution enhancement techniques: literature review and latest advances (invited). Infrared and Laser Engineering 53(9), 20240347. DOI: 10.3788/IRLA20240347. Online publication date: 2024.
