Tradeoffs and Limits in Computational Imaging
For centuries, cameras were designed to closely mimic the human visual system. With the rapid increase in computer processing power over the last few decades, researchers in the vision, graphics, and optics communities have begun to focus their attention on new types of imaging systems that use computation as an integral part of the imaging process. Computational cameras optically encode information that is later decoded using signal processing. In this thesis, I present three new computational imaging designs that provide functionality beyond that of conventional cameras. Each design has been rigorously analyzed, built, and tested for performance, and each has demonstrated an increase in functionality over traditional camera designs. The first two computational imaging systems, Diffusion Coding and Spectral Focal Sweep, provide a means to computationally extend the depth of field of an imaging system without sacrificing optical efficiency. These techniques can be used to preserve image detail when photographing scenes that span very large depth ranges. The final example, Gigapixel Computational Imaging, uses a computational approach to overcome limitations in spatial resolution caused by geometric aberrations in conventional cameras. While computational techniques can be used to increase optical efficiency, this comes at a cost: noise amplification caused by the decoding process. Thus, to measure the real utility of a computational approach, we must weigh the benefit of increased optical efficiency against the cost of amplified noise, and a complete treatment must take into account an accurate noise model. In some cases, the benefit may not outweigh the cost, and a computational approach then has no value. This thesis concludes with a discussion of these scenarios.
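The efficiency-versus-noise tradeoff described in the abstract can be illustrated numerically. The sketch below is a toy model only (not the thesis's full noise analysis): it computes the mean noise-power gain of regularized inverse filtering for two hypothetical PSFs, an ideal delta PSF and an 8-pixel box blur.

```python
import numpy as np

def noise_amplification(psf, eps=1e-3):
    """Mean noise-power gain of regularized inverse filtering.

    For additive white noise, deconvolving with 1/H multiplies the noise
    power at each frequency by 1/|H|^2; eps regularizes near-zero bins.
    (Illustrative model only; values are assumptions, not thesis results.)
    """
    H = np.fft.fft(psf, n=256)
    gain = 1.0 / np.maximum(np.abs(H) ** 2, eps)
    return gain.mean()

delta = np.array([1.0])        # ideal impulse PSF: no blur to decode
box = np.ones(8) / 8.0         # hypothetical 8-pixel box blur

print(noise_amplification(delta))   # ~1.0: decoding adds no noise
print(noise_amplification(box))     # >1.0: decoding amplifies noise
```

The comparison makes the point of the abstract concrete: the more the optical code attenuates frequencies, the more the decoder must amplify them, and the noise with them.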
Compressive Holographic Video
Compressed sensing has been discussed separately in spatial and temporal
domains. Compressive holography has been introduced as a method that allows 3D
tomographic reconstruction at different depths from a single 2D image. Coded
exposure is a temporal compressed sensing method for high speed video
acquisition. In this work, we combine compressive holography and coded exposure
techniques and extend the discussion to 4D reconstruction in space and time
from a single coded image. In our prototype, digital in-line holography was
used for imaging macroscopic, fast moving objects. The pixel-wise temporal
modulation was implemented by a digital micromirror device. In this paper we
demonstrate temporal super-resolution together with recovery of multiple depths
from a single image. Two examples are presented: recording subtle vibrations and
tracking small particles within 5 ms. Comment: 12 pages, 6 figures
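The joint space-time recovery described above is an instance of sparse reconstruction from a single underdetermined measurement. In the following toy sketch, an assumed random Gaussian sensing matrix stands in for the actual holographic-propagation and coded-exposure operators, and ISTA (iterative soft-thresholding) recovers a sparse vector from one coded capture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse unknown: think of a 4D space-time volume flattened to a vector
# (toy dimensions, chosen for illustration only).
n, m, k = 128, 48, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# A is an assumed stand-in for the combined holographic + coded-exposure
# forward model; y is the single coded measurement.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA for min_x ||Ax - y||^2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small: sparse signal recovered from 48 of 128 samples
```

The actual system replaces the random matrix with a physically calibrated operator, but the recovery principle is the same.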
Intensity interferometry-based 3D imaging
The development of single-photon counting detectors and arrays has made
tremendous strides in recent years, not least because of various new
applications in, e.g., LIDAR devices. In this work, a 3D imaging device based
on real thermal light intensity interferometry is presented. By using gated
SPAD technology, a basic 3D scene is imaged in reasonable measurement time.
Compared to conventional approaches, the proposed synchronized photon counting
allows using more light modes to enhance 3D ranging performance. Advantages
like robustness to atmospheric scattering or autonomy by exploiting external
light sources can make this ranging approach interesting for future
applications.
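The ranging principle rests on the second-order correlation g2 of thermal intensity fluctuations recorded at two detectors: g2 peaks at their relative delay, which encodes the path-length difference. A minimal numerical sketch (synthetic thermal-like light and an assumed 7-sample delay; not the paper's SPAD pipeline) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Thermal-like intensity: squared low-pass-filtered Gaussian noise
# (assumed model; gives the bunched fluctuations g2 relies on).
n, delay = 20000, 7                    # delay in samples (assumed)
raw = rng.standard_normal(n + delay)
smooth = np.convolve(raw, np.ones(5) / 5.0, mode="same")
intensity = smooth ** 2

i1 = intensity[delay:delay + n]        # detector behind the extra path
i2 = intensity[:n]                     # reference detector

# Estimate g2(tau) = <I1(t) I2(t + tau)> / (<I1><I2>) over candidate lags.
taus = np.arange(-20, 21)
g2 = [np.mean(i1[20:-20] * i2[20 + t:len(i2) - 20 + t]) /
      (i1.mean() * i2.mean()) for t in taus]

recovered = int(taus[int(np.argmax(g2))])
print(recovered)  # peak location recovers the relative delay
```

In the real device, the intensity traces come from gated SPAD photon counts, and the recovered delay maps to depth through the speed of light.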
Accurate Eye Tracking from Dense 3D Surface Reconstructions using Single-Shot Deflectometry
Eye-tracking plays a crucial role in the development of virtual reality
devices, neuroscience research, and psychology. Despite its significance in
numerous applications, achieving an accurate, robust, and fast eye-tracking
solution remains a considerable challenge for current state-of-the-art methods.
While existing reflection-based techniques (e.g., "glint tracking") are
considered the most accurate, their performance is limited by their reliance on
sparse 3D surface data acquired solely from the cornea surface. In this paper,
we rethink how specular reflections can be used for eye tracking: we
propose a novel method for accurate and fast evaluation of the gaze direction
that exploits teachings from single-shot phase-measuring-deflectometry (PMD).
In contrast to state-of-the-art reflection-based methods, our method acquires
dense 3D surface information of both cornea and sclera within only one single
camera frame (single-shot). Substantial improvements in the number of acquired
surface reflection points ("glints") are achievable. We show the feasibility of
our approach with low experimentally evaluated gaze errors, demonstrating a
significant improvement over the current state-of-the-art.
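One building block behind reflection-based eye tracking is fitting a sphere to reconstructed corneal surface points, since the fitted center constrains the eye's optical axis. The sketch below is illustrative only (synthetic noiseless points with an assumed 7.8 mm corneal radius; not the paper's PMD pipeline) and shows the standard linear least-squares sphere fit:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_sphere(pts):
    """Least-squares sphere fit: |p - c|^2 = r^2 is linear in (c, r^2 - |c|^2)."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    return c, r

# Synthetic dense samples on a corneal cap (assumed: radius 7.8 mm,
# center at (1, 2, 3) mm, cap half-angle ~34 degrees).
true_c, true_r = np.array([1.0, 2.0, 3.0]), 7.8
theta = rng.uniform(0, 0.6, 2000)
phi = rng.uniform(0, 2 * np.pi, 2000)
pts = true_c + true_r * np.stack([np.sin(theta) * np.cos(phi),
                                  np.sin(theta) * np.sin(phi),
                                  np.cos(theta)], axis=1)

c, r = fit_sphere(pts)
print(c, r)  # recovers the assumed center and radius
```

With dense single-shot surface data, such fits are heavily overdetermined, which is one reason denser glint coverage can translate into more robust gaze estimates.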