
    One-shot 3D surface reconstruction from instantaneous frequencies: solutions to ambiguity problems

    Phase-measuring profilometry is a well-known technique for 3D surface reconstruction based on a sinusoidal pattern that is projected onto a scene. If the surface is partly occluded by, for instance, other objects, then the depth shows abrupt transitions at the edges of these occlusions. This causes ambiguities in the phase and, consequently, also in the reconstruction. This paper introduces a reconstruction method that is based on the instantaneous frequency instead of the phase. Using these instantaneous frequencies, we present a method to recover from ambiguities caused by occlusion. The recovery works under the condition that some surface patches can be found that are planar. This ability is demonstrated in a simple example.
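
    As a rough illustration of the idea (not the paper's actual algorithm), the sketch below estimates the per-pixel instantaneous frequency of a fringe signal along one image row via the analytic signal; the function name and the synthetic depth-induced phase ramp are assumptions for the example.

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_frequency(row):
            # Analytic signal of the zero-mean fringe; the derivative of its
            # unwrapped phase is the instantaneous frequency in cycles/pixel.
            analytic = hilbert(row - row.mean())
            phase = np.unwrap(np.angle(analytic))
            return np.diff(phase) / (2 * np.pi)

        # Synthetic fringe: 0.05 cycles/pixel carrier plus a hypothetical
        # depth-induced phase ramp.
        x = np.arange(1024)
        row = 0.5 + 0.4 * np.cos(2 * np.pi * 0.05 * x + 0.002 * x)
        f_inst = instantaneous_frequency(row)   # ~0.0503 cycles/pixel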

    Understanding and ameliorating non-linear phase and amplitude responses in AMCW Lidar

    Amplitude modulated continuous wave (AMCW) lidar systems commonly suffer from non-linear phase and amplitude responses due to a number of known factors such as aliasing and multipath interference. In order to produce useful range and intensity information, it is necessary to remove these perturbations from the measurements. We review the known causes of non-linearity, namely aliasing, temporal variation in correlation waveform shape, and mixed pixels/multipath interference. We also introduce other sources of non-linearity, including crosstalk, modulation waveform envelope decay, and non-circularly symmetric noise statistics, that have been ignored in the literature. An experimental study is conducted to evaluate techniques for mitigating non-linearity, and it is found that harmonic cancellation provides a significant improvement in phase and amplitude linearity.
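
    The role of waveform harmonics in phase non-linearity can be sketched numerically. The illustration below (my own, not the paper's experiment) estimates phase from N equally spaced correlation samples via the first DFT bin; a harmonic of order k only biases the estimate when k = mN ± 1, which is one reason harmonic cancellation linearises the response. The waveform model and the 3rd-harmonic amplitude are assumed for the example.

        import numpy as np

        def estimate_phase(true_phase, n_steps, a3=0.3):
            # Correlation samples at N equally spaced phase offsets, with a
            # 3rd harmonic of relative amplitude a3 in the waveform.
            steps = 2 * np.pi * np.arange(n_steps) / n_steps
            corr = (np.cos(true_phase + steps)
                    + a3 * np.cos(3 * (true_phase + steps)))
            # Phase estimate = argument of the first DFT bin of the samples.
            return np.angle(np.sum(corr * np.exp(-1j * steps)))

        phi = 0.7
        print(estimate_phase(phi, 4))  # biased (~0.56): 3 = 4 - 1 aliases
        print(estimate_phase(phi, 5))  # ~0.70: 3rd harmonic cancels for N = 5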

    Real Time Structured Light and Applications


    Single-pixel, single-photon three-dimensional imaging

    The 3D recovery of a scene is a crucial task with many real-life applications such as self-driving vehicles, X-ray tomography, and virtual reality. The recent development of time-resolving detectors sensitive to single photons has allowed the recovery of 3D information at high frame rates with unprecedented capabilities. Combined with a timing system, single-photon-sensitive detectors allow 3D image recovery by measuring the Time-of-Flight (ToF) of the photons scattered back by the scene with millimetre depth resolution. Current ToF 3D imaging techniques rely on scanning detection systems or multi-pixel sensors. Here, we discuss an approach to simplify the hardware complexity of current ToF 3D imaging techniques using a single-pixel, single-photon-sensitive detector and computational imaging algorithms. The 3D imaging approaches discussed in this thesis do not require mechanical moving parts as in standard Lidar systems. The single-pixel detector reduces the pixel complexity to a single unit and offers several advantages in terms of size, flexibility, wavelength range, and cost. The experimental results demonstrate the 3D image recovery of hidden scenes with a sub-second acquisition time, also allowing real-time 3D recovery of non-line-of-sight scenes. We also introduce the concept of intelligent Lidar, a 3D imaging paradigm based solely on the temporal trace of the returning photons and a data-driven 3D retrieval algorithm.
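
    A minimal numeric sketch of the underlying ToF principle (my illustration, not the thesis pipeline): a histogram of single-photon arrival times is reduced to depth via the round-trip relation d = c·t/2. The timing-bin width and the peak-centroid window are assumptions.

        import numpy as np

        C = 299_792_458.0          # speed of light (m/s)
        BIN_WIDTH = 50e-12         # assumed timing-bin width: 50 ps

        def depth_from_histogram(counts):
            # Centroid of a small window around the strongest return,
            # converted to one-way distance via d = c * t / 2.
            peak = int(np.argmax(counts))
            window = slice(max(peak - 3, 0), peak + 4)
            bins = np.arange(len(counts))[window]
            t = np.average(bins, weights=counts[window]) * BIN_WIDTH
            return C * t / 2.0

        # Simulated return at bin 200 (10 ns round trip -> ~1.5 m).
        rng = np.random.default_rng(0)
        hist = rng.poisson(2.0, size=1024).astype(float)
        hist[198:203] += np.array([5.0, 20.0, 60.0, 20.0, 5.0])
        print(depth_from_histogram(hist))   # ~1.5 m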

    Phasor Imaging: A Generalization of Correlation-Based Time-of-Flight Imaging

    In correlation-based time-of-flight (C-ToF) imaging systems, light sources with temporally varying intensities illuminate the scene. Due to global illumination, the temporally varying radiance received at the sensor is a combination of light received along multiple paths. Recovering scene properties (e.g., scene depths) from the received radiance requires separating these contributions, which is challenging due to the complexity of global illumination and the additional temporal dimension of the radiance. We propose phasor imaging, a framework for performing fast inverse light transport analysis using C-ToF sensors. Phasor imaging is based on the idea that by representing light transport quantities as phasors and light transport events as phasor transformations, light transport analysis can be simplified in the temporal frequency domain. We study the effect of temporal illumination frequencies on light transport, and show that for a broad range of scenes, global radiance (multi-path interference) vanishes for frequencies higher than a scene-dependent threshold. We use this observation to develop two novel scene recovery techniques. First, we present Micro ToF imaging, a ToF-based shape recovery technique that is robust to errors due to global illumination. Second, we present a technique for separating the direct and global components of radiance. Both techniques require capturing as few as 3-4 images and minimal computation. We demonstrate the validity of the presented techniques via simulations and experiments performed with our hardware prototype.
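
    As a minimal sketch of the phasor view (a simplification of, not a substitute for, the paper's framework), four correlation samples at 0°, 90°, 180°, and 270° define a complex phasor whose angle encodes path length; for a direct-only pixel the angle maps straight to depth. The four-bucket scheme and the numbers below are assumptions for illustration.

        import numpy as np

        C = 299_792_458.0

        def phasor(b0, b90, b180, b270):
            # Four-bucket measurements at 0/90/180/270 deg -> complex phasor.
            return (b0 - b180) + 1j * (b90 - b270)

        def depth_from_buckets(buckets, freq):
            phi = np.angle(phasor(*buckets)) % (2 * np.pi)
            return C * phi / (4 * np.pi * freq)  # round-trip phase -> depth

        # Direct-only pixel at 2.0 m, f = 30 MHz (unambiguous range ~5 m).
        f, d = 30e6, 2.0
        phi = 4 * np.pi * f * d / C
        buckets = [np.cos(phi - s)
                   for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
        print(depth_from_buckets(buckets, f))    # ~2.0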

    Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles

    Time-of-flight cameras produce real-time range maps at a relatively low cost using continuous wave amplitude modulation and demodulation. However, they are geared to measure range (or phase) for a single reflected bounce of light and suffer from systematic errors due to multipath interference. We re-purpose the conventional time-of-flight device for a new goal: to recover per-pixel sparse time profiles expressed as a sequence of impulses. With this modification, we show that we can not only address multipath interference but also enable new applications such as recovering depth of near-transparent surfaces, looking through diffusers, and creating time-profile movies of sweeping light. Our key idea is to formulate the forward amplitude-modulated light propagation as a convolution with custom codes, record samples by introducing a simple sequence of electronic time delays, and perform sparse deconvolution to recover sequences of Diracs that correspond to multipath returns. Applications to computer vision include ranging of near-transparent objects and subsurface imaging through diffusers. Our low-cost prototype may lead to new insights regarding forward and inverse problems in light transport.
    United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award); Alfred P. Sloan Foundation (Fellowship); Massachusetts Institute of Technology, Media Laboratory, Camera Culture Group
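
    The sparse-deconvolution step can be sketched as a lasso problem. The code below (an illustrative solver, not the authors') builds a circulant matrix from a hypothetical binary code and recovers a two-impulse time profile with plain ISTA (iterative soft thresholding); the code length, sparsity penalty, and iteration count are assumptions.

        import numpy as np

        def ista(A, y, lam=0.1, iters=2000):
            # Soft thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of grad
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = x - (A.T @ (A @ x - y)) / L          # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            return x

        n = 128
        rng = np.random.default_rng(1)
        code = rng.choice([-1.0, 1.0], size=n)            # hypothetical code
        A = np.stack([np.roll(code, k) for k in range(n)], axis=1)  # circulant
        x_true = np.zeros(n)
        x_true[[20, 24]] = [1.0, 0.6]                     # two nearby returns
        y = A @ x_true + 0.01 * rng.normal(size=n)
        x_hat = ista(A, y)        # largest entries sit near bins 20 and 24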

    The human visual system's representation of light sources and the objects they illuminate

    The light sources in a scene can drastically affect the pattern of intensities falling on the retina. However, it is unclear how the visual system represents the light sources in a scene. One possibility is that a light source is treated as a scene component: an entity that exists within a scene and interacts with other scene components (object shape and object reflectance) to produce the retinal image. The aim of this thesis was to test two key predictions arising from a perceptual framework in which light sources and the objects they illuminate are considered to be scene components by the visual system. We begin examining the first prediction in Chapter 3, focusing on the role of a dynamic shape cue in the interaction between shape, reflectance, and lighting. In two psychophysics experiments, we show that the visual system can "explain away" alternative interpretations of luminance gradients using the information provided by a dynamic shape cue (kinetic depth). In subsequent chapters, the research focus shifts to the second prediction, investigating whether multiple objects in a scene are integrated to estimate light source direction. In Chapter 4, participants were presented with scenes that contained 1, 9, and 25 objects and asked to judge whether the scenes were illuminated from the left or right, relative to their viewpoint. We found that increasing the number of objects in a scene, if anything, worsened discrimination sensitivity. To further understand this result, we conducted an equivalent noise experiment in Chapter 5 to examine the contributions of internal noise and integration to estimates of light source direction. Our results indicate that participants used only 1 or 2 objects to judge light source direction for scenes with 9 and 25 objects. Chapter 6 presents a shape discrimination experiment that required participants to make an implicit, rather than explicit, judgement of light source direction. Consistent with the results reported in Chapters 4 and 5, we find that shape discrimination sensitivity was comparable for scenes containing 1, 9, and 25 objects. Taken together, the findings presented here suggest that while object shape and reflectance may be represented as scene components, lighting seems to be associated with individual objects rather than having a scene-level representation.

    Development of a Full-Field Time-of-Flight Range Imaging System

    A full-field, time-of-flight, image ranging system or 3D camera has been developed from a proof-of-principle to a working prototype stage, capable of determining the intensity and range for every pixel in a scene. The system can be adapted to the requirements of various applications, producing high-precision range measurements with sub-millimetre resolution, or high-speed measurements at video frame rates. Parallel data acquisition at each pixel provides high spatial resolution independent of the operating speed.

    The range imaging system uses a heterodyne technique to indirectly measure time of flight. Laser diodes with highly diverging beams are intensity modulated at radio frequencies and used to illuminate the scene. Reflected light is focused onto an image intensifier used as a high-speed optical shutter, which is modulated at a slightly different frequency to that of the laser source. The output from the shutter is a low-frequency beat signal, which is sampled by a digital video camera. Optical propagation delay is encoded in the phase of the beat signal; hence, from a captured time-variant intensity sequence, the beat signal phase can be measured to determine range for every pixel in the scene.

    A direct digital synthesiser (DDS) is designed and constructed, capable of generating up to three outputs at frequencies beyond 100 MHz with the relative frequency stability, in excess of nine orders of magnitude, required to control the laser and shutter modulation. Driver circuits were also designed to modulate the image intensifier photocathode at 50 Vpp, and four laser diodes with a combined power output of 320 mW, both over a frequency range of 10-100 MHz. The DDS, laser, and image intensifier responses are characterised. A unique method of measuring the image intensifier optical modulation response is developed, requiring the construction of a picosecond pulsed laser source. This characterisation revealed deficiencies in the measured responses, which were mitigated through hardware modifications where possible. The effects of remaining imperfections, such as modulation waveform harmonics and image intensifier irising, can be calibrated and removed from the range measurements during software processing using the characterisation data.

    Finally, a digital method of generating the high-frequency modulation signals using an FPGA to replace the analogue DDS is developed, providing a highly integrated solution, reducing complexity, and enhancing flexibility. In addition, a novel modulation coding technique is developed to remove the undesirable influence of waveform harmonics from the range measurement without extending the acquisition time. When combined with a proposed modification to the laser illumination source, the digital system can enhance range measurement precision and linearity.

    From this work, a flexible full-field image ranging system is successfully realised. The system is demonstrated operating in a high-precision mode with sub-millimetre depth resolution, and also in a high-speed mode operating at video update rates (25 fps), in both cases providing high (512 × 512) spatial resolution over distances of several metres.
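
    The heterodyne range recovery reduces to measuring the phase of one DFT bin of the sampled beat signal. Below is a minimal sketch under assumed parameters (80 MHz modulation, 1 Hz beat, 25 fps sampling, object at 1.2 m), not the prototype's actual processing chain.

        import numpy as np

        C = 299_792_458.0

        def range_from_beat(samples, beat_cycles, f_mod):
            # Phase of the DFT bin holding `beat_cycles` cycles of the beat;
            # that phase equals the modulation phase delay, hence range.
            n = len(samples)
            k = np.arange(n)
            bin_amp = np.sum(samples
                             * np.exp(-2j * np.pi * beat_cycles * k / n))
            phi = np.angle(bin_amp) % (2 * np.pi)
            return C * phi / (4 * np.pi * f_mod)

        # 80 MHz modulation, 1 Hz beat, 4 s of video at 25 fps.
        f_mod, d = 80e6, 1.2
        phi = 4 * np.pi * f_mod * d / C
        t = np.arange(100) / 25.0
        samples = 1.0 + 0.5 * np.cos(2 * np.pi * 1.0 * t + phi)
        print(range_from_beat(samples, beat_cycles=4, f_mod=f_mod))  # ~1.2 m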