77 research outputs found
Colour depth-from-defocus incorporating experimental point spread function measurements
Depth-From-Defocus (DFD) is a monocular computer vision technique for creating
depth maps from two images taken on the same optical axis with different intrinsic camera
parameters. A pre-processing stage for optimally converting colour images to monochrome
using a linear combination of the colour planes has been shown to improve the
accuracy of the depth map. It was found that the first component formed using Principal
Component Analysis (PCA) and a technique to maximise the signal-to-noise ratio (SNR)
performed better than using an equal weighting of the colour planes with an additive noise
model. When the noise is non-isotropic, the Mean Square Error (MSE) of the depth map
obtained by maximising the SNR was 7.8 times lower than with an equal weighting and
1.9 times lower than with PCA. The fractal dimension (FD) of a monochrome image gives a measure
of its roughness and an algorithm was devised to maximise its FD through colour
mixing. The formulation using a fractional Brownian motion (fBm) model reduced the
SNR and thus produced depth maps that were less accurate than using PCA or an equal
weighting. An active DFD algorithm to reduce the image overlap problem has been
developed, called Localisation through Colour Mixing (LCM), that uses a projected colour
pattern. Simulation results showed that LCM produces a MSE 9.4 times lower than equal
weighting and 2.2 times lower than PCA.
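The colour-mixing pre-processing stage described above reduces an RGB image to a single plane via a linear combination of the colour planes. A minimal sketch of the PCA variant, which projects the pixels onto the first principal component of the colour covariance, is given below (the function name and interface are ours, not the thesis's; the SNR-maximising and LCM variants would substitute a different weight vector):

```python
import numpy as np

def pca_monochrome(rgb):
    """Convert an H x W x 3 colour image to monochrome by projecting each
    pixel onto the first principal component of the colour planes.
    Returns a zero-mean H x W image (an offset could be added back for
    display). Illustrative sketch of the PCA colour-mixing idea."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)            # centre each colour plane
    cov = np.cov(pixels, rowvar=False)       # 3x3 covariance of R, G, B
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    w1 = eigvecs[:, np.argmax(eigvals)]      # first principal direction
    return (pixels @ w1).reshape(h, w)       # project onto it
```

By construction this projection maximises the variance of the resulting monochrome image over all unit-norm colour weightings, which is why it can outperform an equal weighting under an additive noise model.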
The Point Spread Function (PSF) of a camera system models how a point source of
light is imaged. For depth maps to be accurately created using DFD a high-precision PSF
must be known. Improvements to a sub-sampled, knife-edge based technique are presented
that account for non-uniform illumination of the light box and this reduced the
MSE by 25%. The Generalised Gaussian is presented as a model of the PSF and shown to
be up to 16 times better than the conventional Gaussian and pillbox models.
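The generalised Gaussian subsumes both conventional PSF models: with shape parameter beta = 2 it reduces to the ordinary Gaussian, and as beta grows it tends towards a pillbox. A small sketch of the model on a pixel grid (parameter names are ours for illustration):

```python
import numpy as np

def generalised_gaussian_psf(size, alpha, beta):
    """Radially symmetric generalised Gaussian PSF, exp(-(r/alpha)^beta),
    sampled on a size x size grid and normalised to unit volume.
    beta = 2 gives the Gaussian; large beta approaches a pillbox of
    radius alpha. Illustrative sketch, not the thesis's fitted model."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r = np.hypot(x, y)                       # radial distance in pixels
    psf = np.exp(-(r / alpha) ** beta)
    return psf / psf.sum()                   # unit volume for convolution
```

In a fitting context, alpha and beta would be estimated from the measured knife-edge data rather than chosen by hand.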
Computational phase imaging based on intensity transport
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 133-150).
Light is a wave, having both an amplitude and a phase. However, optical frequencies are too high to allow direct detection of phase; thus, our eyes and cameras see only real values - intensity. Phase carries important information about a wavefront and is often used for visualization of biological samples, density distributions and surface profiles. This thesis develops new methods for imaging phase and amplitude from multi-dimensional intensity measurements. Tomographic phase imaging of diffusion distributions is described for the application of water content measurement in an operating fuel cell. Only two projection angles are used to detect and localize large changes in membrane humidity. Next, several extensions of the Transport of Intensity technique are presented. Higher order axial derivatives are suggested as a method for correcting nonlinearity, thus improving range and accuracy. To deal with noisy images, complex Kalman filtering theory is proposed as a versatile tool for complex-field estimation. These two methods use many defocused images to recover phase and amplitude. The next technique presented is a single-shot quantitative phase imaging method which uses chromatic aberration as the contrast mechanism. Finally, a novel single-shot complex-field technique is presented in the context of volume holographic microscopy (VHM). All of these techniques are in the realm of computational imaging, whereby the imaging system and post-processing are designed in parallel.
by Laura A. Waller. Ph.D.
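The baseline Transport of Intensity technique that the thesis extends recovers phase from the axial derivative of intensity. Under uniform intensity I0, the TIE reduces to a Poisson equation, lap(phi) = -(k/I0) dI/dz, which can be inverted with an FFT. A minimal sketch of this classic baseline (not Waller's higher-order or Kalman-filtered extensions; all names are ours):

```python
import numpy as np

def tie_phase(I_minus, I_plus, dz, wavelength, I0):
    """Recover phase from two defocused intensity images via the
    Transport of Intensity Equation, assuming uniform intensity I0 and
    periodic boundaries. Solves lap(phi) = -(k/I0) * dI/dz by an FFT
    Poisson solver. Illustrative sketch of the baseline technique."""
    k = 2 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2 * dz)        # central axial derivative
    n = I_minus.shape[0]
    fx = np.fft.fftfreq(n)                      # cycles per sample
    FX, FY = np.meshgrid(fx, fx)
    lap = -(2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)  # Fourier symbol of Laplacian
    lap[0, 0] = 1.0                             # avoid divide-by-zero at DC
    rhs = -k * dIdz / I0
    phi_hat = np.fft.fft2(rhs) / lap
    phi_hat[0, 0] = 0.0                         # phase recovered up to a constant
    return np.real(np.fft.ifft2(phi_hat))
```

Using many defocus planes, as the thesis does, improves the axial-derivative estimate that feeds this inversion.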
Range Finding with a Plenoptic Camera
The plenoptic camera enables simultaneous collection of imagery and depth information by sampling the 4D light field. The light field is distinguished from data sets collected by stereoscopic systems because it contains images obtained by an N by N grid of apertures, rather than just the two apertures of the stereoscopic system. By adjusting parameters of the camera construction, it is possible to alter the number of these 'subaperture images,' often at the cost of spatial resolution within each. This research examines a variety of methods of estimating depth by determining correspondences between subaperture images. A major finding is that the additional 'apertures' provided by the plenoptic camera do not greatly improve the accuracy of depth estimation. Thus, the best overall performance will be achieved by a design which maximizes spatial resolution at the cost of angular samples. For this reason, it is not surprising that the performance of the plenoptic camera should be comparable to that of a stereoscopic system of similar scale and specifications. As with stereoscopic systems, the plenoptic camera has its most immediate, realistic applications in the domains of robotic navigation and 3D video collection.
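The correspondence search underlying both stereo and plenoptic subaperture matching can be sketched as a block-matching disparity estimator: slide a patch from one view along the epipolar line of another and keep the offset with the lowest matching cost (a minimal sum-of-squared-differences sketch, not the specific methods evaluated in this work):

```python
import numpy as np

def disparity_ssd(left, right, x, y, block=5, max_d=16):
    """Estimate the horizontal disparity at pixel (x, y) by comparing a
    block x block patch from `left` against shifted patches on the same
    row of `right`, minimising the sum of squared differences.
    Illustrative sketch of the basic correspondence search."""
    h = block // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_d + 1):
        if x - d - h < 0:                      # candidate falls off the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        cost = np.sum((patch - cand) ** 2)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

With a plenoptic camera the same search can be repeated across many subaperture pairs, though, as the finding above suggests, the extra pairs add little accuracy compared with spending the sensor budget on spatial resolution.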
Simulation-based Planning of Machine Vision Inspection Systems with an Application to Laser Triangulation
Nowadays, vision systems play a central role in industrial inspection. The experts typically choose the configuration of measurements in such systems empirically. For complex inspections, however, automatic inspection planning is essential. This book proposes a simulation-based approach towards inspection planning by contributing to all components of this problem: simulation, evaluation, and optimization. As an application, inspection of a complex cylinder head by laser triangulation is studied.
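The laser-triangulation principle behind the studied application can be reduced to one similar-triangles relation. For the simple geometry where the laser beam runs parallel to the optical axis at baseline b, a spot at depth z images at offset x = f*b/z, so z = f*b/x (a generic textbook relation, not the book's specific planner):

```python
def triangulation_depth(x_px, focal_mm, baseline_mm, pixel_pitch_mm):
    """Depth of a laser spot by triangulation for a parallel-axis setup:
    z = f * b / x, with the image offset x converted from pixels to mm.
    Assumes x_px > 0 (spot offset is visible). Illustrative sketch."""
    x_mm = x_px * pixel_pitch_mm     # image-plane offset in millimetres
    return focal_mm * baseline_mm / x_mm
```

The inverse dependence on x is what makes measurement planning matter: depth resolution degrades quadratically with range, so viewpoint and baseline must be chosen per surface region.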
Project Tech Top study of lunar, planetary and solar topography Final report
Data acquisition techniques for information on lunar, planetary, and solar topography.
Optical instrumentation for fluid flow in gas turbines
Both a novel shearing interferometer and the first application of particle image velocimetry
(PIV) in the stator-rotor gap of a spinning turbine cascade are presented. Each of these
techniques is suitable for measuring gas turbine representative flows.
The simple interferometric technique has been demonstrated on a compressor representative
flow in a 2-D wind tunnel. The interferometer has obvious limitations, as it requires a clear line
of sight for the integration of refractive index along an optical path. Despite this, it is a credible
alternative to schlieren or shadowgraph in that it provides both qualitative visualisation and a
quantitative measurement of refractive index and the variables on which it depends, without
the vibration isolation requirements of beam splitting interferometry.
The 2-D PIV measurements have been made in the stator-rotor gap of the MTI high-pressure
turbine stage within DERA's Isentropic Light Piston Facility (ILPF). The measurements were
made at full engine representative conditions adjacent to a rotor spinning at 8200 rpm. This is a
particularly challenging application due to the complex geometry and random and periodic
effects generated as the stator wake interacts with the adjacent spinning rotor. The application is
further complicated due to the transient nature of the facility. The measurements represent a
2-D, instantaneous, quantitative description of the unsteady flow field and reveal evidence of
shocks and wakes. The estimated accuracy after scaling, timing, particle centroid and particle
lag errors have been considered is ± 5%. Non-smoothed, non-time averaged measurements are
qualitatively compared with a numerical prediction generated using a 2-D unsteady flow solver
(prediction supplied by DERA). A very close agreement has been achieved.
A novel approach to characterising the third component of velocity from the diffraction rings of
a defocusing particle viewed through a single camera has been explored. This 3-D PIV
technique has been demonstrated on a nozzle flow but issues concerning the aberrations of the
curved test section window of the turbine cascade could not be resolved in time for testing on
the facility. Suggestions have been made towards solving this problem.
Recommendations are also made towards the eventual goal of revealing a temporally and
spatially resolved 3-D velocity distribution of the stator wake impinging on the passing rotor.
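At its core, 2-D PIV processing of the kind described above recovers the mean particle displacement between two interrogation windows from the peak of their cross-correlation. A minimal FFT-based sketch (real PIV processing adds sub-pixel peak fitting, windowing and outlier rejection; the function name is ours):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel particle displacement between two interrogation
    windows via circular FFT cross-correlation: the correlation peak
    sits at the dominant shift from win_a to win_b. Returns (dx, dy).
    Illustrative sketch of the core PIV operation."""
    a = win_a - win_a.mean()                   # remove the DC pedestal
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices above N/2 correspond to negative (wrapped) shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return shifts[1], shifts[0]
```

Dividing the displacement by the inter-pulse time and the magnification then yields the local velocity vector, which is how the instantaneous flow-field maps above are assembled.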
Characterization and Optimization of the new Imaging Fourier Transform Spectrometer GLORIA
This work focuses on the radiometric and spectrometric characterization and optimization of a new imaging Fourier transform spectrometer (IFTS) called Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA). This characterization work helps to better understand the important features of this IFTS instrument, so that scientific data recorded in campaigns can be better understood and contribute to the understanding of current climate change.
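The basic operating principle of a Fourier transform spectrometer such as GLORIA is that the spectrum is the Fourier transform of the recorded interferogram over optical path difference. A minimal sketch for a symmetric, noise-free interferogram (real IFTS processing adds apodisation, phase correction and radiometric calibration; the function name is ours):

```python
import numpy as np

def spectrum_from_interferogram(interferogram, opd_step):
    """Recover a magnitude spectrum from a uniformly sampled
    interferogram by FFT. `opd_step` is the optical path difference
    between samples; the returned axis is in wavenumbers (1/opd units).
    Illustrative sketch of the FTS principle."""
    ac = interferogram - interferogram.mean()   # remove the DC term
    spectrum = np.abs(np.fft.rfft(ac))
    wavenumbers = np.fft.rfftfreq(len(ac), d=opd_step)
    return wavenumbers, spectrum
```

In an imaging FTS, this inversion is performed independently for every detector pixel, which is why per-pixel radiometric characterization of the kind undertaken here is essential.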
Single View Modeling and View Synthesis
This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is always a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually make use of a camera array, which suffers from tedious setup and calibration processes, as well as lack of portability, limiting its application to lab experiments.
In this thesis, I try to produce the 3D contents using a single camera, making it as simple as shooting pictures. It requires a new front end capturing device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture the highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences and achieves 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instance, partial surfaces are assembled together to form a complete 3D model by a novel warping algorithm.
Inspired by the success of single view 3D modeling, I extended my exploration into 2D-3D video conversion that does not utilize a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos, via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze the optical flow in order to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with user labeling work.
In this thesis, I developed new algorithms to produce 3D contents from a single camera. Depending on the input data, my algorithm can build high fidelity 3D models for dynamic and deformable objects if depth maps are provided. Otherwise, it can turn monocular video clips into stereoscopic video.
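The light fall-off stereo principle mentioned above exploits the inverse-square decay of illumination from a point source: moving the light along the viewing direction changes each pixel's brightness by a factor that depends only on its depth. A simplified sketch of that relation (our own reduction of the principle, not the thesis's LFS camera pipeline):

```python
def lfs_depth(i_near, i_far, delta):
    """Depth from two intensity measurements of the same surface point,
    taken with a point light moved back by `delta` along the viewing
    direction. Inverse-square fall-off gives
        i_near / i_far = (d + delta)^2 / d^2
    so d = delta / (sqrt(i_near / i_far) - 1). Assumes a Lambertian
    surface and no ambient light. Illustrative sketch only."""
    ratio = (i_near / i_far) ** 0.5
    return delta / (ratio - 1.0)
```

Applied per pixel over a video stream, a relation of this kind is what lets a single camera plus a controlled light produce the color+depth sequences described above.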
Modeling, image processing and attitude estimation of high speed star sensors
Attitude estimation and angular velocity estimation are the most critical components
of a spacecraft's guidance, navigation and control. Usually, an array of tightly
coupled sensors (star trackers, gyroscopes, sun sensors, magnetometers) is used to
estimate these quantities. The cost (financial, mass, power, time, human resources)
for the integration of these separate sub-systems is a major deterrent towards realizing
the goal of smaller, cheaper and faster-to-launch spacecraft/satellites. In this
work, we present a novel stellar imaging system that is capable of estimating attitude
and angular velocities at true update rates greater than 100 Hz, thereby eliminating
the need for separate star tracker and gyroscope sub-systems.
High image acquisition rates necessitate short integration times and large optical
apertures, thereby adding mass and volume to the sensor. The proposed high
speed sensor overcomes these difficulties by employing light amplification technologies
coupled with fiber optics. To better understand the performance of the sensor, an
electro-optical model of the sensor system is developed which is then used to design
a high-fidelity night sky image simulator. Novel star position estimation algorithms
based on a two-dimensional Gaussian fitting to the star pixel intensity profiles are
then presented. These algorithms are non-iterative, perform local background estimation
in the vicinity of the star and lead to significant improvements in the star
centroid determination. Further, a new attitude determination algorithm is developed that uses the inter-star angles of the identified stars as constraints to recompute
the body measured vectors and provide a higher accuracy estimate of the attitude
as compared to existing methods. The spectral response of the sensor is then used
to develop a star catalog generation method that results in a compact on-board star
catalog. Finally, the use of a fiber optic faceplate is proposed as an additional means
of stray light mitigation for the system. This dissertation serves to validate the conceptual
design of the high update rate star sensor through analysis, hardware design,
algorithm development and experimental testing.
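The star position estimation step above fits a 2-D Gaussian to each star's pixel intensity profile after estimating the local background. A simple non-iterative stand-in, using a border-median background estimate followed by an intensity-weighted centroid (our illustrative sketch, not the dissertation's fitting algorithm):

```python
import numpy as np

def star_centroid(window):
    """Sub-pixel star centroid from a small pixel window around a
    detected star: estimate the local background from the border
    pixels, subtract it, and take the intensity-weighted centroid.
    Returns (cx, cy) in window pixel coordinates. Illustrative sketch."""
    border = np.concatenate([window[0], window[-1],
                             window[1:-1, 0], window[1:-1, -1]])
    bg = np.median(border)                 # local background estimate
    img = np.clip(window - bg, 0.0, None)  # background-subtracted counts
    ys, xs = np.mgrid[:window.shape[0], :window.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total
```

Like the dissertation's Gaussian-fit algorithms, this is non-iterative and performs background estimation in the immediate vicinity of the star, which is what makes it fast enough for update rates above 100 Hz.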