Quantifying the 2.5D imaging performance of digital holographic systems
Digital holographic systems are a class of two-step, opto-numerical, three-dimensional imaging techniques. The role of the digital camera in limiting the resolution and field of view of the reconstructed image, and the interaction of these limits with a general optical system, is poorly understood. The linear canonical transform describes any optical system consisting of lenses and/or sections of free space in a unified manner. Expressions derived using it are parametrised in terms of the parameters of the optical system as well as those of the digital camera: aperture size, pixel size, and pixel pitch. We develop rules of thumb for selecting an optical system that minimises mean squared error for given input and digital camera parameters. In the limit, our results constitute a point spread function analysis. The results presented in this paper will allow digital holography practitioners to select an optical system that maximises the quality of the reconstructed image using a priori knowledge of the camera and object.
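As a rough illustration of how the camera parameters enter such an analysis, the sketch below evaluates the standard textbook Fresnel-regime limits (not the paper's LCT-derived expressions); the wavelength, distance, and sensor values are illustrative assumptions.

```python
# Hedged sketch: how digital-camera parameters bound a Fresnel-regime
# digital holographic reconstruction. Standard textbook limits only;
# all numerical values are illustrative, not taken from the paper.
wavelength = 633e-9      # He-Ne laser, metres (assumed)
distance   = 0.25        # camera-to-object distance, metres (assumed)
n_pixels   = 2048        # pixels per side of the sensor (assumed)
pitch      = 3.45e-6     # pixel pitch, metres (assumed)

# Reconstructed pixel size of the discrete Fresnel transform.
recon_pixel = wavelength * distance / (n_pixels * pitch)

# Angular half-bandwidth the sensor can sample without aliasing.
max_angle = wavelength / (2 * pitch)

# Corresponding lateral field of view at the object plane.
fov = 2 * distance * max_angle

print(f"reconstructed pixel ~ {recon_pixel * 1e6:.2f} um")
print(f"max half-angle ~ {max_angle * 1e3:.1f} mrad")
print(f"object-plane FOV ~ {fov * 1e3:.1f} mm")
```

Tightening the pixel pitch widens the usable field of view, while adding pixels refines the reconstruction grid: exactly the kind of trade-off the paper's rules of thumb formalise.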
Near-Field Radio Holography of Large Reflector Antennas
We summarise the mathematical foundation of the holographic method of measuring the reflector profile of an antenna or radio telescope. In particular, we treat the case where the signal source is located at a finite distance from the antenna under test, necessitating the inclusion of the so-called Fresnel field terms in the radiation integrals. We assume a "full phase" system with a reference receiver to provide the reference phase. We describe in some detail the hardware and software implementation of the system used for the holographic measurement of the 12 m ALMA prototype submillimeter antennas, and include a description of the practicalities of a measurement and surface setting. Results for both the VertexRSI and AEC (Alcatel-EIE-Consortium) prototype ALMA antennas are presented.
Comment: 14 pages, 14 figures; to appear in IEEE Antennas and Propagation Magazine, Vol. 49, No. 5, October 2007.
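To see why the Fresnel terms cannot be neglected, the sketch below estimates the quadratic path error of a spherical wavefront across the dish; the test frequency and tower distance are illustrative assumptions, not the paper's values.

```python
import math

# Hedged sketch: size of the finite-distance (Fresnel) phase term in
# near-field holography. For a point at radius r on the dish and a
# source at distance R, the spherical wavefront departs from a plane
# wave by roughly r**2 / (2*R) in path length (first Fresnel term).
# Frequency and distance below are illustrative assumptions.
wavelength = 3e8 / 100e9   # ~100 GHz test signal, metres (assumed)
R = 300.0                  # transmitter distance, metres (assumed)
r = 6.0                    # radius of a 12 m dish, metres

path_error = r**2 / (2 * R)                            # metres
phase_error = 2 * math.pi * path_error / wavelength    # radians

print(f"path term ~ {path_error * 1e3:.0f} mm")
print(f"phase term ~ {phase_error:.0f} rad")
```

A path term of tens of millimetres corresponds to many full phase turns at millimetre wavelengths, so omitting it from the radiation integrals would corrupt the recovered surface map entirely.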
Improving reconstructions of digital holograms
Digital holography is a two-step process of recording a hologram on an electronic sensor and reconstructing it numerically. This thesis makes a number of contributions to the second step of this process. These can be split into two distinct parts: A) speckle reduction in reconstructions of digital holograms (DHs), and B) modeling and overcoming partial occlusion effects in reconstructions of DHs, and using occlusions to reduce the effects of the twin image in reconstructions of DHs. Part A represents the major part of this thesis. Speckle reduction forms an important step in many digital holographic applications, and we have developed a number of techniques that can be used to reduce its corruptive effect in reconstructions of DHs. These techniques range from 3D filtering of DH reconstructions to a technique that filters in the Fourier domain of the reconstructed DH. We have also investigated the most commonly used industrial speckle reduction technique: wavelet filters. In Part B, we investigate the nature of opaque and non-opaque partial occlusions. We motivate this work by trying to find a subset of pixels that overcomes the effects of a partial occlusion, thus revealing otherwise hidden features of an object captured using digital holography. Finally, we have used an occlusion at the twin-image plane to completely remove the corrupting effect of the out-of-focus twin image on reconstructions of DHs.
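The basic trade-off behind all such speckle filters can be demonstrated in a few lines: averaging neighbouring pixels of a coherent reconstruction lowers speckle contrast at the cost of resolution. This is a generic illustration, not one of the thesis's algorithms.

```python
import numpy as np

# Hedged illustration (not the thesis's methods): fully developed
# speckle intensity is exponentially distributed with contrast
# (std/mean) equal to 1; spatial averaging reduces the contrast.
rng = np.random.default_rng(0)
intensity = rng.exponential(scale=1.0, size=(256, 256))

def contrast(img):
    return img.std() / img.mean()

# Simple 3x3 mean filter via shifted sums, standing in for the more
# sophisticated 3D and Fourier-domain filters developed in the thesis.
padded = np.pad(intensity, 1, mode="edge")
filtered = sum(padded[i:i + 256, j:j + 256]
               for i in range(3) for j in range(3)) / 9.0

print(f"speckle contrast before: {contrast(intensity):.2f}")
print(f"speckle contrast after:  {contrast(filtered):.2f}")
```

Averaging nine roughly independent speckle samples drops the contrast toward 1/3, which is why every speckle-reduction scheme must balance smoothing against the loss of genuine object detail.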
Spectral LADAR: Active Range-Resolved Imaging Spectroscopy
Imaging spectroscopy using ambient or thermally generated optical sources is a well-developed technique for capturing two-dimensional images with high per-pixel spectral resolution. The per-pixel spectral data is often a sufficient sampling of a material's backscatter spectrum to infer chemical properties of the constituent material and aid in substance identification. Separately, conventional LADAR sensors use quasi-monochromatic laser radiation to create three-dimensional images of objects at high angular resolution compared to RADAR. Advances in dispersion-engineered photonic crystal fibers in recent years have made high-spectral-radiance optical supercontinuum sources practical, enabling this study of Spectral LADAR, a continuous polychromatic spectrum augmentation of conventional LADAR. This imaging concept, which combines multispectral and 3D sensing at a physical level, is demonstrated with 25 independent and parallel LADAR channels and generates point cloud images with three spatial dimensions and one spectral dimension.
The independence of spectral bands is a key characteristic of Spectral LADAR. Each spectral band maintains a separate time waveform record, from which target parameters are estimated. Accordingly, the spectrum computed for each backscatter reflection is independently and unambiguously range unmixed from multiple target reflections that may arise from transmission of a single panchromatic pulse.
This dissertation presents the theoretical background of Spectral LADAR, a shortwave infrared laboratory demonstrator system constructed as a proof-of-concept prototype, and the experimental results obtained by the prototype when imaging scenes at standoff ranges of 45 meters. The resulting point cloud voxels are spectrally classified into a number of material categories, which enhances object and feature recognition. Experimental results demonstrate that the physical-level combination of active backscatter spectroscopy and range-resolved sensing produces images with a level of complexity, detail, and accuracy that is not obtainable with data-level registration and fusion of conventional imaging spectroscopy and LADAR.
The capabilities of Spectral LADAR are expected to be useful in a range of applications, such as biomedical imaging and agriculture, but particularly as a sensor for unmanned ground vehicle navigation. Applications to autonomous mobile robotics are the principal motivators of this study and are specifically addressed.
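The band-independence idea can be sketched numerically: each band records its own time waveform, so the round-trip delay (hence range) and the per-band backscatter can be estimated band by band. Everything below (sampling rate, pulse shape, reflectances) is an illustrative assumption, not the dissertation's hardware.

```python
import numpy as np

# Hedged sketch of per-band range estimation in a Spectral-LADAR-like
# receiver. Sampling rate, pulse width, and reflectances are assumed.
C = 3e8                       # speed of light, m/s
fs = 2e9                      # 2 GS/s sampling (assumed)
t = np.arange(4096) / fs

n_bands = 25                  # matches the 25 channels mentioned above
true_range = 45.0             # metres, the standoff range in the text
delay = 2 * true_range / C    # round-trip time

rng = np.random.default_rng(1)
reflectance = rng.uniform(0.2, 1.0, n_bands)   # per-band backscatter

# Each band records a Gaussian return pulse at the round-trip delay.
def pulse(t0, a):
    return a * np.exp(-((t - t0) / 2e-9) ** 2)

waveforms = np.array([pulse(delay, a) + 0.01 * rng.standard_normal(t.size)
                      for a in reflectance])

# Band-by-band estimates: peak location -> range, peak height -> spectrum.
peaks = waveforms.argmax(axis=1)
ranges = C * t[peaks] / 2
spectrum = waveforms.max(axis=1)

print(f"mean range estimate: {ranges.mean():.2f} m")
```

Because each band is range-resolved on its own, a spectrum can be attributed unambiguously to each reflection even when one pulse produces several returns.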
Short-Range Millimeter-Wave Sensing and Imaging: Theory, Experiments and Super-Resolution Algorithms
Recent advancements in silicon technology offer the possibility of realizing low-cost, highly integrated radar sensing and imaging systems at mm-wave frequencies (between 30 and 300 GHz) and beyond. Such active short-range mm-wave systems have a wide range of applications, including medical imaging, security scanning, autonomous vehicle navigation, and human gesture recognition. Moving to higher frequencies provides the spectral and spatial degrees of freedom needed for high-resolution imaging and sensing applications. Increased bandwidth availability enhances range resolution by increasing the degrees of freedom in the time-frequency domain, while cross-range resolution is enhanced by the increase in the number of spatial degrees of freedom for a constrained form factor. The focus of this thesis is to explore system design and algorithmic development that utilize the available degrees of freedom at mm-wave frequencies to realize imaging and sensing capabilities under cost, complexity, and form factor constraints.

We first consider the fundamental problem of estimating the frequencies and gains in a noisy mixture of sinusoids. This problem is ubiquitous in radar sensing, including target range and velocity estimation using standard radar waveforms (e.g., chirp or stepped-frequency continuous wave) and direction-of-arrival estimation using an array of antenna elements. We have developed a fast and robust iterative algorithm for super-resolving the frequencies and gains, and have demonstrated near-optimal frequency estimation accuracy by benchmarking against the Cramér-Rao bound in various scenarios. Next, we explore cross-range radar imaging using an array of antenna elements under severe cost, complexity, and form factor constraints. We show that such constraints must be accounted for in a manner quite different from that of conventional radar, and introduce new models and algorithms validated by experimental results.
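The frequency-estimation problem described above can be set up in a few lines. The thesis's own algorithm is an iterative super-resolver benchmarked against the Cramér-Rao bound; the sketch below is only the classic coarse stage (zero-padded FFT peak picking), with illustrative parameters.

```python
import numpy as np

# Hedged baseline for single-tone frequency estimation, the building
# block of the mixture-of-sinusoids problem. Not the thesis's method.
rng = np.random.default_rng(2)
n = 256
f_true = 0.1234                     # cycles/sample, illustrative
x = np.exp(2j * np.pi * f_true * np.arange(n)) \
    + 0.05 * rng.standard_normal(n)

# Coarse super-resolution by zero-padding: the FFT grid is refined
# 16x beyond the natural resolution of 1/n cycles/sample.
nfft = 16 * n
k = int(np.abs(np.fft.fft(x, nfft)).argmax())
f_hat = k / nfft

print(f"true frequency {f_true:.4f}, estimate {f_hat:.4f} cycles/sample")
```

An iterative method refines such a coarse estimate well below the zero-padded grid spacing; the same machinery applies whether the "frequency" encodes a range, a velocity, or a direction of arrival.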
To relax the synchronization requirements across multiple transceiver elements, we consider a monostatic architecture in which only co-located elements are synchronized. We investigate the impact of sparse spatial sampling obtained by reducing the number of array antenna elements, and show that this "sparse monostatic" architecture leads to grating lobe artifacts, which introduce ambiguity in the detection and estimation of point targets in the scene. At short ranges, however, targets are "low-pass" and contain extended features (consisting of a continuum of points), and are not well modeled by a small number of point scatterers. We introduce the concept of "spatial aggregation," which provides the flexibility of constructing a dictionary in which each atom corresponds to a collection of point scatterers, and demonstrate its effectiveness in suppressing grating lobes while preserving the information in the scene.

Finally, we take a more fundamental and systematic approach, based on a singular value decomposition of the imaging system, to understand the information capacity and performance limits of various geometries. In general, a scene is described by an infinite number of independent parameters, but the number of independent parameters that can be measured through an imaging system (the degrees of freedom of the system) is typically finite and is constrained by the geometry and the wavelength. We introduce a measure that predicts the number of spatial degrees of freedom of 1D imaging systems for both monostatic and multistatic array architectures. Our analysis reveals that there is no fundamental benefit to a multistatic architecture over a monostatic one in terms of achievable degrees of freedom; the real practical benefit of a multistatic architecture is that it allows sparse transmit and receive antenna arrays that are capable of achieving the available degrees of freedom. Moreover, our analytical framework opens up new avenues for investigating image formation techniques that reconstruct the reflectivity function of the scene by solving an inverse scattering problem, and it provides crucial insights into the achievable resolution.
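The grating lobe ambiguity of a thinned monostatic array can be demonstrated directly from the two-way array factor. The carrier frequency and array sizes below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Hedged illustration of the "sparse monostatic" ambiguity: thinning a
# monostatic array beyond the two-way Nyquist spacing creates grating
# lobes, i.e. directions other than the true one where the array
# responds with full gain. 77 GHz carrier and sizes are assumed.
wavelength = 3e8 / 77e9   # metres

def array_factor(spacing, n_elem, sin_theta):
    # Two-way (monostatic) response: each element sees twice the
    # one-way path, hence the factor of 2 in the phase.
    k = 2 * np.pi / wavelength
    pos = spacing * np.arange(n_elem)
    return abs(np.exp(-2j * k * pos * sin_theta).sum()) / n_elem

# A dense lambda/4-spaced array rejects the direction sin(theta) = 0.5,
# while a 4x-thinned array of the same aperture responds there with
# full gain -- a grating lobe indistinguishable from broadside.
print(f"dense  response at alias direction: {array_factor(wavelength / 4, 32, 0.5):.3f}")
print(f"sparse response at alias direction: {array_factor(wavelength, 8, 0.5):.3f}")
```

For point targets this ambiguity is fatal, which is what motivates the spatial-aggregation dictionary for extended, "low-pass" short-range scenes.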
Robust Positioning in the Presence of Multipath and NLOS GNSS Signals
GNSS signals can be blocked and reflected by nearby objects, such as buildings, walls, and vehicles. They can also be reflected by the ground and by water. These effects are the dominant source of GNSS positioning error in dense urban environments, though they can have an impact almost anywhere. Non-line-of-sight (NLOS) reception occurs when the direct path from the transmitter to the receiver is blocked and signals are received only via a reflected path. Multipath interference occurs, as the name suggests, when a signal is received via multiple paths: the direct path and one or more reflected paths, or multiple reflected paths alone. As their error characteristics are different, NLOS and multipath interference typically require different mitigation techniques, though some techniques are applicable to both. Antenna design and advanced receiver signal processing can substantially reduce multipath errors. Unless an antenna array is used, however, NLOS reception has to be detected using the receiver's ranging and carrier-power-to-noise-density ratio (C/N0) measurements and mitigated within the positioning algorithm. Some NLOS mitigation techniques can also combat severe multipath interference. Multipath interference, but not NLOS reception, can additionally be mitigated by comparing or combining code and carrier measurements, comparing ranging and C/N0 measurements from signals on different frequencies, and analyzing the time evolution of the ranging and C/N0 measurements.
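One of the mitigations mentioned above, comparing code and carrier measurements, can be sketched with synthetic data: multipath perturbs the code pseudorange by metres but the carrier phase only by millimetres, so the code-minus-carrier combination exposes code multipath. All values below are synthetic, and the carrier ambiguity and ionospheric divergence are crudely handled by a constant-bias removal.

```python
import numpy as np

# Hedged sketch of a code-minus-carrier (CMC) multipath monitor.
# Synthetic 1 Hz data; a multipath episode starts halfway through.
rng = np.random.default_rng(3)
t = np.arange(600.0)                                      # seconds

geometry = 2.0e7 + 800.0 * np.sin(2 * np.pi * t / 3000)   # true range, m
multipath = 3.0 * np.sin(2 * np.pi * t / 60) * (t > 300)  # code-only error
code = geometry + multipath + 0.5 * rng.standard_normal(t.size)
carrier = geometry + 0.002 * rng.standard_normal(t.size)  # mm-level noise

cmc = code - carrier
cmc -= cmc[:300].mean()   # crude removal of the carrier ambiguity bias

clean_rms = np.sqrt((cmc[:300] ** 2).mean())
mp_rms = np.sqrt((cmc[300:] ** 2).mean())
print(f"CMC RMS clean: {clean_rms:.2f} m, with multipath: {mp_rms:.2f} m")
```

A jump in the CMC residual flags code multipath on that signal, which can then be de-weighted or excluded in the positioning algorithm; NLOS reception, by contrast, biases code and carrier together and so does not show up here.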
Beyond Nyquist sampling: A cost-based approach
A sampling-based framework for finding the optimal representation of a finite-energy optical field using a finite number of bits is presented. For a given bit budget, we determine the optimal number and spacing of samples needed to represent the field with as little error as possible, and we present the associated performance bounds as trade-off curves between error and cost budget. In contrast to common practice, which often treats sampling and quantization separately, we focus explicitly on the interplay between limited spatial resolution and limited amplitude accuracy: for example, whether it is better to take more samples with lower amplitude accuracy or fewer samples with higher accuracy. We illustrate that in certain cases sampling at rates different from the Nyquist rate is more efficient. © 2013 Optical Society of America
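The sampling-versus-quantization trade-off can be reproduced numerically: under a fixed total bit budget B, represent a bandlimited signal with N samples at B/N bits each and measure the reconstruction error. The signal, budget, and grid below are illustrative assumptions, not the paper's framework.

```python
import numpy as np

# Hedged numerical illustration: many coarse samples vs. few fine ones
# under a fixed bit budget. All parameter choices are illustrative.
rng = np.random.default_rng(4)
M = 4096                               # dense "ground truth" grid

def bandlimited_signal(bandwidth):
    spec = np.zeros(M // 2 + 1, dtype=complex)
    spec[1:bandwidth + 1] = (rng.standard_normal(bandwidth)
                             + 1j * rng.standard_normal(bandwidth))
    x = np.fft.irfft(spec, M)
    return x / np.abs(x).max()

x = bandlimited_signal(40)             # 40 harmonics

def represent(x, n_samples, bits):
    s = x[::M // n_samples]                              # uniform samples
    levels = 2 ** bits                                   # uniform quantizer
    q = np.round((s + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    # Trigonometric (Fourier) interpolation back to the dense grid.
    spec = np.fft.fft(q)
    pad = np.zeros(M, dtype=complex)
    h = n_samples // 2
    pad[:h] = spec[:h]
    pad[-h:] = spec[-h:]
    return np.fft.ifft(pad).real * (M / n_samples)

budget = 2048                          # total bits for the representation
errs = {}
for n_s in (64, 128, 256, 512):
    errs[n_s] = np.sqrt(np.mean((x - represent(x, n_s, budget // n_s)) ** 2))
    print(f"N={n_s:4d}, b={budget // n_s:2d} bits: RMS error {errs[n_s]:.5f}")
```

Too few samples alias the signal however finely they are quantized, while too many samples waste the budget on coarse amplitudes: the error is minimised somewhere in between, and not necessarily at the Nyquist rate.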
Computational imaging and automated identification for aqueous environments
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2011.
Sampling the vast volumes of the ocean requires tools capable of observing from a distance while retaining the detail necessary for biology and ecology, a task well suited to optical methods. Algorithms that work with existing SeaBED AUV imagery are developed, including habitat classification with bag-of-words models and multi-stage boosting for rockfish detection. Methods for extracting images of fish from videos of longline operations are demonstrated.
A prototype digital holographic imaging device is designed and tested for quantitative in situ microscale imaging. Theory to support the device is developed, including particle noise and the effects of motion. A Wigner-domain model provides optimal settings and optical limits for spherical and planar holographic references.
Algorithms to extract information from real-world digital holograms are created. Focus metrics are discussed, including a novel focus detector using local Zernike moments. Two methods for estimating the lateral positions of objects in holograms without reconstruction are presented, by extending a summation kernel to spherical references and by using a local frequency signature from a Riesz transform. A new metric for quickly estimating object depths without reconstruction is proposed and tested. An example application, quantifying oil droplet size distributions in an underwater plume, demonstrates the efficacy of the prototype and algorithms.
Funding was provided by NOAA Grant #5710002014, NOAA NMFS Grant #NA17RJ1223, NSF Grant #OCE-0925284, and NOAA Grant #NA10OAR417008.
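The depth-from-focus idea underlying such hologram processing can be sketched with a standard pipeline: propagate the recorded field to a stack of candidate depths with the angular spectrum method and pick the depth that maximises a simple sharpness metric. This uses a generic gradient-energy metric, not the thesis's Zernike-moment detector, and all optical parameters are illustrative.

```python
import numpy as np

# Hedged sketch of reconstruction-depth search for a digital hologram.
# Assumes the complex field at the sensor is available; parameters
# (wavelength, pitch, object) are illustrative, not the thesis's.
wavelength, pitch, n = 658e-9, 5e-6, 256
k = 2 * np.pi / wavelength

fx = np.fft.fftfreq(n, pitch)
FX, FY = np.meshgrid(fx, fx)
arg = np.maximum(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0)

def propagate(field, z):
    # Angular spectrum propagation over distance z (evanescent cut off).
    H = np.exp(1j * k * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Synthetic object: an opaque disc 20 mm from the sensor.
y, x = np.mgrid[:n, :n]
obj = 1.0 - ((x - n / 2) ** 2 + (y - n / 2) ** 2 < 20 ** 2)
z_true = 0.02
hologram = propagate(obj.astype(complex), z_true)

def sharpness(img):
    # Gradient-energy (Tenengrad-style) focus metric on the amplitude.
    gy, gx = np.gradient(np.abs(img))
    return float((gx ** 2 + gy ** 2).sum())

depths = np.linspace(0.01, 0.03, 41)
scores = [sharpness(propagate(hologram, -z)) for z in depths]
z_hat = depths[int(np.argmax(scores))]
print(f"true depth {z_true * 1e3:.1f} mm, estimated {z_hat * 1e3:.1f} mm")
```

Reconstruction-free depth metrics, such as the one proposed in the thesis, aim to avoid exactly this exhaustive propagate-and-score sweep.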