Application of MFBD Algorithms to Image Reconstruction Under Anisoplanatic Conditions
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Military and civilian surveillance, gun-sighting, and target-identification systems all require terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution (MFBD) technique applied under anisoplanatic conditions for both Gaussian and Poisson noise-model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions. Each set consists of 1000 simulated, turbulence-degraded images. The mean-square-error (MSE) performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function.
I compare the mean-square-error (MSE) performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic, turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each, for low, moderate, and severe turbulence-induced image degradation. The comparison shows that speckle imaging techniques reduce the MSE by 46 percent, 42 percent, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent, and 27 percent are available using speckle imaging methods and 25 input frames, and of 38 percent, 34 percent, and 33 percent, respectively, for the MFBD method and 150 input frames. The MFBD estimator is applied to three sets of field data and the results presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
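The MSE figure of merit used throughout this comparison can be sketched in a few lines. The choice of the raw degraded frame as the baseline, and the toy noise levels, are illustrative assumptions, not the dissertation's exact convention.

```python
import numpy as np

def mse(estimate, truth):
    """Mean-square error between a reconstruction and the true scene."""
    return np.mean((estimate - truth) ** 2)

def percent_improvement(degraded, reconstructed, truth):
    """Percent reduction in MSE relative to a raw degraded frame
    (hypothetical baseline chosen for illustration)."""
    return 100.0 * (1.0 - mse(reconstructed, truth) / mse(degraded, truth))

# Toy example: a "reconstruction" with half the noise of the raw frame.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
degraded = truth + 0.2 * rng.standard_normal(truth.shape)
reconstructed = truth + 0.1 * rng.standard_normal(truth.shape)
improvement = percent_improvement(degraded, reconstructed, truth)
```

Halving the noise amplitude quarters the MSE, so the improvement lands near 75 percent, which is how the percentage figures above should be read.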
Inverse problems in astronomical and general imaging
The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees.
All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, the physical processes have an infinite amount of information, but only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis.
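As a concrete illustration of how prior information stabilises an inverse problem, the sketch below applies Tikhonov regularisation to deblurring in the Fourier domain. The quadratic energy prior and the weight `alpha` are illustrative assumptions, not the priors developed in this thesis.

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, alpha=1e-2):
    """Regularised inverse filter: the prior weight alpha (a penalty on
    solution energy) stabilises frequencies where the PSF has little
    power, at the cost of some resolution."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + alpha)
    return np.real(np.fft.ifft2(F))

# Blur a point source with a 3x3 box PSF, then recover it.
truth = np.zeros((16, 16))
truth[5, 5] = 1.0
psf = np.ones((3, 3)) / 9.0
H = np.fft.fft2(psf, s=truth.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * H))
recovered = tikhonov_deconvolve(blurred, psf)
```

Without the `alpha` term the division would amplify noise wherever the PSF spectrum is small; the prior trades a little sharpness for that stability, which is the information-loss compromise described above.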
An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. The process of interpolation is also examined from the point of view of superresolution, as these processes can be viewed as being complementary.
Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for this seek to either estimate or compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques.
Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. Bispectrum is also a post-processing method which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined.
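The phase-closure property that makes the bispectrum useful can be seen in a few lines: a translation of the signal (a pure tilt, the dominant atmospheric term) adds a linear phase ramp to the spectrum, and that ramp cancels exactly in the bispectrum. The 1-D signal and the frequency pair below are illustrative.

```python
import numpy as np

def bispectrum(x, u, v):
    """One element of the bispectrum, B(u, v) = X(u) X(v) X*(u + v)."""
    X = np.fft.fft(x)
    return X[u] * X[v] * np.conj(X[u + v])

rng = np.random.default_rng(1)
x = rng.random(32)
x_shifted = np.roll(x, 5)      # a shift adds a linear phase ramp to X
phase = np.angle(bispectrum(x, 3, 4))
phase_shifted = np.angle(bispectrum(x_shifted, 3, 4))
# Closure: phi(u) + phi(v) - phi(u+v); the ramp contributes
# u + v - (u + v) = 0, so the bispectrum phase is shift-invariant.
```

This shift-invariance is the phase-closure property referred to above: object phase survives in the bispectrum while tilt-like corruption does not.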
Blind deconvolution techniques utilise only the information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation uses a Shack-Hartmann (SH) wavefront-sensing channel. However, since phase information is present in both the wavefront-sensing and image channels simultaneously, both approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimate is made jointly from the data in both the image and the wavefront-sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. An additional advantage of this approach is that, since the speckle images are recorded in a narrow band while the wavefront-sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets.
The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method which calculates the centre of gravity of the image is in fact prone to noise and is suboptimal. An improved method for spot location based on blind deconvolution is demonstrated.
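A minimal version of the centre-of-gravity estimator makes its noise sensitivity easy to see: every pixel is weighted by its coordinate, so background noise far from the spot pulls hard on the estimate. The Gaussian spot parameters below are illustrative.

```python
import numpy as np

def centre_of_gravity(spot):
    """Centroid of a Shack-Hartmann spot image. Each pixel, including
    pure background noise, contributes with a weight equal to its
    coordinate -- the source of the estimator's noise sensitivity."""
    ys, xs = np.indices(spot.shape)
    total = spot.sum()
    return (ys * spot).sum() / total, (xs * spot).sum() / total

# Noise-free Gaussian spot centred at (10.0, 14.0) on a 21x31 grid.
ys, xs = np.indices((21, 31))
spot = np.exp(-((ys - 10.0) ** 2 + (xs - 14.0) ** 2) / (2 * 2.0 ** 2))
cy, cx = centre_of_gravity(spot)
```

On clean data the estimate is essentially exact; adding a flat noise floor biases it toward the frame centre, which is the failure mode the blind-deconvolution-based spot locator addresses.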
Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics.
One of the methods which utilises elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront-sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference-star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference-star measurements, such as anisoplanatism and the reduction of valuable observing time. In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object spectrum.
Modern optical astronomy: technology and impact of interferometry
The present `state of the art' and the path to future progress in high
spatial resolution imaging interferometry is reviewed. The review begins with a
treatment of the fundamentals of stellar optical interferometry, the origin,
properties, optical effects of turbulence in the Earth's atmosphere, the
passive methods that are applied on a single telescope to overcome atmospheric
image degradation such as speckle interferometry, and various other techniques.
These topics include differential speckle interferometry, speckle spectroscopy
and polarimetry, phase diversity, wavefront shearing interferometry,
phase-closure methods, dark speckle imaging, as well as the limitations imposed
by the detectors on the performance of speckle imaging. A brief account is
given of the technological innovation of adaptive optics (AO) to compensate for
such atmospheric effects on the image in real time. A major advancement
involves the transition from single-aperture to dilute-aperture
interferometry using multiple telescopes. Therefore, the review deals with
recent developments involving ground-based, and space-based optical arrays.
Emphasis is placed on the problems specific to delay-lines, beam recombination,
polarization, dispersion, fringe-tracking, bootstrapping, coherencing and
cophasing, and recovery of the visibility functions. The role of AO in
enhancing visibilities is also discussed. The applications of interferometry,
such as imaging, astrometry, and nulling are described. The mathematical
intricacies of the various `post-detection' image-processing techniques are
examined critically. The review concludes with a discussion of the
astrophysical importance and the perspectives of interferometry.
Comment: 65 pages, LaTeX file including 23 figures. Reviews of Modern Physics,
2002, to appear in the April issue.
Computational Imaging and its Application
Traditional optical imaging systems have constrained angular and spatial resolution, depth of field, field of view, tolerance to aberrations and environmental conditions, and other image-quality limitations. Computational imaging provides an opportunity to create new functionality and improve the performance of imaging systems by encoding information optically and decoding it computationally. The design of a computational imaging system balances hardware costs against the accuracy and complexity of the algorithms. In this thesis, two computational imaging systems are presented: Randomized Aperture Imaging and Laser Suppression Imaging. The former increases the angular resolution of telescopes by replacing a continuous primary mirror with an array of lightweight small mirror elements, which potentially allows telescopes to have very large diameters at reduced cost. The latter protects camera sensors from laser effects such as dazzle by use of a phase-coded pupil-plane mask. Machine-learning and deep-learning-based algorithms were investigated to restore high-fidelity images from the coded acquisitions. The proposed imaging systems are verified by experiment and numerical modeling, and improved performance is demonstrated in comparison with the state of the art.
Polarimeter Blind Deconvolution Using Image Diversity
This research presents an algorithm that improves the ability to view objects using an electro-optical imaging system with at least one polarization-sensitive channel in addition to the primary channel. An innovative algorithm for detection and estimation of the defocus aberration present in an image is also developed. Using a known defocus aberration, an iterative polarimeter deconvolution algorithm is developed using a generalized expectation-maximization (GEM) model. The polarimeter deconvolution algorithm is then extended to an iterative polarimeter multi-frame blind deconvolution (PMFBD) algorithm with an unknown aberration. On both simulated and laboratory images, the new PMFBD algorithm clearly outperforms a Richardson-Lucy (RL)-based MFBD algorithm: the convergence rate is significantly faster, with better fidelity of reproduction of the targets. Clearly, leveraging polarization data in electro-optical imaging systems has the potential to significantly improve the ability to resolve objects and, thus, improve Space Situational Awareness.
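For context, the Richardson-Lucy baseline mentioned above is the EM iteration for deconvolution under a Poisson noise model. A minimal single-frame sketch (not the PMFBD algorithm itself, with a known PSF and illustrative test images) is:

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=50):
    """Richardson-Lucy deconvolution: EM iterations for a Poisson noise
    model with a known, origin-centred PSF (single-frame baseline only)."""
    psf = psf / psf.sum()
    H = np.fft.rfft2(psf, s=image.shape)

    def conv(a, kernel):
        # Circular convolution via FFT, clamped to stay non-negative.
        return np.maximum(
            np.fft.irfft2(np.fft.rfft2(a) * kernel, s=a.shape), 0.0)

    est = np.full_like(image, image.mean())
    for _ in range(n_iter):
        ratio = image / np.maximum(conv(est, H), 1e-12)
        est = est * conv(ratio, np.conj(H))   # conj(H): correlation step
    return est

# Example: deblur a Gaussian spot blurred by a broader Gaussian PSF.
ys, xs = np.indices((32, 32))
truth = np.exp(-((ys - 16.0) ** 2 + (xs - 16.0) ** 2) / 4.0)
psf = np.fft.ifftshift(np.exp(-((ys - 16.0) ** 2 + (xs - 16.0) ** 2) / 16.0))
blurred = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(psf / psf.sum()),
                        s=truth.shape)
sharpened = richardson_lucy(blurred, psf)
```

The multiplicative update conserves total flux and keeps the estimate non-negative, which is why RL-type iterations are a natural baseline for photon-limited polarimetric imagery.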
Electrical and Computer Engineering Annual Report 2016
Faculty Directory; Faculty Highlights; Faculty Fellow Program; Multidisciplinary Research Fills Critical Needs; Better, Faster Technology; Metamaterials: Searching for the Perfect Lens; The Nontraditional Power of Demand Dispatch; Space, Solar Power's Next Frontier; Kit Cischke, Award-Winning Senior Lecturer; Faculty Publications; ECE Academy Class of 2016; Staff Profile: Michele Kamppinen; For the Love of Teaching: Jenn Winikus; Graduate Student Highlights; Undergraduate Student Highlights; External Advisory Committee; Contracts and Grants; Department Statistics; AAES National Engineering Award
https://digitalcommons.mtu.edu/ece-annualreports/1002/thumbnail.jp
Advanced sparse optimization algorithms for interferometric imaging inverse problems in astronomy
In the quest to produce images of the sky at unprecedented resolution with high
sensitivity, a new generation of astronomical interferometers has been designed. To
meet the sensing capabilities of these instruments, techniques aiming to recover the
sought images from incompletely sampled Fourier-domain measurements need to
be reinvented. This goes hand in hand with the necessity to calibrate the unknown
measurement-modulating effects, which adversely affect the image quality, limiting
its dynamic range. The contribution of this thesis consists in the development of
advanced optimization techniques tailored to address these issues, ranging from radio
interferometry (RI) to optical interferometry (OI).
In the context of RI, we propose a novel convex optimization approach for full polarization imaging relying on sparsity-promoting regularizations. Unlike standard RI
imaging algorithms, our method jointly solves for the Stokes images by enforcing the
polarization constraint, which imposes a physical dependency between the images.
These priors are shown to enhance the imaging quality via various performed numerical studies. The proposed imaging approach also benefits from its scalability to handle
the huge amounts of data expected from the new instruments. To deal with the
critical and challenging issue of direction-dependent-effects calibration,
we further propose a non-convex optimization technique that unifies the calibration and
imaging steps in a global framework, in which we adapt the earlier-developed imaging
method for the imaging step. In contrast to existing RI calibration modalities, our
method benefits from well-established convergence guarantees even in the non-convex
setting considered in this work, and its efficiency is demonstrated through several
numerical experiments.
Last but not least, inspired by the performance of these methodologies and drawing
ideas from them, we aim to solve image recovery problem in OI that poses its own
set of challenges primarily due to the partial loss of phase information. To this end,
we propose a sparsity regularized non-convex optimization algorithm that is equipped
with convergence guarantees and is adaptable to both monochromatic and hyperspectral OI imaging. We validate it by presenting the simulation results
Interferometric Methods
Future radio telescopes promise great advances in resolution and sensitivity. These
include the Square Kilometre Array (SKA), a two-array instrument sited in South Africa and Australia. Similarly, the next-generation
Very Large Array (ngVLA) is being designed for construction in
North America. These arrays all promise exceptional advances in sensitivity,
angular resolution, and survey speed. The SKA and ngVLA are both specified to
have sensitivities at the level of microjanskys. The SKA-Low instrument will consist
of a huge number of dipole antennas in Australia, which pushes the bounds of
current FX correlator technology with its O(N^2) scaling, where N is the
number of antennas. The design proposals for these instruments include a dense
core of antennas, necessitating advances in imaging methods for these very
dense cores versus more traditionally sparse instruments.
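The quadratic cost of the FX architecture is visible in a toy implementation: after a per-antenna FFT (the F step), every antenna pair is multiplied (the X step), giving N(N+1)/2 correlation products per channel. The array sizes here are illustrative.

```python
import numpy as np

def fx_correlate(voltages):
    """Toy FX correlator for one integration: FFT each antenna's voltage
    stream (F step), then form every pairwise product, including
    autocorrelations (X step) -- O(N^2) work for N antennas."""
    spectra = np.fft.rfft(voltages, axis=1)
    n = voltages.shape[0]
    return {(i, j): spectra[i] * np.conj(spectra[j])
            for i in range(n) for j in range(i, n)}

rng = np.random.default_rng(3)
vis = fx_correlate(rng.standard_normal((8, 64)))   # 8 antennas
# 8 * 9 / 2 = 36 correlation products for only 8 antennas.
```

Doubling the antenna count roughly quadruples the X-step work, which is why dense cores with very many elements motivate direct imaging correlators that skip the pairwise products entirely.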
Another ambitious experiment is the Hydrogen Epoch of Reionisation Array (HERA) in
South Africa, which hopes to make the first direct detection of the Epoch of Reionisation
through the red-shifted H I signal,
which is orders of magnitude smaller than the thermal-like noise.
In this thesis, these problems are tackled by re-examining the underlying
principles of interferometry. The first working
example of a direct imaging correlator is presented, which allows images to be
formed directly from the voltages of each antenna in a dense array, without the
expensive cross-correlation operation that is typically required. A detailed discussion
is given of how standard steps in interferometric imaging differ in this new
scheme, including calibration. Additionally, the first wide-field direct imaging
correlator is presented, which allows the problems of non-coplanarity to be
dealt with for both sparse and dense arrays in a very efficient manner on modern GPU compute hardware. These are, to the best of the author's knowledge, the only working implementations of
a direct imaging correlator for generic arrays with no restrictions on the geometry of the
array or the homogeneity of the constituent receiver elements. These new approaches have been published
in the scientific literature, as discussed in the Declaration.
Moving on from this, the closure-phase bispectrum is presented as a way of uncovering
the cosmological Epoch of Reionisation signal from the H I line. This uses the
HERA telescope, which consists of a dense core of parabolic antennas in a highly redundant layout.
A data reduction and processing pipeline for the HERA telescope, for use with the
bispectrum, is constructed and presented. Initial results towards a cosmological limit are reported.
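The reason the closure-phase bispectrum is such a robust probe can be checked in a few lines: per-antenna gain phases cancel identically around a triad of baselines. The visibilities and unit-modulus gains below are random illustrative values.

```python
import numpy as np

def closure_phase(v12, v23, v31):
    """Phase of the bispectrum around the antenna triad (1, 2, 3)."""
    return np.angle(v12 * v23 * v31)

rng = np.random.default_rng(4)
v12, v23, v31 = np.exp(1j * rng.uniform(-np.pi, np.pi, 3))  # true visibilities
g = np.exp(1j * rng.uniform(-np.pi, np.pi, 3))              # antenna gain phases

# Gains corrupt each baseline as V_ij -> g_i * conj(g_j) * V_ij, yet the
# product around the closed loop is unchanged: every g cancels in pairs.
clean = closure_phase(v12, v23, v31)
corrupted = closure_phase(g[0] * np.conj(g[1]) * v12,
                          g[1] * np.conj(g[2]) * v23,
                          g[2] * np.conj(g[0]) * v31)
```

Because the closure phase is blind to antenna-based phase errors, any residual triad-to-triad scatter in a redundant array must come from something else, which is what makes it a sensitive probe of non-redundancy.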
The HERA telescope relies on redundancy in its antenna elements for its calibration
and measurement strategy. The bispectrum, with its unique mathematical properties, in combination with forward modelling, is shown to be a
potent tool for probing departures from the assumed redundancy. It is shown, through
this method, that HERA
suffers significant direction-dependent non-redundancies in the dataset used for our analysis,
which are extremely difficult to calibrate out.
Finally, the problem of wide-field imaging in next-generation arrays is tackled
through the development and implementation of a new scheme of wide-field
imaging. This uses a new method of parallelising the
problem of wide-field imaging, and is intended for use with the very large
datasets that will be produced by upcoming instruments. Two schemes are introduced: w-towers and
Improved w-towers. The latter generalises the former in combination with
advances in optimal convolution theory for the radio astronomy ``gridding'' problem.
The theory behind this approach is explored, and a high-performance implementation is presented for
w-towers and Improved w-stacking within Improved w-towers.
ARM Ltd iCase Sponsorship