
    Blind deconvolution of depth-of-field limited full-field lidar data by determination of focal parameters

    We present a new two-stage method for parametric spatially variant blind deconvolution of full-field Amplitude Modulated Continuous Wave lidar image pairs taken at different aperture settings subject to limited depth of field. A Maximum Likelihood based focal parameter determination algorithm uses range information to reblur the image taken with a smaller aperture size to match the large aperture image. This allows estimation of focal parameters without prior calibration of the optical setup and produces blur estimates with better spatial resolution and less noise than previous depth from defocus (DFD) blur measurement algorithms. We compare blur estimates from the focal parameter determination method to those from Pentland's DFD method, Subbarao's S-Transform method and estimates from range data/the sampled point spread function. In a second stage, the estimated focal parameters are applied to deconvolution of total integrated intensity lidar images, improving depth of field. We give an example of application to complex domain lidar images and discuss the trade-off between recovered amplitude texture and sharp range estimates.
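
    The reblur-to-match step can be illustrated with a scalar search: blur the small-aperture image by a trial amount and keep the amount that best reproduces the large-aperture image. The sketch below is a least-squares stand-in for the Maximum Likelihood criterion, assuming a Gaussian PSF and a constant-depth patch; the function names are illustrative, not the paper's.

```python
# Hedged sketch: estimate the extra blur between two aperture settings by
# re-blurring the sharper (small-aperture) image until it matches the
# large-aperture image. Gaussian PSF and least-squares fit are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

def reblur_mismatch(sigma, img_small_ap, img_large_ap):
    """Mean squared residual after adding sigma of blur to the sharper image."""
    return np.mean((gaussian_filter(img_small_ap, sigma) - img_large_ap) ** 2)

def estimate_extra_blur(img_small_ap, img_large_ap, max_sigma=10.0):
    """Scalar search over the blur difference for a constant-depth patch;
    combined with range data, this difference constrains the focal parameters."""
    res = minimize_scalar(reblur_mismatch, bounds=(0.0, max_sigma),
                          args=(img_small_ap, img_large_ap), method="bounded")
    return res.x
```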

    Extending AMCW lidar depth-of-field using a coded aperture

    By augmenting a high resolution full-field Amplitude Modulated Continuous Wave lidar system with a coded aperture, we show that depth-of-field can be extended using explicit, albeit blurred, range data to determine PSF scale. Because complex domain range-images contain explicit range information, the aperture design is unconstrained by the necessity for range determination by depth-from-defocus. The coded aperture design is shown to improve restoration quality over a circular aperture. A proof-of-concept algorithm using dynamic PSF determination and spatially variant Landweber iterations is developed and, using an empirically sampled point spread function, is shown to work in cases without serious multipath interference or high phase complexity.
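
    For reference, the Landweber update underlying the restoration is x_{k+1} = x_k + lambda * H^T (y - H x_k). A minimal spatially invariant sketch follows; the thesis applies a spatially variant version with a range-selected PSF per pixel, and the relaxation value here is an assumption.

```python
# Minimal sketch of (spatially invariant) Landweber deconvolution.
# Convergence requires 0 < relax < 2 / ||H||^2; relax=1.0 is an assumption.
import numpy as np
from scipy.signal import fftconvolve

def landweber(blurred, psf, n_iter=100, relax=1.0):
    """Iterate x <- x + relax * H^T (y - H x), with H = convolution by psf."""
    psf_adj = psf[::-1, ::-1]  # flipped kernel implements the adjoint H^T
    x = blurred.copy()
    for _ in range(n_iter):
        residual = blurred - fftconvolve(x, psf, mode="same")
        x = x + relax * fftconvolve(residual, psf_adj, mode="same")
    return x
```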

    Ameliorating Systematic Errors in Full-Field AMCW Lidar

    This thesis presents an analysis of systematic error in full-field amplitude modulated continuous wave range-imaging systems. The primary focus is on the mixed pixel/multipath interference problem, with digressions into defocus restoration, irregular phase sampling and the systematic phase perturbations introduced by random noise. As an integral part of the thesis, a detailed model of signal formation is developed that models noise statistics not included in previously reported models. Prior work on the mixed pixel/multipath interference problem has been limited to detection and removal of perturbed measurements or partial amelioration using spatial information, such as knowledge of the spatially variant scattering point spread function, or raytracing using an assumption of Lambertian reflection. Furthermore, prior art has only used AMCW range measurements at a single modulation frequency. In contrast, in this thesis, by taking multiple measurements at different modulation frequencies with known ratio-of-integers frequency relationships, a range of new closed-form and lookup table based inversion and bounding methods are explored. These methods include: sparse spike train deconvolution based multiple return separation, a closed-form inverse using attenuation ratios and a normalisation based lookup table method that uses a new property we term the characteristic measurement. Other approaches include a Cauchy distribution based model for backscattering sources which are range-diffuse, like fog or hair. Novel bounding methods are developed using the characteristic measurement and attenuation ratios on relative intensity, relative phase and phase perturbation. A detailed noise and performance analysis of the characteristic measurement lookup table method and the bounding methods is performed using simulated data. Experiments are performed using the University of Waikato heterodyne range-imager, the Canesta XZ-422 and the Mesa Imaging SwissRanger 4000 in order to demonstrate the performance of the lookup table method. The lookup table method is found to provide an order of magnitude improvement in ranging accuracy, albeit at the expense of ranging precision.
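
    The multiple-frequency idea rests on a simple forward model: each pixel's complex AMCW measurement is a sum of returns whose phases scale linearly with the modulation frequency, so harmonically related frequencies tie the component phases together. The sketch below simulates that model; the frequency values and the 1:2 ratio are illustrative assumptions, not the thesis's experimental settings.

```python
# Hedged sketch of the multi-frequency mixed-pixel forward model: a pixel's
# complex measurement is a sum of backscattering returns, and modulation
# frequencies with a known integer ratio scale the component phases together.
import numpy as np

C = 299792458.0  # speed of light, m/s

def amcw_measurement(amplitudes, ranges_m, mod_freq_hz):
    """Complex-domain measurement of several mixed backscattering sources."""
    phases = 4.0 * np.pi * mod_freq_hz * np.asarray(ranges_m) / C
    return np.sum(np.asarray(amplitudes) * np.exp(1j * phases))

# Two mixed returns sampled at frequencies in a 1:2 ratio (assumed setup).
f0 = 30e6
meas = [amcw_measurement([1.0, 0.4], [3.0, 7.5], k * f0) for k in (1, 2)]
```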

    Improving Range Estimation of a 3D FLASH LADAR via Blind Deconvolution

    The purpose of this research effort is to improve and characterize range estimation in a three-dimensional FLASH LAser Detection And Ranging (3D FLASH LADAR) system by investigating spatial dimension blurring effects. The myriad of emerging applications for 3D FLASH LADAR, as both a primary and a supplemental sensor, necessitates superior performance, including accurate range estimates. Along with range information, this sensor also provides an imaging or laser vision capability. Consequently, accurate range estimates would also greatly aid the image quality of a target or remote scene under interrogation. Unlike previous efforts, this research accounts for pixel coupling by defining the range image mathematical model as a convolution between the system spatial impulse response and the object (target or remote scene) at a particular range slice. Using this model, improved range estimation is possible through object restoration from the data observations. Object estimation is principally performed by deriving a blind deconvolution Generalized Expectation Maximization (GEM) algorithm, with the range determined from the estimated object by a normalized correlation method. Theoretical derivations and simulation results are verified with experimental data of a bar target taken from a 3D FLASH LADAR system in a laboratory environment. Additionally, among other factors, range separation estimation variance is a function of two LADAR design parameters (range sampling interval and transmitted pulse-width), which can be optimized using the expected range resolution between two point sources. Using both CRB theory and an unbiased estimator, an investigation finds the optimal pulse-width for several range sampling scenarios using a range resolution metric.
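
    As an illustration of the final range-extraction step, a normalized correlation between a pixel's restored temporal waveform and a pulse template peaks at the return's range bin. The sketch below is a generic stand-in assuming a uniformly sampled waveform; the template shape and bin spacing are not taken from the paper.

```python
# Hedged sketch of range-from-waveform via normalized correlation against a
# pulse template; sampling and template are assumptions, not the paper's.
import numpy as np

def normalized_correlation_range(waveform, template, range_bin_m):
    """Slide the template over the 1D waveform; the peak of the normalized
    correlation gives the return's range bin (up to a fixed template offset)."""
    w = (waveform - waveform.mean()) / (waveform.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    corr = np.correlate(w, t, mode="same")
    return np.argmax(corr) * range_bin_m
```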

    Photon-efficient super-resolution laser radar

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications. (Samsung Scholarship Foundation; National Science Foundation (U.S.) Grants 1161413 and 1422034.)
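
    To make the convex-heuristic branch concrete, the sketch below runs plain gradient descent on a quadratic data-fit term plus a smoothed total-variation penalty, with a nonnegativity clip standing in for the photon-count constraint. The smoothing constant, step size and regularization weight are assumptions; this is not a reproduction of the paper's solver.

```python
# Hedged sketch: gradient descent on 0.5*||h*x - y||^2 + lam * TV_eps(x),
# with x clipped to be nonnegative (photon counts cannot be negative).
import numpy as np
from scipy.signal import fftconvolve

def tv_deconv(y, psf, lam=0.05, step=0.5, n_iter=200, eps=1e-3):
    psf_adj = psf[::-1, ::-1]  # flipped kernel implements the adjoint
    x = y.copy()
    for _ in range(n_iter):
        resid = fftconvolve(x, psf, mode="same") - y
        grad_data = fftconvolve(resid, psf_adj, mode="same")
        # gradient of the smoothed TV term: minus the divergence of the
        # normalized image gradient field
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        nx, ny = gx / norm, gy / norm
        div = (np.diff(nx, axis=1, prepend=nx[:, :1])
               + np.diff(ny, axis=0, prepend=ny[:1, :]))
        x = np.clip(x - step * (grad_data - lam * div), 0.0, None)
    return x
```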

    Motion blur in digital images - analysis, detection and correction of motion blur in photogrammetry

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution, due to the low flight altitudes combined with a high resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image-sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This thesis demonstrates the negative effect that blurred images have on photogrammetric processing. It shows that small amounts of blur have serious impacts on target detection and slow down processing due to the requirement of human intervention. Larger blur can make an image completely unusable, and such images need to be excluded from processing. To exclude blurred images from large image datasets, an algorithm was developed. The newly developed method makes it possible to detect blur caused by linear camera displacement. The method is based on human detection of blur: humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating a comparison image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), on its own does not provide an absolute number to judge whether an image is blurred or not. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. This algorithm enables the exclusion of blurred images and subsequently allows photogrammetric processing without them. However, it is also possible to use deblurring techniques to restore blurred images. Deblurring of images is a widely researched topic, often based on the Wiener or Richardson-Lucy deconvolution, which require precise knowledge of both the blur path and extent. Even with knowledge of the blur kernel, the correction causes errors such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported in this thesis, overlapping images are used to support the deblurring process. An algorithm based on the Fourier transformation is presented. This works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit the application. Another method to enhance the image is the unsharp mask method, which improves images significantly and makes photogrammetric processing more successful. However, deblurring of images needs to focus on geometrically correct deblurring to ensure geometrically correct measurements.
    Furthermore, a novel edge shifting approach was developed which aims at geometrically correct deblurring. The idea of edge shifting appears promising but requires more advanced programming.
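
    The self-comparison idea can be sketched as follows: re-blur the saturation channel, measure how much edge strength is lost, and take the standard deviation of the difference. This is a hedged reconstruction, not the thesis's implementation; the Gaussian kernel width and Sobel edge operator are assumptions. As the abstract notes, the score is relative, so it should be compared across images of the same dataset rather than thresholded in isolation.

```python
# Hedged reconstruction of a SIEDS-like blur score: sharp images lose much
# edge energy when re-blurred, already-blurred images lose little.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def sieds_like_score(saturation):
    """Std of the edge-magnitude difference between an image (saturation
    channel, 2D float array) and an internally re-blurred copy of itself."""
    def edge_mag(img):
        return np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    reblurred = gaussian_filter(saturation, sigma=2.0)  # assumed kernel width
    return float(np.std(edge_mag(saturation) - edge_mag(reblurred)))
```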

    Statistical Methods for Polarimetric Imagery

    Estimation theory is applied to a physical model of incoherent polarized light to address problems in polarimetric image registration, restoration, and analysis for electro-optical imaging systems. In the image registration case, the Cramer-Rao lower bound on unbiased joint estimates of the registration parameters and the underlying scene is derived, simplified using matrix methods, and used to explain the behavior of multi-channel linear polarimetric imagers. In the image restoration case, a polarimetric maximum likelihood blind deconvolution algorithm is derived and tested using laboratory and simulated imagery. Finally, a principal components analysis is derived for polarization imaging systems. This analysis expands upon existing research by including an allowance for partially polarized and unpolarized light.
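
    For context, a common measurement model for multi-channel linear polarimetric imagers recovers the linear Stokes parameters from intensities measured behind four analyzer angles. The sketch below uses the standard 0/45/90/135-degree reduction; it is background material, not the dissertation's estimator.

```python
# Standard linear Stokes-parameter reduction from four analyzer angles
# (0, 45, 90, 135 degrees); inputs are co-registered intensity images.
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 vs -45 degrees
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
    return s0, s1, s2, dolp
```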

    Sixteenth International Laser Radar Conference, part 2

    Given here are extended abstracts of papers presented at the 16th International Laser Radar Conference, held in Cambridge, Massachusetts, July 20-24, 1992. Topics discussed include laser observations of Mt. Pinatubo volcanic dust, global change, ozone measurements, Earth mesospheric measurements, wind measurements, imaging, ranging, water vapor measurements, and laser devices and technology.

    AI for time-resolved imaging: from fluorescence lifetime to single-pixel time of flight

    Time-resolved imaging is a field of optics which measures the arrival time of light on the camera. This thesis looks at two time-resolved imaging modalities: fluorescence lifetime imaging and time-of-flight measurement for depth imaging and ranging. Both of these applications require temporal accuracy on the order of pico- or nanosecond (10⁻¹² to 10⁻⁹ s) scales. This demands special camera technology and optics that can sample light intensity extremely quickly, much faster than an ordinary video camera. However, such detectors can be very expensive compared to regular cameras while offering lower image quality. Further, information of interest is often hidden (encoded) in the raw temporal data. Therefore, computational imaging algorithms are used to enhance, analyse and extract information from time-resolved images. "A picture is worth a thousand words." This describes a fundamental blessing and curse of image analysis: images contain extreme amounts of data. Consequently, it is very difficult to design algorithms that encompass all the possible pixel permutations and combinations that can encode this information. Fortunately, the rise of AI and machine learning (ML) allows us to instead create algorithms in a data-driven way. This thesis demonstrates the application of ML to time-resolved imaging tasks, ranging from parameter estimation in noisy data and decoding of overlapping information, through super-resolution, to inferring 3D information from 1D (temporal) data.
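
    As a baseline for the lifetime-estimation task, a single exponential decay I(t) = A * exp(-t / tau) can be fitted to a time-resolved histogram with a log-linear least-squares fit. Real data require an instrument response function and a proper noise model; the simplification below is an assumption for illustration only.

```python
# Illustrative single-exponential lifetime fit for a decay histogram.
import numpy as np

def fit_lifetime(t_ns, counts):
    """Log-linear least-squares fit of counts = A * exp(-t / tau)."""
    mask = counts > 0                      # log only defined for positive bins
    slope, _ = np.polyfit(t_ns[mask], np.log(counts[mask]), 1)
    return -1.0 / slope                    # tau, in the units of t_ns

t = np.linspace(0.0, 10.0, 256)            # nanoseconds
counts = 1000.0 * np.exp(-t / 2.5)         # noiseless example, tau = 2.5 ns
print(fit_lifetime(t, counts))             # ~2.5
```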

    A Nonparametric Approach to Segmentation of Ladar Images

    The advent of advanced laser radar (ladar) systems that record full-waveform signal data has inspired numerous investigations that aspire to extract additional, previously unavailable, information about the illuminated scene from the collected data. The quality of the information, however, is often related to the limitations of the ladar camera used to collect the data. This research project uses full-waveform analysis of ladar signals, and basic principles of optics, to propose a new formulation for an accepted signal model. A new waveform model taking into account backscatter reflectance is the key to overcoming specific deficiencies of the ladar camera at hand, namely the inability to discern pulse-spreading effects of elongated targets. A combination of non-parametric statistics and familiar image processing methods is used to calculate the orientation angle of the illuminated objects, and the deficiency of the hardware is circumvented. Segmentation of the various ladar images is performed as part of the angle estimation, and this is shown to be a new and effective strategy for analyzing the output of the AFIT ladar camera.
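
    The pulse-spreading effect behind the orientation estimate can be sketched with a simple geometric model: a tilted surface spreads the return in time, so the measured pulse width exceeds the transmitted width, and the excess constrains the tilt angle. The width relation and geometry factors below are assumptions for illustration, not the project's signal model.

```python
# Hedged geometric sketch: invert an assumed width relation
# sigma_meas^2 = sigma_tx^2 + (footprint * tan(theta) / c)^2 for the tilt.
import numpy as np

C = 299792458.0  # speed of light, m/s

def tilt_from_spreading(sigma_meas_s, sigma_tx_s, footprint_m):
    """Estimate surface tilt (degrees) from excess pulse width (seconds)."""
    excess = np.sqrt(max(sigma_meas_s**2 - sigma_tx_s**2, 0.0))
    return np.degrees(np.arctan(excess * C / footprint_m))
```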