39 research outputs found

    Eliminating Scale Drift in Monocular SLAM Using Depth from Defocus

    Full text link
    © 2017 IEEE. This letter presents a novel approach to correcting errors caused by accumulated scale drift in monocular SLAM. It is shown that the metric scale can be estimated using information gathered through monocular SLAM together with image blur due to defocus. A nonlinear least squares optimization problem is formulated to integrate depth estimates from defocus into monocular SLAM. An algorithm is presented that processes the keyframe and feature-location estimates generated by a monocular SLAM algorithm to correct scale drift in selected local regions of the environment. The proposed algorithm is experimentally evaluated by processing the output of ORB-SLAM to obtain accurate metric-scale maps from a monocular camera without any prior knowledge about the scene.
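    The core idea — aligning up-to-scale SLAM depths with metric depths from defocus — reduces, in its simplest form, to a one-parameter least-squares fit. The sketch below is illustrative only (the letter formulates a richer nonlinear problem over keyframes and features); all names are assumptions.

```python
import numpy as np

def estimate_scale(slam_depths, defocus_depths):
    """Closed-form least-squares solution of min_s sum_i (s*d_slam_i - d_defocus_i)^2.

    slam_depths: depths from monocular SLAM, known only up to scale.
    defocus_depths: metric depth estimates from defocus blur at the same points.
    """
    slam_depths = np.asarray(slam_depths, dtype=float)
    defocus_depths = np.asarray(defocus_depths, dtype=float)
    return float(np.dot(slam_depths, defocus_depths) / np.dot(slam_depths, slam_depths))

# Toy example: SLAM map is exactly half metric size, so s = 2.
slam = [1.0, 2.0, 3.0]
metric = [2.0, 4.0, 6.0]
s = estimate_scale(slam, metric)
```

    In practice this scale would be estimated per local region of the map, since drift makes a single global factor insufficient.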

    Augmenting visual perception with gaze-contingent displays

    Get PDF
    Cheap and easy-to-use eye tracking can turn a common display into a gaze-contingent display (GCD): a system that reacts to the user's gaze and adjusts its content based on where the observer is looking. Rendering can then be enhanced using perceptual insights and knowledge of what is currently seen. This thesis investigates how GCDs can support aspects of depth and colour perception. It presents experiments on the effects of simulated depth of field and chromatic aberration on depth perception. It also investigates how changing the colours surrounding the attended area can influence perceived colour, and how this can be used to increase colour differentiation and potentially enlarge the perceived gamut of the display. The presented investigations and empirical results lay the foundation for future development of gaze-contingent technologies, as well as for general applications of colour and depth perception. The results show that GCDs can support the user in tasks related to visual perception; the presented techniques could facilitate common tasks such as distinguishing the depth of objects in virtual environments or discriminating similar colours in information visualisations.
    EU Marie Curie Program CIG - 30378
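    A gaze-contingent depth-of-field renderer needs a per-pixel blur size; the standard thin-lens circle of confusion provides one. This is general optics background, not the thesis's specific method — the focus distance would be set from the scene depth at the current gaze point.

```python
def circle_of_confusion(d, d_focus, f, aperture):
    """Thin-lens circle-of-confusion diameter for an object at distance d.

    d, d_focus: object and in-focus distances; f: focal length;
    aperture: entrance-pupil diameter (all in the same units).
    """
    return aperture * f * abs(d - d_focus) / (d * (d_focus - f))

# The fixated depth renders sharp; other depths receive defocus blur.
sharp = circle_of_confusion(2.0, 2.0, 0.05, 0.025)    # object at gaze depth
blurred = circle_of_confusion(4.0, 2.0, 0.05, 0.025)  # object behind gaze depth
```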

    Optical System Identification for Passive Electro-Optical Imaging

    Full text link
    A statistical inverse-problem approach is presented for jointly estimating camera blur from aliased data of a known calibration target. Specifically, a parametric maximum likelihood (ML) PSF estimate is derived for characterizing a camera's optical imperfections using a calibration target in an otherwise loosely controlled environment. The unknown parameters are jointly estimated from data described by a physical forward-imaging model, and this inverse-problem approach accommodates all of the available sources of information jointly: knowledge of the forward imaging process, the types and sources of statistical uncertainty, available prior information, and the data itself. The forward model describes a broad class of imaging systems through a parameterization with a direct mapping between its parameters and physical imaging phenomena. The imaging perspective, ambient light levels, target reflectance, detector gain and offset, quantum efficiency, and read-noise levels are all treated as nuisance parameters. The Cramér-Rao Bound (CRB) is derived under this joint model, and simulations demonstrate that the proposed estimator achieves near-optimal MSE performance. Finally, the proposed method is applied to experimental data to validate the fidelity of the forward models and to establish the utility of the resulting ML estimates for both system identification and subsequent image restoration.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/153395/1/jwleblan_1.pd
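    The ML-from-a-known-target idea can be sketched in one dimension: simulate the forward model (known target blurred by a parametric PSF) and pick the PSF parameter that best explains the data. This is a minimal illustration with a Gaussian PSF and a grid search, not the dissertation's parameterization or optimizer; under Gaussian noise, maximum likelihood reduces to least squares as used here.

```python
import numpy as np

def gaussian_kernel(sigma, radius=4):
    """Normalised 1-D Gaussian PSF sampled on [-radius, radius]."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def forward(target, sigma):
    """Forward model: the known calibration target blurred by the PSF."""
    return np.convolve(target, gaussian_kernel(sigma), mode="same")

def ml_sigma(target, observed, grid):
    """Grid-search ML estimate of the PSF width (least-squares residual)."""
    residuals = [np.sum((forward(target, s) - observed) ** 2) for s in grid]
    return grid[int(np.argmin(residuals))]

rng = np.random.default_rng(0)
target = (rng.random(200) > 0.5).astype(float)  # stand-in for a known pattern
observed = forward(target, 1.5)                 # noiseless data, true sigma = 1.5
grid = np.arange(0.5, 3.01, 0.25)
sigma_hat = ml_sigma(target, observed, grid)
```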

    Treatise on Hearing: The Temporal Auditory Imaging Theory Inspired by Optics and Communication

    Full text link
    A new theory of mammalian hearing is presented, which accounts for the auditory image in the midbrain (inferior colliculus) of objects in the acoustical environment of the listener. It is shown that the ear is a temporal imaging system comprising three transformations of the envelope functions: cochlear group-delay dispersion, cochlear time lensing, and neural group-delay dispersion. These elements are analogous to the optical transformations in vision: diffraction between the object and the eye, spatial lensing by the lens, and a second diffraction between the lens and the retina. It is established that, unlike the eye, the human auditory system is naturally defocused, so that coherent stimuli are unaffected by the defocus, whereas completely incoherent stimuli are impacted by it and may be blurred by design. It is argued that the auditory system can use this differential focusing to enhance or degrade the images of real-world acoustical objects that are partially coherent. The theory is founded on coherence and temporal imaging theories adopted from optics. In addition to the imaging transformations, the corresponding inverse-domain modulation transfer functions are derived and interpreted with consideration of the nonuniform neural sampling operation of the auditory nerve. These ideas are used to rigorously introduce the concepts of sharpness and blur in auditory imaging, auditory aberrations, and auditory depth of field. In parallel, ideas from communication theory are used to show that the organ of Corti functions as a multichannel phase-locked loop (PLL) that constitutes the point of entry for auditory phase locking and hence conserves signal coherence. It provides an anchor for dual coherent and noncoherent auditory detection in the auditory brain that culminates in auditory accommodation. Implications for hearing impairments are discussed as well.
    Comment: 603 pages, 131 figures, 13 tables, 1570 references
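    For readers unfamiliar with temporal imaging, the standard space-time-duality background (due to Kolner; assumed here as context, not quoted from the abstract) makes the optical analogy concrete. With input and output group-delay dispersions $\phi_1''$, $\phi_2''$ and a time lens of focal dispersion $\phi_f''$, the temporal imaging condition and magnification are

```latex
\frac{1}{\phi_1''} + \frac{1}{\phi_2''} = \frac{1}{\phi_f''},
\qquad
M = -\frac{\phi_2''}{\phi_1''},
```

    in direct analogy with the thin-lens equation $1/s + 1/s' = 1/f$. A residual mismatch in this condition plays the role of the "defocus" the theory attributes to the auditory system (sign conventions vary between treatments).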

    Generalising the ideal pinhole model to multi-pupil imaging for depth recovery

    Get PDF
    This thesis investigates the applicability of computer vision camera models to recovering depth information from images, and presents a novel camera model incorporating a modified pupil plane capable of performing this task accurately from a single image. Standard models, such as the ideal pinhole, suffer a loss of depth information when projecting from the world to an image plane. Recovering this data enables reconstruction of the original scene as well as object and 3D motion reconstruction. The major contributions of this thesis are the complete characterisation of ideal pinhole model calibration and the development of a new multi-pupil imaging model that enables depth recovery. A comprehensive analysis of the calibration sensitivity of the ideal pinhole model is presented, along with a novel method of capturing calibration images that avoids singularities in image space. Experimentation reveals a higher degree of accuracy using the new calibration images. A novel camera model employing multiple pupils is proposed which, in contrast to the ideal pinhole model, recovers scene depth. The accuracy of the multi-pupil model is demonstrated and validated through rigorous experimentation. An integral property of any camera model is the location of its pupil. To this end, the new model is expanded by generalising the location of the multi-pupil plane, enabling greater flexibility than traditional camera models, which are confined to positioning the pupil plane to negate particular aberrations in the lens. A key step in the development of the multi-pupil model is the treatment of optical aberrations in the imaging system. The unconstrained location and configuration of the pupil plane enables the determination of optical distortions in the multi-pupil imaging model, and a calibration algorithm is proposed which corrects for these aberrations. This allows the multi-pupil model to be applied to a multitude of imaging systems regardless of the optical quality of the lens. Experimentation validates the multi-pupil model's accuracy in accounting for the aberrations and estimating accurate depth information from a single image. Results for object reconstruction are presented, establishing the capabilities of the proposed multi-pupil imaging model.
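    The depth loss the thesis sets out to overcome is easy to demonstrate with the ideal pinhole model itself: projection followed by perspective division maps every point on a ray to the same pixel. A minimal sketch (illustrative intrinsics, not from the thesis):

```python
import numpy as np

def project(K, R, t, X):
    """Ideal pinhole projection of a 3-D world point X with intrinsics K and pose (R, t)."""
    x = K @ (R @ X + t)   # homogeneous image coordinates
    return x[:2] / x[2]   # perspective division discards depth

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

# Two points at different depths on the same viewing ray land on the same pixel:
u1 = project(K, R, t, np.array([0.1, 0.2, 1.0]))
u2 = project(K, R, t, np.array([0.2, 0.4, 2.0]))
```

    A multi-pupil model breaks this ambiguity because the same world point projects differently through each pupil, making depth observable from a single image.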

    Near Field Electron Ptychography

    Get PDF
    Phase imaging in the Transmission Electron Microscope (TEM) has a long history, from the implementation of off-axis holography in TEM to Differential Phase Contrast (DPC) in the Scanning Transmission Electron Microscope (STEM). The advent of modern computing has enabled the development of iterative algorithms that attempt to recover a phase image of a specimen from measurements of the way it diffracts an incident electron beam. One of the most successful of these iterative methods is focused-probe ptychography, which relies on far-field diffraction pattern measurements recorded as the incident beam is scanned through a grid of locations across the specimen. Focused-probe ptychography implemented in the STEM has provided the highest-resolution images available to date, allows lens-less setups that avoid the aberrations typical of older STEMs, and allows simultaneous reconstruction of the illumination and the specimen. Because the data are highly constrained, ptychography is computationally flexible, allowing additional unknowns beyond the phase of the specimen to be recovered; for example, scan positions can be refined during reconstruction. Near-field ptychography is a recent variation that replaces the far-field diffraction data with diffraction patterns recorded in the near-field, or Fresnel, region. It promises a much larger field of view with fewer diffraction patterns than focused-probe ptychography. The main contribution of this thesis is the implementation of a new form of near-field ptychography on the TEM, using an etched silicon nitride window to structure the electron beam. Proof-of-concept results show the method quantitatively recovers megapixel phase images from as few as 9 recorded diffraction patterns, compared to the many hundreds of diffraction patterns required for focused-probe ptychography. Additional sets of results show how near-field ptychography can recover extremely large fields of view, deal effectively with inelastic scattering, and accommodate several sources of uncertainty in the experimental process. Further contributions of the thesis include: experiments and results from visible-light versions of near-field ptychography, which explain its limitations and practical application; a description of, and code for, analysis tools used to assess phase imaging performance; DigitalMicrograph (DM) code and a data collection workflow to realise TEM-based near-field ptychography; details of the design, realisation, and performance of the etched silicon nitride windows; and simulation studies aimed at furthering understanding of the frequency response of the technique. Future work is outlined, focusing on potential applications to a wide range of real-world specimens and improved TEM setups to implement near-field ptychography.
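    The forward model in near-field ptychography is free-space propagation to the Fresnel region rather than a far-field transform. A common way to compute it is the angular-spectrum method, sketched below with illustrative parameters (not taken from the thesis); evanescent components are simply suppressed.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a square 2-D complex field a distance z via the angular spectrum."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies of the grid
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                 # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A uniform plane wave only acquires a global phase on propagation:
field = np.ones((64, 64), dtype=complex)
out = angular_spectrum(field, wavelength=2e-12, dx=1e-9, z=1e-6)
```

    In a ptychographic reconstruction this propagator (and its inverse) links the specimen exit wave to each recorded near-field pattern during the iterative update.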

    Advanced analytical transmission electron microscopy methodologies for the study of the chemical and physical properties of semiconducting nanostructures

    Get PDF
    The structural and chemical properties of Ge-Sb-Te and Si nanowires have been studied by means of Transmission Electron Microscopy techniques. Methodological research has been dedicated to developing methods, based on the comparison of experimental images with accurate simulations, to extract quantitative chemical information directly from the image contrast.
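    The comparison workflow described above can be sketched generically: match an experimental image against a library of simulated images and select the best fit by residual. This is purely illustrative; the compositions, values, and metric are invented stand-ins, not the thesis's actual simulation or scoring method.

```python
import numpy as np

def best_match(experimental, simulations):
    """Return the label of the simulated image with the smallest mean squared residual."""
    def residual(sim):
        return np.mean((experimental - sim) ** 2)
    return min(simulations, key=lambda label: residual(simulations[label]))

# Toy 2x2 "images": intensities are made up for illustration.
exp_img = np.array([[0.0, 1.0], [1.0, 0.0]])
library = {"GeTe":   np.array([[0.0, 0.9], [0.9, 0.1]]),
           "Sb2Te3": np.array([[1.0, 0.0], [0.0, 1.0]])}
match = best_match(exp_img, library)
```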

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    Get PDF
    This dissertation addresses the components necessary for simulating image-based recovery of a target's position using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms for each component, and lag introduced by excessive processing time all affect the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves that approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position consists of the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
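    The Bezier-curve path generation mentioned above can be sketched as follows. The cubic form and control points here are illustrative assumptions (the dissertation does not specify them in this abstract); the curve starts at the first control point and ends at the last, with the middle two shaping the trajectory.

```python
import numpy as np

def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter values t in [0, 1]."""
    t = np.asarray(t, dtype=float)[:, None]   # column vector for broadcasting
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

# Illustrative 2-D control points for a smooth target path:
p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
p2, p3 = np.array([3.0, 2.0]), np.array([4.0, 0.0])
path = bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 5))
```

    Sampling t more densely yields a smooth trajectory that the simulated sensors can then image at each time step.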

    Improved methods for functional neuronal imaging with genetically encoded voltage indicators

    Get PDF
    Voltage imaging has the potential to revolutionise neuronal physiology, enabling high temporal and spatial resolution monitoring of sub- and supra-threshold activity in genetically defined cell classes. Before this goal is reached, a number of challenges must be overcome: novel optical, genetic, and experimental techniques must be combined to deal with voltage imaging's unique difficulties. In this thesis, three techniques are applied to genetically encoded voltage indicator (GEVI) imaging. First, I describe a multifocal two-photon microscope and present a novel source-localisation control and reconstruction algorithm to increase scattering resistance in functional imaging. I apply this microscope to image population and single-cell voltage signals from voltage-sensitive fluorescent proteins in the first demonstration of multifocal GEVI imaging. Second, I show that a recently described genetic technique that sparsely labels cortical pyramidal cells enables single-cell-resolution imaging in a one-photon widefield configuration. This genetic technique allows simple, high signal-to-noise optical access to the primary excitatory cells in the cerebral cortex. Third, I present the first application of lightfield microscopy to single-cell-resolution neuronal voltage imaging. This technique enables single-shot capture of dendritic arbours and resolves 3D-localised somatic and dendritic voltage signals. These approaches are finally evaluated for their contribution to the improvement of voltage imaging for physiology.
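    As background for the signals discussed above: raw GEVI fluorescence traces are conventionally converted to a relative signal by ΔF/F0 normalisation. The sketch below is generic background, not the thesis's pipeline; using the trace median as the baseline F0 is an assumption made here for simplicity.

```python
import numpy as np

def dff(trace):
    """Convert a fluorescence trace to dF/F0, with F0 taken as the trace median (assumed baseline)."""
    f0 = np.median(trace)
    return (trace - f0) / f0

# Toy trace: a brief fluorescence transient on a flat baseline of 100 counts.
trace = np.array([100.0, 100.0, 110.0, 100.0, 100.0])
signal = dff(trace)
```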

    Quantitative Image Simulation and Analysis of Nanoparticles

    Get PDF