
    Interlacing in atomic resolution scanning transmission electron microscopy

    Fast frame rates are desirable in scanning transmission electron microscopy for a number of reasons: controlling electron beam dose, capturing in-situ events, or reducing the appearance of scan distortions. Whilst several strategies exist for increasing frame rates, many impact image quality or require investment in advanced scan hardware. Here we present an interlaced imaging approach that achieves faster frame rates with minimal loss of image quality and can be implemented on many existing scan controllers. We further demonstrate that our interlacing approach provides the best strain precision for a given electron dose compared with other contemporary approaches.
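    As a rough illustration of the idea (not the authors' implementation), the sketch below builds an interlaced row-visit order and reassembles the per-field scans into one full-resolution frame; the number of fields and the array shapes are assumptions made for the example.

```python
import numpy as np

def interlaced_row_order(n_rows, n_fields):
    """Row visit order for an n_fields-way interlaced scan: field k
    visits rows k, k + n_fields, k + 2*n_fields, ... so every field
    covers the whole field of view at reduced vertical sampling."""
    order = []
    for field in range(n_fields):
        order.extend(range(field, n_rows, n_fields))
    return np.array(order)

def assemble_frame(fields, n_fields):
    """Interleave per-field scans (each of shape (n_rows // n_fields,
    n_cols)) back into a single full-resolution frame."""
    n_rows = len(fields) * fields[0].shape[0]
    frame = np.zeros((n_rows, fields[0].shape[1]), dtype=fields[0].dtype)
    for k, field in enumerate(fields):
        frame[k::n_fields, :] = field
    return frame
```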

    HDR Denoising and Deblurring by Learning Spatio-temporal Distortion Model

    We seek to reconstruct sharp and noise-free high-dynamic-range (HDR) video from a dual-exposure sensor that records different low-dynamic-range (LDR) information in different pixel columns: odd columns provide low-exposure, sharp, but noisy information; even columns complement this with less noisy, high-exposure, but motion-blurred data. Previous LDR work learns to deblur and denoise (DISTORTED->CLEAN) supervised by pairs of CLEAN and DISTORTED images. Regrettably, capturing DISTORTED sensor readings is time-consuming; moreover, there is a lack of CLEAN HDR videos. We suggest a method to overcome those two limitations. First, we learn a different function instead: CLEAN->DISTORTED, which generates samples containing correlated pixel noise, row and column noise, and motion blur from a small number of CLEAN sensor readings. Second, as there is not enough CLEAN HDR video available, we devise a method to learn from LDR video instead. Our approach compares favorably to several strong baselines, and can boost existing methods when they are retrained on our data. Combined with spatial and temporal super-resolution, it enables applications such as relighting with low noise or blur.
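    The kinds of degradation such a CLEAN->DISTORTED model must produce can be mimicked with a hand-crafted stand-in. The sketch below is only that: a toy simulator with assumed parameter names and values (gains, noise levels, blur length, column parity), not the learned spatio-temporal distortion model of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def simulate_dual_exposure(clean, short_gain=1.0, long_gain=4.0,
                           read_noise=0.02, row_col_noise=0.01,
                           blur_len=9, rng=None):
    """Toy CLEAN->DISTORTED degradation for a dual-exposure column sensor.

    One column parity gets a short exposure (sharp but noisy), the other
    a long exposure (cleaner but motion-blurred along rows); the parity
    assignment here is arbitrary. Correlated row/column offsets and
    per-pixel read noise are added on top.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = clean.shape
    # Long-exposure branch: horizontal motion blur, then higher gain.
    blurred = uniform_filter1d(clean, size=blur_len, axis=1)
    distorted = np.empty_like(clean)
    distorted[:, 1::2] = short_gain * clean[:, 1::2]
    distorted[:, 0::2] = long_gain * blurred[:, 0::2]
    # Per-pixel read noise plus correlated row and column noise.
    distorted += rng.normal(0.0, read_noise, (h, w))
    distorted += rng.normal(0.0, row_col_noise, (h, 1))   # row offsets
    distorted += rng.normal(0.0, row_col_noise, (1, w))   # column offsets
    return distorted
```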

    08291 Abstracts Collection -- Statistical and Geometrical Approaches to Visual Motion Analysis

    From 13.07.2008 to 18.07.2008, the Dagstuhl Seminar 08291 "Statistical and Geometrical Approaches to Visual Motion Analysis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general.

    A variational method for dejittering large fluorescence line scanner images

    We propose a variational method dedicated to jitter correction of large fluorescence scanner images. Our method consists of minimizing a global energy functional to estimate a dense displacement field representing the spatially varying jitter. The computational approach is based on a half-quadratic splitting of the energy functional, which decouples the realignment data term and the dedicated differential-based regularizer. The resulting problem amounts to alternately solving two convex and nonconvex optimization subproblems with appropriate algorithms. Experimental results on artificial and large real fluorescence images demonstrate that our method not only handles large displacements but is also efficient in terms of subpixel precision, without inducing additional intensity artifacts.
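    In generic form, such a half-quadratic splitting reads as follows; this is a schematic sketch with assumed notation (D for the realignment data term, R for the differential-based regularizer, u for the dense displacement field, v for the auxiliary splitting variable), not the paper's exact functional.

```latex
% Schematic half-quadratic splitting and alternating minimization.
\begin{gather*}
  \min_{u}\; D(u) + \lambda\, R(\nabla u)
  \quad\longrightarrow\quad
  \min_{u,v}\; D(u) + \tfrac{\beta}{2}\,\lVert \nabla u - v \rVert_2^2 + \lambda\, R(v), \\
  u^{k+1} = \operatorname*{arg\,min}_{u}\; D(u) + \tfrac{\beta}{2}\,\lVert \nabla u - v^{k} \rVert_2^2,
  \qquad
  v^{k+1} = \operatorname*{arg\,min}_{v}\; \tfrac{\beta}{2}\,\lVert \nabla u^{k+1} - v \rVert_2^2 + \lambda\, R(v).
\end{gather*}
```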

    Image Enhancement in Foggy Images using Dark Channel Prior and Guided Filter

    Haze is very apparent in images shot during bad weather such as fog, and it diminishes both the clarity and the readability of the image. In this work, we suggest a method for improving the quality of a hazy image and for identifying objects hidden inside it. To address this, we use the image enhancement techniques of Dark Channel Prior and Guided Filter. A saliency map is then used to segment the enhanced image and detect passing vehicles. Lastly, we describe our method for estimating the real-world distance between the camera-equipped vehicle and a detected object (another vehicle). Our proposed solution can warn the driver based on this distance to help prevent an accident. Our suggested technique enhances the images and detects vehicles correctly nearly 100% of the time.
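    For reference, the textbook dark-channel-prior pipeline with guided-filter transmission refinement looks roughly like this; it is a sketch with commonly used default parameters, not the exact settings or the saliency/distance stages of this work, and cv2.ximgproc.guidedFilter requires the opencv-contrib package.

```python
import numpy as np
import cv2  # cv2.ximgproc.guidedFilter needs opencv-contrib-python

def dark_channel(img, patch=15):
    """Minimum over colour channels followed by a local minimum filter."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def dehaze(img, omega=0.95, patch=15, t0=0.1, radius=60, eps=1e-3):
    """Dark-channel-prior dehazing with guided-filter refinement.

    img: float32 BGR image scaled to [0, 1]. Parameter values are common
    defaults from the literature, not the paper's settings.
    """
    dark = dark_channel(img, patch)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    flat_idx = np.argsort(dark, axis=None)[-n:]
    A = img.reshape(-1, 3)[flat_idx].mean(axis=0)
    # Coarse transmission map, then edge-aware refinement against a grey guide.
    t = 1.0 - omega * dark_channel(img / A, patch)
    guide = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
    t = cv2.ximgproc.guidedFilter(guide, t.astype(np.float32), radius, eps)
    # Recover scene radiance, clamping the transmission to avoid noise blow-up.
    return (img - A) / np.maximum(t, t0)[..., None] + A
```

    Refining the transmission map with a guided filter keeps its edges aligned with those of the guide image, which is what suppresses halo artifacts around depth discontinuities.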

    Biologically inspired composite image sensor for deep field target tracking

    The use of nonuniform image sensors in mobile computer vision applications can be an effective solution when computational burden is problematic. Nonuniform image sensors are still in their infancy and as such have neither been fully investigated for their unique qualities nor extensively applied in practice. In this dissertation, a system has been developed that can perform vision tasks in both the far field and the near field. To accomplish this, a novel image sensor system has been developed. Inspired by the biological aspects of the visual systems found in both falcons and primates, a composite multi-camera sensor was constructed. The sensor provides an expandable visual range and excellent depth of field, and produces a single compact output image based on the log-polar retinal-cortical mapping that occurs in primates. This mapping provides scale- and rotation-tolerant processing, which in turn supports the mitigation of the perspective distortion found in strictly Cartesian sensor systems. Furthermore, the scale-tolerant representation of objects moving on trajectories parallel to the sensor's optical axis allows for fast acquisition and tracking of objects moving at high speed. To investigate how effective this combination would be for object detection and tracking at both near and far field, the system was tuned for the application of vehicle detection and tracking from a moving platform. Finally, it was shown that license plate information could be captured autonomously by extracting the information contained in the mapped log-polar representation space. The novel composite log-polar deep-field image sensor opens new horizons for computer vision. The current work demonstrates features that can benefit applications beyond high-speed vehicle tracking for driver assistance and license plate capture. Envisioned future applications include obstacle detection for high-speed trains, computer-assisted aircraft landing, and computer-assisted spacecraft docking.
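    A minimal sketch of the log-polar mapping underlying the sensor's output is shown below; it is a nearest-neighbour toy version with assumed grid sizes, not the dissertation's composite multi-camera pipeline.

```python
import numpy as np

def log_polar_map(img, out_rings=64, out_wedges=128, r_min=1.0):
    """Map a Cartesian image to a log-polar (ring, wedge) grid.

    In this representation, scaling of the input becomes a shift along
    the ring axis and rotation a shift along the wedge axis, which is
    what gives the mapping its scale and rotation tolerance.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    # Exponentially spaced radii and uniformly spaced angles.
    rings = np.arange(out_rings)
    wedges = np.arange(out_wedges)
    r = r_min * (r_max / r_min) ** (rings / (out_rings - 1))
    theta = 2.0 * np.pi * wedges / out_wedges
    # Nearest-neighbour lookup back into the Cartesian image.
    ys = np.clip(np.round(cy + r[:, None] * np.sin(theta[None, :])).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r[:, None] * np.cos(theta[None, :])).astype(int), 0, w - 1)
    return img[ys, xs]
```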