
    Optimal Multi-view Correction of Local Affine Frames

    The technique requires the epipolar geometry to be pre-estimated between each image pair. It exploits the constraints implied by the camera motion to apply a closed-form correction to the parameters of the input affinities. It is also shown that the rotations and scales obtained by partially affine-covariant detectors, e.g., AKAZE or SIFT, can be completed to full affine frames by the proposed algorithm. It is validated both in synthetic experiments and on publicly available real-world datasets that the method always improves the output of the evaluated affine-covariant feature detectors. As a by-product, these detectors are compared and the ones producing the most accurate affine frames are reported. To demonstrate its applicability, we show that the proposed technique, used as a pre-processing step, improves the accuracy of camera-rig pose estimation, surface normal estimation, and homography estimation.
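
    The sketch below illustrates only the prerequisites the abstract mentions, not the paper's correction step: detecting partially affine-covariant features (SIFT keypoints carry just a scale and an orientation) and pre-estimating the epipolar geometry of an image pair as a fundamental matrix. It is a minimal Python/OpenCV example; the image file names are placeholders.

```python
# Minimal sketch (not the paper's algorithm): compute the inputs the
# correction assumes -- per-keypoint scale/orientation and the epipolar
# geometry between one image pair. File names are hypothetical.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                       # partially affine-covariant detector
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Pre-estimated epipolar geometry between the image pair (fundamental matrix).
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

# Each SIFT keypoint provides only a scale and an orientation; these are the
# partial affine frames that a closed-form correction would complete.
for kp in kp1[:3]:
    print("scale:", kp.size, "angle (deg):", kp.angle)
```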

    Improvement of Image Alignment Using Camera Attitude Information

    We discuss a technique for incorporating information from a variety of sensors into a video imagery processing pipeline. The auxiliary information simplifies the computations by effectively reducing the number of independent parameters in the transformation model. The mosaics produced by this technique are adequate for many applications, in particular habitat mapping. The algorithm, demonstrated through simulations and a hardware configuration, is described in detail.
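
    As a hedged illustration of the general idea (not this paper's pipeline): if the inter-frame rotation is known from an attitude sensor, the rotational part of the image warp can be fixed analytically, leaving only a low-parameter residual to estimate from the imagery. The intrinsics, rotation, and residual shift below are assumed values.

```python
# Minimal sketch of the general idea (assumed values throughout): with the
# inter-frame rotation R known from attitude data, the rotation-induced
# homography H_rot = K @ R @ inv(K) is fixed, so only a small residual
# motion (here a 2-D shift) remains to be estimated from the images.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],      # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

roll = np.deg2rad(2.0)                   # assumed attitude change between frames
R = np.array([[np.cos(roll), -np.sin(roll), 0.0],
              [np.sin(roll),  np.cos(roll), 0.0],
              [0.0,           0.0,          1.0]])

H_rot = K @ R @ np.linalg.inv(K)         # rotation-induced homography (fixed)

def compose(H_rot, tx, ty):
    """Full warp = residual shift applied after the known rotation."""
    H_shift = np.array([[1.0, 0.0, tx],
                        [0.0, 1.0, ty],
                        [0.0, 0.0, 1.0]])
    return H_shift @ H_rot

print(compose(H_rot, 3.5, -1.2))         # only tx, ty are left to estimate
```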

    Cross-Talk-Free Multi-Color STORM Imaging Using a Single Fluorophore

    Multi-color stochastic optical reconstruction microscopy (STORM) is routinely performed; however, the various approaches for achieving multiple colors have important caveats. Color cross-talk, limited availability of spectrally distinct fluorophores with optimal brightness and duty cycle, incompatibility of imaging buffers for different fluorophores, and chromatic aberrations impact the spatial resolution and ultimately the number of colors that can be achieved. We overcome these complexities and develop a simple approach for multi-color STORM imaging using a single fluorophore and sequential labelling. In addition, we present a simple and versatile method to locate the same region of interest on different days and even on different microscopes. In combination, these approaches enable cross-talk-free multi-color imaging of sub-cellular structures.

    Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

    The reconstruction of dense 3D models of face geometry and appearance from a single image is highly challenging and ill-posed. To constrain the problem, many approaches rely on strong priors, such as parametric face models learned from limited 3D scan data. However, prior models restrict generalization of the true diversity in facial geometry, skin reflectance and illumination. To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model. Our multi-level face model combines the advantage of 3D Morphable Models for regularization with the out-of-space generalization of a learned corrective space. We train end-to-end on in-the-wild images without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss, both defined at multiple detail levels. Our approach compares favorably to the state-of-the-art in terms of reconstruction quality, better generalizes to real world faces, and runs at over 250 Hz. Comment: CVPR 2018 (Oral). Project webpage: https://gvv.mpi-inf.mpg.de/projects/FML
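
    A hedged sketch of the two-level idea described above: a linear 3D Morphable Model provides a regularized coarse shape, and a learned corrective space adds detail the parametric model cannot express. The class name, basis dimensions, and zero-filled bases below are placeholders, not the authors' actual model.

```python
# Hypothetical sketch of a multi-level face model: coarse 3DMM geometry plus
# a learned corrective space. Bases are zero placeholders standing in for
# real pre-computed data; only the structure is illustrated.
import torch
import torch.nn as nn

class MultiLevelFaceModel(nn.Module):
    def __init__(self, n_vertices, n_id=80, n_exp=64, n_corr=128):
        super().__init__()
        # Fixed (pre-computed) linear 3DMM bases.
        self.register_buffer("mean_shape", torch.zeros(n_vertices * 3))
        self.register_buffer("id_basis", torch.zeros(n_vertices * 3, n_id))
        self.register_buffer("exp_basis", torch.zeros(n_vertices * 3, n_exp))
        # Corrective space learned end-to-end on top of the 3DMM.
        self.corrective = nn.Linear(n_corr, n_vertices * 3, bias=False)

    def forward(self, alpha_id, alpha_exp, z_corr):
        # Level 1: regularized coarse geometry from the parametric model.
        coarse = self.mean_shape + self.id_basis @ alpha_id + self.exp_basis @ alpha_exp
        # Level 2: out-of-model detail from the learned corrective space.
        return coarse + self.corrective(z_corr)

model = MultiLevelFaceModel(n_vertices=5000)
shape = model(torch.zeros(80), torch.zeros(64), torch.zeros(128))
print(shape.shape)  # torch.Size([15000]) -- flattened (x, y, z) per vertex
```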

    Motion correction of PET/CT images

    Advances in health care technology help physicians make more accurate diagnoses about the health conditions of their patients. Positron Emission Tomography/Computed Tomography (PET/CT) is one of the many tools currently used to diagnose health and disease in patients. PET/CT scans are typically used to detect cancer, heart disease, and disorders of the central nervous system. Since PET/CT studies can take 60 minutes or more, it is impossible for patients to remain motionless throughout the scanning process. These movements create motion-related artifacts which alter the quantitative and qualitative results produced by the scanning process. The patient's motion results in image blurring, a reduction in the image signal-to-noise ratio, and reduced image contrast, all of which can lead to misdiagnoses. In the literature, both software- and hardware-based techniques have been studied to implement motion correction on medical image data. Techniques based on an external motion tracking system are preferred by researchers because they offer better accuracy. This thesis proposes a motion correction system that uses 3D affine registration based on particle swarm optimization and an off-the-shelf Microsoft Kinect camera to eliminate or reduce errors caused by the patient's motion during a medical imaging study.
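
    The toy sketch below illustrates the combination the abstract names, 3D affine registration driven by particle swarm optimization, on synthetic point clouds. It is not the thesis implementation: the cost function, swarm constants, and data are all assumed for demonstration.

```python
# Toy sketch (not the thesis code): particle swarm optimization over the 12
# parameters of a 3-D affine transform, minimizing a point-to-point error
# between a "moving" and a "reference" point cloud. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 3))                 # assumed target points
true_A = np.eye(3) + 0.05 * rng.normal(size=(3, 3))   # simulated patient motion
true_t = np.array([2.0, -1.0, 0.5])
moving = reference @ true_A.T + true_t

def cost(params):
    """Mean squared error after applying the candidate affine transform."""
    A, t = params[:9].reshape(3, 3), params[9:]
    return np.mean(np.sum((moving @ A.T + t - reference) ** 2, axis=1))

# Initialize particles around the identity transform.
n_particles, n_dims, n_iters = 40, 12, 300
identity = np.concatenate([np.eye(3).ravel(), np.zeros(3)])
pos = identity + 0.1 * rng.normal(size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                             # standard PSO constants
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("final alignment error:", cost(gbest))
```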