    Depth Superresolution using Motion Adaptive Regularization

    The spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored the idea of improving the resolution of depth maps using higher-resolution intensity images as side information. In this paper, we demonstrate that further incorporating temporal information from videos can significantly improve the results. In particular, we propose a novel approach that improves depth resolution by exploiting the space-time redundancy in the depth and intensity channels using motion-adaptive low-rank regularization. Experiments confirm that the proposed approach substantially improves the quality of the estimated high-resolution depth. Our approach can serve as a first component in vision systems that rely on high-resolution depth information.
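    To make the low-rank prior concrete, here is a minimal sketch of the kind of regularization step the abstract describes: motion-aligned depth patches from successive frames are stacked as rows of a matrix and shrunk toward low rank via singular value soft-thresholding. The function name, patch layout, and threshold parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def low_rank_denoise(patch_stack, tau):
    """Soft-threshold the singular values of a stack of motion-aligned
    patches (one flattened patch per frame, as rows).

    patch_stack : (num_frames, patch_pixels) array of aligned depth patches
    tau         : threshold controlling the strength of the low-rank prior
                  (an assumed tuning parameter)
    """
    U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # singular value soft-thresholding
    return (U * s_shrunk) @ Vt            # low-rank estimate of the stack
```

    Because aligned patches of the same surface are highly correlated across frames, the stack is approximately low rank, so shrinking small singular values suppresses noise while preserving the structure shared across time.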

    Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network

    Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15 dB on images and +0.39 dB on videos) and is an order of magnitude faster than previous CNN-based methods.
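    The periodic-shuffling step of the sub-pixel convolution layer can be written in a few lines of NumPy. This sketch assumes the network's final convolution has already produced C*r*r low-resolution feature maps; the rearrangement below matches the behaviour of, e.g., torch.nn.PixelShuffle.

```python
import numpy as np

def pixel_shuffle(feature_maps, r):
    """Rearrange a (C*r*r, H, W) stack of LR feature maps into a
    (C, H*r, W*r) HR output -- the periodic shuffling step of the
    sub-pixel convolution layer.
    """
    c_r2, h, w = feature_maps.shape
    c = c_r2 // (r * r)
    x = feature_maps.reshape(c, r, r, h, w)  # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)           # reorder to (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)        # interleave into the HR grid
```

    Because all convolutions run on the LR grid and the upscaling is a mere reordering of values, the layer is cheap, and the r*r channel groups act as learned upscaling filters in place of a fixed bicubic kernel.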

    An Improved Observation Model for Super-Resolution under Affine Motion

    Super-resolution (SR) techniques make use of subpixel shifts between frames in an image sequence to yield higher-resolution images. We propose an original observation model devoted to the case of non-isometric inter-frame motion as required, for instance, in the context of airborne imaging sensors. First, we describe how the main observation models used in the SR literature deal with motion, and we explain why they are not suited to non-isometric motion. Then, we propose an extension of the observation model by Elad and Feuer adapted to affine motion. This model is based on a decomposition of affine transforms into successive shear transforms, each one efficiently implemented by row-by-row or column-by-column 1-D affine transforms. We demonstrate on synthetic and real sequences that our observation model, incorporated in an SR reconstruction technique, leads to better results in the case of variable-scale motion and provides equivalent results in the case of isometric motion.
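    As an illustration of why the shear decomposition is attractive, a single horizontal shear can be applied one row at a time with 1-D interpolation; vertical shears work column-by-column in the same way. This is a sketch of the mechanism only, not the authors' implementation, and it uses linear interpolation with zero padding for brevity.

```python
import numpy as np

def horizontal_shear(image, s):
    """Apply the shear (x, y) -> (x + s*y, y) one row at a time using
    1-D linear interpolation; each row is an independent 1-D affine
    resampling, which is what makes shear factors cheap to implement.
    """
    h, w = image.shape
    x = np.arange(w)
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        # sample row y of the input at the inverse-mapped coordinates x - s*y
        out[y] = np.interp(x - s * y, x, image[y], left=0.0, right=0.0)
    return out
```

    Composing a few such horizontal and vertical shears reproduces a general affine warp while keeping every resampling step strictly one-dimensional.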

    Global motion based video super-resolution reconstruction using discrete wavelet transform

    Unlike existing super-resolution (SR) reconstruction approaches, which work in either the frequency domain or the spatial domain, this paper proposes an improved video SR approach based on both the frequency and spatial domains to improve the spatial resolution and recover the noiseless high-frequency components of observed noisy low-resolution video sequences with global motion. An iterative planar motion estimation algorithm followed by a structure-adaptive normalised convolution reconstruction method is applied to produce the estimated low-frequency sub-band. The discrete wavelet transform is employed to decompose the input low-resolution reference frame into four sub-bands, and the new edge-directed interpolation method is then used to interpolate each of the high-frequency sub-bands. One novelty of this algorithm is the introduction and integration of a nonlinear soft-thresholding process that filters the estimated high-frequency sub-bands in order to better preserve edges and remove potential noise. Another novelty is its flexibility with respect to motion levels, noise levels, wavelet functions, and the number of low-resolution frames used. The performance of the proposed method has been tested on three well-known videos. Both visual and quantitative results demonstrate the high performance and improved flexibility of the proposed technique over conventional interpolation and state-of-the-art wavelet-domain video SR techniques.
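    The soft-thresholding of high-frequency sub-bands can be sketched with PyWavelets. The wavelet choice and threshold value below are illustrative assumptions; the paper's full pipeline additionally involves motion estimation, normalised convolution, and edge-directed interpolation, which are omitted here.

```python
import numpy as np
import pywt

def soft_threshold_subbands(frame, wavelet="db4", thresh=10.0):
    """Decompose a frame with a 2-D DWT, soft-threshold the three
    high-frequency sub-bands (LH, HL, HH), and reconstruct.
    """
    ll, (lh, hl, hh) = pywt.dwt2(frame, wavelet)
    # soft thresholding: shrink coefficients toward zero by `thresh`
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    return pywt.idwt2((ll, (soft(lh), soft(hl), soft(hh))), wavelet)
```

    Shrinking only the detail sub-bands removes small noise-like coefficients while leaving the low-frequency approximation, and hence the overall image content, untouched.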

    Towards quantitative high-throughput 3D localization microscopy

    Advances in light microscopy have allowed circumventing the diffraction barrier, once thought to be the ultimate resolution limit in optical microscopy, and have given rise to various superresolution microscopy techniques. Among them, localization microscopy exploits the blinking of fluorescent molecules to precisely pinpoint the positions of many emitters individually, and subsequently reconstructs a superresolved image from these positions. While localization microscopy enables the study of cellular structures and protein complexes in unprecedented detail, severe technical bottlenecks still reduce the scope of possible applications. In my PhD work, I developed several technical improvements at the level of the microscope to overcome limitations related to the photophysical behaviour of fluorescent molecules, slow acquisition rates, and three-dimensional imaging. I built an illumination system that achieves uniform intensity across the field of view using a multi-mode fiber and a commercial speckle reducer. I showed that it provides uniform photophysics within the illuminated area and is far superior to the common illumination system. It is easy to build and to add to any microscope, and thus greatly facilitates quantitative approaches in localization microscopy. Furthermore, I developed a fully automated superresolution microscope using an open-source software framework. I developed advanced electronics and user-friendly software solutions to enable the design and unsupervised acquisition of complex experimental series. Optimized for long-term stability, the automated microscope is able to image hundreds to thousands of regions over the course of days to weeks. First applied in a system-wide study of clathrin-mediated endocytosis in yeast, the automated microscope allowed the collection of a data set of a size and scope unprecedented in localization microscopy. Finally, I established a fundamentally new approach to obtain three-dimensional superresolution images. Supercritical angle localization microscopy (SALM) exploits the phenomenon of surface-generated fluorescence arising from fluorophores close to the coverslip. SALM offers the theoretical prospect of isotropic spatial resolution with simple instrumentation. Following a first proof-of-concept implementation, I re-engineered the microscope to include adaptive optics in order to reach the full potential of the method. Taken together, I established simple yet powerful solutions for three fundamental technical limitations in localization microscopy regarding illumination, throughput, and resolution. All of them can be combined within the same instrument and can dramatically improve any cutting-edge microscope. This will help to push the limit of the most challenging applications of localization microscopy, including system-wide imaging experiments and structural studies.
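    The reconstruction step common to all localization microscopy, turning a list of fitted emitter positions into a superresolved image, can be sketched as a fine 2-D histogram of localizations. The field-of-view size and rendering pixel size below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def render_localizations(x_nm, y_nm, fov_nm=20000.0, pixel_nm=10.0):
    """Bin single-molecule localization coordinates (in nanometres) into
    a fine 2-D histogram to reconstruct a superresolved image.
    """
    n = int(fov_nm / pixel_nm)  # rendering grid much finer than the camera pixels
    img, _, _ = np.histogram2d(y_nm, x_nm, bins=n,
                               range=[[0, fov_nm], [0, fov_nm]])
    return img
```

    Because each localization is known far more precisely than the diffraction limit, rendering on a 10 nm grid reveals structure that the raw camera frames cannot resolve.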