    An overview of video super-resolution algorithms

    We survey leading artificial-intelligence-based algorithms for video spatial super-resolution, analyzing their network architectures and commonly used loss functions. We also examine the characteristics of algorithms in the emerging field of video space-time super-resolution. This work helps researchers gain a deep understanding of artificial-intelligence-based video super-resolution technology.
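    As an illustration of one loss function commonly covered in such overviews, here is a minimal PyTorch sketch of the Charbonnier loss, a smooth L1 variant widely used for training super-resolution networks. This is a generic example: the epsilon value and tensor shapes are assumptions, not taken from any paper listed here.

    ```python
    import torch

    def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor,
                         eps: float = 1e-6) -> torch.Tensor:
        """Charbonnier (smooth L1) loss, a common super-resolution training loss.

        eps keeps the gradient finite near zero; 1e-6 is a typical choice,
        not a value taken from any specific paper.
        """
        return torch.sqrt((pred - target) ** 2 + eps).mean()

    # Hypothetical usage on a batch of (N, C, H, W) frames:
    pred, target = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    print(charbonnier_loss(pred, target).item())
    ```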

    How Video Super-Resolution and Frame Interpolation Mutually Benefit

    Video super-resolution (VSR) and video frame interpolation (VFI) are interdependent tasks for enhancing videos of low resolution and low frame rate, yet most studies treat them as independent. In this work, we design a spatial-temporal super-resolution network built around the interaction between VSR and VFI. The main idea is to improve the middle frame produced by VFI using the super-resolution (SR) frames and feature maps from VSR. Meanwhile, VFI provides extra information for VSR, so through this interaction the SR of consecutive frames of the original video is also improved by feedback from the generated middle frame. Building on this, our approach leverages a simple interaction between VSR and VFI and achieves state-of-the-art performance on various datasets. Thanks to this simple strategy, our approach is universally applicable to any existing VSR or VFI network and effectively improves their video enhancement performance.
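    The interaction described above can be outlined in code. The sketch below is a PyTorch toy, with tiny stand-in VSR and VFI modules invented for illustration (the paper targets any existing VSR/VFI networks), so all module names, interfaces, and channel counts are assumptions rather than the authors' architecture.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyVSR(nn.Module):
        """Toy VSR stand-in: returns a 2x super-resolved frame and its feature map."""
        def __init__(self, channels: int = 16):
            super().__init__()
            self.feat = nn.Conv2d(3, channels, 3, padding=1)
            self.up = nn.Conv2d(channels, 3 * 4, 3, padding=1)  # for 2x pixel shuffle

        def forward(self, x):
            f = F.relu(self.feat(x))
            return F.pixel_shuffle(self.up(f), 2), f

    class TinyVFI(nn.Module):
        """Toy VFI stand-in: predicts a middle frame from two SR frames plus guidance."""
        def __init__(self, channels: int = 16):
            super().__init__()
            self.mix = nn.Conv2d(6 + channels, 3, 3, padding=1)

        def forward(self, sr0, sr1, guide):
            guide = F.interpolate(guide, size=sr0.shape[-2:], mode="bilinear",
                                  align_corners=False)
            return self.mix(torch.cat([sr0, sr1, guide], dim=1))

    class InteractionSketch(nn.Module):
        """Outline of the VSR<->VFI interaction described in the abstract."""
        def __init__(self, channels: int = 16):
            super().__init__()
            self.vsr, self.vfi = TinyVSR(channels), TinyVFI(channels)
            self.fuse = nn.Conv2d(channels * 2, channels, 3, padding=1)

        def forward(self, frame0, frame1):
            # 1) Super-resolve both input frames, keeping their feature maps.
            sr0, f0 = self.vsr(frame0)
            sr1, f1 = self.vsr(frame1)
            # 2) Interpolate the middle frame, guided by the fused VSR features.
            sr_mid = self.vfi(sr0, sr1, self.fuse(torch.cat([f0, f1], dim=1)))
            # 3) The feedback pass (sr_mid conditioning a second VSR stage) is omitted.
            return sr0, sr_mid, sr1

    model = InteractionSketch()
    frame0, frame1 = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
    sr0, sr_mid, sr1 = model(frame0, frame1)
    print(sr_mid.shape)  # torch.Size([1, 3, 64, 64])
    ```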

    Enhancing Space-time Video Super-resolution via Spatial-temporal Feature Interaction

    The target of space-time video super-resolution (STVSR) is to increase both the frame rate (also referred to as the temporal resolution) and the spatial resolution of a given video. Recent approaches solve STVSR with end-to-end deep neural networks. A popular solution is to first increase the frame rate of the video, then perform feature refinement among the features of different frames, and finally increase the spatial resolution of these features. The temporal correlation among features of different frames is carefully exploited in this process; the spatial correlation among features of different (spatial) resolutions, though also very important, is not emphasized. In this paper, we propose a spatial-temporal feature interaction network that enhances STVSR by exploiting both spatial and temporal correlations among features of different frames and spatial resolutions. Specifically, a spatial-temporal frame interpolation module is introduced to interpolate low- and high-resolution intermediate frame features simultaneously and interactively. Spatial-temporal local and global refinement modules are then deployed to exploit the spatial-temporal correlation among different features for their refinement. Finally, a novel motion consistency loss is employed to enhance the motion continuity among reconstructed frames. We conduct experiments on three standard benchmarks, Vid4, Vimeo-90K and Adobe240, and the results demonstrate that our method improves on state-of-the-art methods by a considerable margin. Our code will be available at https://github.com/yuezijie/STINet-Space-time-Video-Super-resolution.
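    The abstract does not spell out the motion consistency loss; the sketch below shows one plausible form, under the assumption that it matches the frame-to-frame differences of the reconstructed sequence to those of the ground truth. The actual formulation in the paper may differ.

    ```python
    import torch

    def motion_consistency_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
        """A plausible motion consistency term (an assumption, not the paper's
        exact loss): align temporal differences of predicted and ground-truth frames.

        pred, gt: (N, T, C, H, W) video tensors with T >= 2 frames.
        """
        pred_motion = pred[:, 1:] - pred[:, :-1]  # frame-to-frame differences
        gt_motion = gt[:, 1:] - gt[:, :-1]
        return (pred_motion - gt_motion).abs().mean()

    # Hypothetical usage on a 7-frame clip:
    pred, gt = torch.rand(1, 7, 3, 64, 64), torch.rand(1, 7, 3, 64, 64)
    print(motion_consistency_loss(pred, gt).item())
    ```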

    Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging

    The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited to accurately predicting images between image pairs, thereby improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can outperform standard interpolation methods. We benchmark CAFI’s performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity, enabling improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
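    As a usage sketch, the snippet below doubles the temporal resolution of a (T, H, W) microscopy stack by inserting one predicted frame between each consecutive pair, which is the post-acquisition workflow the abstract describes. The `interpolate_pair` callable stands in for a trained CAFI model (Zooming SlowMo or DAIN); its signature, and the naive averaging placeholder, are assumptions for illustration only.

    ```python
    import numpy as np

    def double_frame_rate(stack: np.ndarray, interpolate_pair) -> np.ndarray:
        """Insert one predicted frame between each consecutive pair of a
        (T, H, W) time series, doubling its temporal resolution.

        `interpolate_pair(a, b)` stands in for a trained CAFI model; its
        signature here is an assumption, not the ZeroCostDL4Mic API.
        """
        frames = []
        for a, b in zip(stack[:-1], stack[1:]):
            frames.append(a)
            frames.append(interpolate_pair(a, b))
        frames.append(stack[-1])
        return np.stack(frames)

    def naive_average(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Placeholder only; a trained CAFI network would be called here.
        return (a + b) / 2

    stack = np.random.rand(5, 64, 64).astype(np.float32)
    print(double_frame_rate(stack, naive_average).shape)  # (9, 64, 64)
    ```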