
    Variational models for joint subsampling and reconstruction of turbulence-degraded images

    Turbulence-degraded image frames are distorted by both turbulent deformations and space-time-varying blurs. To suppress these effects, we propose a multi-frame reconstruction scheme to recover a latent image from the observed image sequence. Recent approaches are commonly based on registering each frame to a reference image, from which geometric turbulent deformations can be estimated and a sharp image restored. A major challenge is that a fine reference image is usually unavailable, as every turbulence-degraded frame is distorted. A high-quality reference image is crucial for the accurate estimation of geometric deformations and the fusion of frames. Moreover, it is unlikely that all frames in the sequence are useful, so frame selection is necessary and highly beneficial. In this work, we propose a variational model for joint subsampling of frames and extraction of a clear image. A fine image and a suitable subsample are obtained simultaneously by iteratively reducing an energy functional. The energy consists of a fidelity term measuring the discrepancy between the extracted image and the subsampled frames, together with regularization terms on the extracted image and the subsample. Different choices of fidelity and regularization terms are explored. By carefully selecting suitable frames and extracting the image, the quality of the reconstructed image can be significantly improved. Extensive experiments demonstrate the efficacy of the proposed model. In addition, the extracted subsamples and images can be fed into existing algorithms to produce improved results.
    Comment: arXiv admin note: text overlap with arXiv:1704.0314
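The alternating structure described in this abstract (fix the latent image, update the subsample; fix the subsample, update the image) can be sketched in a few lines. This is a toy illustration, not the paper's actual energy: the quadratic fidelity, the mean update, and the function name are assumptions standing in for the general fidelity and regularization terms.

```python
import numpy as np

def joint_subsample_extract(frames, k, n_iters=10):
    """Toy alternating minimization: (1) with the latent image fixed,
    pick the k frames with the smallest fidelity term (the subsample);
    (2) with the subsample fixed, update the latent image to the mean
    of the chosen frames (the minimizer of a quadratic fidelity)."""
    u = frames.mean(axis=0)                           # initial latent image
    chosen = np.arange(len(frames))[:k]
    for _ in range(n_iters):
        errs = ((frames - u) ** 2).sum(axis=(1, 2))   # per-frame discrepancy
        chosen = np.argsort(errs)[:k]                 # subsample update
        u = frames[chosen].mean(axis=0)               # latent-image update
    return u, np.sort(chosen)
```

With heavily distorted frames in the stack, the scheme drops them from the subsample and the extracted image converges to the average of the consistent frames.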

    Restoration of Atmospheric Turbulence-distorted Images via RPCA and Quasiconformal Maps

    We address the problem of restoring a high-quality image from an observed image sequence strongly distorted by atmospheric turbulence. A novel algorithm is proposed to reduce geometric distortion as well as space-and-time-varying blur due to strong turbulence. By considering a suitable energy functional, our algorithm first obtains a sharp reference image and a subsampled image sequence containing sharp and mildly distorted frames relative to the reference image. The subsampled sequence is then stabilized by applying Robust Principal Component Analysis (RPCA) to the deformation fields between frames and warping each frame by a quasiconformal map associated with the low-rank part of the deformation matrix. After the frames are registered to the reference image, their low-rank part is deblurred via blind deconvolution, and the deblurred frames are fused with the enhanced sparse part. Experiments have been carried out on both synthetic and real turbulence-distorted videos. The results demonstrate that our method is effective in alleviating distortion and blur, restoring image details and enhancing visual quality.
    Comment: 21 pages, 24 figures
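The RPCA step splits a data matrix into a low-rank part (the stable deformation) and a sparse part (outlier motion). A minimal sketch, assuming a simple alternating-thresholding scheme with heuristic thresholds; the paper would use a proper RPCA solver on the deformation fields, not this toy:

```python
import numpy as np

def soft(x, t):
    """Entrywise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca(D, lam=None, mu=None, n_iters=100):
    """Toy RPCA split D ~ L (low-rank) + S (sparse): alternate
    singular-value thresholding for L and entrywise soft-thresholding
    for S.  Threshold choices below are heuristic assumptions."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * np.abs(D).mean()
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * soft(s, mu)) @ Vt        # shrink singular values -> low rank
        S = soft(D - L, lam * mu)         # shrink entries -> sparse part
    return L, S
```

On a rank-1 matrix plus a single large spike, the spike ends up in S and the smooth structure in L.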

    Motion Corrected Multishot MRI Reconstruction Using Generative Networks with Sensitivity Encoding

    Multishot Magnetic Resonance Imaging (MRI) is a promising imaging modality that can produce a high-resolution image with relatively short data-acquisition time. Its downside is that it is very sensitive to subject motion: even small movements during the scan can produce artifacts in the final MR image that may lead to misdiagnosis. Numerous efforts have been made to address this issue; however, existing proposals are limited in how much motion they can correct and in the required computational time. In this paper, we propose a novel generative-network-based conjugate gradient SENSE (CG-SENSE) reconstruction framework for motion correction in multishot MRI. The framework first employs CG-SENSE reconstruction to produce the motion-corrupted image, and a generative adversarial network (GAN) then corrects the motion artifacts. The proposed method has been rigorously evaluated on synthetically corrupted data with varying degrees of motion, numbers of shots, and encoding trajectories. Our quantitative and qualitative/visual analyses establish that the proposed method is significantly more robust than state-of-the-art motion-correction techniques, outperforming them while reducing computational time severalfold.
    Comment: This paper has been published in Scientific Reports Journal
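The CG-SENSE stage solves a linear system with a symmetric positive-definite operator by conjugate gradients. A generic sketch of that core iteration follows; real CG-SENSE applies the operator implicitly through FFTs and coil-sensitivity maps rather than forming an explicit matrix, so the dense `A` here is purely illustrative.

```python
import numpy as np

def cg_solve(A, b, n_iters=50, tol=1e-20):
    """Plain conjugate gradient for A x = b, A symmetric positive
    definite -- the linear solver at the heart of CG-SENSE."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(n_iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate direction update
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most as many iterations as the system dimension, which is why it suits large reconstruction problems where only a few iterations are affordable.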

    CNN based dense underwater 3D scene reconstruction by transfer learning using bubble database

    Dense 3D shape acquisition of a swimming human or live fish is an important research topic for sports, biological science, and other fields. Active stereo sensors are usually used for this purpose in air; however, they cannot be applied in underwater environments because of refraction, strong light attenuation, and severe interference from bubbles. Passive stereo is a simple solution for capturing dynamic underwater scenes, but it cannot recover shapes with textureless surfaces or irregular reflections. Recently, a stereo camera pair with a pattern projector that adds artificial texture to the objects has been proposed. However, to use such a system underwater, several problems must be compensated for, namely disturbance by water fluctuation and bubbles. A simple solution is to use a convolutional neural network (CNN) for stereo to cancel the effects of bubbles and/or water fluctuation. Since it is not easy to train a CNN with a small database with large variation, we develop a special bubble-generation device to efficiently create a real bubble database with multiple bubble sizes and densities. In addition, we propose a transfer-learning technique for a multi-scale CNN to effectively remove bubbles and projected patterns on the object. Further, we develop a real system and capture a live swimming human, which has not been done before. Experiments are conducted to show the effectiveness of our method compared with state-of-the-art techniques.
    Comment: IEEE Winter Conference on Applications of Computer Vision. arXiv admin note: text overlap with arXiv:1808.0834

    A New Adaptive Video Super-Resolution Algorithm With Improved Robustness to Innovations

    In this paper, a new video super-resolution reconstruction (SRR) method with improved robustness to outliers is proposed. Although R-LMS is one of the SRR algorithms with the best reconstruction quality for its computational cost, and is naturally robust to registration inaccuracies, its performance is known to degrade severely in the presence of innovation outliers. By studying the proximal-point cost-function representation of the R-LMS iterative equation, a better understanding of its performance under different situations is attained. Using statistical properties of typical innovation outliers, a new cost function is then proposed and two new algorithms are derived, which offer improved robustness to outliers while maintaining computational costs comparable to that of R-LMS. Monte Carlo simulation results illustrate that the proposed method outperforms the traditional and regularized versions of LMS, and is competitive with state-of-the-art SRR methods at a much smaller computational cost.
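An LMS-style super-resolution iteration takes a gradient step on a data-fidelity term driven by the innovation (the part of the new low-resolution frame the current estimate cannot explain) plus a regularizer. The 1-D sketch below is a simplified stand-in, assuming a fixed blur-and-decimate operator and a first-difference penalty; the actual R-LMS additionally handles inter-frame motion and, in the paper's variants, outliers in the innovation.

```python
import numpy as np

def r_lms_step(x, y, H, L, mu=0.5, alpha=0.01):
    """One R-LMS-style iteration (simplified): gradient step on
    ||y - Hx||^2 plus a Tikhonov-like penalty ||Lx||^2.  The
    innovation r = y - Hx is what drives the update."""
    r = y - H @ x                                   # innovation residual
    return x + mu * (H.T @ r) - mu * alpha * (L.T @ (L @ x))

def make_H(n):
    """Toy degradation: average adjacent pairs (blur + decimate by 2)."""
    H = np.zeros((n // 2, n))
    for i in range(n // 2):
        H[i, 2 * i] = H[i, 2 * i + 1] = 0.5
    return H
```

Iterating the step drives the residual toward the regularized fixed point.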

    Optimized Pre-Compensating Compression

    In imaging systems, following acquisition, an image/video is transmitted or stored and eventually presented to human observers using different and often imperfect display devices. While the resulting quality of the output image may be severely affected by the display, this degradation is usually ignored in the preceding compression. In this paper we model the sub-optimality of the display device as a known degradation operator applied to the decompressed image/video. We assume the use of a standard compression path and augment it with a suitable pre-processing procedure, providing a compressed signal intended to compensate for the degradation without any post-filtering. Our approach originates from an intricate rate-distortion problem, optimizing the modifications to the input image/video for the best end-to-end performance. We address this seemingly computationally intractable problem using the alternating direction method of multipliers (ADMM), leading to a procedure in which a standard compression technique is applied iteratively. We demonstrate the proposed method by adjusting HEVC image/video compression to compensate for post-decompression visual effects due to a common type of display. In particular, we use our method to reduce the motion blur perceived while viewing video on LCD devices. The experiments establish our method as a leading approach for preprocessing high-bit-rate compression to counterbalance a post-decompression degradation.
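The core pre-compensation idea can be isolated from the codec: given a known post-decompression degradation operator, solve for the signal that, once degraded, reproduces the intended one. The sketch below uses plain gradient descent on the least-squares objective as an assumption-laden stand-in; the paper couples this with a rate-distortion term and solves the joint problem with ADMM around a standard codec, which is omitted here.

```python
import numpy as np

def precompensate(x, G, n_iters=500, step=1.0):
    """Find z minimizing ||G z - x||^2, so that the degraded display
    output G z approximates the intended signal x.  G must be a known
    (and here well-conditioned) linear degradation operator."""
    z = x.copy()
    for _ in range(n_iters):
        z -= step * (G.T @ (G @ z - x))   # gradient of the quadratic
    return z
```

For an invertible blur this amounts to pre-sharpening: `z` exaggerates detail exactly enough for the display blur to cancel it.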

    Distortion-driven Turbulence Effect Removal using Variational Model

    It remains a challenge to simultaneously remove geometric distortion and space-time-varying blur in frames captured through a turbulent atmospheric medium. To solve, or at least reduce, these effects, we propose a new scheme to recover a latent image from observed frames by integrating a new variational model and distortion-driven spatial-temporal kernel regression. The proposed scheme first constructs a high-quality reference image from the observed frames using low-rank decomposition. Then, to generate an improved registered sequence, the reference image is iteratively optimized using a variational model containing a new spatial-temporal regularization. The proposed fast algorithm solves this model efficiently without using partial differential equations (PDEs). Next, to reduce blur variation, distortion-driven spatial-temporal kernel regression is carried out to fuse the registered sequence into one image by introducing the concept of the near-stationary patch. Applying a blind deconvolution algorithm to the fused image produces the final output. Extensive experimental testing shows, both qualitatively and quantitatively, that the proposed method can effectively alleviate distortion and blur and recover details of the original scene better than state-of-the-art methods.
    Comment: 28 pages, 15 figures
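The reference-construction step rests on the observation that, across a frame stack, the stable scene content dominates the low-rank part while random turbulent wiggles do not. A minimal rank-1 sketch of that idea, assuming the stacked, vectorized frames are well approximated by a single singular direction (the paper's low-rank decomposition is more elaborate):

```python
import numpy as np

def lowrank_reference(frames):
    """Reference image as the rank-1 component of the stacked frames:
    zero-mean turbulent perturbations mostly fall outside the dominant
    singular direction of the frame matrix."""
    F = frames.reshape(len(frames), -1)          # one row per frame
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    ref = (s[0] * U[:, 0].mean()) * Vt[0]        # rank-1 fit averaged over frames
    return ref.reshape(frames.shape[1:])
```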

    3D Surface Reconstruction of Underwater Objects

    In this paper, we propose a novel technique to reconstruct the 3D surface of an underwater object using stereo images. Reconstructing the 3D surface of an underwater object is a challenging task due to the degraded quality of underwater images. There are various reasons for this degradation, e.g., non-uniform illumination of light on the surface of objects, scattering, and absorption effects. Floating particles present underwater produce Gaussian noise in the captured images, which further degrades their quality. The degraded underwater images are preprocessed by applying homomorphic filtering, wavelet denoising, and anisotropic filtering sequentially. An uncalibrated rectification technique is then applied to the preprocessed images so that the left and right images lie on a common plane. To find corresponding points between the left and right images, we apply a dense stereo matching technique, i.e., the graph-cut method. Finally, we estimate depth using triangulation. Experimental results show that the proposed method accurately reconstructs the 3D surfaces of underwater objects from captured stereo images.
    Comment: International Journal of Computer Applications (2012
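The first preprocessing step, homomorphic filtering, treats an image as illumination times reflectance: taking the log turns the product into a sum, a high-emphasis filter attenuates the slowly varying illumination and boosts reflectance detail, and exponentiation maps back. A sketch with an assumed Gaussian high-emphasis transfer function; the cutoff and gain values are illustrative, not the paper's.

```python
import numpy as np

def homomorphic_filter(img, cutoff=0.1, gamma_l=0.5, gamma_h=1.5):
    """Homomorphic filtering: log -> frequency-domain high-emphasis
    filter (gain gamma_l at DC, rising to gamma_h at high frequency)
    -> exponentiate back."""
    log_img = np.log1p(img.astype(float))        # multiplicative -> additive
    F = np.fft.fft2(log_img)
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    d2 = fy**2 + fx**2                           # squared frequency radius
    H = gamma_l + (gamma_h - gamma_l) * (1 - np.exp(-d2 / (2 * cutoff**2)))
    out = np.expm1(np.real(np.fft.ifft2(H * F)))
    return out
```

At zero frequency the transfer function equals `gamma_l`, so the DC of the log-image (overall illumination level) is scaled down while fine detail is amplified.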

    Bringing Alive Blurred Moments

    We present a solution for extracting a video from a single motion-blurred image, sequentially reconstructing the clear views of the scene as beheld by the camera during the exposure. We first learn a motion representation from sharp videos in an unsupervised manner by training a convolutional recurrent video autoencoder network on the surrogate task of video reconstruction. Once trained, it is employed for guided training of a motion encoder for blurred images. This network extracts embedded motion information from the blurred image to generate a sharp video in conjunction with the trained recurrent video decoder. As an intermediate step, we also design an efficient architecture that enables real-time single-image deblurring and outperforms competing methods across all factors: accuracy, speed, and compactness. Experiments on real scenes and standard datasets demonstrate the superiority of our framework over the state of the art and its ability to generate a plausible sequence of temporally consistent sharp frames.
    Comment: CVPR 201

    MARLow: A Joint Multiplanar Autoregressive and Low-Rank Approach for Image Completion

    In this paper, we propose a novel multiplanar autoregressive (AR) model to exploit the correlation in cross-dimensional planes of a similar-patch group collected in an image, which has long been neglected by previous AR models. On that basis, we present a joint multiplanar AR and low-rank based approach (MARLow) for image completion from random sampling, which exploits the nonlocal self-similarity within natural images more effectively. Specifically, the multiplanar AR model constrains the local stationarity in different cross-sections of the patch group, while the low-rank minimization captures the intrinsic coherence of nonlocal patches. The proposed approach can be readily extended to multichannel images (e.g. color images) by simultaneously considering the correlation in different channels. Experimental results demonstrate that the proposed approach significantly outperforms state-of-the-art methods, even when the pixel-missing rate is as high as 90%.
    Comment: 16 pages, 9 figures
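The low-rank half of the approach can be illustrated with a hard-impute sketch for completion from random sampling: alternately project onto rank-r matrices (truncated SVD) and re-impose the observed entries. This covers only the low-rank minimization; the multiplanar AR constraint and the patch grouping that make MARLow effective are omitted, and the solver choice here is an assumption.

```python
import numpy as np

def complete_lowrank(M_obs, mask, rank=1, n_iters=200):
    """Hard-impute matrix completion: fill missing entries with the
    current best rank-r fit, keeping observed entries fixed."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = np.where(mask, M_obs, L)               # re-impose observed entries
    return X
```

On a genuinely low-rank matrix with scattered missing entries (no fully hidden row or column), the iteration recovers the missing values.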