On the Application of Dictionary Learning to Image Compression
Signal models are a cornerstone of contemporary signal- and image-processing methodology. In this chapter, a particular signal modelling method, called synthesis sparse representation, is studied; it has proven effective for many classes of signals, such as natural images, and has been used successfully in a wide range of applications. In this kind of signal modelling, the signal is represented with respect to a dictionary. The choice of dictionary plays an important role in the success of the entire model. One main approach to dictionary design is based on machine learning, which provides a simple and expressive structure for building adaptable and efficient dictionaries. This chapter focuses on a direct application of sparse representation, namely image compression. Two image codecs based on adaptive sparse representation over a trained dictionary are introduced. Experimental results show that the presented methods outperform existing image coding standards such as JPEG and JPEG2000.
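The synthesis sparse model underlying such codecs can be sketched as follows: a signal is approximated as a linear combination of a few dictionary atoms, found here with Orthogonal Matching Pursuit (a standard sparse coder). The dictionary below is random rather than trained, and all sizes and names are illustrative:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: find a k-sparse code a with x ~ D @ a."""
    residual = x.copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of x on the selected atoms
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        a[:] = 0.0
        a[support] = coeffs
        residual = x - D @ a
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
code = np.zeros(256)
code[[3, 40, 100]] = [1.5, -2.0, 0.8]     # exactly 3-sparse synthetic patch
x = D @ code
a = omp(D, x, k=3)                         # reconstruction error near zero
```

In a codec, the sparse code `a` (support indices plus quantized coefficients) is what gets entropy-coded, rather than the pixels themselves.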
Joint sparse learning with nonlocal and local image priors for image error concealment
The joint sparse representation (JSR) model has recently emerged as a powerful technique with a wide variety of applications. In this paper, the JSR model is extended to error concealment (EC), recovering the original image from its corrupted version. The model is based on jointly learning a dictionary pair and two mapping matrices, trained offline on external training images. Given the trained dictionaries and mappings, restoration is performed by transferring the recovery problem into the sparse representation domain with respect to the trained dictionaries, and then into a common space using the respective mapping matrices. The reconstructed image is obtained by back-projection into the spatial domain. To improve the accuracy and stability of the proposed JSR-based EC algorithm and avoid unexpected artifacts, local and non-local priors are seamlessly integrated into the JSR model. The non-local prior is based on self-similarity within natural images and helps find an accurate sparse representation by taking a weighted average of similar areas throughout the image. The local prior is based on learning the local structural regularity of natural images and helps regularize the sparse representation by exploiting the strong correlation within small local areas of the image. Compared with state-of-the-art EC algorithms, the results show that the proposed method achieves better reconstruction performance in both objective and subjective evaluations.
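The restoration pipeline described above (code the corrupted patch, map its code, decode with the paired dictionary) can be sketched as below. This is a minimal illustration, not the paper's algorithm: the dictionaries and mapping matrix are assumed to be already trained, and a least-squares fit with hard thresholding stands in for a proper sparse solver:

```python
import numpy as np

def jsr_restore(y, Dc, Dp, M):
    """Restore a patch y: code it over the 'corrupted' dictionary Dc,
    map the code with the learned matrix M, decode with the paired
    'pristine' dictionary Dp (all three assumed trained offline)."""
    # sparse code of the corrupted patch (hard-thresholded least squares
    # is a stand-in for a real sparse coder in this sketch)
    a, *_ = np.linalg.lstsq(Dc, y, rcond=None)
    a[np.abs(a) < 0.1 * np.abs(a).max()] = 0.0
    # transfer the code into the pristine domain, then back-project
    return Dp @ (M @ a)
```

The key design point is that the two dictionaries are learned jointly, so a code that explains the corrupted patch remains meaningful after the linear mapping into the pristine code space.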
Context-Patch Face Hallucination Based on Thresholding Locality-Constrained Representation and Reproducing Learning
Face hallucination is a technique that reconstructs high-resolution (HR) faces from low-resolution (LR) faces using prior knowledge learned from HR/LR face pairs. Most state-of-the-art methods leverage position-patch prior knowledge of the human face to estimate the optimal representation coefficients for each image patch. However, they use only position information and usually ignore the context of the image patch. In addition, when confronted with misalignment or the Small Sample Size (SSS) problem, their hallucination performance degrades sharply. To this end, this study incorporates the contextual information of the image patch and proposes a powerful and efficient context-patch based face hallucination approach, namely Thresholding Locality-constrained Representation and Reproducing learning (TLcR-RL). Under the context-patch framework, we advance a thresholding-based representation method to enhance reconstruction accuracy and reduce computational complexity. To further improve performance, we propose a promotion strategy called reproducing learning: adding the estimated HR face to the training set simulates the case in which the HR version of the input LR face is present in the training set, thus iteratively enhancing the final hallucination result. Experiments demonstrate that the proposed TLcR-RL method achieves a substantial improvement in the hallucinated results, both subjectively and objectively. Additionally, the proposed framework is more robust to face misalignment and the SSS problem, and its hallucinated HR faces remain of high quality when the LR test face comes from the real world. The MATLAB source code is available at https://github.com/junjun-jiang/TLcR-RL.
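The core per-patch step, thresholding followed by a locality-constrained representation, can be sketched as follows. This is an illustrative reading of the general locality-constrained scheme, not the released TLcR-RL code; `K`, `tau`, and all array names are assumptions:

```python
import numpy as np

def tlcr_hallucinate(x_lr, Y_lr, Y_hr, K=5, tau=0.04):
    """Reconstruct one HR patch from an LR patch.
    Y_lr, Y_hr: columns are co-located training LR/HR patch pairs."""
    # thresholding: keep only the K training patches closest to the input,
    # which both denoises the support and cuts computation
    d = np.linalg.norm(Y_lr - x_lr[:, None], axis=0)
    idx = np.argsort(d)[:K]
    B, d_K = Y_lr[:, idx], d[idx]
    # locality-constrained least squares: distant patches are penalised more
    C = B - x_lr[:, None]
    G = C.T @ C
    w = np.linalg.solve(G + tau * np.diag(d_K ** 2), np.ones(K))
    w /= w.sum()                       # sum-to-one constraint
    # transfer the LR weights onto the paired HR patches
    return Y_hr[:, idx] @ w
```

The same weights learned on LR patches are applied to the HR patches, which is what lets the co-trained patch pairs carry the LR-to-HR prior.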
Video modeling via implicit motion representations
Video modeling refers to the development of analytical representations that explain the intensity distribution in video signals. Based on such a representation, we can develop algorithms for accomplishing particular video-related tasks; video modeling thus provides a foundation that bridges video data and related tasks. Although many video models have been proposed over the past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.

Most existing video modeling approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of those representations and the suboptimality of motion estimation techniques can degrade such approaches, especially when handling complex motion or non-ideal observed video data. In this thesis, we investigate video modeling without explicit motion representation: motion information is implicitly embedded in the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors.

First, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window under the LMMSE criterion. Incorporating spatio-temporal resampling and a Bayesian fusion scheme, we enhance the modeling capability of STALL on more general videos. Under the STALL framework, video processing algorithms for a variety of applications can be developed by adjusting model parameters (i.e., the size and topology of the model support and the training window). We apply STALL to three video processing problems. The simulation results show that motion information can be efficiently exploited by our implicit motion representation, and that the resampling and fusion do help to enhance the modeling capability of STALL.

Second, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. First, we extend block matching to a more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We enforce a sparsity constraint on a higher-dimensional data array formed by packing the patches of each similar-patch set. We then solve the inference problem by updating the kNN array and the desired signal iteratively. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results on video error concealment, denoising, and deartifacting demonstrate its modeling capability.

Finally, we summarize the two proposed video modeling approaches and point out the prospects for implicit motion representations in applications ranging from low-level to high-level problems.
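The kNN-based patch clustering that replaces block matching in the nonparametric model can be sketched as below: instead of a single best match per frame, the k most similar space-time patches are gathered into one stack, so motion is carried implicitly by patch similarity. The exhaustive strided search and all parameters are illustrative assumptions:

```python
import numpy as np

def knn_patch_group(frames, ref, patch=8, k=6):
    """Collect the k space-time patches most similar to a reference patch.
    frames: (T, H, W) video array; ref: (t, y, x) of the reference patch."""
    t0, y0, x0 = ref
    target = frames[t0, y0:y0 + patch, x0:x0 + patch]
    cands = []
    # coarse strided search over all frames (stride 4 keeps the sketch cheap)
    for t in range(frames.shape[0]):
        for y in range(0, frames.shape[1] - patch + 1, 4):
            for x in range(0, frames.shape[2] - patch + 1, 4):
                p = frames[t, y:y + patch, x:x + patch]
                cands.append((float(np.sum((p - target) ** 2)), t, y, x))
    cands.sort(key=lambda c: c[0])
    # the stacked group is the higher-dimensional array on which a
    # sparsity constraint would then be enforced
    return np.stack([frames[t, y:y + patch, x:x + patch]
                     for _, t, y, x in cands[:k]])
```

In the full scheme this grouping and the signal estimate are updated in alternation, since better estimates yield better neighbor sets and vice versa.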
Super-resolution reconstruction of brain magnetic resonance images via lightweight autoencoder
Magnetic Resonance Imaging (MRI) provides detailed anatomical information, such as images of tissues and organs within the body, that is vital for quantitative image analysis. However, the acquired MR images typically lack adequate resolution because of constraints such as patient comfort and long sampling duration, and processing low-resolution MRI may lead to an incorrect diagnosis. There is therefore a need for super-resolution techniques that recover high-resolution MRI images. Single image super-resolution (SR) is one of the most popular techniques for enhancing image quality, and reconstruction-based SR is a category of single image SR that reconstructs low-resolution MRI images into high-resolution ones. Inspired by advanced deep learning based SR techniques, in this paper we propose an autoencoder-based MRI super-resolution technique that reconstructs high-resolution MRI images from low-resolution inputs. Experimental results on synthetic and real brain MRI images show that our autoencoder-based SR technique surpasses other state-of-the-art techniques in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), the Information Fidelity Criterion (IFC), and computational time.
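The "lightweight autoencoder" idea can be illustrated by its forward pass: a narrow encoder compresses an LR patch into bottleneck features, and a decoder expands them to the HR patch size. The snippet below is an architecture sketch with untrained random weights and assumed dimensions, not the paper's trained network:

```python
import numpy as np

def sr_autoencoder_forward(x_lr, W_enc, W_dec):
    """One forward pass: encoder compresses the LR patch into a small
    bottleneck, decoder expands the features to the HR patch size."""
    h = np.maximum(0.0, W_enc @ x_lr)   # ReLU bottleneck features
    return W_dec @ h                    # linear HR reconstruction

rng = np.random.default_rng(0)
lr_dim, hidden, hr_dim = 64, 32, 256    # 8x8 LR patch -> 16x16 HR patch
W_enc = 0.1 * rng.standard_normal((hidden, lr_dim))
W_dec = 0.1 * rng.standard_normal((hr_dim, hidden))
y_hr = sr_autoencoder_forward(rng.standard_normal(lr_dim), W_enc, W_dec)
# y_hr has shape (256,), i.e. a flattened 16x16 HR patch
```

Training would fit `W_enc`/`W_dec` to minimize the reconstruction error against ground-truth HR patches; the small bottleneck is what keeps the model lightweight.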
Compressive Imaging Using RIP-Compliant CMOS Imager Architecture and Landweber Reconstruction
In this paper, we present a new image sensor architecture for fast and accurate compressive sensing (CS) of natural images. The measurement matrices usually employed in CS CMOS image sensors are recursive pseudo-random binary matrices. We have proved that the restricted isometry property of these matrices is limited by a low sparsity constant. The quality of these matrices is also affected by the non-idealities of pseudo-random number generators (PRNG). To overcome these limitations, we propose a hardware-friendly pseudo-random ternary measurement matrix generated on-chip by means of class III elementary cellular automata (ECA). These ECA exhibit chaotic behavior that emulates random CS measurement matrices better than other PRNGs. We have combined this new architecture with a block-based CS smoothed-projected Landweber reconstruction algorithm. By means of singular value decomposition, we have adapted this algorithm to perform fast and precise reconstruction while operating with binary and ternary matrices. Simulations are provided to qualify the approach. Funding: Ministerio de Economía y Competitividad TEC2015-66878-C3-1-R; Junta de Andalucía TIC 2338-2013; Office of Naval Research (USA) N000141410355; European Union H2020 76586.
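A software sketch of the idea is given below: an elementary cellular automaton in Wolfram class III (rule 30 is used here as a representative chaotic rule; the paper does not specify this particular rule) generates a pseudo-random bit stream, and pairs of bits are mapped to ternary {-1, 0, +1} matrix entries. The bit-pair mapping and the single-seed initialization are assumptions for illustration:

```python
import numpy as np

def eca_step(state, rule=30):
    """One update of an elementary cellular automaton, periodic boundaries.
    Each cell's next value is the rule-table entry for its 3-cell window."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = (left << 2) | (state << 1) | right      # neighborhood as 0..7
    table = (rule >> np.arange(8)) & 1            # rule number as lookup table
    return table[idx]

def ternary_matrix(m, n, rule=30):
    """Pseudo-random ternary {-1, 0, +1} m-by-n measurement matrix
    whose entries are derived from successive ECA generations."""
    state = np.zeros(2 * n, dtype=np.int64)
    state[len(state) // 2] = 1                    # single-seed initial state
    rows = []
    for _ in range(m):
        state = eca_step(state, rule)
        bits = state.reshape(n, 2)                # consume two bits per entry
        rows.append(bits[:, 0] - bits[:, 1])      # (0,1)->-1, (1,0)->+1, else 0
    return np.array(rows)
```

A ternary matrix keeps the hardware appeal of binary matrices (shift-and-add measurements, no multipliers) while the zero entries and sign changes improve its resemblance to a random CS matrix.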
Sparsity Based Spatio-Temporal Video Quality Assessment
In this thesis, we present an abstract view of image and video quality assessment algorithms. Most research in the area of quality assessment focuses on the scenario where the end user is a human observer, and is therefore commonly known as perceptual quality assessment. In this thesis, we discuss full-reference video quality assessment and no-reference image quality assessment.
Inspection and evaluation of artifacts in digital video sources
Streaming digital video content providers such as YouTube, Amazon, Hulu, and Netflix collaborate with production teams to obtain new and old video content. These collaborations lead to an accumulation of video sources, some of which may contain unacceptable visual artifacts. Artifacts can inadvertently enter the video master at any point in the production pipeline, due to any of a number of equipment and user failures. Unfortunately, these artifacts are difficult to detect since no pristine reference exists for comparison, and as of now few automated tools can effectively capture their most common forms. This work studies no-reference video source inspection for generalized artifact detection and subjective quality prediction, which will ultimately inform decisions related to the acquisition of new content.
Automatically identifying the locations and severities of video artifacts is a difficult problem. We have developed a general method for detecting local artifacts by learning differences in the statistics of distorted and pristine video frames. Our model, which we call the Video Impairment Mapper (VID-MAP), produces a full-resolution map of artifact detection probabilities based on comparisons of excitatory and inhibitory convolutional responses. Validation on a large database shows that our method outperforms the previous state of the art, even distortion-specific detectors.
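The excitatory/inhibitory comparison can be sketched as below: the frame is filtered with two kernels, their responses are differenced, and a sigmoid squashes the result into a per-pixel detection probability map. The kernels and the sigmoid readout are illustrative assumptions, not the trained VID-MAP filters:

```python
import numpy as np

def impairment_map(frame, k_exc, k_inh):
    """Full-resolution map of detection probabilities from the difference
    of excitatory and inhibitory filter responses (illustrative sketch)."""
    def conv2(img, k):
        # same-size 2D correlation with edge padding (odd kernels assumed)
        kh, kw = k.shape
        pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
        out = np.zeros(img.shape, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * k)
        return out
    r = conv2(frame, k_exc) - conv2(frame, k_inh)
    return 1.0 / (1.0 + np.exp(-r))      # per-pixel detection probability
```

In the learned model, the two filter banks would be trained so that the excitatory responses fire on distorted statistics and the inhibitory responses on pristine ones.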
A variety of powerful picture quality predictors rely on neuro-statistical models of distortion perception. We extend these principles to video source inspection by coupling spatial divisive normalization with a series of filter banks tuned for artifact detection, implemented in a common convolutional framework. We developed the Video Impairment Detection by SParse Error CapTure (VIDSPECT) model, which leverages discriminative sparse dictionaries tuned to detect specific artifacts. VIDSPECT is simple, highly generalizable, and yields better accuracy than competing methods.
To evaluate the perceived quality of video sources containing artifacts, we built a new digital video database, called the LIVE Video Masters Database, which contains 384 videos affected by the types of artifacts encountered in otherwise pristine digital video sources. We find that VIDSPECT delivers top performance on this database for most artifacts tested, and competitive performance otherwise, using the same basic architecture in all cases.