3,606 research outputs found

    Coding artifacts robust resolution up-conversion

    In this paper, an integrated resolution up-conversion and compression-artifact removal algorithm is proposed. Local image patterns are classified as object detail or coding artifact using a combination of structure information and an activity measure. For each pattern class, the weighting coefficients for up-scaling and artifact reduction are optimized by a Least Mean Square (LMS) training technique, which trains on pairs of original images and compressed, down-sampled versions of those originals. The proposed combined algorithm proves more effective than previous classification-based techniques applied in concatenation. Index Terms — Image up-scaling, Compression artifact
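The per-class LMS training described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `adrc_class` classifier is a hypothetical stand-in for the paper's structure/activity classification, and the filter support is assumed to be a 3x3 patch.

```python
import numpy as np

def adrc_class(patch):
    """1-bit ADRC code: threshold each pixel against the patch mean.
    (Hypothetical classifier standing in for the paper's combined
    structure-information and activity-measure classification.)"""
    bits = (patch.flatten() >= patch.mean()).astype(int)
    return int("".join(map(str, bits)), 2)

def train_lms_coefficients(degraded, original, patch=3):
    """Least-squares fit of per-class filter coefficients that map a
    degraded (e.g. compressed, down-sampled, re-upscaled) patch to the
    corresponding original centre pixel."""
    samples, targets = {}, {}
    h, w = degraded.shape
    r = patch // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            p = degraded[y - r:y + r + 1, x - r:x + r + 1]
            c = adrc_class(p)
            samples.setdefault(c, []).append(p.flatten())
            targets.setdefault(c, []).append(original[y, x])
    coeffs = {}
    for c in samples:
        X = np.asarray(samples[c])
        t = np.asarray(targets[c])
        # Normal-equation solution of the LMS objective for this class
        coeffs[c], *_ = np.linalg.lstsq(X, t, rcond=None)
    return coeffs
```

At run time, each input patch would be classified the same way and filtered with its class's coefficient vector, so up-scaling and artifact reduction share one trained filter bank.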

    Video enhancement: content classification and model selection

    The purpose of video enhancement is to improve subjective picture quality. The field covers a broad range of research topics, such as removing noise from video, highlighting specified features, and improving the appearance or visibility of video content. The common difficulty is how to make images or videos subjectively better. Traditional approaches involve many iterations between subjective assessment experiments and algorithm redesigns, which are very time-consuming. Researchers have attempted to design a video quality metric to replace subjective assessment, but so far without success. As a way to avoid heuristics in enhancement algorithm design, least mean square methods have received considerable attention: they optimize filter coefficients automatically by minimizing, through training, the difference between processed videos and desired versions. However, these methods are only optimal on average, not locally. To solve this problem, one can apply least mean square optimization to individual categories classified by local image content. The most interesting example is Kondo's concept of local content adaptivity for image interpolation, which we found can be generalized into an ideal framework for content-adaptive video processing. We identify two parts in this concept: content classification and adaptive processing. By exploring new classifiers for the content classification and new models for the adaptive processing, we have generalized the framework to more enhancement applications.
    For content classification, new classifiers have been proposed to distinguish different image degradations such as coding artifacts and focal blur. For coding artifacts, a novel classifier has been proposed based on the combination of local structure and contrast, which does not require coding-block-grid detection. For focal blur, we have proposed a novel edge-based local blur estimation method that does not require edge-orientation detection and gives more robust blur estimates. With these classifiers, the proposed framework has been extended to coding-artifact-robust enhancement and blur-dependent enhancement. As content adaptivity extends to more image features, the number of content classes can grow significantly; we show that it is possible to reduce the number of classes without sacrificing much performance.
    For model selection, we have introduced several nonlinear filters into the proposed framework. We have also proposed a new type of nonlinear filter, the trained bilateral filter, which combines the advantages of the original bilateral filter and least mean square optimization. With these nonlinear filters, the proposed framework shows better performance than with linear filters. Furthermore, we have shown a proof-of-concept for a trained approach to contrast enhancement via supervised learning, in which transfer curves are optimized based on the classification of global or local image content. This shows that the desired effect can be obtained through the trained approach by learning from computationally expensive enhancement algorithms or expert-tuned examples. Looking back, the thesis presents a single versatile framework for video enhancement applications. It widens the application scope by including new content classifiers and new processing models, and offers scalability through solutions that reduce the number of classes, which can greatly accelerate algorithm design.
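The bilateral filter that the trained variant builds on can be sketched as follows. This is the standard fixed-weight bilateral filter only; the thesis's trained bilateral filter would replace or reshape these weights via LMS training per content class, with details not given in the abstract.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Plain bilateral filter: each output pixel is a weighted average
    where weights combine spatial distance (sigma_s) and intensity
    difference (sigma_r), so edges are preserved while flat regions
    are smoothed. In a trained bilateral filter, these fixed Gaussian
    weights would be learned per content class (assumed, not from
    the abstract)."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight: penalize intensity differences from the centre
            rng_w = np.exp(-((window - img[y, x]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * window).sum() / wgt.sum()
    return out
```

On a step edge, the range weight suppresses contributions from the other side of the edge, which is exactly the edge-preserving behaviour a linear LMS filter alone cannot provide.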

    Adaptive filtering techniques for acquisition noise and coding artifacts of digital pictures

    The quality of digital pictures is often degraded by various processes (e.g., acquisition or capturing, compression, filtering, transmission). In digital image/video processing systems, random noise appearing in images is mainly generated during the capturing process, while artifacts (or distortions) are generated by compression or filtering. This dissertation examines digital image/video quality degradations and proposes post-processing techniques for coding-artifact and acquisition-noise reduction in images and videos. Three major issues associated with image/video degradation are addressed in this work. The first issue is the temporal fluctuation artifact in digitally compressed videos. In the state-of-the-art video coding standard, H.264/AVC, temporal fluctuations are noticeable between intra picture frames or between an intra picture frame and neighbouring inter picture frames. To resolve this problem, a novel robust statistical temporal filtering technique is proposed. It utilises a re-descending robust statistical model with an outlier-rejection feature to reduce temporal fluctuations while preserving picture details and motion sharpness. PSNR and sum of square difference (SSD) results show the improvement of the proposed filters over other benchmark filters. Even for videos containing high motion, the proposed temporal filter performs well in fluctuation reduction and motion-clarity preservation compared with other baseline temporal filters. The second issue concerns both the spatial and temporal artifacts (e.g., blocking, ringing, and temporal fluctuation artifacts) appearing in compressed video. To address this issue, a novel joint spatial and temporal filtering framework is constructed for artifact reduction. Both the spatial and the temporal filters employ a re-descending robust statistical model (RRSM) in the filtering processes.
    The robust statistical spatial filter (RSSF) reduces spatial blocking and ringing artifacts, whilst the robust statistical temporal filter (RSTF) suppresses the temporal fluctuations. Performance evaluations demonstrate that the proposed joint spatio-temporal filter is superior to the H.264 loop filter in terms of spatial and temporal artifact reduction and motion-clarity preservation. The third issue is random noise, commonly modeled as mixed Gaussian and impulse noise (MGIN), which arises in the image/video acquisition process. An effective way to estimate MGIN is through a robust estimator, the median absolute deviation normalized (MADN). The MADN estimator is used to separate the MGIN model into impulse and additive Gaussian noise portions. Based on this estimation, the proposed filtering process combines a modified median filter for impulse noise reduction and a DCT-transform-based denoising filter for additive Gaussian noise reduction. However, this DCT-based denoising filter produces temporal fluctuations for videos; to solve this problem, a temporal filter is added to the filtering process. Another joint spatio-temporal filtering scheme is therefore built to achieve the best visual quality of denoised videos. Extensive experiments show that the proposed joint spatio-temporal filtering scheme outperforms other benchmark filters in noise and distortion suppression.
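The MADN-based separation of mixed Gaussian and impulse noise can be sketched as follows. This is a minimal illustration under assumed details: the decision rule (residual against a local median, thresholded at `k` times the MADN estimate) and the simple median replacement are stand-ins for the dissertation's modified median filter, and the threshold `k=3.0` is an assumed parameter.

```python
import numpy as np

def madn(x):
    """Median absolute deviation, normalized by 1.4826 so it is a
    consistent estimator of the Gaussian standard deviation."""
    m = np.median(x)
    return 1.4826 * np.median(np.abs(x - m))

def remove_impulses(img, k=3.0, radius=1):
    """Flag pixels whose deviation from the local median exceeds
    k * MADN as impulses, and replace only those with the local
    median; the remaining (approximately Gaussian) noise would then
    go to a separate denoiser such as a DCT-domain filter."""
    pad = np.pad(img, radius, mode="reflect")
    h, w = img.shape
    med = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            med[y, x] = np.median(pad[y:y + 2 * radius + 1,
                                      x:x + 2 * radius + 1])
    resid = img - med
    sigma = madn(resid)            # robust Gaussian-noise scale estimate
    mask = np.abs(resid) > k * sigma   # impulse outliers
    out = img.copy()
    out[mask] = med[mask]          # median-replace impulses only
    return out, sigma
```

Because the median and MADN are both robust statistics, a few large impulses barely perturb the scale estimate, so the threshold separates the impulse portion from the additive Gaussian portion cleanly.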

    Image representation and compression using steered Hermite transforms
