
    Dynamic Texture Map Based Artifact Reduction For Compressed Videos

    This paper proposes a method of artifact reduction in compressed videos using a dynamic texture map together with artifact maps and 3-D fuzzy filters. To better preserve details during the filtering process, the authors introduce a novel method, called the dynamic texture map, to construct a texture map for video sequences. Temporal artifacts such as flicker and mosquito artifacts are then estimated using advanced flicker maps and mosquito maps. These maps, combined with fuzzy filters, are applied to intra-frame and inter-frame pixels to enhance compressed videos. Simulation results verify the improved performance of the proposed fuzzy filtering scheme in terms of visual quality, SSIM, PSNR and flicker metrics in comparison with existing state-of-the-art methods.
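The core of such a fuzzy filter (not the authors' exact formulation, which also incorporates the texture and artifact maps) is a weighted average in which each spatio-temporal neighbour is weighted by a membership function of its intensity difference from the centre pixel, so genuine edges and texture are left largely untouched. A minimal one-dimensional sketch, with illustrative window and spread parameters:

```python
import math

def fuzzy_filter_1d(samples, center_idx, spread=10.0, radius=2):
    """Fuzzy-weighted average over a pixel's neighbourhood.

    Neighbours whose intensity is close to the centre pixel get weight
    near 1; dissimilar neighbours (likely edges or texture) are
    down-weighted, so detail is preserved while noise-like variation
    is smoothed.
    """
    center = samples[center_idx]
    num = 0.0
    den = 0.0
    for k in range(max(0, center_idx - radius),
                   min(len(samples), center_idx + radius + 1)):
        # Gaussian membership function of the intensity difference
        w = math.exp(-((samples[k] - center) ** 2) / (2.0 * spread ** 2))
        num += w * samples[k]
        den += w
    return num / den
```

Applied along the temporal axis of a pixel, this damps flicker between frames; applied spatially, it smooths ringing while the near-zero weights given to very dissimilar neighbours keep edges sharp.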

    PEA265: Perceptual Assessment of Video Compression Artifacts

    The most widely used video encoders share a common hybrid coding framework that includes block-based motion estimation/compensation and block-based transform coding. Despite their high coding efficiency, the encoded videos often exhibit visually annoying artifacts, denoted as Perceivable Encoding Artifacts (PEAs), which significantly degrade the visual Quality-of-Experience (QoE) of end users. To monitor and improve visual QoE, it is crucial to develop subjective and objective measures that can identify and quantify various types of PEAs. In this work, we make the first attempt to build a large-scale subject-labelled database composed of H.265/HEVC compressed videos containing various PEAs. The database, namely the PEA265 database, includes 4 types of spatial PEAs (i.e., blurring, blocking, ringing and color bleeding) and 2 types of temporal PEAs (i.e., flickering and floating), each containing at least 60,000 image or video patches with positive and negative labels. To objectively identify these PEAs, we train Convolutional Neural Networks (CNNs) using the PEA265 database. It appears that the state-of-the-art ResNeXt is capable of identifying each type of PEA with high accuracy. Furthermore, we define PEA pattern and PEA intensity measures to quantify PEA levels of compressed video sequences. We believe that the PEA265 database and our findings will benefit the future development of video quality assessment methods and perceptually motivated video encoders. Comment: 10 pages, 15 figures, 4 tables
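As a rough illustration of how per-patch CNN decisions could be aggregated into frame-level measures, the sketch below computes an intensity score as the fraction of positive patches and a pattern as the set of artifact types exceeding a threshold. The function names, aggregation rule, and threshold are assumptions for illustration only; the paper's actual PEA pattern and PEA intensity definitions may differ.

```python
def pea_intensity(patch_predictions):
    """Illustrative PEA-intensity score: the fraction of patches in a
    frame that a per-artifact classifier flags as positive.

    patch_predictions: list of 0/1 labels, one per patch.
    (Hypothetical aggregation; not the paper's exact definition.)
    """
    if not patch_predictions:
        return 0.0
    return sum(patch_predictions) / len(patch_predictions)

def pea_pattern(scores_by_type, threshold=0.1):
    """Illustrative PEA pattern: the set of artifact types whose
    intensity exceeds a (hypothetical) detection threshold."""
    return {t for t, s in scores_by_type.items() if s > threshold}
```

A per-type classifier (e.g. the ResNeXt models the paper trains) would supply `patch_predictions` for each of the six artifact types; the pattern then summarises which artifacts dominate a sequence.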

    Adaptive filtering techniques for acquisition noise and coding artifacts of digital pictures

    The quality of digital pictures is often degraded by various processes (e.g., acquisition or capturing, compression, filtering, transmission). In digital image/video processing systems, random noise appearing in images is mainly generated during the capturing process, while artifacts (or distortions) are generated by compression or filtering processes. This dissertation examines digital image/video quality degradations and possible post-processing solutions for coding-artifact and acquisition-noise reduction in images/videos. Three major issues associated with image/video degradation are addressed in this work. The first issue is the temporal fluctuation artifact in digitally compressed videos. In the state-of-the-art video coding standard, H.264/AVC, temporal fluctuations are noticeable between intra-picture frames or between an intra-picture frame and neighbouring inter-picture frames. To resolve this problem, a novel robust statistical temporal filtering technique is proposed. It utilises a re-descending robust statistical model with an outlier-rejection feature to reduce the temporal fluctuations while preserving picture details and motion sharpness. PSNR and sum of squared differences (SSD) show the improvement of the proposed filters over other benchmark filters. Even for videos containing high motion, the proposed temporal filter shows good performance in fluctuation reduction and motion-clarity preservation compared with other baseline temporal filters. The second issue concerns both the spatial and temporal artifacts (e.g., blocking, ringing, and temporal fluctuation artifacts) appearing in compressed video. To address this issue, a novel joint spatial and temporal filtering framework is constructed for artifact reduction. Both the spatial and the temporal filters employ a re-descending robust statistical model (RRSM) in the filtering processes. The robust statistical spatial filter (RSSF) reduces spatial blocking and ringing artifacts, whilst the robust statistical temporal filter (RSTF) suppresses the temporal fluctuations. Performance evaluations demonstrate that the proposed joint spatio-temporal filter is superior to the H.264 loop filter in terms of spatial and temporal artifact reduction and motion-clarity preservation. The third issue is random noise, commonly modeled as mixed Gaussian and impulse noise (MGIN), which arises in the image/video acquisition process. An effective way to estimate MGIN is through a robust estimator, the median absolute deviation normalized (MADN). The MADN estimator is used to separate the MGIN model into an impulse portion and an additive Gaussian noise portion. Based on this estimation, the proposed filtering process is composed of a modified median filter for impulse-noise reduction and a DCT-based denoising filter for additive Gaussian noise reduction. However, this DCT-based denoising filter produces temporal fluctuations for videos. To solve this problem, a temporal filter is added to the filtering process. Therefore, another joint spatio-temporal filtering scheme is built to achieve the best visual quality of denoised videos. Extensive experiments show that the proposed joint spatio-temporal filtering scheme outperforms other benchmark filters in noise and distortion suppression.
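The MADN estimator mentioned above has a standard closed form: the median absolute deviation from the median, divided by 0.6745 so that it is a consistent estimate of the standard deviation under Gaussian noise. A small sketch, with an illustrative 3-sigma rule for splitting off the impulse portion (the thesis's exact separation rule is not reproduced here):

```python
import statistics

def madn(values):
    """Median absolute deviation, normalized (MADN): a robust estimate
    of the Gaussian noise standard deviation. The constant 0.6745 makes
    MADN consistent with the std-dev for normally distributed data, and
    the use of medians makes it insensitive to impulse outliers."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values) / 0.6745

def split_impulse(values, k=3.0):
    """Label samples farther than k*MADN from the median as impulse
    noise; the rest are treated as the additive-Gaussian portion.
    (Illustrative threshold rule, not necessarily the thesis's.)"""
    med = statistics.median(values)
    sigma = madn(values)
    impulse = [v for v in values if abs(v - med) > k * sigma]
    gaussian = [v for v in values if abs(v - med) <= k * sigma]
    return impulse, gaussian
```

Because the median ignores extreme values, a few impulse-corrupted pixels do not inflate the sigma estimate the way they would inflate a sample standard deviation, which is exactly why a robust estimator is needed for the mixed-noise model.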

    Bilateral filter in image processing

    The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing edges. It has been shown to be an effective image denoising technique and can also be applied to blocking-artifact reduction. An important issue in applying the bilateral filter is the selection of the filter parameters, which affect the results significantly. Another research interest is accelerating its computation. This thesis makes three main contributions. The first is an empirical study of optimal bilateral filter parameter selection in image denoising. I propose an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the low-frequency sub-bands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. The second contribution is a spatially adaptive method to reduce compression artifacts. To avoid over-smoothing texture regions and to effectively eliminate blocking and ringing artifacts, texture regions and block-boundary discontinuities are first detected; these are then used to control/adapt the spatial and intensity parameters of the bilateral filter. The test results show that the adaptive method improves the quality of restored images significantly more than the standard bilateral filter. The third contribution is an improvement of the fast bilateral filter, in which I use a combination of multiple windows to approximate the Gaussian filter more precisely.
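The bilateral filter itself has a compact definition: each output sample is a normalized average of its neighbours, weighted by the product of a spatial (closeness) Gaussian and a range (intensity-similarity) Gaussian. A minimal 1-D sketch with illustrative sigma values:

```python
import math

def bilateral_1d(signal, sigma_s=1.0, sigma_r=10.0, radius=2):
    """Minimal 1-D bilateral filter.

    Each output sample is a weighted average of its neighbours; the
    weight is the product of a spatial Gaussian on distance and a
    range Gaussian on intensity difference. Samples across a large
    intensity jump get near-zero range weight, so each side of an
    edge is averaged mostly with itself and the edge stays sharp.
    """
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius),
                       min(len(signal), i + radius + 1)):
            w_s = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2))
            w_r = math.exp(-((signal[j] - center) ** 2)
                           / (2.0 * sigma_r ** 2))
            num += w_s * w_r * signal[j]
            den += w_s * w_r
        out.append(num / den)
    return out
```

With a range sigma small relative to an edge's height, samples across the edge receive near-zero weight, which is why the filter smooths noise without blurring edges; the parameter-selection problem studied in the thesis is precisely the choice of the spatial and range sigmas.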

    Video Quality Metrics


    No-reference image and video quality assessment: a classification and review of recent approaches


    On the performance of video quality assessment methods for different spatial and temporal resolutions

    Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2017. The consumption of digital videos increases every year. In addition to the fact that many countries already use digital TV, internet video traffic currently accounts for more than 60% of total internet traffic. The growth of digital video consumption demands viable methods to measure video quality. Objective video quality assessment methods are algorithms that estimate video quality. Recent quality assessment methods provide quality predictions that are well correlated with subjective quality scores. However, most of these methods are very complex and take a long time to compute. In this work, we analyze the effects of reducing the video spatial resolution on the performance of video quality assessment methods. Based on this analysis, we propose a framework for video quality assessment that reduces the runtime of a given video quality assessment method without reducing its prediction accuracy. The proposed framework is composed of four stages. The first stage, classification, identifies videos that are more sensitive to spatial resolution reduction. The second stage, reduction, reduces the video's spatial resolution according to the video distortion. The third stage, quality prediction, estimates the video quality using an objective video quality assessment method. Finally, the fourth stage adjusts the predicted quality scores according to the video spatial resolution. We design two video classifiers for the first stage of the framework. The first classifier is a full-reference classifier based on a video spatial-activity measure. The second is a no-reference classifier based on spatial and spectral entropy measures, which uses a Support Vector Machine (SVM) algorithm. We use the video classifiers to identify the type of distortion in the video and choose the most appropriate spatial resolution. We test the framework using six different video quality assessment methods and four different video quality databases. Results show that the proposed framework improves the average runtime of all video quality assessment methods tested. We also analyze the effects of a temporal resolution reduction on the performance of video quality assessment methods. The analysis shows that methods based on temporal features are more sensitive to temporal resolution reduction, and that videos with temporal distortions, such as packet loss, are very sensitive to temporal resolution reduction.
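The four-stage framework described above can be sketched as a simple pipeline. Everything below is a stand-in: the sensitivity test, the reduction factors, and the score adjustment are hypothetical placeholders for the spatial-activity/SVM classifiers and the adjustment step of the actual dissertation.

```python
def classify_sensitivity(video):
    """Stage 1 (stub): a real implementation would use the
    spatial-activity or SVM entropy classifier; here, high spatial
    activity marks a video as sensitive to downscaling
    (illustrative heuristic only)."""
    return video["spatial_activity"] > 50.0

def assess_quality(video, metric, adjust):
    """Run the four stages: classify, reduce, predict, adjust."""
    # Stage 2: pick a (hypothetical) reduction factor from sensitivity.
    factor = 1.0 if classify_sensitivity(video) else 0.5
    reduced = {**video,
               "width": int(video["width"] * factor),
               "height": int(video["height"] * factor)}
    # Stage 3: run the (expensive) objective metric on the smaller video.
    score = metric(reduced)
    # Stage 4: adjust the predicted score for the resolution change.
    return adjust(score, factor)
```

The runtime saving comes from Stage 3: most objective metrics scale with pixel count, so running them on a half-resolution video is roughly four times cheaper, while Stages 1 and 4 keep the prediction comparable to the full-resolution score.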