Tracking-Optimized Quantization for H.264 Compression in Transportation Video Surveillance Applications
We propose a tracking-aware system that removes video components of low tracking interest and optimizes the quantization of frequency coefficients during compression, particularly those coefficients that most influence trackers, significantly reducing bitrate while maintaining comparable tracking accuracy. We use tracking accuracy, rather than mean squared error, as our compression criterion. Quantization tables suited to automated tracking can be optimized either online or offline. The online implementation initializes the encoding procedure for a specific scene but introduces delay, whereas the offline procedure produces globally optimal quantization tables by optimizing over a collection of video sequences. The proposed system is designed for low processing power and memory requirements, and as such can be deployed on remote nodes. Using H.264/AVC video coding and a commonly used state-of-the-art tracker, we show that, while maintaining comparable tracking accuracy, our system achieves over 50% bitrate savings on top of the existing savings from previous work.
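The offline procedure described above can be caricatured as a search over quantization coarseness: keep coarsening (saving bits) as long as tracking accuracy stays within a tolerance of the baseline. The sketch below is purely illustrative; the table, the surrogate bitrate and accuracy models, and all names are assumptions, not the authors' actual optimization.

```python
import numpy as np

# Toy offline search in the spirit of the paper: scale a base quantization
# table and keep the coarsest setting whose (simulated) tracking accuracy
# stays within a tolerance of the baseline. The surrogate models below are
# stand-ins, not real codec or tracker measurements.

BASE_QTABLE = np.array([[16, 24], [24, 40]], dtype=float)  # tiny stand-in table

def simulated_bitrate(scale):
    # Coarser quantization (larger scale) -> fewer bits (toy model).
    return 1000.0 / scale

def simulated_tracking_accuracy(scale):
    # Accuracy degrades slowly until quantization gets very coarse (toy model).
    return max(0.0, 1.0 - 0.02 * (scale - 1.0) ** 2)

def optimize_qtable(tolerance=0.05):
    baseline = simulated_tracking_accuracy(1.0)
    best_scale = 1.0
    for scale in np.arange(1.0, 4.01, 0.25):
        if baseline - simulated_tracking_accuracy(scale) <= tolerance:
            best_scale = scale  # coarsest acceptable setting so far
    return best_scale * BASE_QTABLE, best_scale

table, scale = optimize_qtable()
```

With these toy models the search settles on a scale above 1.0, i.e. a coarser table and a lower bitrate at near-baseline accuracy, which is the trade-off the paper exploits with real rate and tracking measurements.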
Probabilistic modeling of wavelet coefficients for processing of image and video signals
Statistical estimation and detection techniques are widely used in signal processing, including wavelet-based image and video processing. The probability density function (PDF) of the wavelet coefficients of image and video signals plays a key role in the development of techniques for such processing. Due to their fixed number of parameters, the conventional PDFs used by estimators and detectors usually ignore higher-order moments. Consequently, estimators and detectors designed using such PDFs do not provide satisfactory performance. This thesis is concerned with first developing a probabilistic model capable of incorporating an appropriate number of parameters that depend on higher-order moments of the wavelet coefficients. This model is then used as the prior in certain estimation and detection techniques for denoising and watermarking of image and video signals. In developing the probabilistic model, the Gauss-Hermite series expansion is chosen, since the wavelet coefficients have non-compact support and their empirical density function resembles the standard Gaussian function. A modification is introduced in the series expansion so that only a finite number of terms can be used for modeling the wavelet coefficients without rendering the resulting PDF negative. The parameters of the resulting PDF, called the modified Gauss-Hermite (MGH) PDF, are evaluated in terms of higher-order sample moments. It is shown that the MGH PDF fits the empirical density function better than the existing PDFs that use a limited number of parameters do. The proposed MGH PDF is used as the prior of image and video signals in designing maximum a posteriori and minimum mean squared error-based estimators for denoising of image and video signals, and a log-likelihood ratio-based detector for watermarking of image signals. The performance of the estimation and detection techniques is then evaluated in terms of commonly used metrics.
It is shown through extensive experimentation that the estimation and detection techniques developed using the proposed MGH PDF perform substantially better than those that use the conventional PDFs. These results confirm that the superior fit of the MGH PDF to the empirical density function, which stems from its flexibility in choosing the number of parameters (functions of the higher-order moments of the data), leads to the better performance. Thus, the proposed MGH PDF should play a significant role in wavelet-based image and video signal processing.
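The basic idea of a Gauss-Hermite series density, a standard Gaussian weighted by a finite Hermite-polynomial series whose coefficients come from sample moments, can be sketched as below (a Gram-Charlier-style construction). This is only in the spirit of the thesis; the actual MGH PDF includes a modification, not reproduced here, that keeps the density non-negative.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

# Sketch: density = phi(x) * sum_k c_k He_k(x), with c_k = E[He_k(Z)] / k!
# estimated from standardized samples Z (probabilists' Hermite polynomials).

def hermite_series_pdf(x, samples, order=6):
    z = (samples - samples.mean()) / samples.std()   # standardize the data
    coeffs = []
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0                                # selects He_k
        coeffs.append(hermeval(z, basis).mean() / factorial(k))
    phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)    # standard Gaussian
    return phi * hermeval(x, np.array(coeffs))

rng = np.random.default_rng(0)
data = rng.laplace(size=20000)     # heavy-tailed, like wavelet coefficients
xs = np.linspace(-4.0, 4.0, 201)
pdf = hermite_series_pdf(xs, data)
```

Because the higher-order coefficients are functions of higher-order sample moments, adding terms lets the series capture the heavy tails and sharp peak that a plain Gaussian misses, which is the flexibility the thesis attributes to the MGH PDF.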
Estudio e implementación en GPU de un algoritmo de eliminación de ruido en imagen médica (Study and GPU implementation of a denoising algorithm for medical images)
Parcero Iglesias, E. (2015). Estudio e implementación en GPU de un algoritmo de eliminación de ruido en imagen médica. http://hdl.handle.net/10251/6926
Integration of Recursive Temporal LMMSE Denoising Filter Into Video Codec
The presence of noise can dramatically reduce the efficiency of video compression systems. To improve performance, most practical video compression systems adopt a denoising filter, either as a pre-processing module for the video encoder or as a post-processing module for the video decoder, but the complexity introduced by denoising can be very high. This paper first presents a recursive temporal linear minimum mean squared error (LMMSE) filter for video denoising. Based on an analysis of the hybrid video compression process, two novel schemes are then presented, one for video encoding and the other for video decoding, in which the proposed recursive temporal LMMSE filter is seamlessly integrated into the encoding and decoding processes, respectively. In both schemes, the denoising is implemented with almost no extra computation. Experimental results validate the effectiveness of the proposed schemes on encoding and decoding noisy video sequences.
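A minimal stand-alone version of a recursive temporal LMMSE filter can be written as a per-pixel scalar Kalman-style update. The sketch below assumes a static scene and known noise variance; the paper's contribution, integrating such a recursion into the codec loop with near-zero extra cost, and its motion handling, are not reproduced here.

```python
import numpy as np

# Per-pixel recursive temporal LMMSE filter for a static scene.
# Each new frame y is blended into the running estimate with an
# LMMSE/Kalman gain derived from the estimate's error variance.

def recursive_lmmse(frames, noise_var):
    estimate = frames[0].astype(float)
    err_var = np.full(estimate.shape, noise_var)    # error variance of estimate
    for y in frames[1:]:
        gain = err_var / (err_var + noise_var)      # LMMSE / Kalman gain
        estimate = estimate + gain * (y - estimate) # blend new observation in
        err_var = (1.0 - gain) * err_var            # error variance shrinks
    return estimate

rng = np.random.default_rng(1)
clean = np.full((16, 16), 100.0)
noisy = [clean + rng.normal(0, 10, clean.shape) for _ in range(20)]
denoised = recursive_lmmse(noisy, noise_var=100.0)
```

For a static pixel this recursion reduces exactly to the running mean of the observed frames, so after 20 frames the noise standard deviation drops by a factor of about sqrt(20); motion-adaptive variants reset or damp the gain where the scene changes.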
Algoritmos de detección y filtrado de imágenes para arquitecturas multicore y manycore (Image detection and filtering algorithms for multicore and manycore architectures)
This thesis addresses the removal of impulse, Gaussian, and speckle noise from color and grayscale images; the removal of noise from medical images is a particular case. Some filtering methods are computationally expensive, all the more so when the images are large. To reduce the computational cost of these methods, this thesis uses hardware that supports parallel processing: CPU cores on multicore processors and GPUs with manycore processors. In the parallel CUDA implementations, several features are configured to optimize how the application runs on the GPUs.
The thesis studies, on the one hand, the computational performance obtained when removing impulse and uniform noise and, on the other, the image quality achieved after filtering. The computational performance comes from parallelizing the algorithms on the CPU and/or GPU. To obtain good quality in the filtered image, the corrupted pixels are first detected, and then only the pixels detected as corrupted are filtered. Regarding the removal of Gaussian and speckle noise, analysis of the nonlinear diffusion filter proved effective for this case.
The algorithms used to remove impulse and uniform noise from images, together with their sequential and parallel implementations, were evaluated experimentally in terms of execution time (speedup) and efficiency on three high-performance computing systems. The results show that the parallel implementations considerably reduce the sequential execution times.
Finally, this thesis proposes a method to efficiently reduce noise in images without prior information about the type of noise they contain.
Sánchez Cervantes, MG. (2013). Algoritmos de detección y filtrado de imágenes para arquitecturas multicore y manycore [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/28854
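The detect-then-filter strategy described in the abstract, flag corrupted pixels first, then filter only those, can be sketched for salt-and-pepper impulse noise as below. This is a simplified sequential stand-in (no CUDA) with a crude extreme-value detector, not the thesis's actual detectors.

```python
import numpy as np

def local_median(img):
    # 3x3 local median via edge-padding and stacking the nine shifted views.
    padded = np.pad(img, 1, mode="edge")
    stack = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

def detect_and_filter(img):
    # Crude salt-and-pepper detector: only the extreme values are suspects.
    corrupted = (img == 0) | (img == 255)
    out = img.astype(float)
    out[corrupted] = local_median(img)[corrupted]   # filter detected pixels only
    return out

rng = np.random.default_rng(2)
clean = np.full((32, 32), 128.0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.02               # 2% impulse density
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
restored = detect_and_filter(noisy)
```

Filtering only the detected pixels is what preserves quality: uncorrupted pixels pass through untouched, avoiding the blurring that a blanket median filter would introduce.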