Joint learning for side information and correlation model based on linear regression model in distributed video coding
The coding efficiency of a distributed video coding system is largely determined by the quality of the side information and by the correlation model. Motivated by a theoretical analysis of the maximum-likelihood treatment of the linear regression model, we propose a novel joint online learning model for side information generation and correlation model estimation. In the proposed scheme, each pixel of the side information is approximated as a linear weighted combination of samples within a local spatio-temporal neighborhood. The weights are trained in a self-feedback fashion, during which the correlation model parameters are also obtained. The efficiency of the proposed joint learning model is confirmed experimentally. © 2009 IEEE.
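The per-pixel linear model described in the abstract can be sketched with ordinary least squares. The neighborhood size, the synthetic data, and the use of residual variance as the correlation-model parameter are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def train_ls_weights(samples, targets):
    """Least-squares fit of prediction weights: w = argmin ||samples @ w - targets||^2.

    samples: (P, N) matrix, one row of N neighboring samples per pixel.
    targets: (P,) vector of pixel values to predict.
    Returns the weights and the residual variance, used here as a stand-in
    for the correlation-model parameter obtained during weight training.
    """
    w, *_ = np.linalg.lstsq(samples, targets, rcond=None)
    residuals = targets - samples @ w
    return w, float(residuals.var())

# Synthetic illustration: 200 pixels, 6 spatio-temporal neighbors each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
true_w = np.array([0.4, 0.3, 0.1, 0.1, 0.05, 0.05])
w, sigma2 = train_ls_weights(X, X @ true_w)
```

With noiseless synthetic targets the fitted weights match the generating ones and the residual variance collapses to zero; on real frames the residual variance is what characterizes the correlation noise.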
Contributions to HEVC Prediction for Medical Image Compression
Medical imaging technology and applications are continuously evolving, dealing with images
of increasing spatial and temporal resolution, which allow easier and more accurate
medical diagnosis. However, this increase in resolution demands a growing amount of
data to be stored and transmitted. Despite the high coding efficiency achieved by the
most recent image and video coding standards in lossy compression, they are not well
suited for quality-critical medical image compression, where either near-lossless or lossless
coding is required.
In this dissertation, two different approaches to improve the lossless coding of volumetric
medical images, such as Magnetic Resonance and Computed Tomography, were studied
and implemented using the latest standard, High Efficiency Video Coding (HEVC). In the
first approach, the use of geometric transformations to perform inter-slice prediction was
investigated.
In the second approach, a pixel-wise prediction technique based on least-squares prediction,
which exploits inter-slice redundancy, was proposed to extend the current HEVC
lossless tools. Experimental results show a bitrate reduction between 45% and 49% when
compared with the DICOM recommended encoders, and of 13.7% when compared with
standard HEVC.
Least-Square Prediction for Backward Adaptive Video Coding
Almost all existing approaches to video coding exploit temporal redundancy by block-matching-based motion estimation and compensation. Despite its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward-adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-squares method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion, and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than full-search, quarter-pel block matching algorithms (BMA) without the need to transmit any overhead.
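As a rough illustration of the backward-adaptive idea, the sketch below re-solves a small least-squares problem over a causal training window for each predicted pixel, so the weights are derived only from already-decoded data and no overhead needs to be transmitted. The three regressors and the window size are assumptions made for the example, not the paper's configuration:

```python
import numpy as np

def lsp_predict(prev, cur, y, x, radius=3):
    """Predict cur[y, x] from three causal regressors: the pixel above,
    the pixel to the left, and the co-located pixel in the previous frame.

    The weights are re-solved per pixel from a causal training window,
    so a decoder holding the same decoded data derives identical weights.
    """
    A, b = [], []
    for ty in range(max(1, y - radius), y + 1):
        for tx in range(max(1, x - radius), min(cur.shape[1], x + radius + 1)):
            if ty == y and tx >= x:      # keep the training set strictly causal
                break
            A.append([cur[ty - 1, tx], cur[ty, tx - 1], prev[ty, tx]])
            b.append(cur[ty, tx])
    w, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return float(np.dot([cur[y - 1, x], cur[y, x - 1], prev[y, x]], w))

# Static-scene illustration: current frame equals the previous frame, so the
# fitted weights should load entirely onto the temporal (co-located) regressor.
rng = np.random.default_rng(1)
prev = rng.normal(size=(16, 16))
cur = prev.copy()
pred = lsp_predict(prev, cur, 8, 8)
```

Because the weights come only from causal data, the same routine run at the decoder reproduces the prediction bit-exactly, which is what makes the scheme backward adaptive.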
Improving minimum rate predictors algorithm for compression of volumetric medical images
Medical imaging technologies are experiencing a growth in terms of usage and image
resolution, namely in diagnostics systems that require a large set of images, like CT or
MRI. Furthermore, legal restrictions impose that these scans must be archived for several
years. These facts led to the increase of storage costs in medical image databases and
institutions. Thus, a demand for more efficient compression tools, used for archiving and
communication, is arising.
Currently, the DICOM standard, which makes recommendations for medical communications
and image compression, recommends lossless encoders such as JPEG, RLE,
JPEG-LS and JPEG 2000. However, none of these encoders includes inter-slice prediction
in its algorithm.
This dissertation presents research work on medical image compression using the
MRP encoder, one of the most efficient lossless image compression algorithms.
Several processing techniques are proposed to adapt the input medical images to the
encoder's characteristics. Two of these techniques, namely changing the alignment of slices
for compression and a pixel-wise difference predictor, increased the compression efficiency
of MRP by up to 27.9%.
Inter-slice prediction support was also added to MRP, using uni- and bi-directional techniques,
and the pixel-wise difference predictor was added to the algorithm. Overall, the
compression efficiency of MRP was improved by 46.1%. These techniques allow
compression ratio savings of 57.1% compared with DICOM encoders, and of 33.2% compared
with HEVC RExt Random Access, making MRP the most efficient of the encoders
under study.
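The pixel-wise difference predictor mentioned above can be pictured as a reversible pre-processing step. The sketch below is a minimal interpretation (first slice kept intact, every later slice replaced by its difference from the previous one), not MRP's actual implementation:

```python
import numpy as np

def to_differences(volume):
    """volume: (slices, H, W) integer array -> same-shape residual volume.

    Slice 0 is kept as-is; slice i is replaced by slice i minus slice i-1.
    Widening to int32 keeps the (possibly negative) residuals exact.
    """
    out = volume.astype(np.int32)
    out[1:] -= volume[:-1].astype(np.int32)
    return out

def from_differences(residuals):
    """Invert to_differences by cumulative summation along the slice axis."""
    return np.cumsum(residuals, axis=0, dtype=np.int64)

# Roundtrip on a synthetic 12-bit, CT-like volume.
rng = np.random.default_rng(2)
vol = rng.integers(0, 4096, size=(8, 32, 32), dtype=np.uint16)
restored = from_differences(to_differences(vol))
```

When neighboring slices are correlated, the residual volume concentrates around zero, which is what lets a 2-D encoder such as MRP code it more compactly while the transform stays perfectly invertible.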
OpenCL-MMP: image coding on multi-core systems
Dissertation presented to the Escola Superior de Tecnologia e Gestão of the IPL to obtain the degree of Master in Computer Engineering - Mobile Computing, supervised by Doutor Patrício Rodrigues Domingues and co-supervised by Doutor Nuno Miguel Morais Rodrigues and Doutor Sérgio Manuel Maciel Faria.
This thesis, carried out within the scope of the projects "EPIC - Efficient Pattern-matching Image Compression on
many-core systems" - PTDC/EIA-EIA/122774/2010 and "OPAC - Optimization of pattern-matching
compression algorithms for GPU's" - PEst-OE/EEI/LA008/2013, investigates the
adaptation and optimization of an image compression algorithm based on the pattern-matching
methodology for high-performance graphics processing systems, i.e.,
graphics cards (GPUs). The main focus was the migration of the Multiscale
Multidimensional Parser (MMP) algorithm to a multi-core system using CUDA and
OpenCL.
This dissertation focuses in particular on the adaptation of the MMP algorithm to the OpenCL paradigm,
which required studying the existing algorithm, identifying the functions that
consumed the largest share of processing time, and assessing the feasibility of
porting them to a parallel processing paradigm. The main opportunity identified for
parallelization lies in the searches the algorithm performs over an adaptive dictionary
for the element that best matches a given block. This search is
performed during the coding of the blocks and during the dictionary update, at the end of the coding
of each block. To this end, four functions to be executed by the GPU, called
kernels, were created. The first two kernels run during block coding and
return the best dictionary element for each block partition. The third and
fourth kernels are responsible for updating the dictionary: one performs
redundancy control over the dictionary elements and the other updates the dictionary
held in GPU memory.
The two implemented prototypes, CUDA-MMP and OpenCL-MMP, achieved speedups of
2 to 10 times while preserving the original compression quality. These results
show that multi-core paradigms can accelerate application performance.
However, this performance is only attained through an iterative study of the
kernels, optimizing them to extract maximum performance from the GPU.
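The dictionary search that the first two kernels parallelize amounts to an exhaustive best-match reduction. The sketch below shows the same computation vectorized in NumPy; the rate term and λ are illustrative assumptions rather than MMP's actual cost function:

```python
import numpy as np

def best_dictionary_match(block, dictionary, rates, lam=0.0):
    """Return (index, cost) of the dictionary row minimizing D + lam * R.

    block: (N,) flattened block partition; dictionary: (K, N); rates: (K,).
    On the GPU each of the K distortion computations is handled by separate
    work-items; here NumPy vectorizes the same per-candidate reduction.
    """
    dist = ((dictionary - block) ** 2).sum(axis=1)   # per-candidate SSE
    cost = dist + lam * rates
    idx = int(cost.argmin())
    return idx, float(cost[idx])

# Illustration: a 64-entry dictionary that happens to contain the block itself.
rng = np.random.default_rng(3)
dictionary = rng.normal(size=(64, 16))
block = dictionary[17].copy()
idx, cost = best_dictionary_match(block, dictionary, np.zeros(64))
```

Since every candidate's cost is independent of the others, the search maps naturally onto GPU work-items, with only the final argmin requiring a cross-thread reduction.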