A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding
Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It offers a high compression ratio, good image quality, and fast decoding, but improving the encoding time remains a challenge. This review presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression, of the different block matching motion estimation approaches for finding motion vectors in a frame under inter-frame and intra-frame (individual frame) coding, and of automata theory based coding approaches for representing an image or a sequence of images. Although other review papers exist on fractal coding, this paper differs in several respects. One can develop new shape patterns for motion estimation and combine existing block matching motion estimation with automata-based coding to explore fractal compression, with a specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
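As an illustration of the block matching motion estimation step surveyed above, the following is a minimal sketch of exhaustive full-search matching using the sum of absolute differences (SAD). It is not taken from any of the reviewed papers; the function name, block size, search range, and the assumption of 2-D NumPy grayscale frames are illustrative only.

```python
import numpy as np

def full_search_block_match(ref, cur, block=16, search=7):
    """Exhaustive block-matching motion estimation with a SAD criterion.
    For each block of the current frame, search a +/-`search` pixel window
    in the reference frame and return the best motion vector (dy, dx)."""
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(target - cand).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors
```

Fast search patterns such as three-step or diamond search reduce the number of candidate offsets examined relative to this exhaustive baseline, which is where new shape patterns for motion estimation come in.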
Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature
Fractal Image Compression (FIC) as a model was conceptualized in 1989, and numerous models have since been developed. Fractals were initially observed and depicted through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires much less space to record than the actual image itself, which led to representing images in IFS form and shaped how image compression systems have developed. Addressing the time consumed for encoding is essential for achieving optimal compression conditions, and the solutions reviewed in this study indicate that, despite the developments that have taken place, there is still considerable scope for improvement. The exhaustive range of models reviewed makes it evident that numerous advancements in the FIC model have taken place over time and that it has been adapted to image compression at varied levels. This study focuses on the existing literature on FIC and presents the insights of the various models.
Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective
Bibliography: p. 208-225. Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
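To make the block-wise affine representation concrete, here is a minimal sketch, assuming equal-sized grayscale NumPy blocks, of the standard least-squares fit of a contrast scale s and brightness offset o mapping a (downsampled) domain block onto a range block; the function name is illustrative and not taken from the dissertation.

```python
import numpy as np

def affine_fit(domain, rng):
    """Least-squares contrast/brightness fit of a (downsampled) domain block
    onto a range block: minimise ||s * domain + o - rng||^2 over s and o."""
    d = domain.ravel().astype(float)
    r = rng.ravel().astype(float)
    var_d = d.var()
    s = 0.0 if var_d == 0 else np.cov(d, r, bias=True)[0, 1] / var_d
    o = r.mean() - s * d.mean()
    err = np.mean((s * d + o - r) ** 2)   # mean squared collage error
    return s, o, err
```

In a full fractal encoder the scale is typically clamped so that the combined block transform remains contractive and the decoded fixed point exists.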
Parallel implementation of fractal image compression
Thesis (M.Sc.Eng.), University of Natal, Durban, 2000. Fractal image compression exploits the piecewise self-similarity present in real images
as a form of information redundancy that can be eliminated to achieve compression. The
underlying theory, based on Partitioned Iterated Function Systems, is presented. As an alternative to the
established JPEG, it provides a similar compression-ratio to fidelity trade-off. Fractal
techniques promise faster decoding and potentially higher fidelity, but the computationally
intensive compression process has prevented commercial acceptance.
This thesis presents an algorithm mapping the problem onto a parallel processor
architecture, with the goal of reducing the encoding time. The experimental work involved
implementation of this approach on the Texas Instruments TMS320C80 parallel processor
system. Results indicate that the fractal compression process is unusually well suited to
parallelism with speed gains approximately linearly related to the number of processors used.
Parallel processing issues such as coherency, management and interfacing are discussed. The
code designed incorporates pipelining and parallelism on all conceptual and practical levels
ensuring that all resources are fully utilised, achieving close to optimal efficiency.
The computational intensity was reduced by several means, including conventional
classification of image sub-blocks by content with comparisons across class boundaries
prohibited. A faster approach adopted was to perform estimate comparisons between blocks
based on pixel value variance, identifying candidates for more time-consuming, accurate
RMS inter-block comparisons. These techniques, combined with the parallelism, allow
compression of 512x512 pixel x 8 bit images in under 20 seconds, while maintaining a 30dB
PSNR. This is up to an order of magnitude faster than reported for conventional sequential
processor implementations. Fractal based compression of colour images and video sequences
is also considered.
The work confirms the potential of fractal compression techniques, and demonstrates
that a parallel implementation is appropriate for addressing the compression time problem.
The processor system used in these investigations is faster than currently available PC
platforms, but the relevance lies in the anticipation that future generations of affordable
processors will exceed its performance. The advantages of fractal image compression may
then be accessible to the average computer user, leading to commercial acceptance.
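The variance-based pre-screening and the PSNR figure quoted above can be illustrated with a short sketch. It assumes 8-bit grayscale blocks as NumPy arrays; the tolerance value and function names are illustrative rather than taken from the thesis.

```python
import numpy as np

def candidate_domains(range_block, domain_blocks, tol=50.0):
    """Cheap pre-screen: keep only domain blocks whose pixel-value variance
    is close to that of the range block, so the expensive RMS comparison
    runs only on a short candidate list."""
    rv = range_block.astype(float).var()
    return [i for i, d in enumerate(domain_blocks)
            if abs(d.astype(float).var() - rv) < tol]

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```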
Stochastic dynamics and wavelets techniques for system response analysis and diagnostics: Diverse applications in structural and biomedical engineering
In the first part of the dissertation, a novel stochastic averaging technique based on a Hilbert transform definition of the oscillator response displacement amplitude is developed. In comparison to standard stochastic averaging, the requirement of “a priori” determination of an equivalent natural frequency is bypassed, yielding flexibility in the ensuing analysis and potentially higher accuracy. Further, the herein proposed Hilbert transform based stochastic averaging is adapted for determining the time-dependent survival probability and first-passage time probability density function of stochastically excited nonlinear oscillators, even endowed with fractional derivative terms. To this aim, a Galerkin scheme is utilized to solve approximately the backward Kolmogorov partial differential equation governing the survival probability of the oscillator response. Next, the potential of the stochastic averaging technique to be used in conjunction with performance-based engineering design applications is demonstrated by proposing a stochastic version of the widely used incremental dynamic analysis (IDA). Specifically, modeling the excitation as a non-stationary stochastic process possessing an evolutionary power spectrum (EPS), an approximate closed-form expression is derived for the parameterized oscillator response amplitude probability density function (PDF). In this regard, IDA surfaces are determined providing the conditional PDF of the engineering demand parameter (EDP) for a given intensity measure (IM) value. In contrast to the computationally expensive Monte Carlo simulation, the methodology developed herein determines the IDA surfaces at minimal computational cost.
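As a rough illustration of the Hilbert transform definition of the response displacement amplitude underlying the first part, the following sketch computes the envelope and instantaneous frequency of a sampled response record. It uses SciPy's analytic-signal routine and is a generic sketch, not the dissertation's implementation; the function name and arguments are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def response_amplitude(x, fs):
    """Slowly varying displacement amplitude of an oscillator response,
    defined as the envelope of the analytic signal x(t) + i*H[x](t)."""
    analytic = hilbert(x)                 # analytic signal via the Hilbert transform
    amplitude = np.abs(analytic)          # instantaneous envelope A(t)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.gradient(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
    return amplitude, inst_freq
```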
In the second part of the dissertation, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Several numerical examples are considered for assessing the reliability of the technique, even in the presence of incomplete and corrupted data. These include a 2-DOF time-variant Duffing oscillator endowed with fractional derivative terms, as well as a 2-DOF system subject to flow-induced forces where the non-stationary sea state possesses a recently proposed evolutionary version of the JONSWAP spectrum.
In the third part of this dissertation, a joint time-frequency analysis technique based on generalized harmonic wavelets (GHWs) is developed for dynamic cerebral autoregulation (DCA) performance quantification. DCA is the continuous counter-regulation of the cerebral blood flow by the active response of cerebral blood vessels to the spontaneous or induced blood pressure fluctuations. Specifically, various metrics of the phase shift and magnitude of appropriately defined GHW-based transfer functions are determined based on data points over the joint time-frequency domain. The potential of these metrics to be used as a diagnostics tool for indicating healthy versus impaired DCA function is assessed by considering both healthy individuals and patients with unilateral carotid artery stenosis. Next, another application in biomedical engineering is pursued related to the Pulse Wave Imaging (PWI) technique. This relies on ultrasonic signals for capturing the propagation of pressure pulses along the carotid artery, and eventually for prognosis of focal vascular diseases (e.g., atherosclerosis and abdominal aortic aneurysm). However, to obtain a high spatio-temporal resolution the data are acquired at a high rate, in the order of kilohertz, yielding large datasets. To address this challenge, an efficient data compression technique is developed based on the multiresolution wavelet decomposition scheme, which exploits the high correlation of adjacent RF-frames generated by the PWI technique. Further, a sparse matrix decomposition is proposed as an efficient way to identify the boundaries of the arterial wall in the PWI technique
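The PWI compression idea of exploiting the high correlation of adjacent RF frames through a multiresolution wavelet decomposition can be sketched as follows, assuming 2-D NumPy frames and the PyWavelets package. The wavelet choice, level, threshold, and function names are illustrative assumptions, and the thresholding step makes this particular sketch lossy rather than a reproduction of the dissertation's scheme.

```python
import numpy as np
import pywt

def compress_frame_pair(prev_frame, frame, wavelet='db4', level=3, rel_thresh=1e-2):
    """Encode the difference between adjacent RF frames: the residual's wavelet
    coefficients are highly compressible, so near-zero coefficients are dropped."""
    diff = frame.astype(float) - prev_frame.astype(float)
    coeffs = pywt.wavedec2(diff, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < rel_thresh * np.abs(arr).max()] = 0.0
    kept = np.count_nonzero(arr) / arr.size   # fraction of coefficients retained
    return arr, slices, kept

def reconstruct_frame(prev_frame, arr, slices, wavelet='db4'):
    """Invert the decomposition and add the decoded residual back to the
    previous frame (cropped in case the transform padded odd dimensions)."""
    coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
    residual = pywt.waverec2(coeffs, wavelet)
    h, w = prev_frame.shape
    return prev_frame.astype(float) + residual[:h, :w]
```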
Efficient reconfigurable architectures for 3D medical image compression
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities,
such as magnetic resonance imaging (MRI), computed tomography (CT), positron
emission tomography (PET), and ultrasound (US), has generated a massive amount
of volumetric data. These have provided an impetus to the development of other
applications, in particular telemedicine and teleradiology. In these fields, medical
image compression is important since both efficient storage and transmission of data
through high-bandwidth digital communication lines are of crucial importance.
Despite their advantages, most 3-D medical imaging algorithms are computationally intensive, with matrix transformation as the most fundamental operation involved in the transform-based methods. Therefore, there is a real need for high-performance systems, whilst keeping architectures flexible to allow
for quick upgradeability with real-time applications. Moreover, in order to obtain
efficient solutions for large volumes of medical data, an efficient implementation of
these operations is of significant importance. Reconfigurable hardware, in the form of field programmable gate arrays (FPGAs), has been proposed as a viable system
building block in the construction of high-performance systems at an economical price.
Consequently, FPGAs seem an ideal candidate, offering inherent
advantages such as massive parallelism capabilities, multimillion gate counts, and
special low-power packages. The key achievements of the work presented in this thesis are summarised as follows. Two architectures for the 3-D Haar wavelet transform (HWT) have been proposed, based on transpose-based computation and partial reconfiguration, suitable for 3-D medical imaging applications. These applications require continuous hardware servicing, and as a result dynamic partial reconfiguration (DPR) has been introduced. A comparative study of both non-partial and partial reconfiguration implementations has shown that DPR offers many advantages and leads to a compelling solution for implementing computationally intensive applications such as 3-D medical image compression. Using DPR, several large systems are mapped to small hardware resources, and the area, power consumption, and maximum frequency are
optimised and improved. Moreover, an FPGA-based architecture for the finite Radon transform (FRAT) with three design strategies has been proposed: direct implementation of pseudo-code with a sequential or pipelined description, and a block random access memory (BRAM) based method. An analysis with various medical imaging modalities has been carried out. The FRAT-based image de-noising implementation shows
promising results in reducing Gaussian white noise in medical images. In terms of
hardware implementation, promising trade-offs on maximum frequency, throughput
and area are also achieved. Furthermore, a novel hardware implementation of a 3-D medical image compression system with context-based adaptive variable length coding (CAVLC)
has been proposed. An evaluation of the 3-D integer transform (IT) and the discrete
wavelet transform (DWT) with lifting scheme (LS) for the transform blocks reveals that
the 3-D IT has lower computational complexity than the 3-D DWT, whilst
the 3-D DWT with LS provides lossless compression, which is particularly useful for
medical image compression. Additionally, an architecture of CAVLC that is capable
of compressing high-definition (HD) images in real-time without any buffer between
the quantiser and the entropy coder is proposed. Through judicious parallelisation, promising results have been obtained with limited resources. In summary, this research tackles the challenge of massive volumes of 3-D medical data that require compression, as well as hardware implementation to accelerate the
slowest operations in the system. The results also reveal a significant achievement in terms of architecture efficiency and application performance. Ministry of Higher Education Malaysia (MOHE),
Universiti Tun Hussein Onn Malaysia (UTHM) and the British Council
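As a point of reference for the 3-D Haar wavelet transform that the proposed architectures implement in hardware, here is a minimal software sketch of one separable analysis level, assuming a NumPy volume with even dimensions. It illustrates the arithmetic only and says nothing about the transpose-based or partially reconfigured FPGA datapaths; function names are illustrative.

```python
import numpy as np

def haar_1d(a, axis):
    """Single-level Haar analysis along one axis: average and detail bands."""
    a = np.swapaxes(a, 0, axis)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2)
    return np.swapaxes(lo, 0, axis), np.swapaxes(hi, 0, axis)

def haar_3d(volume):
    """One level of the separable 3-D Haar wavelet transform, producing the
    eight sub-bands LLL..HHH of a volume with even dimensions."""
    bands = {'': volume.astype(float)}
    for axis in range(3):
        new = {}
        for key, data in bands.items():
            lo, hi = haar_1d(data, axis)
            new[key + 'L'] = lo
            new[key + 'H'] = hi
        bands = new
    return bands   # keys 'LLL', 'LLH', ..., 'HHH'
```

Each sub-band has half the original extent along every axis; repeating the step on the LLL band yields the multi-level decomposition commonly used in compression.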
3D Medical Image Lossless Compressor Using Deep Learning Approaches
The ever-increasing importance of accelerated information processing, communication, and storage is a major requirement of the big-data era. With the extensive rise in data availability, easy information acquisition, and growing data rates, a critical challenge emerges in handling data efficiently. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering modern scanners, which annually produce higher-resolution and more densely sampled medical images with ever-larger storage requirements. The bottleneck in data transmission and storage can essentially be handled with an effective compression method. Since medical information is critical and plays an influential role in diagnostic accuracy, it is strongly encouraged to guarantee exact reconstruction with no loss in quality, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks with state-of-the-art results, including data compression, this opens tremendous opportunities for contributions. While considerable effort has been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.
Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Using such 3D local sampling information efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed NN-based predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.
Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16-bit depth). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, compared to other state-of-the-art lossless compression standards.
This work also investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16-bit depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel with little drop in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).
To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance sampling scheme is evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
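A minimal sketch of the prediction-plus-entropy-coding idea, assuming PyTorch: a small LSTM maps a sequence of spatially neighbouring voxel values to a prediction for the target voxel, and the quantised residuals would then be passed to an arithmetic coder so that decoding, which repeats the same predictions, reconstructs the volume exactly. The class name, hidden size, and neighbourhood length are illustrative assumptions, not the thesis's MedZip configuration.

```python
import torch
import torch.nn as nn

class VoxelPredictor(nn.Module):
    """Many-to-one sequence model: predict a target voxel from a sequence of
    spatially neighbouring voxel values, as a building block for
    prediction-plus-entropy-coding lossless compression."""
    def __init__(self, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, neighbours):           # (batch, seq_len, 1), normalised values
        out, _ = self.lstm(neighbours)
        return self.head(out[:, -1, :])      # predicted voxel value, (batch, 1)

# During encoding, the quantised residuals (actual minus predicted voxel) would
# be fed to an arithmetic coder; the decoder repeats the same predictions and
# adds the decoded residuals back, so the volume is reconstructed exactly.
model = VoxelPredictor()
dummy = torch.rand(8, 27, 1)                 # e.g. 27 causal neighbours per voxel
pred = model(dummy)
```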