Compressing Inertial Motion Data in Wireless Sensing Systems – An Initial Experiment
The use of wireless inertial motion sensors, such as accelerometers, to support medical care and sports training has been under investigation in recent years. As the number of sensors (or their sampling rates) increases, compressing data at the source, i.e. at the sensors, to reduce the quantity of data transmitted between the on-body sensors and the remote repository becomes essential, especially in bandwidth-limited wireless environments. This paper presents the results of compression experiments on inertial motion data collected during running exercises. As a starting point, we selected a set of common compression algorithms to experiment with. Our results show that conventional lossy compression algorithms achieve a desirable compression ratio with an acceptable time delay. The results also show that the quality of the decompressed data is within an acceptable range.
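The abstract does not name the algorithms tested, but the source-side trade-off it studies (compression ratio versus fidelity versus delay) can be illustrated with a minimal hypothetical sketch: quantise 16-bit accelerometer samples ahead of a standard deflate stage, then measure both the compression ratio and the reconstruction error. The signal, bit depths, and quantisation step are all assumptions, not the paper's setup.

```python
import zlib
import numpy as np

# Hypothetical sketch (not the paper's tested algorithms): quantise 16-bit
# accelerometer samples to 8 bits (the lossy step), deflate the result
# (the lossless step), then measure ratio and reconstruction error.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 5000)
samples = (1000 * np.sin(t) + rng.normal(0, 10, t.size)).astype(np.int16)

quantised = (samples >> 8).astype(np.int8)          # drop the 8 least significant bits
compressed = zlib.compress(quantised.tobytes(), 9)  # standard deflate stage

reconstructed = quantised.astype(np.int16) << 8
ratio = samples.nbytes / len(compressed)
rmse = float(np.sqrt(np.mean((samples - reconstructed) ** 2.0)))
print(f"compression ratio {ratio:.1f}:1, RMSE {rmse:.1f} raw units")
```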
Data Compression in the Petascale Astronomy Era: a GERLUMPH case study
As the volume of data grows, astronomers are increasingly faced with choices on what data to keep -- and what to throw away. Recent work evaluating the JPEG2000 (ISO/IEC 15444) standards as a future data format standard in astronomy has shown promising results on observational data. However, there is still a need to evaluate its potential on other types of astronomical data, such as data from numerical simulations. GERLUMPH (the GPU-Enabled High Resolution cosmological MicroLensing parameter survey) is an example of a data-intensive project in theoretical astrophysics. In the next phase of processing, the ~27 terabyte GERLUMPH dataset is set to grow by a factor of 100 -- well beyond the current storage capabilities of the supercomputing facility on which it resides. In order to minimise bandwidth usage, file transfer time, and storage space, this work evaluates several data compression techniques. Specifically, we investigate off-the-shelf and custom lossless compression algorithms as well as the lossy JPEG2000 compression format. Lossless compression of the GERLUMPH data products yields small compression ratios (between 1.35:1 and 4.69:1) that vary with the nature of the input data. Our results suggest that JPEG2000 could be suitable for other numerical datasets stored as gridded or volumetric data. When approaching lossy data compression, one should keep in mind the intended purposes of the data to be compressed and evaluate the effect of the loss on future analysis. In our case study, lossy compression at a high compression ratio does not significantly compromise the intended use of the data for constraining quasar source profiles from cosmological microlensing.
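For reference, a minimal sketch of how a lossless compression ratio like those quoted above is measured on gridded data; the synthetic float32 map is an assumption standing in for a real GERLUMPH data product, not the paper's pipeline.

```python
import gzip
import numpy as np

# Hypothetical sketch: a synthetic float32 map stands in for a real
# GERLUMPH product; the ratio is raw bytes over compressed bytes.
grid = np.random.default_rng(1).gamma(2.0, 1.0, (1024, 1024)).astype(np.float32)

raw = grid.tobytes()
ratio = len(raw) / len(gzip.compress(raw, compresslevel=9))
print(f"gzip ratio {ratio:.2f}:1")  # noisy floating-point grids compress poorly,
                                    # consistent with the small ratios reported
```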
Comment: 15 pages, 9 figures, 5 tables. Published in the Special Issue of Astronomy & Computing on the future of astronomical data formats.
Efficient reconfigurable architectures for 3D medical image compression
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.

Recently, the increasingly widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US), has generated massive amounts of volumetric data. This has provided an impetus for the development of further applications, in particular telemedicine and teleradiology. In these fields medical image compression is important, since both efficient storage and transmission of data through high-bandwidth digital communication lines are of crucial importance.
Despite their advantages, most 3-D medical imaging algorithms are computationally intensive, with matrix transformation as the most fundamental operation involved in the transform-based methods. There is therefore a real need for high-performance systems, whilst keeping architectures flexible to allow for quick upgradeability with real-time applications. Moreover, in order to obtain efficient solutions for large volumes of medical data, an efficient implementation of these operations is of significant importance. Reconfigurable hardware, in the form of field programmable gate arrays (FPGAs), has been proposed as a viable system building block in the construction of high-performance systems at an economical price. FPGAs are an ideal candidate here because of their inherent advantages, such as massive parallelism, multimillion gate counts, and special low-power packages.

The key achievements of the work presented in this thesis are summarised as follows. Two architectures for the 3-D Haar wavelet transform (HWT) have been proposed, based on transpose-based computation and on partial reconfiguration, suitable for 3-D medical imaging applications. These applications require continuous hardware servicing, and as a result dynamic partial reconfiguration (DPR) has been introduced. A comparative study of the non-partial and partial reconfiguration implementations has shown that DPR offers many advantages and leads to a compelling solution for implementing computationally intensive applications such as 3-D medical image compression. Using DPR, several large systems are mapped to small hardware resources, and the area, power consumption, and maximum frequency are optimised and improved.
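As a point of reference, the 3-D HWT that these architectures implement can be sketched in software. The following minimal Python sketch shows one analysis level as an unnormalised average/difference transform; it is an illustration of the transform, not the transpose-based FPGA datapath.

```python
import numpy as np

def haar3d_level(vol):
    """One separable level of the 3-D Haar wavelet transform: average and
    difference adjacent pairs along each axis in turn, leaving the eight
    subbands in the low/high halves of each axis."""
    out = vol.astype(np.float64)
    for axis in range(3):
        out = np.moveaxis(out, axis, 0)
        even, odd = out[0::2], out[1::2]
        out = np.concatenate([(even + odd) / 2.0,   # low-pass half
                              (even - odd) / 2.0],  # high-pass half
                             axis=0)
        out = np.moveaxis(out, 0, axis)
    return out

coeffs = haar3d_level(np.random.rand(16, 16, 16))  # e.g. one 16x16x16 block
```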
Moreover, an FPGA-based architecture of the finite Radon transform (FRAT) with three design strategies has been proposed: direct implementation of the pseudo-code with either a sequential or a pipelined description, and a block random access memory (BRAM)-based method. An analysis with various medical imaging modalities has been carried out. Results obtained for an image de-noising implementation using FRAT show promise in reducing Gaussian white noise in medical images. In terms of hardware implementation, promising trade-offs between maximum frequency, throughput, and area are also achieved.
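For readers unfamiliar with the FRAT, here is a minimal software sketch of the transform itself, following its usual definition over Z_p for prime p with the normalisation omitted; this is a reference for the mathematics, not the thesis's hardware design.

```python
import numpy as np

def frat(image):
    """Finite Radon transform of a p x p image (p prime): output row k sums
    the image along the lines j = (k*i + l) mod p for each offset l; the
    final row holds the remaining 'vertical' projections."""
    p = image.shape[0]
    i = np.arange(p)
    out = np.zeros((p + 1, p), dtype=np.int64)
    for k in range(p):
        for l in range(p):
            out[k, l] = image[i, (k * i + l) % p].sum()
    out[p] = image.sum(axis=1)  # one projection per row index l
    return out

projections = frat(np.random.randint(0, 256, (7, 7)))
```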
Furthermore, a novel hardware implementation of a 3-D medical image compression system with context-based adaptive variable length coding (CAVLC) has been proposed. An evaluation of the 3-D integer transform (IT) and the discrete wavelet transform (DWT) with lifting scheme (LS) as transform blocks reveals that the 3-D IT has lower computational complexity than the 3-D DWT, whilst the 3-D DWT with LS provides lossless compression, which is particularly useful for medical image compression. Additionally, an architecture of CAVLC is proposed that is capable of compressing high-definition (HD) images in real time without any buffer between the quantiser and the entropy coder. Through judicious parallelisation, promising results have been obtained with limited resources.
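The losslessness of the lifting-scheme DWT comes from its integer-to-integer, exactly invertible steps. A minimal sketch with a single Haar lifting pair illustrates the principle; it is not the thesis's 3-D datapath.

```python
import numpy as np

def haar_lift_forward(a, b):
    # Integer-to-integer Haar pair via lifting: d is the difference,
    # s is floor((a + b) / 2); both steps are exactly invertible.
    d = a - b
    s = b + (d >> 1)
    return s, d

def haar_lift_inverse(s, d):
    b = s - (d >> 1)
    return d + b, b  # recovers (a, b) exactly

x = np.array([120, 124, 118, 119], dtype=np.int32)
s, d = haar_lift_forward(x[0::2], x[1::2])
a, b = haar_lift_inverse(s, d)
assert np.array_equal(np.stack([a, b], axis=1).ravel(), x)  # lossless round trip
```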
In summary, this research tackles the problem of the massive volumes of 3-D medical data that require compression, together with hardware implementations that accelerate the slowest operations in the system. The results also reveal significant achievements in terms of architectural efficiency and application performance.

Ministry of Higher Education Malaysia (MOHE), Universiti Tun Hussein Onn Malaysia (UTHM) and the British Council
Volumetric Medical Images Visualization on Mobile Devices
Volumetric medical image visualization is an important tool in the diagnosis and treatment of diseases. Throughout history, one of the most difficult tasks for medical specialists has been the accurate location of broken bones and of damaged tissue during chemotherapy treatment, among other applications, such as the techniques used in neurological studies. These situations underline the need for visualization in medicine. New technologies, the improvement and development of new hardware and software, and the updating of older graphics applications have resulted in specialized systems for medical visualization. However, the use of these techniques on mobile devices has been limited due to their low performance. In our work, we propose a client-server scheme, where the model is compressed on the server side and reconstructed on a final thin-client device. The technique restricts the natural density values to achieve good bone visualization in medical models, transforming the rest of the data to zero. Our proposal uses a three-dimensional Haar wavelet function applied locally inside unit blocks of 16×16×16 voxels, similar to the Wavelet Based 3D Compression Scheme for Interactive Visualization of Very Large Volume Data approach. We also implement a quantization algorithm which handles error coefficients according to their frequency distributions. Finally, we evaluated the volume visualization on current mobile devices. We present the specifications for the implementation of our technique on the Nokia N900 mobile phone.
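A minimal sketch of the density-windowing and quantisation steps described above; the window bounds and quantiser step are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Hypothetical sketch: keep only the density range that maps to bone and
# zero everything else before the block-wise 3-D Haar transform.
def bone_window(volume, lo=300, hi=3000):
    out = volume.copy()
    out[(out < lo) | (out > hi)] = 0
    return out

def quantise(coefficients, step=8):
    # Uniform quantiser for the wavelet error coefficients; in the paper
    # the handling depends on each coefficient's frequency distribution.
    return np.round(coefficients / step).astype(np.int16)

block = np.random.randint(-1000, 3000, (16, 16, 16))  # one 16x16x16 block
q = quantise(bone_window(block))
```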
GPU acceleration of predictive partitioned vector quantization for ultraspectral sounder data compression
For large-volume ultraspectral sounder data, compression is desirable to save storage space and transmission time. To retrieve the geophysical parameters without losing precision, ultraspectral sounder data compression has to be lossless. Recently there has been a surge in the use of graphics processing units (GPUs) to speed up scientific computations. By identifying the time-dominant portions of the code that can be executed in parallel, significant speedup can be achieved using a GPU. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit depth partitioning, vector quantization, and entropy coding. The two most time-consuming stages, linear prediction and vector quantization, were chosen for GPU-based implementation. By exploiting the data-parallel characteristics of these two stages, a spatial division design achieves a speedup of 72x in our four-GPU implementation of the PPVQ compression scheme.
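As an illustration of the prediction stage that feeds the later PPVQ stages, here is a minimal sketch assuming an order-1, per-channel least-squares predictor; the predictor order, data layout, and sizes are assumptions, not the paper's configuration. Each pixel is handled independently, which is the kind of data parallelism a spatial-division GPU design can exploit.

```python
import numpy as np

# Hypothetical sketch: predict each spectral channel from its predecessor
# with a least-squares coefficient and keep the integer residuals that the
# later stages would bit-partition and vector-quantise.
def prediction_residuals(cube):
    cube = cube.astype(np.int64)       # (channels, pixels)
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]             # first channel stored as-is
    for c in range(1, cube.shape[0]):
        prev, cur = cube[c - 1], cube[c]
        a = prev @ cur / (prev @ prev)  # least-squares coefficient
        residuals[c] = cur - np.round(a * prev).astype(np.int64)
    return residuals                    # near-zero, far more compressible

res = prediction_residuals(np.random.randint(0, 4096, (300, 2048)))
```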