Properties of continuous Fourier extension of the discrete cosine transform and its multidimensional generalization
A versatile method is described for the practical computation of the discrete
Fourier transforms (DFT) of a continuous function given by its values
at the points of a uniform grid generated by conjugacy classes
of elements of finite adjoint order in the fundamental region of
compact semisimple Lie groups. The present implementation of the method is for
the group SU(2), where the fundamental region reduces to a one-dimensional
segment, and for products of SU(2) groups in the multidimensional case. This simplest case
turns out to result in a transform known as discrete cosine transform (DCT),
which is often considered to be simply a specific type of the standard DFT.
Here we show that the DCT is very different from the standard DFT when the
properties of the continuous extensions of these two discrete transforms from
the discrete grid points to all points of the fundamental region are
considered. (A) Unlike the continuous extension of the DFT, the continuous
extension of (the inverse) DCT, called CEDCT, closely approximates the original
function between the grid points. (B) With an increasing number of grid points,
the derivative of CEDCT converges to the derivative of the original function.
And (C), for CEDCT the principle of
locality is valid. Finally, we use the continuous extension of 2-dimensional
DCT to illustrate its potential for interpolation, as well as for the data
compression of 2D images. Comment: submitted to JMP on April 3, 2003; still
waiting for the referee's report.
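To make property (A) concrete, the following is a minimal numerical sketch (not the authors' implementation) of the continuous extension of the inverse DCT: sample a smooth function on a uniform grid, take its DCT-II, and evaluate the resulting cosine series at points between the grid nodes. The test function, grid size, and use of SciPy's unnormalized DCT-II convention are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def cedct(coeffs, t):
    """Evaluate the continuous extension of the inverse DCT-II at points t."""
    N = len(coeffs)
    k = np.arange(1, N)
    # x(t) = (1/N) * (c_0/2 + sum_k c_k * cos(pi*k*(t + 0.5)/N))
    basis = np.cos(np.pi * np.outer(t + 0.5, k) / N)
    return (coeffs[0] / 2 + basis @ coeffs[1:]) / N

f = lambda t: np.exp(-t / 20) * np.sin(2 * np.pi * t / 16)  # smooth test function
N = 32
grid = np.arange(N, dtype=float)
c = dct(f(grid), type=2)                  # unnormalized DCT-II coefficients

fine = np.linspace(0.0, N - 1.0, 10 * N)  # points between the grid nodes
print("max |CEDCT - f| off the grid:", np.max(np.abs(cedct(c, fine) - f(fine))))
assert np.allclose(cedct(c, grid), f(grid))  # exact at the grid points
```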
Web-based manipulation of multiresolution micro-CT images
Micro computed tomography (µCT) scanning is opening a new world for medical researchers. Scientific data of several tens of gigabytes per image is created and usually requires storage on a common server such as a Picture Archiving and Communication System (PACS). Previewing this data online in a meaningful way is an essential part of these systems. Radiologists, who have been working with CT data for a long time, commonly look at two-dimensional slices of 3D image stacks. Conventional web viewers such as Google Maps and Deep Zoom use tiled multiresolution images for faster display of large 2D data. In the medical area this approach is being adapted for high-resolution 2D images. Solutions that include basic image processing still rely on browser-external tools and high-performance client machines. In this paper we optimized and modified the Brain Maps API to create an interactive orthogonal-sectioning image viewer for medical µCT scans, based on JavaScript and HTML5. We show that tiling of images reduces the processing time by a factor of two. Different file formats are compared regarding their quality and time to display. In addition, a sample end-to-end application demonstrates the feasibility of this solution for custom-made image acquisition systems.
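As a rough illustration of the tiling idea described above, the sketch below generates a multiresolution tile pyramid for a single 2D slice using Pillow; the tile size, level count and output naming scheme are assumptions for illustration, not the paper's implementation (which serves tiles to a JavaScript/HTML5 viewer).

```python
from pathlib import Path
from PIL import Image

def build_tile_pyramid(slice_path: str, out_dir: str, tile: int = 256, levels: int = 4):
    """Cut one large 2D slice into PNG tiles at several zoom levels."""
    img = Image.open(slice_path)
    for level in range(levels):
        scale = 2 ** (levels - 1 - level)  # level 0 = coarsest resolution
        scaled = img.resize((max(1, img.width // scale), max(1, img.height // scale)))
        for y in range(0, scaled.height, tile):
            for x in range(0, scaled.width, tile):
                patch = scaled.crop((x, y, min(x + tile, scaled.width),
                                     min(y + tile, scaled.height)))
                dest = Path(out_dir) / str(level) / f"{x // tile}_{y // tile}.png"
                dest.parent.mkdir(parents=True, exist_ok=True)
                patch.save(dest)

# The browser-side viewer then requests only the tiles that intersect the
# current viewport at the current zoom level, instead of the full slice.
```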
Compression Methods for Structured Floating-Point Data and their Application in Climate Research
The use of new technologies, such as GPU boosters, has led to a dramatic
increase in the computing power of High-Performance Computing (HPC)
centres. This development, coupled with new climate models that can better
utilise this computing power thanks to software development and internal
design, led to the bottleneck moving from solving the differential equations
describing Earth’s atmospheric interactions to actually storing the variables.
The current approach to solving the storage problem is inadequate: either
the number of variables to be stored is limited or the temporal resolution
of the output is reduced. If it is subsequently determined that another
variable is required which has not been saved, the simulation must run again.
This thesis deals with the development of novel compression algorithms
for structured floating-point data such as climate data so that they can be
stored in full resolution.
Compression is performed by decorrelation and subsequent coding of
the data. The decorrelation step eliminates redundant information in the
data. During coding, the actual compression takes place and the data is
written to disk. A lossy compression algorithm additionally has an
approximation step to simplify the data for better coding. The approximation step
reduces the complexity of the data for the subsequent coding, e.g. by using
quantization. This work makes a new scientific contribution to each of the
three steps described above.
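As a toy illustration of these three stages, the sketch below uses deliberately simple stand-ins: uniform scalar quantization as the approximation step, a previous-value predictor for decorrelation, and a general-purpose coder (zlib) for the coding step. The thesis's actual components (ARIMA predictors, information spaces, the improved XOR coder) are more elaborate; everything below is an assumption for illustration only.

```python
import zlib
import numpy as np

def compress(values: np.ndarray, abs_error: float) -> bytes:
    # Approximation: uniform scalar quantization to a fixed absolute error bound.
    q = np.round(values / (2 * abs_error)).astype(np.int64)
    # Decorrelation: predict each sample by its predecessor, keep only residuals.
    residuals = np.diff(q, prepend=0)
    # Coding: a general-purpose entropy coder applied to the residual stream.
    return zlib.compress(residuals.tobytes(), level=9)

def decompress(blob: bytes, abs_error: float) -> np.ndarray:
    residuals = np.frombuffer(zlib.decompress(blob), dtype=np.int64)
    q = np.cumsum(residuals)
    return q * (2 * abs_error)

series = np.cumsum(np.random.default_rng(0).normal(size=10_000))  # synthetic series
blob = compress(series, abs_error=1e-3)
restored = decompress(blob, abs_error=1e-3)
print("compression ratio:", len(blob) / series.nbytes)
print("max abs error:", np.max(np.abs(restored - series)))  # stays within the bound
```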
This thesis presents a novel lossy compression method for time-series
data using an Auto Regressive Integrated Moving Average (ARIMA) model
to decorrelate the data. In addition, the concept of information spaces and
contexts is presented to use information across dimensions for decorrelation.
Furthermore, a new coding scheme is described which reduces the
weaknesses of the eXclusive-OR (XOR) difference calculation and achieves
a better compression factor than current lossless compression methods for
floating-point numbers. Finally, a modular framework is introduced that
allows the creation of user-defined compression algorithms.
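The following is a minimal sketch of ARIMA-based decorrelation in the spirit described above, assuming the statsmodels library; the model order (2, 1, 1) and the synthetic series are illustrative choices, not the thesis's configuration. The residuals, rather than the raw values, would then be handed to the coding stage.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=2_000)) + 10 * np.sin(np.arange(2_000) / 50)

fit = ARIMA(series, order=(2, 1, 1)).fit()
residuals = fit.resid                      # what remains after decorrelation

# Compare the spread of naive one-step differences with the ARIMA residuals;
# a well-fitting model leaves less to encode.
print(np.std(np.diff(series)), np.std(residuals[1:]))
```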
The experiments presented in this thesis show that it is possible to
increase the information content of lossily compressed time-series data by
applying an adaptive compression technique which preserves selected data
with higher precision. Lossless compression of these time series proved
unsuccessful. However, the lossy ARIMA compression model proposed here is
able to capture all relevant information. The reconstructed data can
reproduce the time series to such an extent that statistically relevant
information for the description of climate dynamics is preserved.
Experiments indicate that there is a significant dependence of the
compression factor on the selected traversal sequence and the underlying data
model. The influence of these structural dependencies on prediction-based
compression methods is investigated in this thesis. For this purpose, the
concept of Information Spaces (IS) is introduced. IS contributes to improving
the predictions of the individual predictors by nearly 10% on average.
Perhaps more importantly, the standard deviation of compression results is
on average 20% lower. Using IS provides better predictions and consistent
compression results.
Furthermore, it is shown that shifting the prediction and true value leads
to a better compression factor with minimal additional computational costs.
This allows the use of more resource-efficient prediction algorithms to
achieve the same or better compression factor or higher throughput during
compression or decompression. The coding scheme proposed here achieves
a better compression factor than current state-of-the-art methods.
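For context, the sketch below shows the plain XOR residual coding that such schemes build on (in the spirit of FPC/Gorilla-style float compressors), not the improved coder proposed here: the IEEE-754 bit patterns of the prediction and the true value are XORed, and only the bytes after the leading zero run are stored, so a close prediction leaves few significant bytes. The example values are assumptions.

```python
import struct

def xor_encode(true_value: float, prediction: float) -> bytes:
    t, = struct.unpack("<Q", struct.pack("<d", true_value))
    p, = struct.unpack("<Q", struct.pack("<d", prediction))
    diff = t ^ p
    payload = diff.to_bytes(8, "big").lstrip(b"\x00")   # drop leading zero bytes
    return bytes([len(payload)]) + payload               # 1-byte length header

def xor_decode(encoded: bytes, prediction: float) -> float:
    n, payload = encoded[0], encoded[1:]   # the header delimits the payload in a real stream
    diff = int.from_bytes(payload.rjust(8, b"\x00"), "big")
    p, = struct.unpack("<Q", struct.pack("<d", prediction))
    return struct.unpack("<d", struct.pack("<Q", diff ^ p))[0]

value, pred = 288.7120, 288.7121           # true value and an assumed close prediction
code = xor_encode(value, pred)
print(len(code), xor_decode(code, pred) == value)  # shorter than 8 raw bytes, exact round trip
```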
Finally, this thesis presents a modular framework for the development
of compression algorithms. The framework supports the creation of user-defined
predictors and offers functionalities such as the execution of benchmarks,
the random subdivision of n-dimensional data, the quality evaluation of
predictors, the creation of ensemble predictors and the execution of
validity tests for sequential and parallel compression algorithms.
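A speculative sketch of what a user-defined predictor plug-in for such a framework might look like; the class and method names are illustrative assumptions, not the framework's actual API.

```python
from abc import ABC, abstractmethod
from typing import Sequence

class Predictor(ABC):
    """Predicts the next value of a traversal from previously seen values."""

    @abstractmethod
    def predict(self, history: Sequence[float]) -> float: ...

class LastValuePredictor(Predictor):
    def predict(self, history: Sequence[float]) -> float:
        return history[-1] if history else 0.0

class LinearExtrapolationPredictor(Predictor):
    def predict(self, history: Sequence[float]) -> float:
        if len(history) < 2:
            return history[-1] if history else 0.0
        return 2 * history[-1] - history[-2]

def ensemble_predict(predictors: Sequence[Predictor], history: Sequence[float]) -> float:
    """A trivial ensemble: average the individual predictions."""
    return sum(p.predict(history) for p in predictors) / len(predictors)

print(ensemble_predict([LastValuePredictor(), LinearExtrapolationPredictor()],
                       [1.0, 2.0, 4.0]))   # -> (4.0 + 6.0) / 2 = 5.0
```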
This research was initiated by the needs of climate science, but the
application of its contributions is not limited to it. The results of this
thesis are of major benefit for developing and improving any compression
algorithm for structured floating-point data.
Audiovisual preservation strategies, data models and value-chains
This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models and requirements for extension to support audiovisual files.
Need for speed: Achieving fast image processing in acute stroke care
This thesis aims to investigate the use of high-performance computing (HPC) techniques in developing imaging biomarkers to support the clinical workflow of acute stroke patients. In the first part of this thesis, we evaluate different HPC technologies and how they can be leveraged by image analysis applications used in the context of acute stroke care. More specifically, Chapter 2 evaluates how computers with multiple computing devices can be used to accelerate medical imaging applications. Chapter 3 proposes a novel data compression technique that efficiently processes CT perfusion (CTP) images on GPUs, since the size of CTP datasets makes data transfers to computing devices time-consuming and, therefore, unsuitable in acute situations. Chapter 4 further evaluates the usefulness of the algorithm proposed in Chapter 3 with two different applications: a double-threshold segmentation and a time-intensity profile similarity (TIPS) bilateral filter to reduce noise in CTP scans. Finally, Chapter 5 presents a cloud platform for deploying high-performance medical applications for acute stroke patients. In the second part of this thesis, Chapter 6 presents a convolutional neural network (CNN) for the detection and volumetric segmentation of subarachnoid hemorrhages (SAH) in non-contrast CT scans. Chapter 7 proposes another CNN-based method to quantify final infarct volumes in follow-up non-contrast CT scans of ischemic stroke patients.
Learned Variable-Rate Image Compression with Residual Divisive Normalization
Recently, deep learning-based image compression has shown the potential to
outperform traditional codecs. However, most existing
methods train multiple networks for multiple bit rates, which increases the
implementation complexity. In this paper, we propose a variable-rate image
compression framework, which employs more Generalized Divisive Normalization
(GDN) layers than previous GDN-based methods. Novel GDN-based residual
sub-networks are also developed in the encoder and decoder networks. Our scheme
also uses a stochastic rounding-based scalable quantization. To further improve
the performance, we encode the residual between the input and the reconstructed
image from the decoder network as an enhancement layer. To enable a single
model to operate with different bit rates and to learn multi-rate image
features, a new objective function is introduced. Experimental results show
that the proposed framework trained with variable-rate objective function
outperforms all standard codecs such as H.265/HEVC-based BPG and
state-of-the-art learning-based variable-rate methods. Comment: 6 pages, 5 figures
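As a small illustration of the stochastic rounding idea mentioned above (the paper's networks are not reproduced here), the NumPy sketch below rounds each value up with probability equal to its fractional part, so the quantizer is unbiased in expectation; the test values are arbitrary.

```python
import numpy as np

def stochastic_round(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Round down or up at random so that the expected result equals x."""
    floor = np.floor(x)
    frac = x - floor
    return floor + (rng.random(x.shape) < frac)

rng = np.random.default_rng(0)
x = np.full(100_000, 0.3)
print(stochastic_round(x, rng).mean())   # close to 0.3, whereas np.round(0.3) == 0.0
```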
Point cloud data compression
The rapid growth in the popularity of Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) experiences has resulted in an exponential surge of three-dimensional data. Point clouds have emerged as a commonly employed representation for capturing and visualizing three-dimensional data in these environments. Consequently, there has been a substantial research effort dedicated to developing efficient compression algorithms for point cloud data. This Master's thesis aims to investigate the current state-of-the-art lossless point cloud geometry compression techniques, explore some of these techniques in more detail, and then propose improvements and/or extensions to enhance them and provide directions for future work on this topic.