Map online system using internet-based image catalogue
Digital maps carry geodata, such as coordinates, that is essential to topographic and thematic mapping; this information is especially meaningful in the military field. Because the maps embed this information, the image files become very large: the bigger the image, the more storage is required and the longer the loading time. These conditions make the images unsuitable for an image catalogue delivered over the Internet. With compression techniques, the image size can be reduced while the image quality remains largely preserved. This report focuses on image compression using wavelet technology, which currently performs better than many other image compression techniques. The compressed images are applied in a system called Map Online, which uses an Internet-based image catalogue approach. The system allows users to buy maps online, download the maps they have bought, and search for maps using several meaningful keywords. This system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realize the organization's vision.
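The wavelet idea the report relies on can be sketched compactly: transform the image, then discard the smallest coefficients. The following is a minimal single-level 2-D Haar transform with hard thresholding in Python/NumPy, for illustration only; the report's actual wavelet choice and thresholding rule are not specified here.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform (orthonormal)."""
    # Transform rows: averages on the left, details on the right.
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    rows = np.hstack([a, d])
    # Transform columns: averages on top, details on the bottom.
    a = (rows[0::2, :] + rows[1::2, :]) / np.sqrt(2)
    d = (rows[0::2, :] - rows[1::2, :]) / np.sqrt(2)
    return np.vstack([a, d])

def ihaar2d(coeffs):
    """Exact inverse of haar2d (undo columns first, then rows)."""
    h, w = coeffs.shape
    a, d = coeffs[: h // 2, :], coeffs[h // 2 :, :]
    rows = np.empty((h, w))
    rows[0::2, :] = (a + d) / np.sqrt(2)
    rows[1::2, :] = (a - d) / np.sqrt(2)
    a, d = rows[:, : w // 2], rows[:, w // 2 :]
    img = np.empty((h, w))
    img[:, 0::2] = (a + d) / np.sqrt(2)
    img[:, 1::2] = (a - d) / np.sqrt(2)
    return img

def compress(img, keep=0.1):
    """Keep only the largest `keep` fraction of coefficients (hard threshold)."""
    c = haar2d(img)
    thresh = np.quantile(np.abs(c), 1.0 - keep)
    c[np.abs(c) < thresh] = 0.0
    return c
```

Because most of a natural image's energy concentrates in a few wavelet coefficients, zeroing the rest barely changes the reconstruction while making the coefficient array highly compressible by an entropy coder.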
Parallel Implementation of Lossy Data Compression for Temporal Data Sets
Many scientific data sets contain temporal dimensions: they store information at
the same spatial locations but at different time stamps. Some of the biggest
temporal datasets are produced by parallel computing applications such as
simulations of climate change and fluid dynamics. Temporal datasets can be very
large and can take a long time to transfer among storage locations. With data
compression techniques, files can be transferred faster and occupy less storage
space. NUMARCK is a lossy data compression algorithm for temporal data sets
that learns emerging distributions of element-wise change ratios along the
temporal dimension and encodes them into an index table for a concise
representation. This paper presents a parallel implementation of NUMARCK.
Evaluated with six data sets obtained from climate and astrophysics
simulations, parallel NUMARCK achieved scalable speedups of up to 8788 when
running 12800 MPI processes on a parallel computer. We also compare its
compression ratios against two lossy data compression algorithms, ISABELA and
ZFP. The results show that NUMARCK achieves higher compression ratios than
ISABELA and ZFP.
Comment: 10 pages, HiPC 201
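NUMARCK's core idea, binning the element-wise change ratios between consecutive time steps and storing only a small per-element index table plus a codebook of representative ratios, can be illustrated as follows. This sketch uses simple quantile bins in place of the paper's learned clustering, so it is an approximation of the mechanism, not the published algorithm.

```python
import numpy as np

def numarck_encode(prev, curr, bits=4):
    """Bin element-wise change ratios into 2**bits intervals (illustrative).

    Quantile bins stand in for NUMARCK's learned clustering step.
    Assumes `prev` contains no zeros.
    """
    nbins = 2 ** bits
    ratio = (curr - prev) / prev
    edges = np.quantile(ratio, np.linspace(0.0, 1.0, nbins + 1))
    idx = np.clip(np.searchsorted(edges, ratio, side="right") - 1, 0, nbins - 1)
    centers = (edges[:-1] + edges[1:]) / 2       # codebook: one ratio per bin
    return idx.astype(np.uint8), centers         # compact index table + codebook

def numarck_decode(prev, idx, centers):
    """Reconstruct the next time step from the previous one and the index table."""
    return prev * (1.0 + centers[idx])
```

Because temporal change ratios in simulation data tend to cluster tightly, a few bits per element often suffice, which is where the compression comes from.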
Hybrid Algorithmic Approach for Medical Image Compression Based on Discrete Wavelet Transform (DWT) and Huffman Techniques for Cloud Computing
As medical imaging facilities move towards completely filmless imaging and generate large volumes of image data through various advanced medical modalities, the ability to store, share, and transfer images on a cloud-based system is essential for maximizing efficiency. The major issue that arises in teleradiology is the difficulty of transmitting large volumes of medical data over relatively low bandwidth. Image compression techniques have increased viability by reducing the bandwidth requirement and enabling cost-effective delivery of medical images for primary diagnosis. Wavelet transforms are widely used in image compression because they allow analysis of images at various levels of resolution and have good compression characteristics. The algorithm discussed in this paper employs the wavelet toolbox of MATLAB. Multilevel decomposition of the original image is performed using the Haar wavelet transform, and the image is then quantized and coded with the Huffman technique. The wavelet packet is applied for reconstruction of the compressed image. The simulation results show that the algorithm reconstructs images well and achieves a better compression ratio, and the study also shows that the approach is valuable for medical image compression on cloud platforms.
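The Huffman stage of such a DWT-plus-Huffman pipeline can be sketched independently of MATLAB. Below is a minimal Huffman code-table builder using only the Python standard library; in the pipeline above, the symbols would be the quantized wavelet coefficients.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix-free Huffman code table {symbol: bitstring}."""
    freq = Counter(symbols)
    if len(freq) == 1:                    # degenerate single-symbol stream
        return {next(iter(freq)): "0"}
    # Heap entries: [total_freq, tiebreak, [sym, code], [sym, code], ...]
    heap = [[f, i, [s, ""]] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)          # two least frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]       # extend codes going down the tree
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        count += 1
        heapq.heappush(heap, [lo[0] + hi[0], count] + lo[2:] + hi[2:])
    return {s: code for s, code in heap[0][2:]}
```

Frequent coefficient values (typically the many near-zero ones after thresholding) receive short codes, which is what yields the compression gain after the wavelet stage.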
Speech Compression Using Discrete Wavelet Transform
Speech compression is an area of digital signal processing that focuses on
reducing the bit rate of the speech signal for transmission or storage without
significant loss of quality. The wavelet transform has recently been proposed
for signal analysis, and speech signal compression using the wavelet transform
is given considerable attention in this thesis.
Speech coding is a lossy scheme and is implemented here to compress the
one-dimensional speech signal. The scheme consists of four operations: the
transform, thresholding (by-level and global thresholds), quantization, and
entropy encoding. The reconstruction of the compressed signal, as well as the
detailed steps needed, is also discussed.
The performance of wavelet compression is compared against Linear Predictive
Coding and Global System for Mobile Communications (GSM) algorithms using SNR,
PSNR, NRMSE, and compression ratio.
Software simulating the lossy compression scheme was developed in MATLAB 6.
This software provides basic speech analysis as well as the compression and
decompression operations. The results obtained show a reasonably high
compression ratio and good signal quality.
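The evaluation metrics named above have standard definitions that can be stated compactly. The helpers below are a plain NumPy sketch; the thesis's exact peak convention for PSNR is not specified, so the peak is taken here as the maximum absolute sample value.

```python
import numpy as np

def snr_db(orig, rec):
    """Signal-to-noise ratio in dB: signal energy over error energy."""
    return 10.0 * np.log10(np.sum(orig ** 2) / np.sum((orig - rec) ** 2))

def psnr_db(orig, rec):
    """Peak SNR in dB, with peak taken as max |sample| (an assumption)."""
    peak = np.max(np.abs(orig))
    return 10.0 * np.log10(peak ** 2 * orig.size / np.sum((orig - rec) ** 2))

def nrmse(orig, rec):
    """Root-mean-square error normalized by the signal's dynamic range."""
    return np.sqrt(np.mean((orig - rec) ** 2)) / (orig.max() - orig.min())

def compression_ratio(orig_bits, comp_bits):
    """Original size over compressed size; higher is better."""
    return orig_bits / comp_bits
```

Together these capture both fidelity (SNR, PSNR, NRMSE) and size reduction (compression ratio), which is why all four are reported when comparing codecs.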
SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks
The fast growth in the computational power and scale of modern supercomputing
systems has raised great challenges for the management of exascale scientific
data. To maintain the usability of scientific data, error-bounded lossy
compression has been proposed and developed as an essential technique for
reducing the size of scientific data under constrained data distortion. Among
the diverse datasets generated by various scientific simulations, certain
datasets cannot be effectively compressed by existing error-bounded lossy
compressors with traditional techniques. The recent success of Artificial
Intelligence has inspired several researchers to integrate neural networks into
error-bounded lossy compressors. However, those works still suffer from limited
compression ratios and/or extremely low efficiency. To address those issues and
improve compression on hard-to-compress datasets, in this paper we propose
SRN-SZ, a deep-learning-based scientific error-bounded lossy compressor that
leverages a hierarchical data-grid expansion paradigm implemented by
super-resolution neural networks. SRN-SZ applies the advanced super-resolution
network HAT for its compression, which requires no time-consuming per-dataset
training. In experiments against various state-of-the-art compressors, SRN-SZ
achieves up to 75% higher compression ratios under the same error bound and up
to 80% higher compression ratios under the same PSNR than the second-best
compressor.
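The "error-bounded" guarantee that SZ-family compressors provide can be illustrated with the predict-then-quantize step they share: each value is predicted, and the residual is quantized so that the pointwise reconstruction error never exceeds the bound. The sketch below uses a trivial previous-value predictor; it is a generic illustration of the mechanism, not SRN-SZ's super-resolution predictor.

```python
import numpy as np

def eb_compress(data, eb):
    """Predict-then-quantize with guaranteed |data[i] - rec[i]| <= eb."""
    q = np.empty(len(data), dtype=np.int64)
    rec = np.empty(len(data))
    pred = 0.0                          # trivial predictor: last reconstructed value
    for i, x in enumerate(data):
        qi = int(round((x - pred) / (2.0 * eb)))
        q[i] = qi                       # only these integer codes are entropy-coded
        rec[i] = pred + qi * 2.0 * eb   # the decoder reconstructs the same value
        pred = rec[i]                   # predict from the reconstruction, so
                                        # errors never accumulate across elements
    return q, rec
```

Rounding to the nearest multiple of 2*eb keeps every residual within eb of its code, and because the predictor sees only reconstructed values, the decoder stays in lockstep with the encoder; a stronger predictor (such as SRN-SZ's super-resolution network) shrinks the integer codes and thus improves the compression ratio without weakening the bound.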
Quantifying and containing the curse of high resolution coronal imaging
Future missions such as Solar Orbiter (SO), InterHelioprobe, or Solar Probe
aim at approaching the Sun closer than ever before, carrying on board
high-resolution imagers (HRI) with a sub-second cadence and a pixel area of
about … at the Sun during perihelion. In order to guarantee their scientific
success, it is necessary to evaluate whether the photon counts available at
these resolutions and cadences will provide a sufficient signal-to-noise ratio
(SNR). We take a first step in this direction by analyzing and characterizing
the spatial intermittency of Quiet Sun images by means of a multifractal
analysis. We identify the parameters that specify the scale-invariant behavior.
This identification then allows us to select a family of multifractal
processes, namely Compound Poisson Cascades, that can synthesize artificial
images having some of the scale-invariance properties observed in the recorded
images.
The prevalence of self-similarity in Quiet Sun coronal images makes it
relevant to study the ratio between the SNR present in SoHO/EIT images and in
coarsened images. SoHO/EIT images thus play the role of 'high-resolution'
images, whereas the 'low-resolution' coarsened images are rebinned so as to
simulate a smaller angular resolution and/or a larger distance to the Sun. For
a fixed difference in angular resolution and in spacecraft-Sun distance, we
determine the proportion of pixels whose SNR is preserved at high resolution
given a particular increase in effective area. If scale invariance continues to
prevail at smaller scales, the conclusions reached with SoHO/EIT images can be
transposed to the situation where the resolution is increased from SoHO/EIT to
SO/HRI resolution at perihelion.
Comment: 25 pages, 1 table, 7 figures
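The rebinning experiment described above, summing blocks of pixels to simulate a coarser detector, can be sketched in a few lines. Under Poisson photon statistics the per-pixel SNR scales as the square root of the expected counts, so k x k rebinning improves it by a factor of k. This is a minimal illustration of that scaling, not the paper's analysis pipeline.

```python
import numpy as np

def rebin(counts, k):
    """Sum k x k blocks of photon counts, simulating k-times-coarser pixels."""
    h, w = counts.shape
    assert h % k == 0 and w % k == 0
    return counts.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

def poisson_snr(expected_counts):
    """For Poisson noise, per-pixel SNR = N / sqrt(N) = sqrt(N)."""
    return np.sqrt(expected_counts)
```

For a uniform source, 2 x 2 rebinning quadruples the counts per pixel and hence doubles the per-pixel SNR; this is the trade-off between angular resolution and SNR that the study quantifies for intermittent, non-uniform coronal images.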