A comparative study on improvement of image compression method using hybrid of DCT and DWT techniques with huffman encoding
Images are an important medium for visualizing or representing messages exchanged between users' devices. Nowadays, many applications involve digital image processing, including security, communication and medical systems. Images are known for their large data size, especially high-resolution images, so image compression is important to reduce storage requirements and achieve specific application goals. In this research, a hybrid of the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT) and Huffman coding is proposed. The stand-alone DCT, DWT and Huffman techniques are executed before all the techniques are hybridized. The performance in terms of image quality, compression ratio and computing time is carefully observed by evaluating the Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), compression ratio, and compression and decompression time. The proposed hybrid technique is found to reduce storage size with a 3.72:1 compression ratio and a short computing time of 5 seconds. The image quality is slightly reduced compared with the original image, with MSE, PSNR and SSIM values of 52.74, 30.92 dB and 0.90, respectively. In conclusion, the DWT can compress an image within a short time, while the DCT and Huffman coding reduce data loss during compression and maintain good image quality. Therefore, the DCT, DWT and Huffman methods are combined to complement each other and produce good overall performance
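The evaluation metrics the abstract reports (MSE, PSNR, compression ratio) have standard definitions; a minimal NumPy sketch, not taken from the paper, illustrating how such figures are computed:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean Square Error between two images of equal shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

def compression_ratio(original_bytes, compressed_bytes):
    """Uncompressed size over compressed size, e.g. 3.72 means 3.72:1."""
    return original_bytes / compressed_bytes

# Toy example: an image degraded by mild quantization-like noise.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.integers(-3, 4, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 2))
```

A PSNR around 30 dB with SSIM near 0.9, as reported above, is a common operating point for lossy image codecs: visibly near-lossless at moderate compression.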
Parallel data compression
Data compression schemes remove data redundancy in communicated and stored data and increase the effective capacities of communication and storage devices. Parallel algorithms and implementations for textual data compression are surveyed. Related concepts from parallel computation and information theory are briefly discussed. Static and dynamic methods for codeword construction and transmission on various models of parallel computation are described. Included are parallel methods which boost system speed by coding data concurrently, and approaches which employ multiple compression techniques to improve compression ratios. Theoretical and empirical comparisons are reported and areas for future research are suggested
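One of the parallel approaches the survey mentions, coding data concurrently in independent blocks, can be sketched with Python's standard library; this is an illustrative example, not a method from the survey, and block boundaries cost a little compression ratio in exchange for parallelism:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_parallel(data: bytes, block_size: int = 1 << 16, workers: int = 4):
    """Split the input into fixed-size blocks and compress them concurrently.

    Each block is independently decompressible; zlib releases the GIL,
    so threads give real speedup on multi-core machines.
    """
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, blocks))

def decompress_parallel(blocks, workers: int = 4) -> bytes:
    """Decompress the blocks concurrently and reassemble the original data."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(zlib.decompress, blocks))

payload = b"parallel data compression " * 10_000
packed = compress_parallel(payload)
assert decompress_parallel(packed) == payload
```

Per-block dictionaries cannot exploit redundancy across block boundaries, which is the ratio-versus-speed trade-off the survey's comparisons quantify.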
Source and Physical-Layer Network Coding for Correlated Two-Way Relaying
In this paper, we study a half-duplex two-way relay channel (TWRC) with correlated sources exchanging bidirectional information. When both sources have knowledge of the correlation statistics, a source compression with physical-layer network coding (SCPNC) scheme is proposed to perform distributed compression at each source node. When only the relay has knowledge of the correlation statistics, we propose a relay compression with physical-layer network coding (RCPNC) scheme to compress the bidirectional messages at the relay. Closed-form block error rate (BLER) expressions for both schemes are derived and verified through simulations. The proposed schemes are shown to achieve considerable improvements in both error performance and throughput over the conventional non-compression scheme in correlated two-way relay networks (CTWRNs).
Comment: 15 pages, 6 figures. IET Communications, 201
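The SCPNC and RCPNC schemes build on the standard physical-layer network coding idea, in which the relay forwards an XOR combination rather than the individual messages. A noiseless toy sketch of that baseline idea (the paper's actual schemes add compression and operate under channel noise, which this omits):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16  # message length in bits

# Nodes A and B each hold a message and want the other's.
msg_a = rng.integers(0, 2, n)
msg_b = rng.integers(0, 2, n)

# Multiple-access phase: the relay recovers the bitwise XOR of the two
# messages (modeled here without noise).
relay_bits = msg_a ^ msg_b

# Broadcast phase: each node XORs the relay's broadcast with its own
# message to recover its partner's message.
recovered_at_a = relay_bits ^ msg_a   # equals msg_b
recovered_at_b = relay_bits ^ msg_b   # equals msg_a

assert np.array_equal(recovered_at_a, msg_b)
assert np.array_equal(recovered_at_b, msg_a)
```

The exchange completes in two channel uses instead of the four a conventional store-and-forward relay needs, which is where the throughput gain of physical-layer network coding comes from.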
Iterative Slepian-Wolf Decoding and FEC Decoding for Compress-and-Forward Systems
While many studies have concentrated on theoretical analysis of relay-assisted compress-and-forward (CF) systems, little effort has yet been devoted to constructing and evaluating a practical system. In this paper, a practical CF system incorporating an error-resilient multilevel Slepian-Wolf decoder is introduced, and a novel iterative processing structure is proposed that allows information exchange between the Slepian-Wolf decoder and the forward error correction decoder of the main source message. In addition, a new quantization scheme is incorporated to avoid the complexity of reconstructing the relay signal at the final decoder of the destination. The results demonstrate that the iterative structure not only reduces the decoding loss of the Slepian-Wolf decoder but also improves the decoding performance of the main message from the source
Statistical mechanics of lossy data compression using a non-monotonic perceptron
The performance of a lossy data compression scheme for uniformly biased Boolean messages is investigated via methods of statistical mechanics. Inspired by a formal similarity to the storage capacity problem in neural network research, we utilize a perceptron whose transfer function is appropriately designed to compress and decode the messages. Employing the replica method, we analytically show that our scheme can achieve the optimal performance known in the framework of lossy compression in most cases in the limit of infinite code length. The validity of the obtained results is numerically confirmed.
Comment: 9 pages, 5 figures, Physical Review
IETF standardization in the field of the Internet of Things (IoT): a survey
Smart embedded objects will become an important part of what is called the Internet of Things. However, the integration of embedded devices into the Internet introduces several challenges, since many of the existing Internet technologies and protocols were not designed for this class of devices. In the past few years, there have been many efforts to enable the extension of Internet technologies to constrained devices. Initially, this resulted in proprietary protocols and architectures. Later, the integration of constrained devices into the Internet was embraced by the IETF, moving towards standardized IP-based protocols. In this paper, we briefly review the history of integrating constrained devices into the Internet, followed by an extensive overview of IETF standardization work in the 6LoWPAN, ROLL and CoRE working groups. This is complemented with a broad overview of related research results that illustrate how this work can be extended or used to tackle other problems, and with a discussion of open issues and challenges. As such, the aim of this paper is twofold: apart from giving readers solid insights into IETF standardization work on the Internet of Things, it also aims to encourage readers to further explore the world of Internet-connected objects, pointing to future research opportunities
How to Solve the Fronthaul Traffic Congestion Problem in H-CRAN?
The design of efficient wireless fronthaul connections for future heterogeneous networks incorporating emerging paradigms such as the heterogeneous cloud radio access network (H-CRAN) has become a challenging task that requires the most effective utilization of fronthaul network resources. In this paper, we propose and analyze possible solutions to alleviate fronthaul traffic congestion in the scenario of Coordinated Multi-Point (CoMP) transmission for 5G cellular traffic, which is expected to reach the zettabyte scale by 2017. In particular, we propose to use distributed compression to reduce the fronthaul traffic in H-CRAN. Unlike the conventional approach, where each coordinating point quantizes and forwards its own observation to the processing centre, these observations are compressed before forwarding. At the processing centre, the decompression of the observations and the decoding of the user messages are conducted jointly. Our results reveal that, in both dense and ultra-dense urban small-cell deployment scenarios, the use of distributed compression with joint operation can reduce the required fronthaul rate by more than 50%
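The intuition behind distributed compression is that the coordinating points' observations are highly correlated, so forwarding them independently wastes fronthaul rate. A loose toy illustration of that intuition (not the paper's scheme; zlib on byte streams stands in for proper distributed source coding):

```python
import zlib
import numpy as np

rng = np.random.default_rng(2)

# Two coordinating points observe nearly the same user signal:
# obs2 differs from obs1 only by a sparse, small perturbation.
obs1 = rng.integers(0, 256, 50_000, dtype=np.uint8)
noise = rng.integers(0, 2, 50_000, dtype=np.uint8)
obs2 = obs1 + noise  # uint8 arithmetic wraps modulo 256

# Conventional: each point compresses its observation independently.
independent = (len(zlib.compress(obs1.tobytes()))
               + len(zlib.compress(obs2.tobytes())))

# Correlation-aware: forward obs1 plus only the highly compressible
# difference, exploiting the correlation between the observations.
delta = obs2 - obs1  # values in {0, 1}, again modulo 256
joint = (len(zlib.compress(obs1.tobytes()))
         + len(zlib.compress(delta.tobytes())))

print(independent, joint)  # joint is far smaller
```

Real distributed compression (Wyner-Ziv / Berger-Tung style coding) achieves a similar saving without the points ever exchanging their observations, which is what makes it applicable to fronthaul links.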