
    Comparative analysis of various Image compression techniques for Quasi Fractal lossless compression

    The most important entities to be considered in image compression methods are the peak signal-to-noise ratio (PSNR) and the compression ratio (CR). These two parameters are used to judge the quality of any image and play a vital role in image processing applications. The biomedical domain is one of the critical areas where large image datasets are involved in analysis, so biomedical image compression is essential. Compression techniques are classified into lossless and lossy. As the name indicates, in the lossless technique the image is compressed without any loss of data, whereas in the lossy technique some information may be lost. Here, both lossy and lossless techniques for image compression are used. In this research, different compression approaches from these two categories are discussed, and brain images are highlighted as the target data for the compression techniques. Both lossy and lossless techniques are implemented, and their advantages and disadvantages studied. For this research, two important quality parameters, i.e. CR and PSNR, are calculated. The existing techniques DCT, DFT, DWT and fractal compression are implemented, and new techniques are introduced, i.e. the oscillation concept method, BTC-SPIHT, and a hybrid technique using an adaptive threshold and a quasi-fractal algorithm.
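    The two quality parameters named in the abstract are simple to state precisely. A minimal sketch (assuming 8-bit greyscale images held as NumPy arrays; function names are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between an image and its reconstruction."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images, i.e. lossless reconstruction
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR = uncompressed size / compressed size; higher means stronger compression."""
    return original_bytes / compressed_bytes
```

    A lossy codec is then judged by the trade-off it offers between a high CR and a high PSNR.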

    A Deep Learning Approach for Vital Signs Compression and Energy Efficient Delivery in mhealth Systems

    Due to the increasing number of chronic disease patients, continuous health monitoring has become the top priority for health-care providers and has posed a major stimulus for the development of scalable and energy-efficient mobile health systems. Collected data in such systems are highly critical and can be affected by wireless network conditions, which, in turn, motivates the need for a preprocessing stage that optimizes data delivery adaptively with respect to network dynamics. We present in this paper adaptive single- and multiple-modality data compression schemes based on a deep learning approach, which consider acquired data characteristics and network dynamics to provide energy-efficient data delivery. Results indicate that: 1) the proposed adaptive single-modality compression scheme outperforms conventional compression methods with 13.24% and 43.75% reductions in distortion and processing time, respectively; 2) the proposed adaptive multiple-modality compression further decreases the distortion by 3.71% and 72.37% when compared with the proposed single-modality scheme and conventional methods, respectively, by leveraging inter-modality correlations; and 3) adaptive multiple-modality compression demonstrates its efficiency in terms of energy consumption, computational complexity, and response to different network states. Hence, our approach is suitable for mobile health (mHealth) applications, where smart preprocessing of vital signs can reduce energy consumption and storage requirements and cut down transmission delays to the mHealth cloud. This work was supported by NPRP through the Qatar National Research Fund (a member of the Qatar Foundation) under Grant 7-684-1-127.
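    As an illustration of the general idea (not the paper's actual architecture), a learned compressor for a vital-sign window can be as small as an autoencoder whose bottleneck width sets the compression level; an adaptive scheme would choose that width according to the current network state. A hedged PyTorch sketch, with all sizes illustrative:

```python
import torch
from torch import nn

class VitalSignAutoencoder(nn.Module):
    """Toy 1-D autoencoder: encode a window of samples into a small latent
    vector (the transmitted payload), then decode it on the receiving side."""
    def __init__(self, window: int = 256, bottleneck: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(window, 128), nn.ReLU(), nn.Linear(128, bottleneck))
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 128), nn.ReLU(), nn.Linear(128, window))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))  # reconstruction of the window
```

    In this toy setup, the window-to-bottleneck ratio (here 256/32 = 8) plays the role of the compression ratio.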

    Efficient reconfigurable architectures for 3D medical image compression

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.

    Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US), has generated a massive amount of volumetric data. This has provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In these fields, medical image compression is important, since both efficient storage and transmission of data through high-bandwidth digital communication lines are of crucial importance. Despite their advantages, most 3-D medical imaging algorithms are computationally intensive, with matrix transformation as the most fundamental operation involved in the transform-based methods. Therefore, there is a real need for high-performance systems, whilst keeping architectures flexible to allow for quick upgradeability with real-time applications. Moreover, in order to obtain efficient solutions for large medical volume data, an efficient implementation of these operations is of significant importance. Reconfigurable hardware, in the form of field programmable gate arrays (FPGAs), has been proposed as a viable system building block in the construction of high-performance systems at an economical price. Consequently, FPGAs seem an ideal candidate, able to harness and exploit inherent advantages such as massive parallelism capabilities, multimillion gate counts, and special low-power packages.

    The key achievements of the work presented in this thesis are summarised as follows. Two architectures for the 3-D Haar wavelet transform (HWT) have been proposed based on transpose-based computation and partial reconfiguration, suitable for 3-D medical imaging applications. These applications require continuous hardware servicing, and as a result dynamic partial reconfiguration (DPR) has been introduced. A comparative study of both non-partial and partial reconfiguration implementations has shown that DPR offers many advantages and leads to a compelling solution for implementing computationally intensive applications such as 3-D medical image compression. Using DPR, several large systems are mapped to small hardware resources, and the area, power consumption, and maximum frequency are optimised and improved. Moreover, an FPGA-based architecture of the finite Radon transform (FRAT) with three design strategies has been proposed: direct implementation of the pseudo-code with a sequential or pipelined description, and a block random access memory (BRAM)-based method. An analysis with various medical imaging modalities has been carried out. Results obtained for an image de-noising implementation using FRAT exhibit promising performance in reducing Gaussian white noise in medical images. In terms of hardware implementation, promising trade-offs between maximum frequency, throughput, and area are also achieved. Furthermore, a novel hardware implementation of a 3-D medical image compression system with context-based adaptive variable length coding (CAVLC) has been proposed. An evaluation of the 3-D integer transform (IT) and the discrete wavelet transform (DWT) with lifting scheme (LS) for the transform blocks reveals that the 3-D IT demonstrates better computational complexity than the 3-D DWT, whilst the 3-D DWT with LS provides lossless compression that is significantly useful for medical image compression. Additionally, an architecture of CAVLC that is capable of compressing high-definition (HD) images in real time without any buffer between the quantiser and the entropy coder is proposed. Through a judicious parallelisation, promising results have been obtained with limited resources.

    In summary, this research tackles the issue of massive 3-D medical volume data that requires compression, as well as hardware implementation to accelerate the slowest operations in the system. The results obtained also reveal a significant achievement in terms of architecture efficiency and application performance.

    Ministry of Higher Education Malaysia (MOHE), Universiti Tun Hussein Onn Malaysia (UTHM) and the British Council
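    For reference, the computation that the proposed HWT architectures accelerate can be sketched in software. A minimal one-level separable 3-D Haar transform (a NumPy sketch assuming even-sized volumes; the thesis's contribution is the FPGA architecture, not this reference code):

```python
import numpy as np

def haar3d_one_level(vol: np.ndarray) -> np.ndarray:
    """One decomposition level of the separable 3-D Haar wavelet transform:
    along each axis, sample pairs are split into scaled sums (low-pass)
    and differences (high-pass)."""
    out = vol.astype(np.float64)
    for axis in range(3):
        out = np.moveaxis(out, axis, 0)                # bring current axis to front
        low = (out[0::2] + out[1::2]) / np.sqrt(2.0)   # averages
        high = (out[0::2] - out[1::2]) / np.sqrt(2.0)  # details
        out = np.moveaxis(np.concatenate([low, high], axis=0), 0, axis)
    return out
```

    A transpose-based formulation corresponds to applying the same 1-D pass three times with the volume reordered between passes, which is what the axis loop above emulates.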

    Speckle Noise Reduction in Medical Ultrasound Images

    Ultrasound imaging is an indispensable tool for diagnosis: it reveals, in a non-invasive manner, the internal structure of the body in order to detect diseases or tissue abnormalities. Unfortunately, the presence of speckle noise in these images affects edges and fine details, which limits contrast resolution and makes diagnosis more difficult. In this paper, we propose a denoising approach which combines a logarithmic transformation and a nonlinear diffusion tensor. Since speckle noise is a multiplicative and non-white process, the logarithmic transformation is a reasonable choice to convert signal-dependent or purely multiplicative noise into additive noise. The key idea behind using a diffusion tensor is to adapt the diffusion flow to the local orientation by applying anisotropic diffusion along the direction of the coherent structures of interesting features in the image. To illustrate the effective performance of our algorithm, we present experimental results on synthetic and real echographic images.
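    The first step of the pipeline is easy to make concrete. Under the usual multiplicative speckle model I = S·n, taking logarithms gives log I = log S + log n, i.e. additive noise, after which a diffusion filter can operate. A hedged sketch (for brevity it applies plain isotropic diffusion in the log domain; the paper's method instead steers an anisotropic diffusion tensor along coherent structure directions):

```python
import numpy as np

def despeckle_log_diffusion(img: np.ndarray, iters: int = 20, dt: float = 0.2) -> np.ndarray:
    """Log-transform the image (multiplicative -> additive noise), smooth with
    explicit heat-equation diffusion steps, then map back to intensities."""
    u = np.log1p(img.astype(np.float64))        # log domain (log1p avoids log(0))
    for _ in range(iters):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap                           # dt < 0.25 keeps the scheme stable
    return np.expm1(u)                          # back to the intensity domain
```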

    The quest for "diagnostically lossless" medical image compression using objective image quality measures

    Given the explosive growth of digital image data being generated, medical communities worldwide have recognized the need for increasingly efficient methods of storage, display, and transmission of medical images. For this reason, lossy image compression is inevitable. Furthermore, it is absolutely essential to be able to determine the degree to which a medical image can be compressed before its “diagnostic quality” is compromised. This work aims to achieve “diagnostically lossless compression”, i.e., compression with no loss in either visual quality or diagnostic accuracy. Recent research by Koff et al. has shown that at higher compression levels lossy JPEG is more effective than JPEG2000 in some cases of brain and abdominal CT images. We have investigated the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression, and we provide an explanation of why JPEG performs better than JPEG2000 for certain types of CT images. Another aspect of this study concerns improved methods of assessing the diagnostic quality of compressed medical images. We have compared the performance of the structural similarity (SSIM) index, mean squared error (MSE), compression ratio, and JPEG quality factor, based on data collected in a subjective experiment involving radiologists. Receiver operating characteristic (ROC) and Kolmogorov-Smirnov analyses indicate that compression ratio is not always a good indicator of visual quality; moreover, SSIM demonstrates the best performance. We have also shown that a weighted Youden index can provide SSIM and MSE thresholds for acceptable compression. Finally, we have proposed two approaches to modifying L2-based approximations so that they conform to Weber’s model of perception. We show that the imposition of a condition of perceptual invariance in greyscale space according to Weber’s model leads to the unique (unnormalized) measure with density function ρ(t) = 1/t. This result implies that the logarithmic L1 distance is the most natural “Weberized” image metric. We provide numerical implementations of the intensity-weighted approximation methods for natural and medical images.
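    The closing claim can be made concrete with one line of calculation. Writing Weber’s model as “a change is equally noticeable when ΔI/I is constant” and integrating the abstract’s density ρ(t) = 1/t between two intensity values gives (a sketch in the abstract’s own notation, with u and v the two images on domain Ω):

```latex
\frac{\Delta I}{I} \approx \mathrm{const}
\quad\Longrightarrow\quad
d(u,v) \;=\; \int_{\Omega} \left| \int_{v(x)}^{u(x)} \frac{dt}{t} \right| dx
\;=\; \int_{\Omega} \bigl| \log u(x) - \log v(x) \bigr|\, dx
\;=\; \bigl\| \log u - \log v \bigr\|_{L^1}
```

    which is exactly the logarithmic L1 distance identified as the natural “Weberized” metric.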

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    The ever-increasing importance of accelerated information processing, communication, and storage is a major requirement of the big-data era. With the extensive rise in data availability, ease of information acquisition, and growing data rates, efficient handling emerges as a critical challenge. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering the ability of modern scanners to produce, year on year, higher-resolution and more densely sampled medical images, with increasing requirements for massive storage capacity. The bottleneck in data transmission and storage would essentially be handled by an effective compression method. Since medical information is critical and plays an influential role in diagnosis accuracy, it is strongly encouraged to guarantee exact reconstruction with no loss in quality, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks while achieving state-of-the-art results, including in data compression, this opens tremendous opportunities for contributions. While considerable efforts have been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.

    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. This prediction paradigm uses 3D local sampling information to efficiently exploit spatial similarities and redundancies in a volumetric medical context. The proposed NN-based data predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.

    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16-bit depth). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel’s neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, compared with other state-of-the-art lossless compression standards.

    This work then investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16-bit depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments was also proposed, allowing models to run in parallel without much drop in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).

    To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
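    The prediction-based pipeline described above can be sketched compactly. A hedged PyTorch illustration of a many-to-one LSTM voxel predictor in the spirit of MedZip (layer sizes and names are illustrative, not the thesis’s):

```python
import torch
from torch import nn

class VoxelPredictor(nn.Module):
    """Many-to-one predictor: a sequence of previously decoded neighbouring
    voxels is summarised by an LSTM, whose final state predicts the target voxel."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, neighbours: torch.Tensor) -> torch.Tensor:
        # neighbours: (batch, seq_len, 1) samples from the voxel's 3D neighbourhood
        _, (h_n, _) = self.lstm(neighbours)
        return self.head(h_n[-1])  # predicted target voxel value

# Lossless use: transmit residual = actual - round(prediction) via an arithmetic
# coder; the decoder reproduces the same prediction and adds the residual back.
```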

    Ensemble approach on enhanced compressed noise EEG data signal in wireless body area sensor network

    The Wireless Body Area Sensor Network (WBASN) is used for communication among sensor nodes operating on or inside the human body in order to monitor vital body parameters and movements. One of the important applications of WBASN is healthcare monitoring of patients with chronic diseases such as epileptic seizures. Normally, epileptic seizure data from the electroencephalograph (EEG) is captured and compressed in order to reduce its transmission time; however, noise at the receiving side contaminates the overall data and lowers classification accuracy. Previous work also did not take into consideration the large size of the collected EEG data; consequently, EEG transmission is bandwidth-intensive. Hence, the main goal of this work is to design a unified compression and classification framework for the delivery of EEG data in order to address its large size. A further goal is to reconstruct the compressed data and then recognize it. Therefore, a Noise Signal Combination (NSC) technique is proposed for the compression of the transmitted EEG data and the enhancement of its classification accuracy at the receiving side in the presence of noise and incomplete data. The proposed framework combines compressive sensing and the discrete cosine transform (DCT) in order to reduce the size of the transmitted data. Moreover, a Gaussian noise model of the transmission channel is practically implemented in the framework. At the receiving side, the proposed NSC is designed based on weighted voting over four classification techniques; the accuracies of these techniques, namely the Artificial Neural Network, Naïve Bayes, k-Nearest Neighbour, and Support Vector Machine classifiers, are fed to the proposed NSC. The experimental results showed that the proposed technique exceeds the conventional techniques by achieving the highest accuracy for both noiseless and noisy data. Furthermore, the framework plays a significant role in reducing the size of the data and classifying both noisy and noiseless data. The key contributions are the unified framework and the proposed NSC, which improved accuracy on large noiseless and noisy EEG datasets. The results have demonstrated the effectiveness of the proposed framework and provided several credible benefits, including simplicity and accuracy enhancement. Finally, the research improves clinical information about patients who suffer not only from epilepsy but also from other neurological, mental, or physiological disorders.
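    The voting stage, as described, can be sketched directly: each base classifier's vote is scaled by its measured accuracy, and the weighted tally decides the class. A minimal illustration (array shapes and the exact weighting are assumptions, not taken from the thesis):

```python
import numpy as np

def weighted_vote(predictions: np.ndarray, accuracies: np.ndarray) -> np.ndarray:
    """Accuracy-weighted majority voting over base classifiers.

    predictions: (n_classifiers, n_samples) integer class labels, e.g. from
                 ANN, Naive Bayes, k-NN, and SVM
    accuracies:  (n_classifiers,) per-classifier accuracy used as its vote weight
    """
    n_samples = predictions.shape[1]
    n_classes = int(predictions.max()) + 1
    scores = np.zeros((n_samples, n_classes))
    for clf_preds, weight in zip(predictions, accuracies):
        scores[np.arange(n_samples), clf_preds] += weight  # cast weighted votes
    return scores.argmax(axis=1)  # winning class for each sample
```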

    Advancements and Breakthroughs in Ultrasound Imaging

    Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers, and researchers today. Due to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advancements in electronics, computing, and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and the underlying technologies, presented by leading practitioners and researchers from many parts of the world.

    Mathematics and Digital Signal Processing

    Modern computer technology has opened up new opportunities for the development of digital signal processing methods. The applications of digital signal processing have expanded significantly and today include audio and speech processing, sonar, radar, and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others. This Special Issue aims at wide coverage of the problems of digital signal processing, from mathematical modeling to the implementation of problem-oriented systems. The basis of digital signal processing is digital filtering. Wavelet analysis implements multiscale signal processing and is used to solve applied problems of de-noising and compression. Processing of visual information, including image and video processing and pattern recognition, is actively used today in robotic systems and industrial process control. Improving digital signal processing circuits and developing new signal processing systems can improve the technical characteristics of many digital devices. The development of new methods of artificial intelligence, including artificial neural networks and brain-computer interfaces, opens up new prospects for the creation of smart technology. This Special Issue contains the latest technological developments in mathematics and digital signal processing. The presented results are of interest to researchers in the field of applied mathematics and to developers of modern digital signal processing systems.