
    Lossless and low-cost integer-based lifting wavelet transform

    Discrete wavelet transform (DWT) is a powerful tool for analyzing real-time signals, including aperiodic, irregular, noisy, and transient data, because of its capability to analyze signals in both the frequency and time domains at different resolutions. For this reason, it is used extensively in a wide range of applications in image and signal processing. Despite this wide usage, implementations of the wavelet transform are usually lossy or computationally complex, and they require expensive hardware. However, in many applications, such as medical diagnosis, reversible data hiding, and critical satellite data, a lossless implementation of the wavelet transform is desirable. It is also important to have more hardware-friendly implementations because of the transform’s recent inclusion in signal-processing modules on system-on-chips (SoCs). To address this need, this research work provides a generalized implementation of the wavelet transform using an integer-based lifting method to produce a lossless and low-cost architecture while maintaining performance close to that of the original wavelets. To achieve a general implementation method for all orthogonal and biorthogonal wavelets, the Daubechies wavelet family has been utilized first, since it is one of the most widely used wavelet families and is based on a systematic method for constructing compact-support orthogonal wavelets. Though the first two phases of this work address Daubechies wavelets, they can be generalized to apply to other wavelets as well. Subsequently, some techniques used in the earlier phases have been adopted, and the critical issues for achieving a general lossless implementation have been solved, to propose a general lossless method. The research work presented here can be divided into several phases. In the first phase, low-cost architectures of the Daubechies-4 (D4) and Daubechies-6 (D6) wavelets have been derived by applying integer-polynomial mapping (IPM). A lifting architecture has been used, which reduces the cost by half compared to the conventional convolution-based approach. Applying IPM to the floating-point polynomial filter coefficients further decreases the complexity and reduces the loss in signal reconstruction. In addition, “resource sharing” between lifting steps results in a further reduction in implementation cost and near-lossless data reconstruction. In the second phase, a completely lossless, error-free architecture has been proposed for the Daubechies-8 (D8) wavelet. Several lifting variants have been derived for the same wavelet, integer mapping has been applied, and the best variant has been determined in terms of performance, using entropy and transform coding gain. A theory has then been derived regarding the impact of scaling steps on the transform coding gain (GT). The approach results in the lowest-cost lossless architecture of the D8 in the literature, to the best of our knowledge. The proposed approach may be applied to other orthogonal wavelets, as well as biorthogonal ones, to achieve higher performance. In the final phase, a general algorithm has been proposed to implement the original filter coefficients, expressed by a polyphase matrix, in a more efficient lifting structure. This is done using a modified factorization, so that the factorized polyphase matrix does not include the lossy scaling step of the conventional lifting method. This general technique has been applied to some widely used orthogonal and biorthogonal wavelets, and its advantages have been discussed. Since the discrete wavelet transform is used in a vast number of applications, the proposed algorithms can be utilized in those cases to achieve lossless, low-cost, and hardware-friendly architectures.
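    As a minimal illustration of the rounding-based lifting idea this work generalizes, the sketch below implements the classic integer Haar (S-transform) lifting step; it is a sketch under stated assumptions, not the thesis's D4/D6/D8 architectures or IPM coefficients, and the function names are illustrative.

```python
# Lossless integer lifting sketch (Haar / S-transform): predict and update steps
# use only integer (floor) arithmetic, so the inverse recovers the input exactly.
# This is NOT the D4/D6/D8 lifting of the thesis, only the underlying principle.
import numpy as np

def forward_haar_lifting(x):
    """Split/predict/update with floor division; assumes an even-length input."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even                 # predict: detail coefficients
    s = even + (d // 2)            # update: approximation coefficients
    return s, d

def inverse_haar_lifting(s, d):
    even = s - (d // 2)            # undo update
    odd = even + d                 # undo predict
    x = np.empty(even.size + odd.size, dtype=s.dtype)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5, 2, 7, 7, 1, 0, 3, 9], dtype=np.int64)
s, d = forward_haar_lifting(x)
assert np.array_equal(inverse_haar_lifting(s, d), x)   # exact reconstruction
```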

    Classification of sporting activities using smartphone accelerometers

    In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative features from smartphone accelerometer data using the Discrete Wavelet Transform (DWT). Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today’s society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. No one classifier family has a reportable direct advantage in activity classification problems to date; thus we examine classifiers from each of the most widely used classifier families. We investigate three classification approaches: a commonly used SVM-based approach, an optimized classification model, and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths, and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset comprising soccer and field-hockey activities. An average maximum F-measure of 87% was achieved using a fusion of classifiers, which was 6% better than a single-classifier model and 23% better than a standard SVM approach.
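    A minimal sketch of the kind of pipeline the abstract describes, assuming PyWavelets and scikit-learn; the wavelet, window length, decomposition level, energy features, and classifier choices below are placeholders for illustration, not the authors' configuration.

```python
# Illustrative sketch: DWT sub-band energy features from an accelerometer window,
# fed to a soft-voting fusion of classifiers. All parameter choices are assumptions.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC

def dwt_energy_features(window, wavelet="db4", level=3):
    """Relative energy of each DWT sub-band for one axis of one window."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# toy data: 200 windows of 256 samples each, two activity classes
rng = np.random.default_rng(0)
X = np.vstack([dwt_energy_features(rng.standard_normal(256)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

# classifier fusion via soft voting, echoing the fused-classifier idea in the abstract
fused = VotingClassifier(
    estimators=[("svm", SVC(probability=True)), ("rf", RandomForestClassifier())],
    voting="soft",
).fit(X, y)
```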

    Multi-Sensor Image Registration, Fusion and Dimension Reduction

    With the development of future spacecraft formations comes a number of complex challenges, such as maintaining precise relative position and specified attitudes, as well as being able to communicate with each other. More generally, with the advent of spacecraft formations, issues related to performing on-board and automatic data computing and analysis, as well as decision planning and scheduling, will figure among the most important requirements. Among those, automatic image registration, image fusion, and dimension reduction represent intelligent technologies that would reduce mission costs, would enable autonomous decisions to be taken on-board, and would make formation flying adaptive, self-reliant, and cooperative. For both on-board and on-the-ground applications, the particular need for dimension reduction is two-fold: first, to reduce the communication bandwidth; second, as a pre-processing step to make computations feasible, simpler, and faster.

    Impact of Wavelet Kernels on Predictive Capability of Radiomic Features: A Case Study on COVID-19 Chest X-ray Images

    Radiomic analysis allows for the detection of imaging biomarkers supporting decision-making processes in clinical environments, from diagnosis to prognosis. Frequently, the original set of radiomic features is augmented by considering high-level features, such as wavelet transforms. However, several wavelet families (so-called kernels) are able to generate different multi-resolution representations of the original image, and which of them produces more salient images is not yet clear. In this study, an in-depth analysis is performed by comparing different wavelet kernels and by evaluating their impact on the predictive capabilities of radiomic models. A dataset composed of 1589 chest X-ray images was used for COVID-19 prognosis prediction as a case study. Random forest, support vector machine, and XGBoost were trained (on a subset of 1103 images) after a rigorous feature selection strategy to build up the predictive models. Next, to evaluate the models’ generalization capability on unseen data, a test phase was performed (on a subset of 486 images). The experimental findings showed that the Bior1.5, Coif1, Haar, and Sym2 kernels guarantee better and similar performance for all three machine learning models considered. Support vector machine and random forest showed comparable performance, and they were better than XGBoost. Additionally, random forest proved to be the most stable model, ensuring an appropriate balance between sensitivity and specificity.
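    A small sketch of how the compared kernels could be swapped into a wavelet-based feature extractor, assuming PyWavelets; the study's actual radiomic feature set, selection strategy, and model training are not reproduced, and the first-order statistics computed here are only illustrative.

```python
# Compare wavelet kernels on an image by decomposing with each kernel and
# computing simple first-order statistics per sub-band (illustrative only).
import numpy as np
import pywt

def wavelet_firstorder_features(image, kernel):
    cA, (cH, cV, cD) = pywt.dwt2(image, kernel)
    feats = {}
    for name, band in zip(("LL", "LH", "HL", "HH"), (cA, cH, cV, cD)):
        feats[f"{kernel}_{name}_mean"] = float(band.mean())
        feats[f"{kernel}_{name}_std"] = float(band.std())
    return feats

img = np.random.default_rng(1).random((128, 128))    # stand-in for a chest X-ray
for kernel in ("bior1.5", "coif1", "haar", "sym2"):  # kernels compared in the study
    print(wavelet_firstorder_features(img, kernel))
```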

    Ensemble of classifiers based data fusion of EEG and MRI for diagnosis of neurodegenerative disorders

    The prevalence of Alzheimer's disease (AD), Parkinson's disease (PD), and mild cognitive impairment (MCI) is rising at an alarming rate as the average age of the population increases, especially in developing nations. The efficacy of new medical treatments critically depends on the ability to diagnose these diseases at the earliest stages. To make early diagnosis available in community hospitals, an accurate, inexpensive, and noninvasive diagnostic tool must be made available. As biomarkers, the event-related potentials (ERP) of the electroencephalogram (EEG) - which have previously shown promise in automated diagnosis - in addition to volumetric magnetic resonance imaging (MRI), are relatively low-cost and readily available tools that can be used for automated diagnosis. 16-electrode EEG data were collected from 175 subjects afflicted with Alzheimer's disease, Parkinson's disease, or mild cognitive impairment, as well as from non-disease (normal control) subjects. T2-weighted MRI volumetric data were also collected from 161 of these subjects. Feature extraction methods were used to separate diagnostic information from the raw data. The EEG signals were decomposed using the discrete wavelet transform in order to isolate informative frequency bands. The MR images were processed through segmentation software to provide volumetric data of various brain regions in order to quantify potential brain tissue atrophy. Both of these data sources were utilized in a pattern recognition based classification algorithm to serve as a diagnostic tool for Alzheimer's and Parkinson's disease. Support vector machine and multilayer perceptron classifiers were used to create a classification algorithm trained with the EEG and MRI data. Extracted features were used to train individual classifiers, each learning a particular subset of the training data, whose decisions were combined using decision-level fusion. Additionally, a severity analysis was performed to diagnose between various stages of AD as well as a cognitively normal state. The study found that EEG and MRI data hold complementary information for the diagnosis of AD as well as PD. The use of both data types with decision-level fusion improves diagnostic accuracy over that of each individual data source. In the case of AD-only diagnosis, ERP data alone provided 78% diagnostic performance, MRI alone was 89%, and ERP and MRI combined was 94%. For PD-only diagnosis, ERP-only performance was 67%, MRI-only was 70%, and combined performance was 78%. MCI-only diagnosis exhibited a similar effect, with 71% ERP performance, 82% MRI performance, and 85% combined performance. Diagnosis among three subject groups showed the same trend. For PD, AD, and normal diagnosis, ERP-only performance was 43%, MRI-only was 66%, and combined performance was 71%. The severity analysis for mild AD, severe AD, and normal subjects showed the same combined effect.
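    A minimal sketch of decision-level fusion of modality-specific classifiers, assuming scikit-learn; the feature matrices, classifier choices, and labels below are stand-ins for illustration, not the study's EEG/MRI data or its ensemble configuration.

```python
# Decision-level fusion sketch: one classifier per modality, posteriors averaged.
# The study's actual SVM/MLP ensembles, DWT sub-bands, and volumetric features
# are not reproduced; all data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_eeg = rng.standard_normal((160, 40))    # stand-in DWT-based EEG features per subject
X_mri = rng.standard_normal((160, 12))    # stand-in regional brain-volume features
y = rng.integers(0, 2, size=160)          # e.g. disease vs. normal control

clf_eeg = MLPClassifier(max_iter=500).fit(X_eeg, y)
clf_mri = SVC(probability=True).fit(X_mri, y)

# fuse at the decision level: average per-class posteriors, then take the argmax
p_fused = (clf_eeg.predict_proba(X_eeg) + clf_mri.predict_proba(X_mri)) / 2
y_pred = p_fused.argmax(axis=1)
```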

    Lifting dual tree complex wavelets transform

    We describe the lifting dual-tree complex wavelet transform (LDTCWT), a lifting-based wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain their real and imaginary parts. This allows the transform to achieve approximate shift invariance and directionally selective filters while reducing computation time (properties lacking in the classical wavelet transform). We describe a way to estimate the accuracy of this approximation and to design appropriate filters to achieve it. These benefits can be exploited in applications such as denoising, segmentation, image fusion, and compression. The results of a shrinkage denoising example application demonstrate objective and subjective improvements over the dual-tree complex wavelet transform (DTCWT). Compared with the DTCWT, the new transform provides a trade-off between denoising performance, computational efficiency, and memory requirements. We use the peak signal-to-noise ratio (PSNR) alongside the structural similarity index measure (SSIM) and the SSIM map to assess denoised image quality.
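    A sketch of wavelet shrinkage denoising evaluated with PSNR and SSIM, assuming PyWavelets and scikit-image; a plain separable DWT stands in for the proposed LDTCWT (whose dual-tree lifting filters are not given here), and the universal soft threshold is a generic choice, not the paper's rule.

```python
# Wavelet shrinkage denoising sketch: estimate noise from the finest diagonal
# band, soft-threshold the detail coefficients, reconstruct, and score with
# PSNR/SSIM. Uses an ordinary DWT, not the LDTCWT from the paper.
import numpy as np
import pywt
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
clean = rng.random((128, 128))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

coeffs = pywt.wavedec2(noisy, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745          # MAD noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))               # universal threshold
den_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(b, thr, mode="soft") for b in level) for level in coeffs[1:]
]
denoised = pywt.waverec2(den_coeffs, "db4")[: clean.shape[0], : clean.shape[1]]

print(peak_signal_noise_ratio(clean, denoised, data_range=1.0))
print(structural_similarity(clean, denoised, data_range=1.0))
```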

    Analysis of Different Filters for Image Despeckling: A Review

    Digital image acquisition and processing play a significant part in clinical diagnosis. Medical images can be corrupted by noise at the time of acquisition, and removing this noise is a challenging problem. The presence of signal-dependent noise, referred to as speckle, degrades the actual quality of an image. Consequently, several techniques have been developed that focus on speckle noise reduction. The primary purpose of these techniques is to improve the visualization of an image and to serve as a preprocessing step for segmentation, feature extraction, and registration. The scope of this paper is to provide an overview of despeckling techniques.
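    As one concrete instance of the adaptive despeckling filters such reviews typically cover, the sketch below implements a basic Lee filter, assuming NumPy and SciPy; the window size and noise-variance estimate are illustrative defaults, not values taken from the review.

```python
# Classical Lee filter sketch: smooth toward the local mean where local variance
# is low (flat regions) and preserve the pixel where variance is high (edges).
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    """Adaptive local-statistics (Lee) speckle filter."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    if noise_var is None:
        noise_var = var.mean()                  # crude global noise estimate
    weight = var / (var + noise_var + 1e-12)    # how much to trust the raw pixel
    return mean + weight * (img - mean)

speckled = np.random.default_rng(0).gamma(shape=4.0, scale=0.25, size=(64, 64))
smoothed = lee_filter(speckled)
```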

    Wavelet-Based Kernel Construction for Heart Disease Classification

    Heart disease classification plays an important role in clinical diagnoses. Improving the performance of an electrocardiogram (ECG) classifier is therefore of great relevance, but it is also a challenging task. This paper proposes a novel classification algorithm using the kernel method. A kernel is constructed from the wavelet coefficients of heartbeat signals to obtain a classifier with high performance. In particular, a wavelet packet decomposition algorithm is applied to heartbeat signals to obtain the approximation and detail coefficients, which are used to calculate the parameters of the kernel. A principal component analysis algorithm with the wavelet-based kernel is employed to choose the main features of the heartbeat signals as the input of the classifier. In addition, a neural network with three hidden layers is utilized in the classifier for classifying five types of heart disease. Electrocardiogram signals from nine patients in the MIT-BIH database are used to test the proposed classifier. In order to evaluate the performance of the classifier, a multi-class confusion matrix is applied to produce the performance indexes, including accuracy, recall, precision, and F1 score. The experimental results show that the proposed method gives good results for the classification of the five mentioned types of heart disease.
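    A sketch of the pipeline shape only (wavelet packet coefficients, kernel PCA, three-hidden-layer network), assuming PyWavelets and scikit-learn; the paper's wavelet-based kernel construction is not specified in this abstract, so a standard RBF kernel stands in for it, and the data, wavelet, and layer sizes are placeholders.

```python
# Pipeline sketch: wavelet packet features -> kernel PCA -> MLP with three
# hidden layers. The RBF kernel below is a placeholder for the paper's
# wavelet-based kernel; all data here is synthetic.
import numpy as np
import pywt
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier

def wpd_features(beat, wavelet="db1", level=3):
    """Concatenate coefficients from all nodes of a wavelet packet tree."""
    wp = pywt.WaveletPacket(data=beat, wavelet=wavelet, maxlevel=level)
    return np.concatenate([node.data for node in wp.get_level(level, order="natural")])

rng = np.random.default_rng(0)
beats = rng.standard_normal((300, 256))      # stand-in heartbeat segments
y = rng.integers(0, 5, size=300)             # five heart-disease classes

X = np.vstack([wpd_features(b) for b in beats])
X_red = KernelPCA(n_components=20, kernel="rbf").fit_transform(X)   # placeholder kernel
clf = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500).fit(X_red, y)
```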