
    Discrete Frequency Warped Wavelets: Theory and Applications


    Heart sound segmentation using signal processing methods

    Heart murmurs are pathological heart sounds that originate from blood flowing with abnormal turbulence due to physiological defects of the heart, and they are a prime indicator of many heart-related diseases. Murmurs can be diagnosed via auscultation, that is, by listening with a stethoscope. However, manual detection and classification of murmurs requires clinical expertise and is highly prone to misclassification. Automated classification algorithms exist for this purpose, but they depend heavily on feature extraction from ‘segmented’ heart sound waveforms. Segmentation in this context refers to detecting and splitting cardiac cycles. However, the heart sound signal is non-stationary and typically has a low signal-to-noise ratio, which makes it very difficult to segment using no external information but the signal itself. Most commercial systems require an external electrocardiography (ECG) signal to determine the S1 and S2 peaks, but ECG is not as widely available as stethoscopes. Algorithms that segment using sound alone do exist, but a proper comparison between them on a common dataset is missing. We propose several modifications to many of these algorithms, as well as an evaluation method that allows a unified comparison of all these approaches. We have tested each combination of algorithms on a real dataset [1], which also provides manual annotations as ground truth. We also propose an ensemble of several methods, together with a heuristic for deciding which algorithm’s output to use. Whereas the tested algorithms report up to 62% accuracy, our ensemble method reports a 75% success rate. Finally, we created a tool named UpBeat to enable manual segmentation of heart sounds and construction of a ground-truth dataset. UpBeat is a starting medium for auscultation segmentation, time-domain feature extraction, and evaluation; it has automatic segmentation capabilities as well as a minimalistic drag-and-drop interface that allows manual annotation of S1 and S2 peaks. (Şahin, Devrim, M.S.)
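    The abstract does not name the individual segmentation algorithms it modifies. As a rough illustration of a common envelope-based baseline for locating candidate S1/S2 peaks from the sound alone, the sketch below uses a Hilbert envelope and peak picking with SciPy; the function name `segment_heart_sound`, the band edges, the smoothing window, and the thresholds are illustrative assumptions, not the thesis's tuned method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def segment_heart_sound(pcg, fs, min_peak_interval=0.2):
    """Locate candidate S1/S2 peaks via a Hilbert-envelope baseline.

    pcg: 1-D phonocardiogram samples; fs: sampling rate in Hz.
    Illustrative sketch only: band edges, window length, and
    thresholds are assumed values, not the thesis's parameters.
    """
    # Band-pass to the range where S1/S2 energy concentrates (~25-400 Hz).
    b, a = butter(4, [25 / (fs / 2), 400 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)

    # Smooth the analytic-signal envelope to suppress murmur/noise ripple.
    envelope = np.abs(hilbert(filtered))
    win = max(1, int(0.02 * fs))  # ~20 ms moving average
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")

    # Keep prominent peaks separated by at least min_peak_interval seconds.
    peaks, _ = find_peaks(envelope,
                          height=0.3 * envelope.max(),
                          distance=max(1, int(min_peak_interval * fs)))
    return peaks
```

    An ensemble such as the one the abstract describes could then, for example, run several detectors of this kind and keep the peak set on which most of them agree.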

    Biometrics

    Biometrics comprises methods for the unique recognition of humans based upon one or more intrinsic physical or behavioral traits. In computer science in particular, biometrics is used as a form of identity and access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem, divided into three sections: physical biometrics, behavioral biometrics, and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioral, and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and by many of the guest editors, including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park, and Dr. Sook Yoon, who also made significant contributions to the book.

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method has no requirement for a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably on heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of the proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation based on texture. The conservation of background textural detail is important in many fusion applications, as it helps define image depth and structure, which may prove crucial in surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
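    The abstract does not spell out the GLCM-based textural measure; as a sketch of the kind of second-order statistics it refers to, the snippet below computes GLCM contrast and homogeneity with scikit-image. The offsets, angles, and choice of properties are assumptions for illustration, not the dissertation's actual metric.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(image, distances=(1,), angles=(0.0, np.pi / 2)):
    """Mean GLCM contrast and homogeneity of an 8-bit grayscale image.

    Illustrative only: the dissertation's textural measure may use
    different offsets or combine other GLCM properties.
    """
    image = np.asarray(image, dtype=np.uint8)  # GLCM expects integer levels
    # Normalised, symmetric co-occurrence matrix over the pixel offsets.
    glcm = graycomatrix(image, distances=list(distances),
                        angles=list(angles), levels=256,
                        symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    return contrast, homogeneity
```

    A texture-based fusion metric along these lines would compare such statistics between each input image and the fused output, rewarding fused results that preserve the inputs' textural detail in place of the usual edge-based comparison.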

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th, to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity: what's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1