Gabor frames for classification of paroxysmal and persistent atrial fibrillation episodes
In this study, we propose a new classification method for early differentiation of paroxysmal and persistent atrial fibrillation episodes, i.e. those which, spontaneously or with external intervention, return to sinus rhythm within 7 days of onset, versus those in which the arrhythmia is sustained for more than 7 days. Today, clinicians can classify patients only once the course of the arrhythmia has been disclosed. This classification problem is addressed in this study. We compute a sparse representation of surface electrocardiogram signals by means of Gabor frames and then apply linear discriminant analysis. We thus provide an early discrimination, obtaining promising performance on a cohort of patients that is heterogeneous in terms of pharmacological treatment and state of progression of the arrhythmia: 95% sensitivity, 82% specificity, 89% accuracy. In this manner, the proposed method can help clinicians choose the most appropriate treatment using the electrocardiogram, a widely available and non-invasive technique. This early differentiation is clinically highly significant for selecting patients who may undergo catheter ablation with higher success rates. (C) 2016 IPEM. Published by Elsevier Ltd. All rights reserved. This work was supported by Generalitat Valenciana under Grant PrometeoII/2013/013, and by MINECO under Grants MTM2013-43540-P and MTM2016-76647-P. Ortigosa, N.; Galbis Verdu, A.; Fernández, C.; Cano-Pérez, Ó. (2017). Gabor frames for classification of paroxysmal and persistent atrial fibrillation episodes. Medical Engineering & Physics, 39:31-37. https://doi.org/10.1016/j.medengphy.2016.10.013
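The pipeline described above, a windowed-Fourier (Gabor-type) expansion of the ECG followed by linear discriminant analysis, can be sketched roughly as follows. This is an illustrative reconstruction on synthetic two-class signals, not the authors' implementation; the sampling rate, window length, and time-averaged magnitude features are assumptions.

```python
import numpy as np
from scipy.signal import stft
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs = 250  # Hz, an assumed surface-ECG sampling rate

def make_signal(f_hz, n=2000):
    """Toy stand-in for an ECG episode with dominant rate f_hz (not real ECG)."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * f_hz * t) + 0.3 * rng.standard_normal(n)

# Two synthetic "classes" differing only in their dominant frequency
signals = [make_signal(5.0) for _ in range(20)] + [make_signal(8.0) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

def gabor_features(x):
    """Time-averaged magnitudes of a windowed-Fourier (Gabor-type) expansion."""
    _, _, Z = stft(x, fs=fs, nperseg=128)
    return np.abs(Z).mean(axis=1)

X = np.array([gabor_features(s) for s in signals])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.score(X, labels))  # training accuracy on the toy data
```

The averaging over time collapses each episode to a fixed-length spectral descriptor; the actual paper works with a sparse Gabor-frame expansion rather than a plain STFT.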
High-rate compression of ECG signals by an accuracy-driven sparsity model relying on natural basis
Long-duration recordings of ECG signals require high compression ratios, in particular when stored on portable devices. Most ECG compression methods in the literature are based on the wavelet transform, while only a few rely on sparsity-promoting models. In this paper we propose a novel ECG signal compression framework based on sparse representation, using a set of ECG segments as a natural basis. This approach exploits the signal's regularity, i.e. the repetition of common patterns, in order to achieve a high compression ratio (CR). We apply k-LiMapS as a fine-tuned sparsity solver algorithm guaranteeing the required signal reconstruction quality, measured by the Normalized Percentage Root-mean-square Difference (PRDN). Extensive experiments have been conducted on all 48 records of the MIT-BIH Arrhythmia Database and on several 24-hour records from the Long-Term ST Database. Direct comparisons of our method with several state-of-the-art ECG compression methods (namely ARLE, Rajoub's, SPIHT, TRE) prove its effectiveness. Our method achieves average performances that are two to three times higher than those obtained by the other assessed methods. In particular, the compression ratio gap between our method and the others increases with growing PRDN.
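The PRDN quality measure mentioned above has a standard closed form: the root-mean-square reconstruction error normalized by the energy of the mean-removed original signal, expressed as a percentage. A minimal sketch:

```python
import numpy as np

def prdn(x, x_rec):
    """Normalized Percentage Root-mean-square Difference: reconstruction
    error energy relative to the energy of the mean-removed original."""
    x = np.asarray(x, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2)
                           / np.sum((x - x.mean()) ** 2))

x = np.sin(np.linspace(0.0, 8.0 * np.pi, 1000))
print(prdn(x, 0.98 * x))  # ~2% for a 2% amplitude error
```

Removing the mean in the denominator makes the score insensitive to baseline offsets, which is why the normalized variant is preferred for ECG comparisons.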
SPARSE RECOVERY BY NONCONVEX LIPSCHITZIAN MAPPINGS
In recent years, the sparsity concept has attracted considerable attention in areas of applied mathematics and computer science, especially in signal and image processing. The general framework of sparse representation is now a mature concept with a solid basis in relevant mathematical fields, such as probability, the geometry of Banach spaces, harmonic analysis, theory of computability, and information-based complexity. Together with theoretical and practical advancements, several numerical methods and algorithmic techniques have also been developed in order to capture the complexity and the wide scope that the theory suggests. Sparse recovery relies on the fact that many signals can be represented sparsely, using only a few nonzero coefficients in a suitable basis or overcomplete dictionary. Unfortunately, this problem, also called ℓ0-norm minimization, is not only NP-hard, but also hard to approximate within an exponential factor of the optimal solution. Nevertheless, many heuristics for the problem have been devised and applied in numerous settings. This thesis provides new regularization methods for the sparse representation problem, with applications to face recognition and ECG signal compression. The proposed methods are based on a fixed-point iteration scheme which combines nonconvex Lipschitzian-type mappings with canonical orthogonal projectors. The former aim at uniformly enhancing the sparsity level through shrinking effects; the latter project back onto the feasible space of solutions. In the second part of this thesis we study two applications in which sparsity has been successfully applied in recent areas of signal and image processing: the face recognition problem and the ECG signal compression problem.
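The shrink-then-project fixed-point scheme can be illustrated with classical iterative hard thresholding, which alternates a Lipschitz gradient step on the data-fit term with projection onto the set of k-sparse vectors. This is a generic stand-in for the family of methods described, not the thesis's specific mapping; the problem sizes are arbitrary.

```python
import numpy as np

def iht(A, b, k, iters=300):
    """Iterative hard thresholding: a gradient (Lipschitzian) step on
    ||Ax - b||^2 followed by projection onto the k-sparse set."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # inverse squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (b - A @ x))   # gradient step
        small = np.argsort(np.abs(x))[:-k]   # all but the k largest entries
        x[small] = 0.0                       # hard-thresholding projection
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200)) / np.sqrt(60)   # random sensing matrix
x_true = np.zeros(200)
support = rng.choice(200, size=5, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=5) * rng.uniform(1.0, 2.0, size=5)
b = A @ x_true
x_hat = iht(A, b, k=5)
print(np.linalg.norm(A @ x_hat - b))  # residual after recovery
```

The two stages map directly onto the abstract's description: the thresholding enhances sparsity by a shrinking effect, and keeping only k entries is the projection onto the feasible set.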
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international - Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Intelligent Biosignal Processing in Wearable and Implantable Sensors
This reprint provides a collection of papers illustrating the state of the art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.
Motion Artifact Processing Techniques for Physiological Signals
The combination of reducing birth rate and increasing life expectancy continues to drive
the demographic shift toward an ageing population and this is placing an ever-increasing
burden on our healthcare systems. The urgent need to address this so-called healthcare
"time bomb" has led to a rapid growth in research into ubiquitous, pervasive and
distributed healthcare technologies where recent advances in signal acquisition, data
storage and communication are helping such systems become a reality. However, similar
to recordings performed in the hospital environment, artifacts continue to be a major
issue for these systems. The magnitude and frequency of artifacts can vary significantly
depending on the recording environment with one of the major contributions due to
the motion of the subject or the recording transducer. As such, this thesis addresses
the challenge of removing this motion artifact from various physiological
signals.
The preliminary investigations focus on artifact identification and the tagging of physiological
signal streams with measures of signal quality. A new method for quantifying
signal quality is developed based on the use of inexpensive accelerometers which facilitates
the appropriate use of artifact processing methods as needed. These artifact
processing methods are thoroughly examined as part of a comprehensive review of the
most commonly applicable methods. This review forms the basis for the comparative
studies subsequently presented. Then, a simple but novel experimental methodology
for the comparison of artifact processing techniques is proposed, designed and tested
for algorithm evaluation. The method is demonstrated to be highly effective for the
type of artifact challenges common in a connected health setting, particularly those concerned
with brain activity monitoring. This research primarily focuses on applying the
techniques to functional near infrared spectroscopy (fNIRS) and electroencephalography
(EEG) data due to their high susceptibility to contamination by subject motion related
artifact.
Using the novel experimental methodology, complemented with simulated data, a comprehensive
comparison of a range of artifact processing methods is conducted, allowing
the identification of the set of best performing methods. A novel artifact removal
technique is also developed, namely ensemble empirical mode decomposition with canonical
correlation analysis (EEMD-CCA), which provides the best results when applied on
fNIRS data under particular conditions. Four of the best performing techniques were
then tested on real ambulatory EEG data contaminated with movement artifacts comparable
to those observed during in-home monitoring.
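The CCA stage of the EEMD-CCA technique mentioned above can be sketched in isolation: canonical correlation between the multichannel signal and a one-sample-delayed copy orders components by autocorrelation, so weakly autocorrelated (artifact-like) components can be zeroed before reconstruction. The EEMD step is omitted, and the two-channel toy sources, mixing matrix, and sampling rate below are assumptions.

```python
import numpy as np

def cca_sources(X):
    """BSS-CCA sketch: CCA between X(t) and X(t-1) yields components
    ordered by lag-1 autocorrelation (highest first)."""
    X = X - X.mean(axis=1, keepdims=True)
    A, B = X[:, 1:], X[:, :-1]              # signal and one-sample delay
    n = A.shape[1]
    Caa, Cbb, Cab = A @ A.T / n, B @ B.T / n, A @ B.T / n
    Wa = np.linalg.inv(np.linalg.cholesky(Caa))   # whitening for A
    Wb = np.linalg.inv(np.linalg.cholesky(Cbb))   # whitening for B
    U, rho, _ = np.linalg.svd(Wa @ Cab @ Wb.T)    # whitened cross-covariance
    W = U.T @ Wa                                  # unmixing matrix
    return W @ A, W, rho                          # sources, unmixing, correlations

# Smooth "brain" rhythm plus a spiky, uncorrelated "motion" artifact source
rng = np.random.default_rng(0)
t = np.arange(5000)
brain = np.sin(2 * np.pi * 10 * t / 250)          # 10 Hz rhythm at 250 Hz
motion = rng.standard_normal(5000)                # low-autocorrelation noise
X = np.array([[1.0, 0.8], [0.6, 1.0]]) @ np.vstack([brain, motion])

S, W, rho = cca_sources(X)
S_clean = S.copy()
S_clean[-1] = 0.0                       # drop the weakly autocorrelated component
X_clean = np.linalg.pinv(W) @ S_clean   # back to channel space (one sample shorter)
```

The smooth rhythm ends up in the first canonical component and the artifact in the last, which is the property the thesis exploits after the EEMD decomposition.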
It was determined that when analysing EEG data, the Wiener filter is consistently
the best performing artifact removal technique. However, when employing the fNIRS
data, the best technique depends on a number of factors including: 1) the availability
of a reference signal and 2) whether or not the form of the artifact is known. It is
envisaged that the use of physiological signal monitoring for patient healthcare will grow
significantly over the coming decades, and it is hoped that this thesis will aid in
the progression and development of artifact removal techniques capable of supporting
this growth.
Optimised use of independent component analysis for EEG signal processing
Electroencephalography (EEG) is the prevalent technique for monitoring brain function. It employs a set of electrodes on the scalp to measure the electrical activity of the brain. EEG is mainly used by researchers to study the brain's responses to a specific stimulus - the event-related potentials (ERPs). Different types of unwanted signals, known as artefacts, usually mix with the EEG at any point during the recording process. As the amplitudes of the EEG and ERPs are very small (of the order of microvolts), they can be buried in the artefacts, which have very high amplitudes of the order of millivolts. Therefore, contamination of EEG activity by artefacts can degrade the quality of the EEG recording and may cause errors in EEG/ERP signal interpretation. Several EEG artefact removal methods already exist in the literature, and these previous studies have concentrated on manual or automatic detection of either one or a few types of EEG artefacts. Among the proposed methods, Independent Component Analysis (ICA) based techniques are commonly applied to successfully detect the artefacts. Different types of ICA algorithms have been developed, which aim to estimate the individual sources of a linearly mixed signal. However, the estimation criterion differs across the various ICA algorithms, which may deliver different results.
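A typical ICA-based artefact-removal round trip - unmix, zero the artefact component, remix - can be sketched with FastICA. The synthetic sources, mixing matrix, and kurtosis-based component selection below are illustrative assumptions, not any particular study's pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n) / 250.0
neural = np.sin(2 * np.pi * 10 * t)       # ongoing 10 Hz rhythm (microvolt-scale stand-in)
blink = np.zeros(n)
blink[rng.choice(n, size=12, replace=False)] = 50.0   # sparse high-amplitude "blinks"
mixing = np.array([[1.0, 0.9],
                   [0.7, 0.4]])           # assumed 2-channel forward model
X = (mixing @ np.vstack([neural, blink])).T           # samples x channels

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                  # estimated independent sources
# Pick the artefact component by excess kurtosis: blinks are far spikier
kurt = ((S - S.mean(0)) ** 4).mean(0) / S.var(0) ** 2 - 3.0
artefact = int(np.argmax(kurt))
S[:, artefact] = 0.0                      # zero the artefact source
X_clean = ica.inverse_transform(S)        # remix the remaining activity
```

In practice the artefact component is often selected by correlation with an EOG reference rather than kurtosis, and the number of components matches the electrode count.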