
    Revisiting QRS detection methodologies for portable, wearable, battery-operated, and wireless ECG systems

    Cardiovascular diseases are the number one cause of death worldwide. Portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used for continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help detect and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as of other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited for implementation on a mobile device. We investigate current QRS detection algorithms based on three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, with the aim of identifying a universal, fast, and robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or in the presence of particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices. Mohamed Elgendi, Björn Eskofier, Socrates Dokos, Derek Abbott
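    The survey above evaluates detectors rather than prescribing one; as a concrete reference point, the sketch below implements a classic Pan-Tompkins-style pipeline (band-pass filter, derivative, squaring, moving-window integration, thresholding) that many of the compared algorithms build on. The sampling rate, cutoff frequencies, threshold fraction, and refractory period are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a Pan-Tompkins-style QRS detector (illustrative only;
# not the evaluation code from the cited survey). Assumes a single-lead ECG
# in a 1-D NumPy array sampled at fs Hz.
import numpy as np
from scipy.signal import butter, filtfilt

def detect_qrs(ecg, fs=360.0):
    # 1) Band-pass filter (5-15 Hz) to emphasise the QRS complex.
    b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)

    # 2) Differentiate, square, and integrate over a ~150 ms moving window.
    diff = np.diff(filtered)
    squared = diff ** 2
    win = max(1, int(0.150 * fs))
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")

    # 3) Peak picking with a 200 ms refractory period. A fixed fraction of the
    #    maximum is used here for simplicity; real detectors adapt the threshold.
    threshold = 0.3 * np.max(integrated)
    refractory = int(0.200 * fs)
    peaks, last = [], -refractory
    for i in range(1, len(integrated) - 1):
        if (integrated[i] > threshold
                and integrated[i] >= integrated[i - 1]
                and integrated[i] >= integrated[i + 1]
                and i - last > refractory):
            peaks.append(i)
            last = i
    return np.array(peaks)
```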

    Acoustic sensing as a novel approach for cardiovascular monitoring at the wrist

    Cardiovascular diseases are the number one cause of death globally. An increased cardiovascular risk can be detected by regular monitoring of vital signs, including heart rate, heart rate variability (HRV), and blood pressure. For continuous vital sign monitoring, wearable systems prove very useful, as the device can be integrated into the user's lifestyle without affecting daily activities. The main challenge in monitoring these cardiovascular parameters, however, is that different measurement sites require different sensing mechanisms: no single wearable device provides sufficient physiological information to track the vital signs from one site on the body. This thesis proposes a novel concept of using acoustic sensing over the radial artery to extract cardiac parameters for vital sign monitoring. A wearable system consisting of a microphone is designed to detect the heart sounds together with the pulse wave, a capability not offered by existing wrist-based sensing methods.
    Methods: The acoustic signals recorded from the radial artery are a continuous reflection of the instantaneous cardiac activity. These signals are studied and characterised using different algorithms to extract cardiovascular parameters. The validity of the proposed principle is first demonstrated using a novel algorithm that extracts the heart rate from these signals. The algorithm uses power spectral analysis of the acoustic pulse signal to detect the S1 sounds and K-means clustering to remove motion artifacts for accurate heartbeat detection. The HRV in short-term acoustic recordings is found by extracting the S1 events using the relative information between the short- and long-term energies of the signal. The S1 events are localised using three different characteristic points, and the best representation is found by comparing the instantaneous heart rate profiles. The possibility of measuring blood pressure with the wearable device is shown by recording the acoustic signal under external pressure applied to the arterial branch. The temporal and spectral characteristics of the acoustic signal are used to extract feature signals and relate them to the systolic blood pressure (SBP) and diastolic blood pressure (DBP), respectively.
    Results: This thesis proposes three different algorithms to obtain the heart rate, the HRV, and the SBP/DBP readings from the acoustic signals recorded at the wrist. 1. The heart rate algorithm is validated on a dataset of 12 subjects with a data length of 6 hours. The results demonstrate an accuracy of 98.78%, a mean absolute error of 0.28 bpm, limits of agreement between -1.68 and 1.69 bpm, and a correlation coefficient of 0.998 with reference to a state-of-the-art PPG-based commercial device, indicating high statistical agreement between the heart rate obtained from the acoustic signal and the photoplethysmography (PPG) signal. 2. The HRV algorithm is validated on short-term acoustic signals of 5 minutes recorded from each of the 12 subjects, with a comparison against simultaneously recorded electrocardiography (ECG) and PPG signals. The instantaneous heart rate for all subjects combined achieves an accuracy of 98.50% and 98.96% with respect to the ECG and PPG signals, respectively; the time-domain and frequency-domain HRV parameters also show high statistical agreement with both references. 3. The algorithm proposed for SBP/DBP determination is validated on 104 acoustic signals recorded from 40 adult subjects. Compared with reference arm- and wrist-based monitors, the outputs yield a mean error of less than 2 mmHg and a standard deviation of error around 6 mmHg. Based on these results, this thesis shows the potential of this new sensing modality to be used as an alternative, or complement, to existing methods for the continuous monitoring of heart rate and HRV, and for spot measurement of blood pressure at the wrist.
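    As an illustration of the short-/long-term energy idea used above for S1 extraction, the sketch below flags samples where a short-term energy average exceeds a long-term one, in the style of a classic STA/LTA detector. The window lengths, threshold, and refractory period are assumptions for illustration and are not taken from the thesis; the instantaneous heart rate would then follow from the intervals between consecutive detected S1 events.

```python
# Minimal sketch of S1-event detection via a short-/long-term energy ratio
# (illustrative only; window lengths and thresholds are assumed values).
import numpy as np

def sta_lta_events(signal, fs, short_win=0.05, long_win=0.5, ratio_thr=2.5):
    """Return sample indices where the short-term energy dominates the long-term energy."""
    s = int(short_win * fs)
    l = int(long_win * fs)
    energy = np.asarray(signal, dtype=float) ** 2
    sta = np.convolve(energy, np.ones(s) / s, mode="same")
    lta = np.convolve(energy, np.ones(l) / l, mode="same") + 1e-12
    ratio = sta / lta

    # Keep local maxima of the ratio above the threshold, separated by a
    # 300 ms refractory period (a rough lower bound on the S1-to-S1 distance).
    refractory = int(0.3 * fs)
    events, last = [], -refractory
    for i in range(1, len(ratio) - 1):
        if (ratio[i] > ratio_thr and ratio[i] >= ratio[i - 1]
                and ratio[i] >= ratio[i + 1] and i - last > refractory):
            events.append(i)
            last = i
    return np.array(events)
```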

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state-of-the-art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Representative examples include brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    The Affine Uncertainty Principle, Associated Frames and Applications in Signal Processing

    Uncertainty relations play a prominent role in signal processing, stating that a signal cannot be simultaneously concentrated in the two related domains of the corresponding phase space. In particular, a new uncertainty principle for the affine group, which is directly related to the wavelet transform, has led to a new minimizing waveform. In this thesis, a frame construction is proposed which leads to approximately tight frames based on this minimizing waveform. Frame properties such as the diagonality of the frame operator as well as lower and upper frame bounds are analyzed. Additionally, three applications of such frame constructions are introduced: inpainting of missing audio data, detection of neuronal spikes in extracellularly recorded data, and peak detection in MALDI imaging data.
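    For context, the frame bounds mentioned above refer to the standard frame condition; the following is the textbook definition, not the thesis's specific affine-group construction:

```latex
% Standard frame definition (generic, not specific to the thesis):
% a family $\{\psi_i\}_{i \in I}$ in a Hilbert space $\mathcal{H}$ is a frame
% if there exist constants $0 < A \le B < \infty$ such that, for all
% $f \in \mathcal{H}$,
\[
  A \,\|f\|^2 \;\le\; \sum_{i \in I} \bigl|\langle f, \psi_i \rangle\bigr|^2 \;\le\; B \,\|f\|^2 .
\]
% The frame is tight when $A = B$, in which case the frame operator
% $Sf = \sum_{i \in I} \langle f, \psi_i \rangle \psi_i$ equals $A\,\mathrm{Id}$;
% "approximately tight" means the ratio $B/A$ is close to $1$.
```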

    Développement d'une nouvelle technique de pointé automatique pour les données de sismique réfraction (Development of a new automatic picking technique for seismic refraction data)

    Accurate picking of first-arrival times plays an important role in many seismic studies, particularly in seismic tomography and in the monitoring of reservoirs or aquifers. A new adaptive algorithm has been developed that combines three picking methods (Multi-Nested Windows, Higher Order Statistics, and the Akaike Information Criterion). It exploits the benefits of integrating three properties (energy, Gaussianity, and stationarity) that reveal the presence of first arrivals. Since estimating time uncertainties is of crucial importance for seismic tomography, the developed algorithm also automatically provides the errors associated with the picked arrival times. Comparison of the resulting arrival times with those picked manually, and with other automatic picking algorithms, demonstrates the reliable performance of this algorithm. It is nearly parameter-free, straightforward to implement, and demands low computational resources. However, a high noise level in the seismic records degrades its efficiency. To improve the signal-to-noise ratio of first arrivals, and thereby increase their detectability, double stacking in the time domain has been proposed. This approach is based on the key principle of the local similarity of stacked traces. The results demonstrate the feasibility of applying the double stacking before the automatic picking.
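    Of the three combined pickers, the Akaike Information Criterion component is the simplest to state; the sketch below shows a standard single-trace AIC picker computed directly from the waveform (Maeda's formulation). It is illustrative only and omits the Multi-Nested Windows and higher-order-statistics stages, as well as the uncertainty estimation described in the abstract.

```python
# Minimal sketch of an AIC-based first-arrival picker (illustrative only).
import numpy as np

def aic_pick(trace):
    """Return the sample index minimising the Akaike Information Criterion:
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])), picked at its minimum,
    which marks the transition from noise to signal."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(1, n - 1):
        var_pre = np.var(x[:k])    # variance of the assumed noise segment
        var_post = np.var(x[k:])   # variance of the assumed signal segment
        if var_pre > 0 and var_post > 0:
            aic[k] = k * np.log(var_pre) + (n - k - 1) * np.log(var_post)
    return int(np.argmin(aic))
```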

    Space adaptive and hierarchical Bayesian variational models for image restoration

    The main contribution of this thesis is the proposal of novel space-variant regularization or penalty terms motivated by a strong statistical rationale. In light of the connection between the classical variational framework and the Bayesian formulation, we will focus on the design of highly flexible priors characterized by a large number of unknown parameters. The latter will be automatically estimated by setting up a hierarchical modeling framework, i.e. by introducing informative or non-informative hyperpriors depending on the information available on the parameters. More specifically, in the first part of the thesis we will focus on the restoration of natural images, introducing highly parametrized distributions to model the local behavior of the gradients in the image. The resulting regularizers hold the potential to adapt to the local smoothness, directionality, and sparsity in the data. The estimation of the unknown parameters will be addressed by means of non-informative hyperpriors, namely uniform distributions over the parameter domain, thus leading to the classical Maximum Likelihood approach. In the second part of the thesis, we will address the problem of designing suitable penalty terms for the recovery of sparse signals. The space-variance in the proposed penalties, corresponding to a family of informative hyperpriors, namely generalized gamma hyperpriors, will follow directly from the assumption of independence of the components in the signal. The study of the properties of the resulting energy functionals will thus lead to the introduction of two hybrid algorithms, aimed at combining the strong sparsity promotion characterizing non-convex penalty terms with the desirable guarantees of convex optimization.
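    The connection between the variational and Bayesian formulations invoked above can be summarised by the usual MAP correspondence; the notation below is generic and illustrative, not the thesis's exact model:

```latex
% Generic MAP / variational correspondence (illustrative notation): for data
% $b = Au + e$ with Gaussian noise $e$ of variance $\sigma^2$ and a
% space-variant prior with local parameters $\lambda_i$,
\[
  \hat{u} \;=\; \arg\max_{u}\, p(u \mid b)
          \;=\; \arg\min_{u}\, \Bigl\{ \tfrac{1}{2\sigma^2}\,\|Au - b\|_2^2
          \;+\; \sum_{i} \lambda_i\, \phi\!\bigl((\nabla u)_i\bigr) \Bigr\},
\]
% where the negative log-likelihood gives the data-fidelity term and the
% negative log-prior gives the space-variant regularizer; the hierarchical
% step places hyperpriors on the $\lambda_i$ and estimates them jointly with $u$.
```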

    Sensing and Compression Techniques for Environmental and Human Sensing Applications

    In this doctoral thesis, we devise and evaluate a variety of lossy compression schemes for Internet of Things (IoT) devices such as those utilized in environmental wireless sensor networks (WSNs) and Body Sensor Networks (BSNs). We are especially concerned with the efficient acquisition of the data sensed by these systems, and to this end we advocate the use of joint (lossy) compression and transmission techniques. Environmental WSNs are considered first. For these, we present an original compressive sensing (CS) approach for the spatio-temporal compression of data. In detail, we consider temporal compression schemes based on linear approximations as well as Fourier transforms, whereas spatial and/or temporal dynamics are exploited through compression algorithms based on distributed source coding (DSC) and several CS-based algorithms. To the best of our knowledge, this is the first work presenting a systematic performance evaluation of these (different) lossy compression approaches. The selected algorithms are framed within the same system model, and a comparative performance assessment is carried out, evaluating their energy consumption versus the attainable compression ratio. As a further main contribution of this thesis, we design and validate a novel CS-based compression scheme, termed covariogram-based compressive sensing (CB-CS), which combines a new sampling mechanism with an original covariogram-based approach for the online estimation of the covariance structure of the signal. As a second main research topic, we focus on modern wearable IoT devices which enable the monitoring of vital parameters such as heart or respiratory rates (RESP), electrocardiography (ECG), and photoplethysmographic (PPG) signals within e-health applications. These devices are battery operated and communicate the vital signs they gather through a wireless interface. A common issue with this technology is that signal transmission is often power-demanding, which poses serious limitations to the continuous monitoring of biometric signals. To ameliorate this, we advocate the use of lossy signal compression at the source: this considerably reduces the size of the data that has to be sent to the acquisition point, thereby boosting the battery life of the wearables and allowing for fine-grained and long-term monitoring. Considering one-dimensional biosignals such as ECG, RESP and PPG, which are often available from commercial wearable devices, we first provide a thorough review of existing compression algorithms. We then present novel approaches based on online dictionaries, elucidating their operating principles and providing a quantitative assessment of the compression, reconstruction, and energy consumption performance of all schemes. As part of this first investigation, dictionaries are built using a suboptimal but lightweight, online, best-effort algorithm. Surprisingly, the obtained compression scheme is found to be very effective both in terms of compression efficiency and reconstruction accuracy at the receiver. This approach is, however, not yet amenable to practical implementation, as its memory usage is rather high. Also, our systematic performance assessment reveals that the most efficient compression algorithms allow reductions in the signal size of up to 100 times, which entail similar reductions in the energy demand, while still keeping the reconstruction error within 4% of the peak-to-peak signal amplitude.
    Based on what we have learned from this first comparison, we finally propose a new subject-specific compression technique called SURF (Subject-adaptive Unsupervised ecg compressor for weaRable Fitness monitors). In SURF, dictionaries are learned and maintained using suitable neural network structures. Specifically, learning is achieved through the use of neural maps such as self-organizing maps and growing neural gas networks, in a totally unsupervised manner, adapting the dictionaries to the signal statistics of the wearer. As our results show, SURF: i) reaches high compression efficiencies (reduction in the signal size of up to 96 times), ii) allows for reconstruction errors well below 4% (peak-to-peak RMSE, with errors of 2% generally achievable), iii) gracefully adapts to changing signal statistics due to switching to a new subject or a change in their activity, iv) has low memory requirements (lower than 50 kbytes), and v) allows for further reduction in the total energy consumption (processing plus transmission). These facts make SURF a very promising algorithm, delivering the best performance among all the solutions proposed so far.
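    To make the dictionary-based idea concrete, the toy sketch below encodes each signal window as the index of its best-matching unit-norm atom plus a gain, and reconstructs it at the receiver. It is illustrative only: SURF's self-organizing-map and growing-neural-gas dictionary learning is not reproduced, and the random dictionary is a placeholder.

```python
# Toy illustration of dictionary-based lossy compression of biosignal windows
# (not the SURF algorithm; dictionary content and sizes are assumptions).
import numpy as np

def encode(windows, dictionary):
    """windows: (n, w) array of signal windows; dictionary: (k, w) unit-norm atoms."""
    codes = []
    for x in windows:
        corr = dictionary @ x                 # correlation with every atom
        idx = int(np.argmax(np.abs(corr)))    # best-matching atom
        gain = float(corr[idx])               # least-squares gain for a unit-norm atom
        codes.append((idx, gain))             # only (index, gain) is transmitted
    return codes

def decode(codes, dictionary):
    return np.array([gain * dictionary[idx] for idx, gain in codes])

# Example: compress 64-sample windows with a 32-atom random placeholder dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
signal_windows = rng.standard_normal((10, 64))
reconstructed = decode(encode(signal_windows, D), D)
```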

    Stochastic Optimization and Machine Learning Modeling for Wireless Networking

    In recent years, the telecommunications industry has seen an increasing interest in the development of advanced solutions that enable communicating nodes to exchange large amounts of data. Indeed, well-known applications such as VoIP, audio streaming, video on demand, real-time surveillance systems, vehicular safety services, and remote computing have increased the demand for the efficient generation, utilization, management, and communication of larger and larger data quantities. New transmission technologies have been developed to permit more efficient and faster data exchanges, including multiple-input multiple-output architectures and software-defined networking: as an example, the next generation of mobile communication, known as 5G, is expected to provide data rates of tens of megabits per second for tens of thousands of users with only 1 ms latency. In order to achieve such demanding performance, these systems need to effectively model the considerable level of uncertainty related to fading transmission channels, interference, or the presence of noise in the data. In this thesis, we will show how different approaches can be adopted to model these kinds of scenarios, focusing on wireless networking applications. In particular, the first part of this work will show how stochastic optimization models can be exploited to design energy management policies for wireless sensor networks. Traditionally, transmission policies are designed to reduce the total amount of energy drawn from the batteries of the devices; here, we consider energy harvesting wireless sensor networks, in which each device is able to scavenge energy from the environment and charge its battery with it. In this case, the goal of the optimal transmission policies is to efficiently manage the energy harvested from the environment, avoiding both energy outage (i.e., no residual energy in a battery) and energy overflow (i.e., the impossibility of storing scavenged energy when the battery is already full). In the second part of this work, we will explore the adoption of machine learning techniques to tackle a number of common wireless networking problems. These algorithms are able to learn from and make predictions on data without following static program instructions: models are built from sample inputs, allowing for data-driven predictions and decisions. In particular, we will first design an on-the-fly prediction algorithm for the expected time of arrival of WiFi transmissions. This predictor only exploits the network parameters available at each receiving node and does not require additional knowledge from the transmitter, hence it can be deployed without modifying existing standard transmission protocols. Secondly, we will investigate the use of particular neural network instances known as autoencoders for the compression of biosignals, such as electrocardiography and photoplethysmography sequences. A lightweight lossy compressor will be designed that can be deployed in wearable battery-equipped devices with limited computational power. Thirdly, we will propose a predictor for the long-term channel gain in a wireless network. Differently from other works in the literature, this predictor will only exploit past channel samples, without resorting to additional information such as GPS data. An accurate estimation of this gain would make it possible, e.g., to efficiently allocate resources and anticipate future handover procedures.
    Finally, although not strictly related to wireless networking scenarios, we will show how deep learning techniques can be applied to the field of autonomous driving. This final section will deal with state-of-the-art machine learning solutions, showing how these techniques can considerably outperform traditional approaches.
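    As a minimal illustration of the autoencoder-based biosignal compression mentioned above, the sketch below trains a small fully connected encoder/decoder pair on fixed-length signal windows; the window length, code size, architecture, and training loop are illustrative assumptions rather than the thesis's design.

```python
# Minimal sketch of an autoencoder for lossy compression of biosignal windows
# (illustrative assumptions throughout). A WINDOW-sample window is mapped to a
# CODE-dimensional vector (the compressed representation) and reconstructed
# at the receiver.
import torch
import torch.nn as nn

WINDOW, CODE = 128, 16  # assumed window length and code size (~8x compression)

encoder = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(), nn.Linear(64, CODE))
decoder = nn.Sequential(nn.Linear(CODE, 64), nn.ReLU(), nn.Linear(64, WINDOW))

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(batch):
    """batch: (n, WINDOW) tensor of normalised signal windows."""
    optimizer.zero_grad()
    recon = decoder(encoder(batch))
    loss = loss_fn(recon, batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# On the wearable, only encoder(batch) would be transmitted; the decoder
# runs at the acquisition point. Placeholder data stands in for real windows.
x = torch.randn(32, WINDOW)
print(train_step(x))
```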