386 research outputs found

    Biometric signals compression with time- and subject-adaptive dictionary for wearable devices

    This thesis work is dedicated to the design of a lightweight compression technique for the real-time processing of biomedical signals in wearable devices. The proposed approach exploits the unsupervised learning algorithm of the time-adaptive self-organizing map (TASOM) to create a subject-adaptive codebook applied to the vector quantization of a signal. The codebook is obtained and then dynamically refined in an online fashion, without requiring any prior information on the signal itself
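    As a rough illustration of the underlying idea (not the thesis's actual TASOM, whose learning rate and neighbourhood adapt over time), the Python sketch below vector-quantizes signal windows against a codebook and refines the winning codeword and its neighbours online; all sizes and rates are hypothetical:

```python
import numpy as np

def encode_window(codebook, window):
    """Vector quantization: return the index of the nearest codeword."""
    return int(np.argmin(np.linalg.norm(codebook - window, axis=1)))

def update_codebook(codebook, window, winner, lr=0.05, sigma=1.0):
    """SOM-style online refinement: pull the winner and its neighbours
    toward the observed window, so the codebook tracks the subject's
    signal statistics without any prior model of the signal."""
    idx = np.arange(len(codebook))
    h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))  # neighbourhood weights
    codebook += lr * h[:, None] * (window - codebook)
    return codebook

# Toy usage: 32 codewords for 64-sample windows of a streamed biosignal.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((32, 64))
for window in rng.standard_normal((100, 64)):    # stand-in for ECG windows
    i = encode_window(codebook, window)          # only the index is transmitted
    codebook = update_codebook(codebook, window, i)
```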

    A Novel Respiratory Rate Estimation Algorithm from Photoplethysmogram Using Deep Learning Model

    Respiratory rate (RR) is a critical vital sign that can provide valuable insights into various medical conditions, including pneumonia. Unfortunately, manual RR counting is often unreliable and discontinuous. Current RR estimation algorithms either lack the necessary accuracy or demand extensive window sizes. In response to these challenges, this study introduces a novel method for continuously estimating RR from the photoplethysmogram (PPG) with a reduced window size and lower processing requirements. To evaluate and compare classical and deep learning algorithms, this study leverages the BIDMC and CapnoBase datasets, employing the Respiratory Rate Estimation (RRest) toolbox. The optimal combination of classical techniques on the BIDMC dataset achieves a mean absolute error (MAE) of 1.9 breaths/min. Additionally, the developed neural network model utilises convolutional and long short-term memory layers to estimate RR effectively. The best-performing model, with a 50% train–test split and a window size of 7 s, achieves an MAE of 2 breaths/min. Furthermore, compared to other deep learning algorithms with window sizes of 16, 32, and 64 s, this study's model demonstrates superior performance with a smaller window size. The study suggests that further research into more precise signal processing techniques may enhance RR estimation from PPG signals
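    The abstract does not disclose the exact architecture, but the general convolutional + long short-term memory shape it describes can be sketched in PyTorch as follows, assuming a 125 Hz PPG sampling rate (so a 7 s window is 875 samples) and illustrative layer sizes:

```python
import torch
import torch.nn as nn

class RREstimator(nn.Module):
    """Convolutional layers extract morphological features from a PPG
    window; an LSTM models their temporal evolution; a linear head
    regresses the respiratory rate in breaths/min."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):                 # x: (batch, 1, 875) for 7 s at 125 Hz
        f = self.conv(x).transpose(1, 2)  # (batch, time, channels) for the LSTM
        _, (h, _) = self.lstm(f)
        return self.head(h[-1])           # predicted RR, breaths/min

model = RREstimator()
rr = model(torch.randn(8, 1, 875))        # 8 random 7 s windows -> (8, 1)
```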

    Classification of time series patterns from complex dynamic systems


    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuits and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) and communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application
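    The first thrust can be made concrete with a toy energy model: compressing harder costs more processing energy but puts fewer bits on the air, so the best compression ratio minimizes the sum. The per-bit costs and the linear processing model below are hypothetical placeholders, not figures from the paper:

```python
def total_energy(n_bits, cr, e_tx_bit=230e-9, e_proc_bit=5e-9):
    """Toy balance of processing vs transmission energy for one window.
    Crude model: compression effort grows linearly with the ratio cr,
    while radio energy shrinks as 1/cr. All constants are made up."""
    e_proc = e_proc_bit * n_bits * cr   # more effort for higher ratios
    e_tx = e_tx_bit * n_bits / cr       # fewer bits transmitted
    return e_proc + e_tx

# Pick the ratio with the lowest total energy from a candidate set.
best_cr = min([1, 2, 4, 8, 16, 32], key=lambda cr: total_energy(8_000, cr))
```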

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals need to be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research.

    The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., where the power of the noise is greater than that of the signal). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method and remove it with an adaptive filter.

    The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration against a trusted device is required, which is difficult and time-consuming, and the calibration itself is susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks (CycleGAN), to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of mother and fetus, and can handle low-SNR conditions.

    Third, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues by using only the PPG signal. Using only PPG for blood pressure is more convenient since it requires a single sensor on the finger, where acquisition is more resilient to motion error.

    The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train a FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults and is then adapted to detect anomalies in FECG. We select only the most influential samples from the training set, which leads to training with the least effort. Because of physician shortages and rural geography, remote monitoring might improve pregnant women's access to prenatal care, especially where access is limited. Increased compliance with prenatal treatment and linked care among various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that can compress signals (like ECG) at a higher ratio than the state of the art and perform decompression quickly without distortion. The proposed compression is fast thanks to a time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality, so the signal is not distorted for diagnostic purposes while a high compression ratio is maintained.

    In summary, the components of an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Compression can then be employed to transmit the signal. The trained CycleGAN model can be used to extract the FECG from the MECG, and the model trained with active transfer learning can detect anomalies in both the MECG and the FECG. Simultaneously, maternal BP is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus and to fill in reports such as the partogram
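    As a hedged sketch of the time-domain B-spline idea, using SciPy's generic smoothing splines in place of the thesis's stochastic optimization, an ECG segment can be stored as spline knots and coefficients and evaluated directly for display without a separate decompression pass:

```python
import numpy as np
from scipy.interpolate import splev, splrep

def bspline_compress(signal, fs=250.0, s=5.0):
    """Fit a smoothing cubic B-spline; the (knots, coefficients, degree)
    tuple is the compressed representation. The smoothing factor s trades
    compressed size against fidelity and is an illustrative choice."""
    t_axis = np.arange(len(signal)) / fs
    return splrep(t_axis, signal, s=s), t_axis

def bspline_decompress(tck, t_axis):
    """Evaluate the spline wherever samples are needed."""
    return splev(t_axis, tck)

rng = np.random.default_rng(1)
t = np.arange(2500) / 250.0                      # 10 s at 250 Hz
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
tck, t_axis = bspline_compress(ecg)
rec = bspline_decompress(tck, t_axis)
cr = ecg.size / (len(tck[0]) + len(tck[1]))      # samples vs knots + coefficients
```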

    A Novel Systolic Parallel Hardware Architecture for the FPGA Acceleration of Feedforward Neural Networks

    New chips for machine learning applications keep appearing; they are tuned for a specific topology and achieve efficiency through highly parallel designs, at the cost of high power consumption or large, complex devices. However, the computational demands of deep neural networks require flexible and efficient hardware architectures able to fit different applications, neural network types, and numbers of inputs, outputs, layers, and units per layer, making the migration from software to hardware easy. This paper describes novel hardware implementing any feedforward neural network (FFNN): multilayer perceptron, autoencoder, and logistic regression. The architecture admits an arbitrary number of inputs, outputs, units per layer, and layers. The hardware combines matrix algebra concepts with serial-parallel computation. It is based on a systolic ring of neural processing elements (NPE), requiring only as many NPEs as there are neuron units in the largest layer, regardless of the number of layers. Resource usage grows linearly with the number of NPEs. This versatile architecture serves as an accelerator in real-time applications, and its size does not affect the system clock frequency. Unlike most approaches, a single activation function block (AFB) is required for the whole FFNN. Performance, resource usage, and accuracy are evaluated for several network topologies and activation functions. The architecture reaches a 550 MHz clock speed on a Virtex-7 FPGA. The proposed implementation uses 18-bit fixed point, achieving classification performance similar to a floating-point approach. A reduced weight bit size does not affect the accuracy, allowing more weights to be stored in the same memory. Different FFNNs for the Iris and MNIST datasets were evaluated and, for a real-time application of abnormal cardiac detection, a 256x acceleration was achieved. The proposed architecture can perform up to 1980 giga operations per second (GOPS), implementing multilayer FFNNs of up to 3600 neurons per layer in a single chip. The architecture can be extended to higher-capacity devices or multiple chips by simply extending the NPE ring
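    A behavioural Python model (not the paper's RTL) illustrates the systolic-ring idea: each NPE holds one weight row, inputs are streamed in serially, all NPEs multiply-accumulate in parallel, and a single shared activation block finishes the layer:

```python
import numpy as np

def systolic_layer(W, b, x, act=np.tanh):
    """One FFNN layer on a ring of NPEs: NPE j holds row j of W; at cycle
    t every NPE consumes input element x[t], so a layer with n_in inputs
    completes in n_in cycles regardless of the number of layers."""
    n_out, n_in = W.shape
    acc = np.zeros(n_out)
    for t in range(n_in):        # serial input stream
        acc += W[:, t] * x[t]    # parallel MAC across all NPEs
    return act(acc + b)          # single shared activation function block

# Toy 4-8-3 network; in hardware only max(8, 3) = 8 NPEs would be needed.
rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)
y = systolic_layer(W2, b2, systolic_layer(W1, b1, rng.standard_normal(4)))
```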

    Data analysis and machine learning approaches for time series pre- and post- processing pipelines

    In industry, time series are usually generated continuously by sensors that constantly capture and monitor the operation of machines in real time. It is therefore important that cleaning algorithms support near-real-time operation. Moreover, as the data evolve, the cleaning strategy must adapt incrementally, to avoid having to restart the cleaning process from scratch each time. The objective of this thesis is to test the feasibility of applying machine learning pipelines to the data preprocessing stages. To this end, this work proposes methods capable of selecting optimal preprocessing strategies, trained on the available historical data by minimizing empirical loss functions. Specifically, this thesis studies the processes of time series compression, variable joining, observation imputation, and surrogate model generation. In each of them, the optimal selection and combination of multiple strategies is pursued. This approach is defined as a function of the characteristics of the data and of the properties and constraints of the system defined by the user
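    A minimal sketch of the selection idea for one of the studied stages (imputation): hide some known values from the historical series, score each candidate strategy by its empirical loss on them, and keep the winner. The candidate set and the squared-error loss are illustrative assumptions, not the thesis's exact procedure:

```python
import numpy as np

def forward_fill(x):
    """Fill NaNs with the last observed value."""
    out = x.copy()
    for i in range(1, len(out)):
        if np.isnan(out[i]):
            out[i] = out[i - 1]
    return out

def pick_imputer(history, imputers, mask_frac=0.1, seed=0):
    """Mask a fraction of known values, let each candidate fill them back
    in, and return the strategy with the smallest empirical loss."""
    rng = np.random.default_rng(seed)
    mask = rng.random(history.size) < mask_frac
    mask[0] = False                      # keep a seed value for forward fill
    losses = {}
    for name, impute in imputers.items():
        corrupted = history.copy()
        corrupted[mask] = np.nan
        losses[name] = np.mean((impute(corrupted) - history)[mask] ** 2)
    return min(losses, key=losses.get), losses

candidates = {
    "mean": lambda x: np.where(np.isnan(x), np.nanmean(x), x),
    "forward_fill": forward_fill,
}
series = np.sin(np.linspace(0, 20, 500))
best, losses = pick_imputer(series, candidates)
```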

    Sensing and Compression Techniques for Environmental and Human Sensing Applications

    In this doctoral thesis, we devise and evaluate a variety of lossy compression schemes for Internet of Things (IoT) devices such as those utilized in environmental wireless sensor networks (WSNs) and body sensor networks (BSNs). We are especially concerned with the efficient acquisition of the data sensed by these systems and, to this end, we advocate the use of joint (lossy) compression and transmission techniques. Environmental WSNs are considered first. For these, we present an original compressive sensing (CS) approach for the spatio-temporal compression of data. In detail, we consider temporal compression schemes based on linear approximations as well as Fourier transforms, whereas spatial and/or temporal dynamics are exploited through compression algorithms based on distributed source coding (DSC) and several CS-based algorithms. To the best of our knowledge, this is the first work presenting a systematic performance evaluation of these (different) lossy compression approaches. The selected algorithms are framed within the same system model, and a comparative performance assessment is carried out, evaluating their energy consumption vs. the attainable compression ratio. As a further main contribution of this thesis, we design and validate a novel CS-based compression scheme, termed covariogram-based compressive sensing (CB-CS), which combines a new sampling mechanism with an original covariogram-based approach for the online estimation of the covariance structure of the signal. As a second main research topic, we focus on modern wearable IoT devices which enable the monitoring of vital parameters such as heart or respiratory rates (RESP), electrocardiography (ECG), and photo-plethysmographic (PPG) signals within e-health applications. These devices are battery operated and communicate the vital signs they gather through a wireless communication interface. A common issue of this technology is that signal transmission is often power-demanding, and this poses serious limitations to the continuous monitoring of biometric signals. To ameliorate this, we advocate the use of lossy signal compression at the source: this considerably reduces the size of the data that has to be sent to the acquisition point, in turn boosting the battery life of the wearables and allowing for fine-grained and long-term monitoring. Considering one-dimensional biosignals such as ECG, RESP and PPG, which are often available from commercial wearable devices, we first provide a thorough review of existing compression algorithms. We then present novel approaches based on online dictionaries, elucidating their operating principles and providing a quantitative assessment of the compression, reconstruction and energy consumption performance of all schemes. As part of this first investigation, dictionaries are built using a suboptimal but lightweight, online and best-effort algorithm. Surprisingly, the obtained compression scheme is found to be very effective both in terms of compression efficiency and reconstruction accuracy at the receiver. This approach is, however, not yet amenable to practical implementation, as its memory usage is rather high. Also, our systematic performance assessment reveals that the most efficient compression algorithms allow reductions in the signal size of up to 100 times, which entail similar reductions in the energy demand, while still keeping the reconstruction error within 4 % of the peak-to-peak signal amplitude.
Based on what we learned from this first comparison, we finally propose a new subject-specific compression technique called SURF (Subject-adaptive Unsupervised ecg compressor for weaRable Fitness monitors). In SURF, dictionaries are learned and maintained using suitable neural network structures. Specifically, learning is achieved through the use of neural maps such as self-organizing maps and growing neural gas networks, in a totally unsupervised manner, adapting the dictionaries to the signal statistics of the wearer. As our results show, SURF: i) reaches high compression efficiencies (reduction in the signal size of up to 96 times), ii) allows for reconstruction errors well below 4 % (peak-to-peak RMSE; errors of 2 % are generally achievable), iii) gracefully adapts to changing signal statistics due to a switch to a new subject or a change in their activity, iv) has low memory requirements (lower than 50 kbytes), and v) allows for a further reduction in the total energy consumption (processing plus transmission). These facts make SURF a very promising algorithm, delivering the best performance among all the solutions proposed so far
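    A minimal round-trip sketch of dictionary-based compression and the two figures of merit quoted above (compression ratio and peak-to-peak RMSE). The random codebook stands in for the SOM/GNG-learned dictionary, and the assumed bit widths are illustrative:

```python
import numpy as np

def pp_rmse_percent(original, reconstructed):
    """RMSE relative to the peak-to-peak amplitude, in percent."""
    rmse = np.sqrt(np.mean((original - reconstructed) ** 2))
    return 100 * rmse / (original.max() - original.min())

def dictionary_codec(signal, codebook, win):
    """Encode one codeword index per window, then rebuild the signal by
    concatenating the chosen codewords (plain vector quantization)."""
    n = len(signal) // win
    windows = signal[: n * win].reshape(n, win)
    idx = np.argmin(((windows[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
    return idx, codebook[idx].reshape(-1)

# Toy check; SURF would learn the codebook from the wearer's own signal.
rng = np.random.default_rng(4)
sig = np.sin(2 * np.pi * np.arange(4096) / 128)
codebook = rng.standard_normal((64, 128))
idx, rec = dictionary_codec(sig, codebook, win=128)
cr = sig.size * 16 / (idx.size * 6)          # 16-bit samples vs 6-bit indices
err = pp_rmse_percent(sig[: rec.size], rec)  # % of peak-to-peak amplitude
```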

    Resource Management for Edge Computing in Internet of Things (IoT)

    The large number of devices in the Internet of Things (IoT) and their continuous data collection lead to rapid growth of the gathered data volume. Processing all of this data on central cloud servers is inefficient and, in some cases, even impossible or unnecessary. Data processing is therefore shifted to the edge of the network, which has led to the concepts of edge computing. Processing information close to the data source (e.g., on gateways and edge devices) not only reduces the heavy load on central servers and networks, but also lowers the latency of real-time applications, since the potentially unreliable communication to cloud servers, with its unpredictable network latency, is avoided. Current IoT architectures use gateways to establish application-specific connections to IoT devices. In typical configurations, several IoT edge devices share one IoT gateway. Because of the limited bandwidth and computing capacity of an IoT gateway, the quality of service (QoS) of the connected IoT edge devices must be adapted over time, not only to satisfy the requirements of the individual users of the IoT devices, but also to accommodate the QoS needs of the other IoT edge devices sharing the same gateway.

    This work first examines essential technologies for IoT and existing trends, presenting characteristic properties of IoT for the embedded domain as well as a comprehensive IoT perspective for embedded systems. Several healthcare applications are studied and implemented in order to derive a model for their data-processing software. This application model helps to identify different operating modes. IoT systems expect edge devices to support multiple operating modes so that they can adapt to changing scenarios at runtime, e.g., energy-saving modes that preserve critical functionality when battery reserves are low, or a mode that raises the quality of service at the user's request. These modes use either different offloading schemes (e.g., transmitting raw data, partially processed data, or only the final result) or different service qualities. Operating modes differ in their resource demands both on the device (e.g., energy consumption) and on the gateway (e.g., communication bandwidth, computing power, memory, etc.). Selecting the best operating mode for edge devices is challenging given the limited resources at the network edge (e.g., bandwidth and computing power of the shared gateway), the diverse constraints of the IoT edge devices (e.g., battery lifetime, quality of service, etc.), and the runtime variability at the edge of the IoT infrastructure. This work develops and presents fast and efficient techniques for selecting operating modes. When IoT devices are within range of several gateways, managing the shared resources and selecting the operating modes for the IoT devices is even more complex. This work presents a distributed, trading-based device-management mechanism for IoT systems with multiple gateways. The mechanism targets the combined problem of binding (i.e., determining a gateway for each IoT device) and allocation (i.e., determining the resources assigned to each device).

    Starting from an initial configuration, the gateways negotiate and communicate with each other and migrate IoT devices between gateways whenever this increases the overall system utility. This work also presents application-specific optimizations for IoT devices. Three healthcare applications were realized and studied for wearable IoT devices. A novel compression method is also presented that is particularly suited to IoT applications processing bio-signals for health monitoring. This technique reduces the amount of data the IoT device has to transmit, thereby lowering resource utilization on the device and on the shared gateway. To evaluate the proposed techniques and mechanisms, several applications were examined on IoT platforms to determine their parameters, such as execution time and resource usage. These parameters were then used in a framework that models the IoT network, captures the interaction between devices and gateway, and measures the communication overhead as well as the achieved battery lifetime and quality of service of the devices. The operating-mode selection algorithms were additionally implemented on IoT platforms to measure their execution-time and memory overheads
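    The single-gateway flavour of the mode-selection problem can be sketched as a greedy heuristic: start every device in its cheapest operating mode, then spend the remaining gateway bandwidth on the upgrades with the best utility gain per unit of bandwidth. The mode tables and the heuristic itself are illustrative, not the thesis's mechanism:

```python
def select_modes(devices, gw_bandwidth):
    """devices maps a device name to its (utility, bandwidth) modes.
    Greedily upgrade from the cheapest modes while bandwidth remains."""
    choice = {d: min(modes, key=lambda m: m[1]) for d, modes in devices.items()}
    used = sum(m[1] for m in choice.values())
    while True:
        upgrades = []
        for d, modes in devices.items():
            u0, b0 = choice[d]
            for u, b in modes:
                if u > u0 and b > b0 and used + (b - b0) <= gw_bandwidth:
                    upgrades.append(((u - u0) / (b - b0), d, (u, b)))
        if not upgrades:
            return choice                    # no feasible upgrade left
        _, d, mode = max(upgrades)           # best gain per extra bandwidth
        used += mode[1] - choice[d][1]
        choice[d] = mode

# Three wearables sharing a gateway with 100 bandwidth units.
devices = {
    "ecg_patch": [(1, 10), (4, 40), (6, 80)],
    "ppg_watch": [(1, 5), (3, 30)],
    "temp_node": [(1, 2), (2, 10)],
}
modes = select_modes(devices, gw_bandwidth=100)
```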