
    Sensors for Vital Signs Monitoring

    Sensor technology for monitoring vital signs is an important topic for a variety of service applications, from entertainment and personalization platforms and Internet of Things (IoT) systems to traditional medical purposes such as disease indication, diagnosis, and prediction. Vital signs to be monitored include respiration and heart rates, body temperature, blood pressure, oxygen saturation, electrocardiographic activity, blood glucose concentration, and brain waves. Gait and walking length can also be regarded as vital signs, since they indirectly indicate human activity and status. Sensing technologies include contact sensors such as the electrocardiogram (ECG), electroencephalogram (EEG), and photoplethysmogram (PPG); non-contact sensors such as ballistocardiography (BCG); and invasive/non-invasive sensors for diagnosing variations in blood characteristics or body fluids. Radar, vision, and infrared sensors can also detect vital signs from the movement of the body or of internal organs. Signal processing, extraction, and analysis techniques are as important to industrial applications as hardware implementation techniques. Battery management and wireless power transmission, the design and optimization of low-power circuits, and systems for continuous monitoring and data collection/transmission should also be considered alongside the sensor technologies themselves. In addition, machine-learning-based diagnostic techniques can extract meaningful information from continuously monitored data.
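
    As a loose illustration of the kind of signal extraction described above, the following Python sketch estimates heart rate from a photoplethysmogram by locating the dominant spectral peak in a plausible cardiac band; the synthetic signal, sampling rate, and band limits are assumptions made for this example, not a method from any of the collected papers.

        import numpy as np

        def estimate_heart_rate(ppg, fs, band=(0.7, 3.0)):
            # Dominant spectral peak inside an assumed cardiac band of
            # 0.7-3.0 Hz (42-180 bpm), after removing the DC offset.
            ppg = ppg - np.mean(ppg)
            spectrum = np.abs(np.fft.rfft(ppg))
            freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return 60.0 * freqs[mask][np.argmax(spectrum[mask])]

        # Synthetic 10 s PPG at 100 Hz: a 1.2 Hz cardiac component plus noise.
        fs = 100.0
        t = np.arange(0, 10, 1 / fs)
        noise = 0.3 * np.random.default_rng(0).standard_normal(t.size)
        ppg = np.sin(2 * np.pi * 1.2 * t) + noise
        print(f"Estimated heart rate: {estimate_heart_rate(ppg, fs):.1f} bpm")  # ~72 bpm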

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core problem-solving framework. These algorithms can generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and applications that extend far beyond traditional techniques, bridging the boundary between theory and application by developing novel, theoretically inspired methodologies that target both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    General Dynamic Surface Reconstruction: Application to the 3D Segmentation of the Left Ventricle

    This thesis describes a contribution to the three-dimensional reconstruction of the internal and external surfaces of the human left ventricle. The reconstruction is the first stage of a complete Virtual Reality application designed as an important diagnostic tool for hospitals. Starting from the reconstructed surfaces, the application provides the expert with interactive real-time manipulation of the model, together with volume calculations and other parameters of interest. The surface recovery process is characterized by its convergence speed, the smoothness of the final meshes, and its precision with respect to the acquired data. Since diagnosing cardiac pathologies requires experience, time, and substantial professional knowledge, simulation is a key process for improving efficiency. The algorithms and implementations have been applied to both synthetic and real datasets that differ in the amount of missing data, a situation that arises in pathological and abnormal cases. The datasets include single-instant acquisitions and complete cardiac cycles. The quality of the reconstruction system has been evaluated with medical parameters so that the final results can be compared with those produced by software typically used by medical professionals. Beyond its direct application to medical diagnosis, the methodology supports generic reconstructions in the field of 3D computer graphics. The reconstructions yield three-dimensional models at low cost, in terms of both the manual interaction required and the associated computational load. Furthermore, the method can be understood as a robust tessellation algorithm that builds surfaces from clouds of points, which can be obtained from laser scanners or magnetic sensors, among other available hardware.
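
    The dynamic-surface algorithm itself is the thesis's contribution and is not reproduced here; as a minimal stand-in illustrating the idea of tessellating a cloud of points into a triangle mesh, the following Python sketch uses SciPy's ConvexHull on an assumed synthetic spherical cloud (a real ventricular surface is not convex, so this only shows the mesh-from-points step).

        import numpy as np
        from scipy.spatial import ConvexHull

        # Hypothetical point cloud on a unit sphere, standing in for surface
        # points acquired by a laser scanner or magnetic sensor.
        rng = np.random.default_rng(0)
        points = rng.normal(size=(500, 3))
        points /= np.linalg.norm(points, axis=1, keepdims=True)

        # Tessellation: for a convex cloud the hull is a closed triangle mesh;
        # hull.simplices holds one vertex-index triple per triangle.
        hull = ConvexHull(points)
        print(f"{len(hull.simplices)} triangles over {len(points)} points")
        print(f"enclosed volume: {hull.volume:.3f}")  # unit-sphere volume is ~4.189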

    Wavelet Based Feature Extraction and Dimension Reduction for the Classification of Human Cardiac Electrogram Depolarization Waveforms

    An essential task for a pacemaker or implantable defibrillator is the accurate identification of rhythm categories so that the correct electrotherapy can be administered. Because some rhythms cause a rapid, dangerous drop in cardiac output, depolarization waveforms must be categorized on a beat-to-beat basis to accomplish rhythm classification as rapidly as possible. In this thesis, a depolarization waveform classifier based on the Lifting Line Wavelet Transform is described. It overcomes two problems in existing rate-based event classifiers: (1) they are insensitive to the conduction path of the heart rhythm, and (2) they are not robust to pseudo-events. The performance of the Lifting Line Wavelet Transform based classifier is illustrated with representative examples. Although rate-based methods of event categorization have served well in implanted devices, they suffer in sensitivity and specificity when atrial and ventricular rates are similar. Human experts differentiate rhythms by morphological features of strip-chart electrocardiograms. The wavelet transform is a simple approximation of this human expert analysis because it correlates distinct morphological features at multiple scales. The accuracy of implanted rhythm determination can thus be improved by using human-appreciable time-domain features enhanced by time-scale decomposition of the depolarization waveforms. The purpose of the present work was to determine the feasibility of implementing such a system on a limited-resolution platform. Seventy-eight patient recordings were split into equal reference, confirmation, and evaluation sets. Each recording had a sampling rate of 512 Hz and contained a significant change in rhythm. The wavelet feature generator, implemented in Matlab, performs anti-alias pre-filtering, quantization, and threshold-based event detection to produce event indications for wavelet transformation. The receiver operating characteristic curve was used to rank the discriminating power of the features, accomplishing dimension reduction, and accuracy was used to confirm the feature choice. Evaluation accuracy was greater than or equal to 95% over the IEGM recordings.
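
    As a rough sketch of the pipeline described above (wavelet decomposition of a detected beat, followed by ROC-based ranking of a single feature), the following Python code uses PyWavelets and scikit-learn; the db2 wavelet, the decomposition depth, and the synthetic narrow/wide beats are assumptions, not the thesis's Lifting Line Wavelet Transform.

        import numpy as np
        import pywt
        from sklearn.metrics import roc_auc_score

        def wavelet_feature(beat, wavelet="db2", level=3):
            # One morphological feature: energy in the coarsest detail band.
            coeffs = pywt.wavedec(beat, wavelet, level=level)
            return np.sum(coeffs[1] ** 2)

        # Synthetic depolarization waveforms: two rhythm classes differing in
        # deflection width, 256 samples each (an assumed stand-in for IEGM beats).
        rng = np.random.default_rng(1)
        t = np.linspace(-1, 1, 256)
        narrow = [np.exp(-(t / 0.05) ** 2) + 0.05 * rng.standard_normal(256) for _ in range(50)]
        wide = [np.exp(-(t / 0.15) ** 2) + 0.05 * rng.standard_normal(256) for _ in range(50)]

        scores = [wavelet_feature(b) for b in narrow + wide]
        labels = [0] * 50 + [1] * 50
        # Area under the ROC curve ranks the feature's discriminating power.
        print(f"ROC AUC: {roc_auc_score(labels, scores):.2f}")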

    Information Processing for Biological Signals: Application to Laser Doppler Vibrometry

    Signals associated with biological activity in the human body can be of great value in clinical and security applications. Since direct measurements of critical biological activity are often difficult to acquire noninvasively, many biological signals are measured from the surface of the skin. This simplifies signal acquisition but complicates post-processing tasks. Modeling these signals from the underlying physics may not be accurate because of the inherent complexity of the human body, and the appropriate use of such models depends on the application of interest. The models developed in this dissertation are motivated by the underlying physiology and physics, and are capable of expressing a wide range of signal variability without explicitly invoking physical quantities. An approach for processing biological signals is developed using graphical models, which describe conditional dependence between random variables on a graph. When the graph is a tree, efficient algorithms exist to compute sum-marginals or max-marginals of the joint distribution. Some of the variables correspond to the measured signal, while others may represent the hidden internal dynamics that generate the observed data. Three levels of hidden dynamics are outlined, enabling models that track internal dynamics on differing time scales. Expectation-maximization algorithms are used to compute parameter estimates. Experimental results of this approach are presented for a novel method of recording bio-mechanical activity using a Laser Doppler Vibrometer (LDV). The LDV measures surface velocity on the basis of the Doppler shift. The device is targeted on the neck overlying the carotid artery, and the proximity of the carotid to the skin results in a strong signal. Vibrations and movements from within the carotid are transmitted to the surface of the skin, where they are sensed by the LDV; changes in the size of the carotid due to variations in blood pressure are sensed at the skin surface, and breathing activity may also be inferred from the LDV signal. Individualized models are evaluated systematically on LDV data sets acquired under resting conditions on multiple occasions. Model fit is evaluated both within and across recording sessions, and model parameters are interpreted in terms of the underlying physiology. Pressure-wave physics in a series of elastic tubes is presented to explore the underlying physics of blood flow in the carotid. Mechanical movements of the carotid walls are related to the underlying pressure, and therefore to the cardiovascular activity of the heart and vasculature. This analysis motivates a model that can be estimated from experimental data, and the resulting models are interpreted for the LDV signal. The graphical models are applied to the problem of identity verification using the LDV signal. Identity verification is an important problem in which a claimed identity is either accepted or rejected by an automated system. The system design is based on a log-likelihood ratio test using models trained during an enrollment phase: a score is computed and compared to a threshold. Performance is given in the form of empirical False Nonmatch and False Match error rates as a function of the threshold, with confidence intervals that take into account correlations between the system decisions.
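
    A minimal sketch of the verification rule described above, assuming simple one-dimensional Gaussian score models in place of the dissertation's graphical models: a log-likelihood ratio is compared to a threshold, and False Nonmatch / False Match rates are estimated empirically.

        import numpy as np
        from scipy.stats import norm

        # Hypothetical 1-D feature models: the enrolled subject vs. a
        # background population (both assumed Gaussian for illustration).
        subject = norm(loc=1.0, scale=0.5)     # fit during enrollment
        background = norm(loc=0.0, scale=1.0)  # world model

        def llr(x):
            # Log-likelihood ratio score; high values support the claim.
            return subject.logpdf(x) - background.logpdf(x)

        genuine = subject.rvs(1000, random_state=0)      # true claims
        impostor = background.rvs(1000, random_state=1)  # false claims

        threshold = 0.0
        fnmr = np.mean(llr(genuine) < threshold)   # False Nonmatch rate
        fmr = np.mean(llr(impostor) >= threshold)  # False Match rate
        print(f"FNMR={fnmr:.3f}  FMR={fmr:.3f} at threshold={threshold}")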

    Integration of EEG-FMRI in an Auditory Oddball Paradigm Using Joint Independent Component Analysis

    The integration of event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) can contribute to characterizing neural networks with high temporal and spatial resolution. The overall objective of this dissertation is to determine the sensitivity and limitations of joint independent component analysis (jICA) within-subject for the integration of ERP and fMRI data collected simultaneously in a parametric auditory oddball paradigm. The main experimental finding is that jICA revealed significantly stronger and more extensive activity in brain regions associated with the auditory P300 ERP than a P300 linear regression analysis, both at the group level and within-subject. The results suggest that, by incorporating spatial and temporal information from both imaging modalities, jICA is more sensitive than linear regression to neural sources commonly observed with ERP and fMRI. Furthermore, computational simulations suggest that jICA can extract linear and nonlinear relationships between ERP and fMRI signals, as well as uncoupled sources (i.e., sources with a signal in only one imaging modality). These features of jICA can be important for assessing disease states in which the relationship between the ERP and fMRI signals is unknown, as well as pathological conditions causing neurovascular uncoupling, such as stroke.
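
    The core arrangement in jICA is to concatenate each subject's ERP time course and fMRI contrast map into one feature vector and decompose the stacked matrix with a single ICA, so that both modalities share one mixing matrix. The following Python sketch illustrates that arrangement with scikit-learn's FastICA on fabricated data; all dimensions and the random matrices are assumptions.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(3)
        n_subjects, n_erp, n_vox = 20, 300, 5000  # assumed dimensions

        # Each row is one subject: ERP samples followed by fMRI voxels.
        erp = rng.standard_normal((n_subjects, n_erp))
        fmri = rng.standard_normal((n_subjects, n_vox))
        joint = np.hstack([erp, fmri])

        # A single decomposition gives one shared mixing matrix across
        # modalities, which is the defining assumption of joint ICA.
        ica = FastICA(n_components=5, random_state=0)
        loadings = ica.fit_transform(joint)      # subject loadings per component
        components = ica.components_             # joint temporal/spatial sources
        erp_part, fmri_part = components[:, :n_erp], components[:, n_erp:]
        print(erp_part.shape, fmri_part.shape)   # (5, 300) (5, 5000)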

    Foetal echocardiographic segmentation

    Congenital heart disease affects just under one percent of all live births [1]. Defects that manifest as changes to the cardiac chamber volumes are the motivation for the research presented in this thesis. Blood volume measurements in vivo require delineation of the cardiac chambers, and manual tracing of foetal cardiac chambers is very time consuming and operator dependent. This thesis presents a multi-region level-set snake deformable model, applied in both 2D and 3D, which can automatically adapt to some extent to ultrasound noise such as attenuation, speckle, and partial-occlusion artefacts. The algorithm presented is named Mumford Shah Sarti Collision Detection (MSSCD). The level-set methods presented in this thesis have an optional shape-prior term for constraining the segmentation with a template registered to the image, used in the presence of shadowing and heavy noise. When applied to real data in the absence of the template, the MSSCD algorithm is initialised from seed primitives placed at the centre of each cardiac chamber, and the voxel statistics inside each chamber are determined before evolution. The MSSCD stops at open boundaries between two chambers as the two approaching level-set fronts meet. This is significant when determining volumes for all cardiac compartments, since cardiac indices assume that each chamber is treated in isolation. Comparison of the segmentation results from the implemented snakes, including a previous level-set method from the foetal cardiac literature, shows that in both 2D and 3D, on both real and synthetic data, the MSSCD formulation is better suited to these types of data. All the algorithms tested in this thesis are within 2 mm error of manually traced segmentations of the foetal cardiac datasets, which corresponds to less than 10% of the length of a foetal heart. In addition to the comparison with manual tracings, all the amorphous deformable-model segmentations in this thesis are validated using a physical phantom; the volume estimated from the MSSCD segmentation is within 13% of the physically measured volume.
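
    The MSSCD algorithm itself is the thesis's contribution; as a loose stand-in for a region-based level-set evolution grown from a seed inside a chamber, the following Python sketch runs scikit-image's morphological Chan-Vese on a synthetic 2D image. The image, seed placement, and iteration count are assumptions, and no multi-front collision detection is included.

        import numpy as np
        from skimage.segmentation import morphological_chan_vese, disk_level_set

        # Synthetic 2D "chamber": a bright disc on a noisy dark background.
        rng = np.random.default_rng(4)
        img = 0.2 * rng.random((128, 128))
        yy, xx = np.mgrid[:128, :128]
        img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] += 0.8

        # Seed primitive at the chamber centre, as in the MSSCD initialisation.
        seed = disk_level_set(img.shape, center=(64, 64), radius=5)

        # Region-based (Chan-Vese) front evolution; MSSCD adds collision
        # detection between multiple fronts on top of this kind of evolution.
        mask = morphological_chan_vese(img, 100, init_level_set=seed, smoothing=2)
        print(f"segmented area: {mask.sum()} px (true disc about {int(np.pi * 30**2)})")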

    Advances in Computer Recognition, Image Processing and Communications, Selected Papers from CORES 2021 and IP&C 2021

    As almost all human activities moved online during the pandemic, novel, robust, and efficient approaches and further research have been in higher demand in the fields of computer science and telecommunications. This reprint therefore contains 13 high-quality papers presenting advances in theoretical and practical aspects of computer recognition, pattern recognition, image processing, and machine learning (shallow and deep), including, in particular, novel implementations of these techniques in the areas of modern telecommunications and cybersecurity.