2,252 research outputs found

    Matrix pencil method for vital sign detection from signals acquired by microwave sensors

    Microwave sensors have recently been introduced as high-temporal-resolution sensors that could be used for contactless monitoring of artery pulsation and breathing. However, accurate and efficient signal processing methods are still required. In this paper, the matrix pencil method (MPM), an efficient method with good frequency resolution, is applied to back-reflected microwave signals to extract vital signs. It is shown that decomposing the signal into its damped exponentials, as performed by MPM, makes it possible to separate signal components, e.g., breathing and heartbeat, with high precision. A publicly available online dataset (GUARDIAN), obtained with a continuous-wave microwave sensor, is used to evaluate the performance of MPM. Two other methods, bandpass filtering (BPF) and variational mode decomposition (VMD), are also implemented. In addition to the GUARDIAN dataset, these methods are applied to signals acquired by an ultra-wideband (UWB) sensor. It is concluded that when the vital sign is sufficiently strong and pure, all methods, i.e., MPM, VMD, and BPF, are appropriate for vital sign monitoring. However, in noisy cases, MPM performs better. Therefore, for non-contact microwave vital sign monitoring, which is usually subject to noise, MPM is a powerful method.
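    As a sketch of the idea behind MPM, the following minimal numpy implementation (an illustration, not the paper's code, with toy signal parameters) estimates the poles of a signal modeled as a sum of damped exponentials and recovers the breathing- and heartbeat-like frequencies of a synthetic two-tone signal:

```python
import numpy as np

def matrix_pencil_poles(y, L, dt):
    """Estimate the poles of a sum-of-damped-exponentials signal via the
    matrix pencil method: form two shifted Hankel matrices Y1, Y2 and
    solve the eigenvalue problem of the pencil (Y2, Y1)."""
    N = len(y)
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # rcond truncates small singular values, which enforces the model order
    z = np.linalg.eigvals(np.linalg.pinv(Y1, rcond=1e-6) @ Y2)
    z = z[np.abs(z) > 1e-3]                  # keep signal poles only
    s = np.log(z) / dt                       # discrete -> continuous poles
    return s.imag / (2 * np.pi), s.real      # frequencies (Hz), damping

# Toy signal: 0.25 Hz "breathing" plus a weaker 1.2 Hz "heartbeat"
dt = 0.05
t = np.arange(0, 20, dt)
sig = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)
freqs, damps = matrix_pencil_poles(sig, L=len(sig) // 3, dt=dt)
print(sorted(f for f in freqs if f > 0.05))
```

The pencil parameter L is typically chosen between N/3 and N/2; the poles of the two tones appear as conjugate pairs on the unit circle, and their positive imaginary parts give the component frequencies.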

    CNN and LSTM for the Classification of Parkinson's Disease Based on the GTCC and MFCC

    Parkinson's disease is a recognizable clinical syndrome with a variety of causes and clinical presentations; it represents a rapidly growing neurodegenerative disorder. Since about 90 percent of Parkinson's disease sufferers have some form of early speech impairment, recent studies on the telediagnosis of Parkinson's disease have focused on the recognition of voice impairments from vowel phonations or the subjects' discourse. In this paper, we present a new approach to Parkinson's disease detection from speech sounds that is based on CNN and LSTM models and uses two categories of features, Mel-frequency cepstral coefficients (MFCC) and gammatone cepstral coefficients (GTCC), obtained from noise-removed speech signals, with a comparative EMD-DWT and DWT-EMD analysis. The proposed model consists of three stages. In the first step, noise is removed from the signals using the EMD-DWT and DWT-EMD methods. In the second step, the GTCC and MFCC are extracted from the enhanced audio signals. In the third step, the classification is carried out by feeding these features into the LSTM and CNN models, which are designed to capture sequential information in the extracted features. The experiments are performed on the PC-GITA and Sakar datasets with 10-fold cross-validation. The highest classification accuracy for the Sakar dataset reached 100% for both EMD-DWT-GTCC-CNN and DWT-EMD-GTCC-CNN; for the PC-GITA dataset, the accuracy reached 100% for EMD-DWT-GTCC-CNN and 96.55% for DWT-EMD-GTCC-CNN. The results of this study indicate that GTCC features are more appropriate and accurate for the assessment of PD than MFCC.
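    To make the feature-extraction step concrete, here is a minimal numpy/scipy sketch of the standard MFCC pipeline (framing, Hamming window, power spectrum, triangular mel filterbank, log, DCT); GTCC follows the same structure with a gammatone filterbank in place of the mel filterbank. This illustrates the generic technique only, not the paper's implementation, and the frame and filterbank sizes are common defaults, not values from the paper:

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc(signal, sr, n_filters=26, n_ceps=13, frame_len=0.025, hop=0.010):
    """Minimal MFCC: frame -> Hamming window -> power spectrum
    -> triangular mel filterbank -> log -> DCT."""
    nfft = int(sr * frame_len)
    step = int(sr * hop)
    frames = np.array([signal[i:i + nfft] * np.hamming(nfft)
                       for i in range(0, len(signal) - nfft, step)])
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for j in range(1, n_filters + 1):
        l, c, r = bins[j - 1], bins[j], bins[j + 1]
        fbank[j - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[j - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    return dct(log_energy, type=2, axis=1, norm='ortho')[:, :n_ceps]

sr = 16000
t = np.arange(0, 1.0, 1 / sr)
coeffs = mfcc(np.sin(2 * np.pi * 440 * t), sr)
print(coeffs.shape)   # (n_frames, n_ceps)
```

The resulting per-frame coefficient matrix is the kind of sequential feature input that the LSTM and CNN classifiers consume.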

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, but their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with the task of early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising ones, since it could be particularly suitable for ambulatory/wearable monitoring. Thus, proper investigation of abnormalities present in cardiac acoustic signals can provide vital clinical information to assist long term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary largely with the recording conditions which makes the analysis challenging. Additionally, there are challenges in the steps used for automatic analysis and classification of cardiac acoustic signals. Broadly, these steps are the segmentation, feature extraction and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim to assist the automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions. Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study for the identification of the symptoms and characteristics are the valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle. 
The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information on fundamental heart sounds. This is a prerequisite step for the feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform and mel-scale filtering, which were then fed to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features which showed the greatest ability in classifying recordings. Selected features were then fed to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2, and normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented. The validity of the proposed algorithm was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve), from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance metrics of the methodology in relation to classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score were evaluated.
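    The Shannon energy envelope used in the localisation step can be sketched as follows; this is a generic illustration on an assumed toy PCG-like signal, not the thesis code. Shannon energy, -x²·log(x²), emphasises medium-intensity components such as S1/S2 over both low-level noise and high-amplitude spikes:

```python
import numpy as np
from scipy.signal import find_peaks

def shannon_energy_envelope(x, sr, win_ms=20):
    """Shannon energy E = -x^2 * log(x^2), smoothed into an envelope
    by a short moving average."""
    x = x / (np.max(np.abs(x)) + 1e-12)          # normalise to [-1, 1]
    e = -x**2 * np.log(x**2 + 1e-12)             # Shannon energy
    win = int(sr * win_ms / 1000)
    return np.convolve(e, np.ones(win) / win, mode='same')

# Toy PCG-like signal: short S1/S2-like bursts plus weak background noise
np.random.seed(0)
sr = 2000
t = np.arange(0, 2.0, 1 / sr)
pcg = np.zeros_like(t)
for onset in (0.1, 0.4, 0.9, 1.2, 1.7):          # burst onset times (s)
    burst = (t > onset) & (t < onset + 0.06)
    pcg[burst] = np.sin(2 * np.pi * 60 * t[burst])
env = shannon_energy_envelope(pcg + 0.01 * np.random.randn(len(t)), sr)
peaks, _ = find_peaks(env, height=0.5 * env.max(), distance=int(0.2 * sr))
print(len(peaks))   # one envelope peak per burst
```

Peak locations on the envelope give candidate S1/S2 positions, which downstream feature extraction and classification then label.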
Results: This thesis proposes four different algorithms to automatically classify fundamental heart sounds – S1 and S2; normal fundamental sounds and abnormal additional lub/dub sound recordings; normal and abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs, using cardiac acoustic signals. The results obtained from these algorithms were as follows:
• The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and F1 scores of 90.65% and 89.42%, in classifying S1 and S2, respectively. 87 features were extracted and statistically studied to identify the top 14 features which showed the best capabilities in classifying S1, S2, and artifacts. The analysis showed that the most relevant features were those extracted using the Maximum Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.
• The algorithm to classify normal fundamental heart sounds and abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% using the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified.
• Normal and abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% in classifying recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.
• The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%.
99 acoustic features were extracted and their abilities to differentiate these abnormalities were examined using weights obtained from the neighborhood component analysis (NCA). The top 10 features which showed the greatest abilities in classifying these abnormalities using recordings from the different databases were also identified. The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measurements of many useful clinical attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate the results using both normal and pathological cardiac acoustic signals, recorded for a long continuous duration of 5 minutes at four different auscultation sites in non-controlled real-world conditions.

    A comparative study of single-channel signal processing methods in fetal phonocardiography

    Fetal phonocardiography is a non-invasive, completely passive and low-cost method based on sensing acoustic signals from the maternal abdomen. However, different types of interference are sensed along with the desired fetal phonocardiogram. This study compares fetal phonocardiography filtering using eight algorithms: Savitzky-Golay filter, finite impulse response filter, adaptive wavelet transform, maximal overlap discrete wavelet transform, variational mode decomposition, empirical mode decomposition, ensemble empirical mode decomposition, and complete ensemble empirical mode decomposition with adaptive noise. The effectiveness of these methods was tested on four types of interference (maternal sounds, movement artifacts, Gaussian noise, and ambient noise) and eleven combinations of these disturbances. The dataset was created using two synthetic records, r01 and r02, where record r02 was loaded with higher levels of interference than record r01. The evaluation was performed using objective parameters such as the accuracy of the detection of S1 and S2 sounds, the signal-to-noise ratio improvement, and the mean error of heart interval measurement. According to all parameters, the best results were achieved using the complete ensemble empirical mode decomposition with adaptive noise method, with average accuracies of 91.53% in the detection of S1 and 68.89% in the detection of S2. The average signal-to-noise ratio improvement achieved by this method was 9.75 dB, and the average mean error of heart interval measurement was 3.27 ms.
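    The signal-to-noise ratio improvement metric used in such evaluations can be computed as below. The moving-average filter here is only a stand-in for the decomposition-based methods compared in the study, and the synthetic tone and noise level are assumed toy values, so the number produced is illustrative:

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR of an estimate against the clean reference, in dB."""
    noise = estimate - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

def snr_improvement(clean, noisy, filtered):
    """SNR improvement = output SNR - input SNR (dB)."""
    return snr_db(clean, filtered) - snr_db(clean, noisy)

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 5000)
clean = np.sin(2 * np.pi * 2.3 * t)                 # heart-tone-like signal
noisy = clean + 0.8 * rng.standard_normal(len(t))
# Stand-in for a decomposition-based filter: a simple moving average
win = 25
filtered = np.convolve(noisy, np.ones(win) / win, mode='same')
imp = snr_improvement(clean, noisy, filtered)
print(round(imp, 2))
```

Averaging this quantity over records and interference combinations yields the per-method SNR-improvement figures reported in the study.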

    Improved Vehicle-Bridge Interaction Modeling and Automation of Bridge System Identification Techniques

    The Federal Highway Administration (FHWA) recognizes the necessity for cost-effective and practical system identification (SI) techniques within structural health monitoring (SHM) frameworks for asset management applications. Indirect health monitoring (IHM), a promising SHM approach, utilizes accelerometer-equipped vehicles to measure bridge modal properties (e.g., natural frequencies, damping ratios, mode shapes) through bridge vibration data to assess the bridge's condition. However, engineers and researchers often encounter noise from road roughness, environmental factors, and vehicular components in collected vehicle signals. This noise contaminates the vehicle signal with spurious modes corresponding to stochastic frequencies, impacting damage monitoring assessments. Thus, an efficient and reliable SI technique is required to process vehicle signals and extract bridge features effectively before practical deployment. To achieve this, vehicle-bridge interaction (VBI) models are often developed to simulate physical data for either initial verification of SI methodologies or for use in a model-updating algorithm to determine the bridge modal properties by tuning the model to the physical data. Common steps in the SI process include signal processing of the raw data, operational modal analysis (OMA), and leveraging machine learning (ML). 
This dissertation proposes a framework for efficient creation of VBI models using commercial code, develops an autonomous SI technique (APPVMD) to extract bridge frequencies from passing vehicles, provides guidelines for improving bridge frequency extraction with multi-vehicle scenarios via an extensive analytical study, demonstrates the need for improved methodologies for simulating road surface roughness effects in VBI models via comparison with physical data, and provides a substantial archive of test data and models that can be leveraged in future studies (road surface profiles using laser profilometers and vehicle acceleration data from four-post shaker testing with the associated vehicle model). The work encompasses four major studies aimed at achieving these research objectives. The first study presents a computationally efficient VBI modeling framework in commercial finite element (FE) software (Abaqus) requiring minimal user coding, suitable for industrial and research communities. The framework's dynamic response is verified using literature data, and a damage modeling methodology is proposed to extend the framework to SHM applications with SI techniques in IHM. In the second study, an autonomous peak-picking variational mode decomposition (APPVMD) framework is introduced to enhance scalability in SI techniques for the IHM of a bridge network. APPVMD leverages signal processing techniques and heuristic models to autonomously extract bridge frequencies from vehicle acceleration responses without prior information or model-informed training. The framework is tested on different vehicles and bridge classes to assess its feasibility, achieving successful bridge frequency extractions in many cases. In the third study, an extensive parametric study is undertaken to determine whether multiple-vehicle scenarios would enhance bridge frequency identification and which vehicle types and driving speeds would be most effective. 
Four vehicles, considered representative of true vehicle properties found in the literature, and six bridges taken from physical bridge data are modeled, including drawings obtained from both the literature and the South Carolina Department of Transportation (SCDOT). The study unveils interesting phenomena regarding the complex interaction between vehicles and bridges, performs brief case studies to further improve bridge frequency extractions, and proposes guidelines that researchers and engineers can follow when preparing to collect acceleration data from vehicles for bridge SI. The fourth study presents preliminary work to experimentally show that current methodologies for representing road surface roughness effects are insufficient. First, a vehicle model is developed to include road surface roughness effects and compared with experimental data collected in a previous study at Clemson University. The results suggest that commonly referenced roughness factors in the literature underestimate road surface roughness effects, while inputting the average values based on road class from the ISO-8608 standard tends to exaggerate them. A moving-average filter (MAF) was found to help attenuate noise but requires appropriate parameter selection. Recommendations for improving road surface roughness modeling in VBI problems are provided. Further work is conducted on a BMW 535 Xi with enhanced ride quality, including verification exercises using a four-post shaker and extensive road tests for real-life road roughness measurements during driving. The study concludes with a suggested path forward for utilizing the collected data. It suggests additional tests that can unveil the tire behavior during road tests to compute the transfer function between the road surface roughness and the unsprung masses in VBI models. This dissertation concludes by summarizing the contributions made to the field of IHM of bridges and outlines the next steps for future research.
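    The peak-picking stage of a bridge-frequency extraction pipeline can be illustrated with plain FFT peak picking on a vehicle acceleration signal; APPVMD itself operates on VMD modes rather than a raw spectrum, and the frequency band and prominence threshold below are assumed values for illustration:

```python
import numpy as np
from scipy.signal import find_peaks

def pick_dominant_frequencies(accel, fs, band=(1.0, 15.0), n_peaks=3):
    """Peak picking on the acceleration spectrum: restrict to the band
    where bridge fundamental frequencies typically lie, then return the
    most prominent spectral peaks as candidate bridge frequencies."""
    spec = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
    freqs = np.fft.rfftfreq(len(accel), 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    idx, props = find_peaks(spec * mask, prominence=0.1 * spec[mask].max())
    order = np.argsort(props['prominences'])[::-1][:n_peaks]
    return np.sort(freqs[idx[order]])

# Toy vehicle signal: 3.1 Hz "bridge" mode + 11 Hz "vehicle" mode + noise
fs = 200
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(7)
acc = (np.sin(2 * np.pi * 3.1 * t) + 0.6 * np.sin(2 * np.pi * 11.0 * t)
       + 0.3 * rng.standard_normal(len(t)))
peaks_hz = pick_dominant_frequencies(acc, fs, n_peaks=2)
print(peaks_hz)
```

Separating which of the picked peaks belongs to the bridge rather than the vehicle is exactly the problem the heuristic models in APPVMD address.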

    A compressible Lagrangian framework for the simulation of underwater implosion problems

    The development of efficient algorithms to understand implosion dynamics presents a number of challenges. The foremost challenge is to efficiently represent the coupled compressible fluid dynamics of the internal air and surrounding water. Secondly, the method must allow one to accurately detect or follow the interface between the phases. Finally, it must be capable of resolving any shock waves which may be created in the air or water during the final stage of the collapse. We present a fully Lagrangian compressible numerical framework for the simulation of underwater implosion. Both air and water are considered compressible, and the equations for Lagrangian shock hydrodynamics are stabilized via a variationally consistent multiscale method [109]. A nodally perfect matched definition of the interface is used [57, 25], and the kinetic variables, pressure and density, are duplicated at the interface level. An adaptive mesh generation procedure, which respects the interface connectivities, is applied to provide enough refinement at the interface level. This framework is then used to simulate the underwater implosion of a large cylindrical bubble, with a size on the order of centimeters. Rapid collapse and growth of the bubble on very small spatial (0.3 mm) and time (0.1 ms) scales, Rayleigh-Taylor instabilities at the interface, and shock waves traveling in the fluid domains are among the phenomena observed in the simulation. We then extend our framework to model the underwater implosion of a cylindrical aluminum container, considering a monolithic fluid-structure interaction (FSI). The aluminum cylinder, which separates the internal atmospheric-pressure air from the external high-pressure water, is modeled by a three-node rotation-free shell element. The cylinder undergoes fast transient deformations, large enough to produce self-contact along it. 
A novel elastic frictionless contact model is used to detect contact and compute the non-penetrating forces in the discretized domain between the mid-planes of the shell. Two schemes are tested: an implicit one using the predictor/multi-corrector Bossak scheme, and an explicit one using the forward Euler scheme. The results of the two simulations are compared with experimental data.
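    As an order-of-magnitude sanity check on the reported collapse timescale, the classical Rayleigh collapse time of an empty spherical cavity, τ = 0.915·R₀·√(ρ/Δp), gives a value close to the 0.1 ms scale quoted above for a cm-scale bubble if one assumes a driving overpressure of a few MPa. The overpressure is an assumed value here, and the simulated bubble is cylindrical rather than spherical, so this is only a rough estimate:

```python
import math

def rayleigh_collapse_time(r0, rho, delta_p):
    """Rayleigh collapse time of an empty spherical cavity:
    tau = 0.915 * R0 * sqrt(rho / delta_p)."""
    return 0.915 * r0 * math.sqrt(rho / delta_p)

# cm-scale bubble in water driven by an assumed ~7 MPa overpressure
tau = rayleigh_collapse_time(r0=0.01, rho=1000.0, delta_p=7e6)
print(f"{tau * 1e3:.2f} ms")
```

The estimate lands on the same order as the 0.1 ms timescale resolved by the simulation, which is consistent with the need for very fine temporal resolution near the final stage of the collapse.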


    Vital Signs Monitoring Based On UWB Radar

    Contactless detection of human vital signs using radar sensors appears to be a promising technology that integrates communication, biomedicine, computer science, etc. Radar-based vital sign detection has been actively investigated in the past decade. Due to advantages such as wide bandwidth, high resolution, and a small and portable size, ultra-wideband (UWB) radar has received a great deal of attention in the health care field. In this thesis, an X4 series UWB radar developed by the Xethru company is adopted to detect human breathing signals through the radar echo reflected by the chest wall movement caused by breathing and heartbeat. The emphasis is placed on the estimation of breathing and heart rates based on several signal processing algorithms. Firstly, the research trend of vital sign detection using radar technology is reviewed, based on which the advantages of contactless detection and UWB radar-based technology are highlighted. Then the theoretical basis and core algorithms of radar signal detection are presented. Meanwhile, the detection system based on the Xethru UWB radar is introduced. Next, several preprocessing methods, including SVD-based clutter and noise removal algorithms, a largest-variance-based target detection method, and an autocorrelation-based breathing-like signal identification method, are investigated to extract the significant component containing the vital signs from the received raw radar echo signal. The thesis then investigates four time-frequency analysis algorithms (fast Fourier transform + band-pass filter (FFT+BPF), empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD), and variational mode decomposition (VMD)) and compares their performance in estimating the breathing rate (BR) and heart rate (HR) in different application scenarios. 
A Python-based vital signs detection system is designed to implement the above-mentioned preprocessing and BR and HR estimation algorithms, based on which a large number of single-target experiments are undertaken to evaluate the four estimation algorithms. Specifically, the single-target experiments are divided into a simple setup and a challenging setup. In the simple setup, the testees face the radar and keep breathing normally in an almost stationary posture, while in the challenging setup, the testee is allowed to perform more actions, such as sitting in different postures, changing the breathing frequency, or holding a deep breath. It is shown that the FFT+BPF algorithm gives the highest accuracy and the fastest calculation speed under the simple setup, while in the challenging setup, the VMD algorithm has the highest accuracy and the widest applicability. Finally, two-target breathing signal detection experiments at different distances to the radar are undertaken, aiming to observe whether the breathing signals of the two targets interfere with each other. We found that when the two targets are not located at the same distance to the radar, the target closer to the radar does not affect the breath detection of the target farther from the radar. When the two targets are located at the same distance, a 'shading effect' appears in the two breathing signals, and only the VMD algorithm can separate the breathing signals of the targets.
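    The SVD-based clutter removal step can be sketched as follows on simulated UWB frames; the geometry, amplitudes, and frame rate are assumed toy values rather than the thesis setup. Static reflections dominate the slow-time/fast-time data matrix, so zeroing the largest singular component suppresses them, after which the breathing rate can be read from the spectrum of the most active range bin:

```python
import numpy as np

def svd_clutter_removal(frames, n_clutter=1):
    """Remove static clutter from a slow-time x range-bin UWB data matrix
    by zeroing its largest singular components, which capture the strong
    stationary background reflections."""
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    s[:n_clutter] = 0.0                      # drop dominant (clutter) terms
    return (U * s) @ Vt

# Toy data: static clutter profile + small oscillation from breathing
rng = np.random.default_rng(3)
fs_slow, n_bins = 20, 64                     # 20 frames/s, 64 range bins
t = np.arange(0, 32, 1 / fs_slow)
clutter = 5.0 * np.exp(-((np.arange(n_bins) - 30) / 6.0) ** 2)
breath = 0.2 * np.sin(2 * np.pi * 0.25 * t)  # 15 breaths per minute
frames = (clutter[None, :]
          + breath[:, None] * np.exp(-((np.arange(n_bins) - 20) / 3.0) ** 2)
          + 0.05 * rng.standard_normal((len(t), n_bins)))
clean = svd_clutter_removal(frames)
# Breathing rate from the strongest range bin after clutter removal
bin_sig = clean[:, np.argmax(np.var(clean, axis=0))]
spec = np.abs(np.fft.rfft(bin_sig))
f = np.fft.rfftfreq(len(bin_sig), 1 / fs_slow)
bpm = f[1:][np.argmax(spec[1:])] * 60
print(round(bpm, 1))
```

In real data the number of clutter components to remove must be chosen carefully, since over-subtraction can also remove part of the weak vital-sign signal.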