CNN AND LSTM FOR THE CLASSIFICATION OF PARKINSON'S DISEASE BASED ON THE GTCC AND MFCC
Parkinson's disease is a recognizable clinical syndrome with a variety of causes and clinical presentations; it represents a rapidly growing neurodegenerative disorder. Since about 90 percent of Parkinson's disease sufferers have some form of early speech impairment, recent studies on the telediagnosis of Parkinson's disease have focused on the recognition of voice impairments from vowel phonations or the subjects' discourse. In this paper, we present a new approach for Parkinson's disease detection from speech sounds based on CNN and LSTM models, using two categories of features, Mel-Frequency Cepstral Coefficients (MFCC) and Gammatone Cepstral Coefficients (GTCC), obtained from noise-removed speech signals with comparative EMD-DWT and DWT-EMD analysis. The proposed model is divided into three stages. In the first step, noise is removed from the signals using the EMD-DWT and DWT-EMD methods. In the second step, the GTCC and MFCC are extracted from the enhanced audio signals. The classification is carried out in the third step by feeding these features into the LSTM and CNN models, which are designed to capture sequential information in the extracted features. The experiments are performed on the PC-GITA and Sakar datasets with 10-fold cross-validation. The highest classification accuracy for the Sakar dataset reached 100% for both EMD-DWT-GTCC-CNN and DWT-EMD-GTCC-CNN; for the PC-GITA dataset, the accuracy reached 100% for EMD-DWT-GTCC-CNN and 96.55% for DWT-EMD-GTCC-CNN. The results of this study indicate that GTCC features are more appropriate and accurate for the assessment of PD than MFCC.
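As a rough illustration of the feature-extraction stage, the sketch below computes MFCC-style coefficients with NumPy/SciPy only. The frame size, mel-band count, and coefficient count here are illustrative assumptions, not the paper's settings, and the paper's GTCC features and EMD-DWT denoising are not reproduced.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_coeffs=13):
    """Compute MFCC-style features; all parameters are illustrative."""
    # Frame the signal and apply a Hann window.
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hanning(n_fft)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then a DCT to decorrelate the bands.
    logmel = np.log(power @ fbank.T + 1e-10)
    return dct(logmel, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```

GTCC features follow the same pipeline with a gammatone filterbank in place of the mel filterbank.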
An open access database for the evaluation of heart sound algorithms
This is an author-created, un-copyedited version of an article published in Physiological Measurement. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at https://doi.org/10.1088/0967-3334/37/12/2181

In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially automated heart sound segmentation and classification, has been widely studied and reported to have potential value for detecting pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected in a variety of clinical and nonclinical (such as in-home visit) environments with a variety of equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency, and the sensor types used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge.
A description of the PhysioNet/CinC Challenge 2016 is provided, including the main aims, the training and test sets, the hand-corrected annotations for different heart sound states, the scoring mechanism, and the associated open source code. In addition, several potential benefits of the public heart sound database are discussed.

This work was supported by the National Institutes of Health (NIH) grant R01-EB001659 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and R01GM104987 from the National Institute of General Medical Sciences.

Liu, C.; Springer, DC.; Li, Q.; Moody, B.; Abad Juan, RC.; et al. (2016). An open access database for the evaluation of heart sound algorithms. Physiological Measurement 37(12):2181-2213. doi:10.1088/0967-3334/37/12/2181
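Heart sound segmentation front ends commonly start from a smoothed amplitude envelope of the PCG; the sketch below shows one widespread variant, a homomorphic-style envelope built from the Hilbert transform and a low-pass filter. The sampling rate and cutoff are illustrative assumptions, not values prescribed by the Challenge code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def homomorphic_envelope(pcg, fs, lpf_hz=8.0):
    """Smoothed amplitude envelope of a PCG signal (illustrative parameters)."""
    # Instantaneous amplitude from the analytic signal.
    amplitude = np.abs(hilbert(pcg))
    # Low-pass filter the log-amplitude, then exponentiate back.
    b, a = butter(1, lpf_hz / (fs / 2.0), btype='low')
    return np.exp(filtfilt(b, a, np.log(amplitude + 1e-12)))
```

Peaks of this envelope roughly mark the S1 and S2 sounds, which segmentation algorithms then assign to cardiac states.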
A Compressed Sampling and Dictionary Learning Framework for WDM-Based Distributed Fiber Sensing
We propose a compressed sampling and dictionary learning framework for
fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is
generated from a model for the reflected sensor signal. Imperfect prior
knowledge is considered in terms of uncertain local and global parameters. To
estimate a sparse representation and the dictionary parameters, we present an
alternating minimization algorithm that is equipped with a pre-processing
routine to handle dictionary coherence. The support of the obtained sparse
signal indicates the reflection delays, which can be used to measure
impairments along the sensing fiber. The performance is evaluated by
simulations and experimental data for a fiber sensor system with common core
architecture.

Comment: Accepted for publication in Journal of the Optical Society of America A. © 2017 Optical Society of America. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modifications of the content of this paper are prohibited.
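The sparse-representation step at the heart of such a framework can be illustrated with a generic greedy solver. The sketch below uses orthogonal matching pursuit (OMP, a stand-in solver the paper does not necessarily use) over a toy dictionary of delayed Gaussian pulses; the recovered support indicates the reflection delays, mirroring how the sparse support localizes reflectors along the fiber.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: pick k atoms of D that best explain y."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit all selected atoms jointly by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

# Toy dictionary: unit-norm Gaussian pulses, one per candidate delay.
n, width = 400, 8.0
delays = np.arange(n)
t = np.arange(n)[:, None]
D = np.exp(-0.5 * ((t - delays[None, :]) / width) ** 2)
D /= np.linalg.norm(D, axis=0)

# Synthetic sensor trace: reflections at delays 120 and 290.
y = 1.0 * D[:, 120] + 0.6 * D[:, 290]
support, coef = omp(D, y, k=2)
```

The paper's alternating minimization additionally refines uncertain dictionary parameters between sparse-coding steps and pre-processes the dictionary to handle coherence; neither refinement is shown here.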
Frequency-modulated continuous-wave LiDAR compressive depth-mapping
We present an inexpensive architecture for converting a frequency-modulated
continuous-wave LiDAR system into a compressive-sensing based depth-mapping
camera. Instead of raster scanning to obtain depth-maps, compressive sensing is
used to significantly reduce the number of measurements. Ideally, our approach
requires two difference detectors, but it can operate with only one at the cost
of doubling the number of measurements. Due to the large flux entering the
detectors, the signal amplification from heterodyne detection, and the effects
of background subtraction from compressive sensing, the system can obtain
higher signal-to-noise ratios than detector-array based schemes while scanning
a scene faster than is possible through raster scanning. Moreover, we show how
a single total-variation minimization and two fast least-squares minimizations,
instead of a single complex nonlinear minimization, can efficiently recover
high-resolution depth-maps with minimal computational overhead. By
efficiently storing only a subset of the data points from measurements of the
scene, we can easily extract depths by solving only two linear equations
with efficient convex-optimization methods.
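The depth recovery in an FMCW system ultimately rests on the standard beat-frequency relation d = c·f_beat / (2·γ), where γ is the chirp rate (sweep bandwidth divided by chirp duration). A minimal worked example, with illustrative parameters rather than the paper's hardware values:

```python
C = 3.0e8  # speed of light, m/s

def fmcw_depth(f_beat_hz, bandwidth_hz, t_chirp_s):
    """Depth from the measured beat frequency of a linear FMCW chirp."""
    chirp_rate = bandwidth_hz / t_chirp_s       # Hz per second
    return C * f_beat_hz / (2.0 * chirp_rate)   # meters

# Example: a 1 GHz sweep over 1 ms gives a 1e12 Hz/s chirp rate,
# so a 1 MHz beat tone corresponds to a range of 150 m.
d = fmcw_depth(1.0e6, 1.0e9, 1.0e-3)
```

The compressive scheme measures superpositions of such beat tones across many pixels at once, then unmixes them in post-processing instead of scanning pixel by pixel.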
Distributed Fiber Ultrasonic Sensor and Pattern Recognition Analytics
Ultrasound interrogation and structural health monitoring technologies have found a wide array of applications in the health care, aerospace, automobile, and energy sectors. To achieve high spatial resolution, large-array electrical transducers have been used in these applications to harness sufficient data for both monitoring and diagnosis. Electronic sensors, which are often expensive and cumbersome for large-scale deployments, have been the standard technology for ultrasonic detection.
Fiber-optic sensors offer the advantages of a smaller cross-sectional area, humidity resistance, immunity to electromagnetic interference, and compatibility with telemetry and telecommunications applications, which make them attractive alternatives for use as ultrasonic sensors. A unique trait of fiber sensors is their ability to perform distributed acoustic measurements, achieving high spatial resolution detection using a single fiber. Using ultrafast laser direct-writing techniques, nano-reflectors can be induced inside fiber cores to drastically improve the signal-to-noise ratio of distributed fiber sensors. This dissertation explores the applications of laser-fabricated nano-reflectors in optical fiber cores for both multi-point intrinsic Fabry–Perot (FP) interferometer sensors and distributed phase-sensitive optical time-domain reflectometry (φ-OTDR) for use in ultrasound detection.
The multi-point intrinsic FP interferometer was based on swept-frequency interferometry with an optoelectronic phase-locked loop that interrogated cascaded FP cavities to obtain ultrasound patterns. The ultrasound was demodulated through a reassigned short-time Fourier transform combined with maximum-energy ridge tracking. With cavity lengths of tens of centimeters, this approach achieved 20 kHz ultrasound detection that was finesse-insensitive, noise-free, highly sensitive, and scalable through multiplexing.
The use of φ-OTDR with enhanced Rayleigh backscattering compensated for the inherently low signal-to-noise ratio (SNR). The dynamic strain between two adjacent nano-reflectors was extracted using 3×3 coupler demodulation within a Michelson interferometer. With an SNR improvement of over 35 dB, this was adequate for recognizing subtle differences in signals, such as human footsteps and abnormal acoustic echoes from pipeline corrosion. With the help of artificial intelligence for pattern recognition, events can be identified with high accuracy in perimeter security and structural health monitoring, with further potential that can be harnessed using unsupervised learning.
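The ridge-tracking idea above can be sketched with a plain short-time Fourier transform: in each frame, take the frequency bin of maximum energy. This is a stand-in for the reassigned transform in the text (no time-frequency reassignment is performed), with illustrative parameters.

```python
import numpy as np
from scipy.signal import stft

def max_energy_ridge(x, fs, nperseg=256):
    """Track the frequency of maximum spectral energy in each STFT frame."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    # For every time frame, pick the bin with the largest magnitude.
    ridge = f[np.argmax(np.abs(Z), axis=0)]
    return t, ridge

# Linear chirp with instantaneous frequency 1 kHz -> 5 kHz over 1 s:
# the tracked ridge should rise over time.
fs = 20000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (1000 * t + 2000 * t ** 2))
times, ridge = max_energy_ridge(x, fs)
```

Reassignment sharpens the spectrogram before the argmax step, which makes the ridge estimate less sensitive to window smearing.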
Characterization of damage evolution on metallic components using ultrasonic non-destructive methods
When fatigue is considered, it is expected that structures and machinery will eventually fail. Still, when this damage is unexpected, besides the negative economic impact it produces, people's lives could potentially be at risk. Thus, it is imperative nowadays that infrastructure managers program regular inspection and maintenance for their assets, and that designers and materials manufacturers have access to appropriate diagnostic tools in order to build superior and more reliable materials. In this regard, and for a number of applications, non-destructive evaluation techniques have proven to be an efficient and helpful alternative to traditional destructive assays of materials. In the materials design area particularly, researchers have recently exploited the Acoustic Emission (AE) phenomenon as an additional assessment tool with which to characterize the mechanical properties of specimens. Nevertheless, several challenges arise when treating this phenomenon, since its intensity, duration, and arrival behavior are essentially stochastic for traditional signal processing means, leading to inaccuracies in the resulting assessment.
In this dissertation, efforts are focused on assisting in the characterization of the mechanical properties of advanced high strength steels during uniaxial tensile tests. Of particular interest is the ability to detect the nucleation and growth of a crack throughout such a test. Therefore, the AE waves generated by the specimen during the test are assessed with the aim of characterizing their evolution.
To this end, the introduction gives a brief review of non-destructive methods, emphasizing the AE phenomenon. Next, an exhaustive analysis is presented of the challenges and deficiencies in detecting and segmenting each AE event over a continuous data stream with the traditional threshold detection method and with current state-of-the-art methods. Following this, a novel AE event detection method is proposed with the aim of overcoming the aforementioned limitations. Evidence showed that the proposed method, which is based on the short-time features of the AE signal waveform, exceeds the detection capabilities of current state-of-the-art methods when onset and end-time precision are considered, as well as when quality of detection and computational speed are also considered. Finally, a methodology aimed at analyzing the frequency spectrum evolution of the AE phenomenon during the tensile test is proposed. Results indicate that it is feasible to correlate the nucleation and growth of a crack with the frequency content evolution of AE events.
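The baseline that the dissertation improves upon, threshold detection of AE events, can be sketched in a few lines: flag the first frame whose short-time energy exceeds a multiple of the background level. This is a crude stand-in using only frame energy; the proposed method relies on richer short-time waveform features.

```python
import numpy as np

def detect_ae_onset(signal, frame=64, factor=5.0):
    """Return the sample index of the first frame whose short-time energy
    exceeds `factor` times the median frame energy, or None."""
    n_frames = len(signal) // frame
    frames = signal[:n_frames * frame].reshape(n_frames, frame)
    energy = (frames ** 2).sum(axis=1)
    threshold = factor * np.median(energy)
    hits = np.nonzero(energy > threshold)[0]
    return int(hits[0]) * frame if hits.size else None

# Synthetic stream: background noise with a decaying burst at sample 6000.
rng = np.random.default_rng(1)
x = 0.01 * rng.standard_normal(16000)
burst = np.sin(2 * np.pi * 0.1 * np.arange(400)) * np.exp(-np.arange(400) / 150)
x[6000:6400] += burst
onset = detect_ae_onset(x)
```

A fixed threshold like this struggles exactly where the dissertation reports deficiencies: overlapping events, slowly rising onsets, and a drifting noise floor.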
Least-Squares Wavelet Analysis and Its Applications in Geodesy and Geophysics
The Least-Squares Spectral Analysis (LSSA) is a robust method of analyzing unequally spaced and non-stationary data/time series. Although this method takes into account the correlation among the sinusoidal basis functions of irregularly spaced series, its spectrum still shows spectral leakage: power/energy leaks from one spectral peak into another. An iterative method called AntiLeakage Least-Squares Spectral Analysis (ALLSSA) is developed to attenuate the spectral leakages in the spectrum and consequently is used to regularize data series. In this study, the ALLSSA is applied to regularize and attenuate random noise in seismic data down to a certain desired level. The ALLSSA is subsequently extended to multichannel, heterogeneous and coarsely sampled seismic and related gradient measurements intended for geophysical exploration applications that require regularized (equally spaced) data free from aliasing effects.
A new and robust method of analyzing unequally spaced and non-stationary time/data series is rigorously developed. This method, namely, the Least-Squares Wavelet Analysis (LSWA), is a natural extension of the LSSA that decomposes a time series into the time-frequency domain and obtains its spectrogram. It is shown through many synthetic and experimental time/data series that the LSWA supersedes all state-of-the-art spectral analysis methods currently available, without making any assumptions about or preprocessing (editing) the time series, or even applying any empirical methods that aim to adapt a time series to the analysis method. The LSWA can analyze any non-stationary and unequally spaced time series with components of low or high amplitude and frequency variability over time, including datum shifts (offsets), trends, and constituents of known forms, and by taking into account the covariance matrix associated with the time series. The stochastic confidence level surface for the spectrogram is rigorously derived that identifies statistically significant peaks in the spectrogram at a certain confidence level;
this supersedes the empirical cone of influence used in the most popular continuous wavelet transform.
All current state-of-the-art cross-wavelet transforms and wavelet coherence analysis methods impose many stringent constraints on the properties of the time series under investigation, requiring, more often than not, preprocessing of the raw measurements that may distort their content. These methods cannot generally be used to analyze unequally spaced and non-stationary time series, or even two equally spaced time series of different sampling rates, with trends and/or datum shifts, and with associated covariance matrices. To overcome the stringent requirements of these methods, a new method is developed, namely, the Least-Squares Cross-Wavelet Analysis (LSCWA), along with its statistical distribution that requires no assumptions on the series under investigation. Numerous synthetic and geoscience examples establish the LSCWA as the method of methods for rigorous coherence analysis of any experimental series.
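A minimal version of the least-squares spectrum for unequally spaced data fits a cosine-sine pair at each trial frequency and records the fraction of the series' energy it explains. The sketch below shows this core idea only, not the authors' full LSSA/ALLSSA machinery (no covariance weighting, known constituents, or antileakage iteration).

```python
import numpy as np

def ls_spectrum(t, y, freqs):
    """Least-squares spectrum of an unequally spaced series: fraction of
    signal energy captured by a sinusoid at each trial frequency."""
    y = y - y.mean()
    total = float(y @ y)
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        # Design matrix for one cosine/sine pair at frequency f.
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        fit = A @ coef
        power[i] = float(fit @ fit) / total
    return power

# Unequally spaced samples of a 2 Hz sinusoid: the spectrum peaks at 2 Hz.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(2 * np.pi * 2.0 * t)
freqs = np.linspace(0.5, 5.0, 91)
spec = ls_spectrum(t, y, freqs)
```

Because each frequency is fitted by least squares rather than by an FFT, no interpolation onto a regular grid is needed, which is what lets the method handle gaps and uneven sampling directly.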
Development of electroencephalogram (EEG) signals classification techniques
Electroencephalography (EEG) is one of the most important signals recorded from
humans. It can assist scientists and experts to understand the most complex part of the
human body, the brain. Analysing EEG signals is therefore a key step in extracting
significant information from brain dynamics, and it plays a prominent role in brain
studies. EEG data are very important for diagnosing a variety of brain disorders, such
as epilepsy and sleep problems, and for assisting patients with disabilities to interact
with their environment through brain-computer interfaces (BCI). However, although
EEG signals contain a huge amount of information about the brain's activities, their
analysis and classification remain limited. In addition, the manual examination of
these signals for diagnosing related diseases is time consuming and not always
accurate. Several studies have attempted to develop different analysis and
classification techniques to categorise EEG recordings.
The analysis of EEG recordings can lead to a better understanding of the cognitive
process. It is used to extract the important features and reduce the dimensions of EEG
data. In the classification process, machine learning algorithms are used to detect the
particular class of EEG signal based on its extracted features. The performance of these
algorithms, in which the class membership of the input signal is determined, can then
be used to infer what event in the real-world process occurred to produce the input
signal. The classification procedure has the potential to assist experts to diagnose the
related brain disorders. To evaluate and diagnose neurological disorders properly, it is
necessary to develop new automatic classification techniques. These techniques will
help to classify different EEG signals and determine whether a person is in good
health. This project aims to develop new techniques to enhance the analysis and
classification of different categories of EEG data.
A simple random sampling (SRS) and sequential feature selection (SFS) method
was developed and named the SRS_SFS method. In this method, firstly, an SRS
technique was used to extract statistical features from the original EEG data in the
time domain. The extracted features were used as the input to an SFS algorithm for
key feature selection. A least-squares support vector machine (LS_SVM) method was
then applied to EEG signal classification to evaluate the performance of the proposed
approach.
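The greedy selection loop at the heart of SFS can be sketched as follows. The scorer here is a simple nearest-centroid training accuracy, an illustrative stand-in for the LS_SVM used in the thesis, and the toy data are not EEG features.

```python
import numpy as np

def centroid_accuracy(X, y):
    """Training accuracy of a nearest-centroid rule (stand-in scorer)."""
    classes = np.unique(y)
    cents = np.stack([X[y == c].mean(axis=0) for c in classes])
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return float((classes[np.argmin(d, axis=1)] == y).mean())

def sequential_forward_selection(X, y, k):
    """Greedily add, one at a time, the feature that most improves the score."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = [centroid_accuracy(X[:, selected + [j]], y) for j in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 0 is discriminative, features 1-2 are pure noise.
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 100)
X = rng.standard_normal((200, 3))
X[:, 0] += 3.0 * y  # class-dependent shift on feature 0 only
chosen = sequential_forward_selection(X, y, k=2)
```

In practice the score should come from cross-validation rather than training accuracy to avoid selecting features that merely overfit.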
Secondly, a novel approach that combines optimum allocation (OA) and spectral
density estimation methods was proposed to analyse EEG signals and classify
epileptic seizures. In this study, the OA technique was introduced in two levels to
determine representative sample points from the EEG recordings. To reduce the
dimensions of sample points and extract representative features from each OA sample
segment, two power spectral density estimation methods, periodogram and
autoregressive, were used. At the end, three popular machine learning methods
(support vector machine (SVM), quadratic discriminant analysis, and k-nearest
neighbor (k-NN)) were employed to evaluate the performance of the suggested
algorithm.
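The periodogram feature-extraction step can be sketched as below: compute the power spectral density of a segment and average it within frequency bands. The conventional delta/theta/alpha/beta band edges used here are an illustrative assumption, not necessarily the thesis's exact configuration.

```python
import numpy as np
from scipy.signal import periodogram

def band_power_features(segment, fs,
                        bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Average PSD within standard EEG bands (delta/theta/alpha/beta)."""
    f, pxx = periodogram(segment, fs=fs)
    return np.array([pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands])

# A 10 Hz test tone falls in the alpha band (8-13 Hz), so that feature
# should dominate the resulting vector.
fs = 256
t = np.arange(fs * 4) / fs
feats = band_power_features(np.sin(2 * np.pi * 10.0 * t), fs)
```

The autoregressive PSD estimate mentioned in the text would replace `periodogram` with a parametric model fit; the band-averaging step is unchanged.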
Additionally, a Tunable Q-factor wavelet transform (TQWT) based algorithm was
developed for epileptic EEG feature extraction. The extracted features were forwarded
to the bagging tree, k-NN, and SVM as classifiers to evaluate the performance of the
proposed feature extraction technique. The proposed TQWT method was tested on two
different EEG databases.
Finally, a new classification system was presented for epileptic seizure detection in
EEGs, blending the frequency domain with an information gain (InfoGain) technique.
The fast Fourier transform (FFT) and the discrete wavelet transform (DWT) were
applied individually to decompose EEG recordings into frequency bands for feature
extraction. To select the most important features, the InfoGain technique was
employed. An LS_SVM classifier was used to evaluate the performance of this system.
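The information gain of a continuous feature about the class labels can be estimated by discretizing the feature and comparing label entropy before and after conditioning. The sketch below is an illustrative stand-in for the thesis's InfoGain selection, with an arbitrary bin count.

```python
import numpy as np

def info_gain(x, y, bins=10):
    """Information gain of continuous feature x about labels y,
    via histogram discretization of x."""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())
    # Discretize x into equal-width bins.
    edges = np.histogram_bin_edges(x, bins=bins)
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    # Gain = H(y) - H(y | binned x).
    h_y = entropy(y)
    h_y_given_x = sum((idx == b).mean() * entropy(y[idx == b])
                      for b in np.unique(idx))
    return h_y - h_y_given_x

# A class-informative feature should score well above a pure-noise one.
rng = np.random.default_rng(3)
y = np.repeat([0, 1], 200)
informative = rng.standard_normal(400) + 2.0 * y
noise = rng.standard_normal(400)
```

Ranking all candidate features by this score and keeping the top few is the selection step that precedes the LS_SVM classifier.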
The research indicates that the proposed techniques are practical and effective
for classifying epileptic EEG disorders and can help provide the most important
clinical information about patients with brain disorders.
Sensor Signal and Information Processing II
In the current age of information explosion, newly invented technological sensors and software are now tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications that extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.