13 research outputs found

    Dynamic noise reduction with deep residual shrinkage networks for online fault classification

    Get PDF
    Fault signals in high-voltage (HV) power plant assets are captured using the electromagnetic interference (EMI) technique. Because the EMI signals are acquired under different conditions, they carry varying levels of noise. This work addresses that varying noise using a deep residual shrinkage network (DRSN), which applies shrinkage methods with learned thresholds to de-noise signals during classification, together with a time-frequency signal decomposition method for feature engineering of the raw time-series signals. Several alternative DRSN architectures are trained and validated on expertly labeled EMI fault signals and then tested on previously unseen data; the test signals are first de-noised and then corrupted with controlled amounts of added noise at several levels. Architectures are assessed by their test accuracy across these controlled noise levels. Results show that DRSN architectures using the newly proposed residual shrinkage building unit 2 (RSBU-2) outperform residual shrinkage building unit 1 (RSBU-1) architectures at low signal-to-noise ratios. The findings indicate that learned thresholding performs well in noisy environments and on real-world EMI fault signals, making the approach suitable for real-world EMI fault classification and condition monitoring.
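    The shrinkage step at the heart of a DRSN is soft thresholding with a threshold learned from the feature maps themselves. As a rough, generic illustration of that operation only (not the authors' RSBU-1 or RSBU-2 implementation), the numpy sketch below applies a per-channel soft threshold scaled by each channel's mean absolute value; the scaling factor `alpha` is a stand-in for the learned attention weight.

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrinkage operator: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def shrinkage_denoise(features, alpha=0.5):
    """Per-channel soft thresholding, loosely mimicking a residual shrinkage unit.

    features : (channels, time) array of noisy feature maps
    alpha    : stand-in for the learned, data-dependent attention weight
    """
    # Threshold each channel in proportion to its average magnitude, so the
    # amount of shrinkage adapts to the noise level of that channel.
    tau = alpha * np.mean(np.abs(features), axis=1, keepdims=True)
    return soft_threshold(features, tau)

# Toy usage: a sinusoid buried in Gaussian noise.
t = np.linspace(0.0, 1.0, 500)
noisy = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(500)
denoised = shrinkage_denoise(noisy[np.newaxis, :])
```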

    EEG signal processing methods for BCI applications

    Get PDF
    Brain-computer interface (BCI) is a communication system that translates brain activity into commands for a computer or other digital device. Most BCI systems work by reading and interpreting cortically evoked electro-potentials ("brain waves") from electroencephalogram (EEG) data. EEG data are inherently complex: the signals are non-linear and non-stationary, and therefore difficult to analyze. After acquisition, pre-processing, feature extraction, and dimensionality reduction are performed, after which machine learning algorithms can be applied to classify the signals into classes, where each class corresponds to a specific intention of the user. BCI systems require correct classification of the signals interpreted from the brain for useful operation. This paper reviews our proposed methods for EEG signal processing and classification, which include the Wave Atom transform, the use of nonlinear operators, class-adaptive denoising using shrinkage functions, and real-time training of Voted Perceptron artificial neural networks.
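    As a point of reference for the last item in that list, the Voted Perceptron (Freund and Schapire) keeps every intermediate weight vector together with the number of rounds it survived and classifies by a weighted vote. The sketch below is a minimal, generic numpy version for binary labels, not the paper's real-time implementation; the feature matrix `X` is assumed to hold already extracted EEG features.

```python
import numpy as np

class VotedPerceptron:
    """Minimal voted perceptron for binary labels in {-1, +1}."""

    def __init__(self, epochs=5):
        self.epochs = epochs
        self.weights = []          # list of (weight_vector, vote_count) pairs

    def fit(self, X, y):
        w = np.zeros(X.shape[1])
        c = 0                      # rounds the current weight vector has survived
        for _ in range(self.epochs):
            for xi, yi in zip(X, y):
                if yi * np.dot(w, xi) <= 0:      # mistake: retire w, then update
                    self.weights.append((w.copy(), c))
                    w = w + yi * xi
                    c = 1
                else:
                    c += 1
        self.weights.append((w.copy(), c))
        return self

    def predict(self, X):
        # Each stored vector votes with weight equal to its survival count.
        votes = sum(c * np.sign(X @ w) for w, c in self.weights)
        return np.sign(votes)

# Toy usage on linearly separable features standing in for EEG feature vectors.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
print((VotedPerceptron().fit(X, y).predict(X) == y).mean())
```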

    Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease

    Get PDF
    Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning, and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that attempt to respond to some of these challenges. The first piece of work aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified. The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing. The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA). This would reduce risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD. The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.
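    One generic ingredient in assessing a 3D segmentation of the great arteries is volume overlap with a reference mask. The short sketch below computes the Dice coefficient for 3D binary masks; it is an illustrative metric only, and the sphere phantoms and thresholds are invented for the example rather than taken from the thesis.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice overlap between two 3D binary masks (1 = vessel, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy phantom: two slightly offset spheres on a 64^3 grid.
grid = np.indices((64, 64, 64))
dist_a = np.linalg.norm(grid - np.array([32, 32, 30])[:, None, None, None], axis=0)
dist_b = np.linalg.norm(grid - np.array([32, 32, 34])[:, None, None, None], axis=0)
print(dice_coefficient(dist_a < 10, dist_b < 10))
```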

    Algorithms and Systems for IoT and Edge Computing

    Get PDF
    The idea of distributing signal processing along the path from acquisition to the final application has given rise to the Internet of Things and Edge Computing, which have demonstrated several advantages in terms of scalability, cost, and reliability. In this dissertation, we focus on designing and implementing algorithms and systems that allow complex tasks to be performed on devices with limited resources. Firstly, we assess the trade-off between compression and anomaly detection from both a theoretical and a practical point of view. Information theory provides the rate-distortion analysis, which is extended to consider how information content is processed for detection purposes. Considering an actual Structural Health Monitoring application, two corner cases are analysed: detection under high distortion based on a feature extraction method, and detection under low distortion based on Principal Component Analysis. Secondly, we focus on streaming methods for subspace analysis. In this context, we revise and study state-of-the-art methods targeting devices with limited computational resources. We also consider a real deployment of a streaming Principal Component Analysis algorithm for signal compression in a Structural Health Monitoring application, discussing the trade-offs between possible implementation strategies. Finally, we focus on Compressed Sensing, an alternative compression framework suited to low-end devices. We propose a decoding approach that splits the recovery problem into two stages and adopts a deep neural network together with basic linear algebra to reconstruct biomedical signals. This novel approach outperforms the state of the art in terms of reconstruction quality and requires lower computational resources.
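    The PCA-based corner case couples compression with detection through how much of a signal falls outside the learned subspace. The sketch below illustrates that generic idea with a fixed, batch-fitted PCA basis standing in for the streaming variants the dissertation studies; the smooth training signals and the spike anomaly are invented for the example.

```python
import numpy as np

def fit_pca_basis(X, k):
    """Fit a rank-k PCA basis from training signals X of shape (n_samples, n_features)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                      # principal directions as rows

def compress(x, mean, basis):
    """Project one signal onto the low-dimensional PCA subspace."""
    return basis @ (x - mean)

def anomaly_score(x, mean, basis):
    """Reconstruction error: energy of the signal outside the learned subspace."""
    x_hat = mean + basis.T @ compress(x, mean, basis)
    return np.linalg.norm(x - x_hat)

# Toy usage: train on smooth sinusoids, then score a signal with a spike.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 128)
train = np.stack([np.sin(2 * np.pi * (1 + rng.random()) * t) for _ in range(200)])
mean, basis = fit_pca_basis(train, k=5)
normal = np.sin(2 * np.pi * 1.5 * t)
spiky = normal.copy()
spiky[40] += 3.0
print(anomaly_score(normal, mean, basis), anomaly_score(spiky, mean, basis))
```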

    Size, Shape, and Spatial Distribution Analysis of Sub-Micron Hip Implant Wear Particles at Sub-Optical Resolution Using Deconvolution Methods

    Get PDF
    Total joint replacement (TJR) has long been a common and effective treatment option for individuals suffering from osteoarthritis. However, the bearing surfaces of TJR implants, generally a metal femoral head inserted into an ultra-high molecular weight polyethylene (UHMWPE) acetabular cup, are prone to wear. UHMWPE particles generated through articulating wear can contribute to implant failure and have been shown to pose health risks for patients. Understanding the generation and characteristics of wear particles is crucial for learning how to reduce these health risks and for assessing different implant materials and designs. Using a novel elliptically polarized light imaging system, we employ new techniques for image acquisition, with subsequent processing to enhance particle analysis. Microscopy slides containing UHMWPE wear debris were prepared and imaged using a custom-designed polarizing microscope. A calibration methodology was developed to model the point spread function (PSF) artifact introduced by the optical system. Deconvolution methods, based on the PSF model developed by Gibson et al. [33], were used to remove the PSF from images captured by the optical system. Optimization of the theoretical PSF produced a model that consistently correlated above 0.85, a threshold determined to yield a reasonable improvement in the resolution of the image data. Particle size, shape, and spatial distribution were quantified and used to characterize the imaged particles. Statistical comparison of particle characterization before and after deconvolution, benchmarked against scanning electron microscopy (SEM) images of the same particles, revealed a significant improvement in particle characterization. Further refinement could dramatically improve the software packages presented in this work, offering a robust alternative to SEM analysis of wear debris. The main advantage of this new method is the ability to image UHMWPE wear particles in situ from histology slides of relevant tissues, allowing distribution and location data to be collected in addition to size and shape information.
    M.S., Biomedical Engineering -- Drexel University, 201
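    For readers unfamiliar with PSF removal, the sketch below shows a minimal Richardson-Lucy deconvolution against a known PSF. It is a generic illustration only: the Gaussian PSF and point phantom are stand-ins, not the optimized Gibson et al. model or the imaging pipeline developed in the thesis.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Iterative Richardson-Lucy deconvolution for a known, non-negative PSF."""
    estimate = np.full(image.shape, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + 1e-12)            # guard against division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage: blur a point-like particle with a Gaussian PSF, then deconvolve.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))
psf /= psf.sum()
scene = np.zeros((64, 64))
scene[32, 32] = 1.0
blurred = fftconvolve(scene, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```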

    Informative Data Fusion: Beyond Canonical Correlation Analysis

    Full text link
    Multi-modal data fusion is a challenging but common problem arising in fields such as economics, statistical signal processing, medical imaging, and machine learning. In such applications, we have access to multiple datasets that use different data modalities to describe some system feature. Canonical correlation analysis (CCA) is a multidimensional joint dimensionality reduction algorithm for exactly two datasets. CCA finds a linear transformation for each feature vector set such that the correlation between the two transformed feature sets is maximized. These linear transformations are easily found by taking the SVD of a matrix that involves only the covariance and cross-covariance matrices of the feature vector sets. When these covariance matrices are unknown, an empirical version of CCA substitutes sample covariance estimates formed from training data. However, when the number of training samples is less than the combined dimension of the datasets, CCA fails to reliably detect correlation between the datasets. This thesis explores the problem of detecting correlations from data modeled by the ubiquitous signal-plus-noise data model. We present a modification to CCA, which we call informative CCA (ICCA), that first projects each dataset onto a low-dimensional informative signal subspace. We verify the superior performance of ICCA on real-world datasets and argue the optimality of trim-then-fuse over fuse-then-trim correlation analysis strategies. We provide a significance test for the correlations returned by ICCA and derive improved estimates of the population canonical vectors using insights from random matrix theory. We then extend the analysis of CCA to regularized CCA (RCCA) and demonstrate that setting the regularization parameter to infinity yields the best performance and has the same solution as taking the SVD of the cross-covariance matrix of the two datasets. Finally, we apply the ideas learned from ICCA to multiset CCA (MCCA), which analyzes correlations among more than two datasets. There are multiple formulations of MCCA, each using a different combination of objective function and constraint to describe a notion of multiset correlation. We consider the MAXVAR formulation, provide an informative version of the algorithm, which we call informative MCCA (IMCCA), and demonstrate its superiority on a real-world dataset.
    Ph.D., Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/113419/1/asendorf_1.pd
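    To make the SVD recipe mentioned above concrete, the sketch below implements classical sample CCA: whiten each dataset with its covariance, take the SVD of the resulting coherence matrix, and read the canonical correlations off the singular values. It illustrates only the baseline algorithm, not the informative ICCA/IMCCA variants proposed in the thesis; the small ridge term `reg` and the toy data are added for numerical convenience.

```python
import numpy as np

def cca(X, Y, k=2, reg=1e-6):
    """Classical CCA: top-k canonical vectors and correlations for paired datasets.

    X : (n_samples, p) array, Y : (n_samples, q) array; rows are paired samples.
    reg adds a small ridge so the inverse square roots are numerically stable.
    """
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(C):
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    # Whiten, then take the SVD of the coherence matrix: singular values are the
    # canonical correlations, singular vectors give the linear transformations.
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    A = inv_sqrt(Cxx) @ U[:, :k]             # canonical vectors for X
    B = inv_sqrt(Cyy) @ Vt[:k].T             # canonical vectors for Y
    return A, B, s[:k]

# Toy usage: two noisy views sharing one latent component show one strong correlation.
rng = np.random.default_rng(1)
z = rng.standard_normal(500)
X = np.column_stack([z, rng.standard_normal(500)]) + 0.1 * rng.standard_normal((500, 2))
Y = np.column_stack([z, rng.standard_normal(500)]) + 0.1 * rng.standard_normal((500, 2))
print(cca(X, Y, k=1)[2])                     # close to 1.0
```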

    Six Decades of Flight Research: An Annotated Bibliography of Technical Publications of NASA Dryden Flight Research Center, 1946-2006

    Get PDF
    Titles, authors, report numbers, and abstracts are given for nearly 2900 unclassified and unrestricted technical reports and papers published from September 1946 to December 2006 by the NASA Dryden Flight Research Center and its predecessor organizations. These technical reports and papers describe and give the results of 60 years of flight research performed by the NACA and NASA, from the X-1 and other early X-airplanes to the X-15, Space Shuttle, X-29 Forward Swept Wing, X-31, and X-43 aircraft. Some of the other research airplanes tested were the D-558, phases 1 and 2; the M-2, HL-10, and X-24 lifting bodies; the Digital Fly-By-Wire and Supercritical Wing F-8; the XB-70; the YF-12; the AFTI F-111 TACT and MAW; the F-15 HiDEC; the F-18 High Alpha Research Vehicle; the F-18 Systems Research Aircraft; and the NASA Landing Systems Research aircraft. The citations of reports and papers are listed in chronological order, with author and aircraft indices. In addition, in the appendices, citations of 270 contractor reports, more than 200 UCLA Flight System Research Center reports, nearly 200 Tech Briefs, 30 Dryden Historical Publications, and over 30 videotapes are included.