
    Development of new fault detection methods for rotating machines (roller bearings)

    Abstract Early fault diagnosis of roller bearings is extremely important for rotating machines, especially for high-speed, automatic, and precise machines. Many research efforts have been focused on fault diagnosis and detection of roller bearings, since they constitute one of the most important elements of rotating machinery. In this study a combined method is proposed for early damage detection of roller bearings. The wavelet packet transform (WPT) is applied to the collected data for denoising, and the resulting clean data are decomposed into elementary components called intrinsic mode functions (IMFs) using the ensemble empirical mode decomposition (EEMD) method. The normalized energies of the first three IMFs are used as input to a support vector machine (SVM) to recognize whether signals originate from healthy or faulty bearings. Then, since there is no robust guide for determining the amplitude of the added noise in the EEMD technique, a new performance-improved EEMD (PIEEMD) is proposed to determine the appropriate value of added noise. A novel feature extraction method is also proposed for detecting small-size defects using the Teager-Kaiser energy operator (TKEO). TKEO is applied to the obtained IMFs to create new feature vectors as input data for a one-class SVM. The results of applying the method to acceleration signals collected from an experimental bearing test rig demonstrate that the method can be successfully used for early damage detection of roller bearings. Most of the diagnostic methods developed up to now can be applied only to the case of stationary working conditions (constant speed and load). However, bearings often work under time-varying conditions, as in wind turbine support bearings, mining excavator bearings, vehicles, robots, and all processes with run-up and run-down transients.
Damage identification for bearings working under non-stationary operating conditions, especially for early/small defects, requires appropriate techniques, generally different from those used for stationary conditions, in order to extract features that are sensitive to faults yet insensitive to variations in the operating conditions. Some methods have been proposed for damage detection of bearings working under time-varying speed conditions. However, their application might increase the instrumentation cost because they require a phase reference signal. Furthermore, some methods, such as order tracking, can only be applied when the speed variation is limited. In this study, a novel combined method based on cointegration is proposed for the development of fault features that are sensitive to the presence of defects while at the same time insensitive to changes in the operational conditions. It does not require any additional measurements and can identify defects even for considerable speed variations. The signals acquired during the run-up condition are decomposed into IMFs using the performance-improved EEMD method. Then, the cointegration method is applied to the intrinsic mode functions to extract stationary residuals. The feature vectors are created by applying the Teager-Kaiser energy operator to the obtained stationary residuals. Finally, the feature vectors of the healthy bearing signals are used to construct a separating hyperplane using a one-class support vector machine. Eventually the proposed method was applied to vibration signals measured on an experimental bearing test rig. The results verified that the method can successfully distinguish between healthy and faulty bearings even if the shaft speed changes dramatically.
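The Teager-Kaiser energy operator used in both studies above has a simple discrete form, psi[n] = x[n]^2 - x[n-1]*x[n+1]. A minimal numpy sketch (the constant-output property for a pure tone is a standard result, not something stated in the abstracts):

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*cos(w*n) the operator returns the constant
# A**2 * sin(w)**2, so it tracks both amplitude and frequency;
# impulsive defect signatures in an IMF show up as sharp peaks.
tone = 2.0 * np.cos(0.1 * np.arange(1000))
psi = teager_kaiser(tone)
```

Because the operator is local and sensitive to impulsive content, applying it to IMFs highlights the short transients produced by early bearing defects.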

    Hand (Motor) Movement Imagery Classification of EEG Using Takagi-Sugeno-Kang Fuzzy-Inference Neural Network

    Approximately 20 million people in the United States suffer from irreversible nerve damage and would benefit from a neuroprosthetic device modulated by a Brain-Computer Interface (BCI). These devices restore independence by replacing peripheral nervous system functions such as peripheral control. Although there are currently devices under investigation, contemporary methods fail to offer adaptability and proper signal recognition for output devices. Human anatomical differences prevent a fixed model system from providing consistent classification performance across subjects. Furthermore, notoriously noisy signals such as electroencephalography (EEG) require complex measures for signal detection. Therefore, there remains a tremendous need to explore and improve new algorithms. This report investigates a signal-processing model that is better suited for BCI applications because it incorporates machine learning and fuzzy logic. Whereas traditional machine learning techniques use precise functions to map the input into the feature space, fuzzy-neuro systems apply imprecise membership functions to account for uncertainty and can be updated via supervised learning. Thus, this method is better equipped to tolerate uncertainty and improve performance over time. Moreover, the variation of this algorithm used in this study has a higher convergence speed. The proposed two-stage signal-processing model consists of feature extraction and feature translation, with an emphasis on the latter. The feature extraction phase includes Blind Source Separation (BSS) and the Discrete Wavelet Transform (DWT), and the feature translation stage includes the Takagi-Sugeno-Kang Fuzzy-Neural Network (TSKFNN). The proposed model achieves an average classification accuracy of 79.4% across 40 subjects, higher than the 75% typically reported in the literature, making this a superior model.
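The abstract does not give the network's equations, but the inference step of a first-order Takagi-Sugeno-Kang fuzzy system, the core that a TSKFNN learns, can be sketched as follows. The Gaussian membership functions and the specific rule parameters here are illustrative assumptions; in the network these parameters would be tuned by supervised learning:

```python
import numpy as np

def tsk_infer(x, centers, sigmas, coeffs, biases):
    """First-order TSK inference for one input vector x.
    Rule r fires with strength prod_j exp(-(x_j - c_rj)**2 / (2 s_rj**2));
    its consequent is the linear function a_r . x + b_r. The output is
    the firing-strength-weighted average of the consequents."""
    w = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2)).prod(axis=1)
    y_r = coeffs @ x + biases
    return (w * y_r).sum() / w.sum()

# Two hypothetical rules in a 2-D feature space.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.ones((2, 2))
coeffs = np.zeros((2, 2))
biases = np.array([0.0, 5.0])
y = tsk_infer(np.zeros(2), centers, sigmas, coeffs, biases)
```

Inputs near a rule's center pull the output toward that rule's consequent, which is what gives the system its graded, uncertainty-tolerant behavior.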

    Wavelet transform-based de-noising for two-photon imaging of synaptic Ca2+ transients.

    Postsynaptic Ca(2+) transients triggered by neurotransmission at excitatory synapses are a key signaling step for the induction of synaptic plasticity and are typically recorded in tissue slices using two-photon fluorescence imaging with Ca(2+)-sensitive dyes. The signals generated are small, with very low peak signal/noise ratios (pSNRs) that make detailed analysis problematic. Here, we implement a wavelet-based de-noising algorithm (PURE-LET) to enhance the signal/noise ratio for Ca(2+) fluorescence transients evoked by single synaptic events under physiological conditions. Using simulated Ca(2+) transients with defined noise levels, we analyzed the ability of the PURE-LET algorithm to retrieve the underlying signal. Fitting single Ca(2+) transients with an exponential rise-and-decay model revealed a distortion of τ(rise) but improved accuracy and reliability of τ(decay) and peak amplitude after PURE-LET de-noising compared to raw signals. The PURE-LET de-noising algorithm also provided a ∼30-dB gain in pSNR, compared to a ∼16-dB pSNR gain after an optimized binomial filter. The higher pSNR provided by PURE-LET de-noising increased discrimination accuracy between successes and failures of synaptic transmission, as measured by the occurrence of synaptic Ca(2+) transients, by ∼20% relative to an optimized binomial filter. Furthermore, in comparison to the binomial filter, no optimization of PURE-LET de-noising was required to reduce arbitrary bias. In conclusion, de-noising fluorescent Ca(2+) transients using PURE-LET enhances detection and characterization of Ca(2+) responses at central excitatory synapses. C.M.T. and J.R.M. were supported by the Wellcome Trust, and K.T.-A. was supported by grant No. EP/I018638/1 from the Engineering and Physical Sciences Research Council.
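PURE-LET itself combines a Poisson-aware risk estimate with a linear expansion of thresholds, which is well beyond a short sketch. As a much simpler stand-in, generic one-level Haar wavelet shrinkage illustrates the decompose/threshold/reconstruct pattern that wavelet de-noisers of this family share (the Haar basis, single level, and threshold value are assumptions for illustration only):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet shrinkage (illustrative stand-in for
    PURE-LET): split into approximation/detail coefficients,
    soft-threshold the details, then invert the transform.
    Signal length must be even."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band (mostly noise)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

With a zero threshold the transform reconstructs the input exactly; with a threshold a few times the noise standard deviation it suppresses the high-frequency noise that dominates the detail band.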

    A hybrid unsupervised approach toward EEG epileptic spikes detection

    Epileptic spikes are complementary sources of information in EEG for diagnosing and localizing the origin of epilepsy. However, not only is visual inspection of EEG labor intensive, time consuming, and prone to human error, but it also requires long-term training to acquire the level of skill needed to identify epileptic discharges. Therefore, computer-aided approaches have been employed to save time and increase detection and source localization accuracy. One of the most important artifacts that may be confused with an epileptic spike, due to morphological resemblance, is the eye blink. Only a few studies consider removal of this artifact prior to detection, and most of them used either visual inspection or computer-aided approaches that need expert supervision. Consequently, in this paper, an unsupervised, EEG-based system with an embedded eye blink artifact remover is developed to detect epileptic spikes. The proposed system includes three stages: eye blink artifact removal, feature extraction, and classification. The wavelet transform was employed for both the artifact removal and feature extraction steps, and an adaptive neuro-fuzzy inference system for classification. The proposed method is verified using a publicly available EEG dataset. The results show the efficiency of this algorithm in detecting epileptic spikes from low-resolution EEG with the least computational complexity, the highest sensitivity, and less human interaction compared to similar studies. Moreover, since epileptic spike detection is a vital component of epilepsy source localization, this algorithm can be utilized for EEG-based pre-surgical evaluation of epilepsy.
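The abstract names wavelet-based feature extraction feeding a neuro-fuzzy classifier but does not specify the features. One common wavelet feature vector is the relative energy of each detail subband, sketched here with a Haar filter bank (the actual wavelet, decomposition depth, and feature set of the paper are not given, so these are illustrative choices):

```python
import numpy as np

def haar_level(x):
    """One level of the Haar analysis filter bank."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail
    return a, d

def subband_energy_features(x, levels=4):
    """Relative energy per wavelet subband: the detail energy at each
    decomposition level plus the final approximation energy, normalized
    to sum to 1. Signal length must be divisible by 2**levels."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_level(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    e = np.array(energies)
    return e / e.sum()
```

A sharp spike concentrates energy in the fine-scale detail bands, while a slow eye-blink deflection concentrates it near the approximation band, which is what lets a classifier separate the two morphologies.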

    Fault Diagnosis of Rotating Equipment Bearing Based on EEMD and Improved Sparse Representation Algorithm

    The vibration signals of rolling bearings working in a harsh environment are mixed with many harmonic components and noise, while the traditional sparse representation algorithm takes a long time to compute and has limited accuracy. To address these problems, a bearing fault feature extraction method based on the ensemble empirical mode decomposition (EEMD) algorithm and improved sparse representation is proposed. Firstly, an improved orthogonal matching pursuit (adapOMP) algorithm is used to separate the harmonic components in the signal and obtain a filtered signal. The processed signal is decomposed by EEMD, and the components with a kurtosis greater than three are reconstructed. Then, a Hankel matrix transformation is carried out to construct the learning dictionary. An improved termination criterion gives the K-singular value decomposition (K-SVD) algorithm a degree of adaptability, and the reconstructed signal is built by processing the EEMD results. A comparative analysis of the three methods under strong noise shows that, although the K-SVD algorithm can produce good results after adapOMP processing, its effect is not obvious in the low-frequency range. The method proposed in this paper can effectively extract the impact component from the signal. This will have a positive effect on the extraction of rotating machinery impact features in complex noise environments.
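The adapOMP modifications are not detailed in the abstract, but plain orthogonal matching pursuit, the baseline it improves on, fits in a few lines: greedily pick the dictionary atom most correlated with the current residual, then re-fit all chosen atoms by least squares (unit-norm dictionary columns assumed):

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: at each step select the atom of D
    most correlated with the residual, then solve a least-squares fit
    over all selected atoms and update the residual."""
    y = np.asarray(y, dtype=float)
    residual = y.copy()
    support = []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

With an orthonormal dictionary and a truly sparse signal, OMP recovers the coefficients exactly; the paper's adaptive variant targets faster convergence on real harmonic-plus-noise mixtures.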

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals need to be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research: The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure.
Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method and remove it with an adaptive filter. The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration with a trusted device is required, which is difficult and time-consuming, and the calibration itself is susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks (CycleGAN), to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration and is not sensitive to the heart rate variability of the mother and fetus; it can also handle low signal-to-noise ratio (SNR) conditions. Third, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes those issues and, unlike other solutions, uses only the PPG signal.
Using only PPG for blood pressure is more convenient, since it requires a single sensor on the finger, where acquisition is more resilient to motion error. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults; this model is then retrained to detect anomalies in FECG. We select only the more influential samples from the training set, which leads to training with the least effort. Because of physician shortages and rural geography, access to prenatal care can be limited, and remote monitoring might improve pregnant women's ability to receive it. Increased compliance with prenatal treatment and linked care among various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that can compress signals (such as ECG) with a higher ratio than the state of the art and decompress them quickly without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality, so the signal is not distorted for diagnostic purposes while a high compression ratio is achieved.
In summary, the components for an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Then, compression can be employed for transmitting the signals. The trained CycleGAN model can be used to extract the FECG from the MECG. A model trained with active transfer learning can then detect anomalies in both the MECG and FECG. Simultaneously, maternal blood pressure is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus, and also to fill in reports such as the partogram.
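The first objective's notion of negative SNR, i.e., noise power exceeding signal power, is just the sign of the usual decibel formula. A minimal helper makes this concrete (it assumes the signal and noise components are available separately, as they are in simulation studies):

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels: 10*log10(P_signal / P_noise), with P the mean
    squared amplitude. Negative values mean the noise power exceeds
    the signal power, the regime the denoising objective targets."""
    signal = np.asarray(signal, dtype=float)
    noise = np.asarray(noise, dtype=float)
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
```

For example, noise with twice the amplitude of the signal gives four times the power, i.e., about -6 dB.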

    A Channel Ranking And Selection Scheme Based On Channel Occupancy And SNR For Cognitive Radio Systems

    Wireless networks and information traffic have grown exponentially over the last decade, and demand for radio spectrum bandwidth has increased accordingly. Recent studies have shown that with the current fixed spectrum allocation (FSA), radio frequency band utilization ranges from 15% to 85%. Thus, there are spectrum holes that licensed users do not utilize all the time, and the radio spectrum is inefficiently exploited. To solve the problem of scarcity and inefficient utilization of spectrum resources, dynamic spectrum access has been proposed as a way to enable sharing and using available frequency channels. With dynamic spectrum allocation (DSA), unlicensed users can access and use licensed, available channels when primary users are not transmitting. Cognitive Radio technology is one of the next-generation technologies that will allow efficient utilization of spectrum resources by enabling DSA. However, dynamic spectrum allocation by a cognitive radio system comes with the challenge of accurately detecting and selecting the best channel based on the channel's availability and quality of service. Therefore, the spectrum sensing and analysis processes of a cognitive radio system are essential for making accurate decisions. Different spectrum sensing techniques and channel selection schemes have been proposed. However, these techniques consider only the spectrum occupancy rate when selecting the best channel, which can lead to erroneous decisions. Other communication parameters, such as the Signal-to-Noise Ratio (SNR), should also be taken into account. Therefore, the spectrum decision-making process of a cognitive radio system must use techniques that consider both spectrum occupancy and channel quality metrics to rank channels and select the best option. This thesis aims to develop a utility function based on spectrum occupancy and SNR measurements to model and rank the sensed channels.
An evolutionary algorithm-based SNR estimation technique was developed that adaptively varies key parameters of the existing eigenvalue-based blind SNR estimation technique. The performance of the improved technique is compared to the existing technique; results show that the evolutionary algorithm-based estimation performs better. The utility-based channel ranking technique was developed by first defining a channel utility function that takes into account SNR and spectrum occupancy. Different mathematical functions were investigated to appropriately model the utility of SNR and the spectrum occupancy rate. A ranking table is provided with the utility values of the sensed channels and compared with the usual occupancy-rate-based channel ranking. According to the results, utility-based channel ranking provides a better basis for making an informed decision by considering both channel occupancy rate and SNR. In addition, the efficiency of several noise cancellation techniques was investigated. These techniques can be employed to remove the impact of noise on the received or sensed signals during the spectrum sensing process of a cognitive radio system. Performance evaluation of these techniques was done using simulations, and the results show that the evolutionary algorithm-based noise cancellation techniques, particle swarm optimization and the genetic algorithm, perform better than the regular gradient-descent-based technique, the least-mean-square algorithm.
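The thesis's exact utility function is not given in the abstract. The sketch below assumes a simple convex combination of min-max-normalized SNR and availability (1 - occupancy rate), purely to illustrate how joint ranking can disagree with occupancy-only ranking:

```python
import numpy as np

def channel_utility(snr_db, occupancy, w_snr=0.5):
    """Hypothetical channel utility: weighted sum of min-max normalized
    SNR and availability (1 - occupancy rate). The weight w_snr trades
    link quality against free airtime. Assumes the sensed SNR values
    are not all identical (otherwise normalization divides by zero)."""
    snr = np.asarray(snr_db, dtype=float)
    snr_norm = (snr - snr.min()) / (snr.max() - snr.min())
    availability = 1.0 - np.asarray(occupancy, dtype=float)
    return w_snr * snr_norm + (1.0 - w_snr) * availability

# Occupancy alone would rank channel 0 best (lowest occupancy), but
# jointly weighing SNR and availability ranks channel 2 best.
u = channel_utility([5.0, 15.0, 10.0], [0.2, 0.9, 0.3])
```

This is the kind of disagreement the ranking-table comparison in the thesis is designed to expose.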

    Prognostic Approaches Using Transient Monitoring Methods

    The utilization of steady state monitoring techniques has become an established means of providing diagnostic and prognostic information about both systems and equipment. However, steady state data is not the only, or in some cases even the best, source of information regarding the health and state of a system. Transient data has largely been overlooked as a source of system information due to the additional complexity of analyzing these types of signals. The development of algorithms and techniques to quickly and intuitively produce generic quantifications of deviations in a transient signal, toward the goal of prognostic prediction, has until now largely been overlooked. By quantifying and trending these shifts, an accurate measure of system health can be established and utilized by prognostic algorithms. In fact, for some systems the elevated stress levels during transients can provide better, clearer indications of system health than those derived from steady state monitoring. This research is based on the hypothesis that equipment health signals for some failure modes are stronger during transient conditions than during steady state, because transient conditions (e.g., start-up) place greater stress on the equipment for these failure modes. It follows that the signals related to system or equipment health would display more prominent indications of abnormality if one knew the proper means to identify them. This project seeks to develop methods and conceptual models to monitor transient signals for equipment health. The purpose of this research is to assess whether monitoring transient signals could provide alternative or better indicators of incipient equipment failure, prior to steady state signals. The project is focused on identifying methods, both traditional and novel, suitable for implementing and testing transient model monitoring in a useful and intuitive way.
By means of these techniques, it is shown that the additional information gathered during the transient portions of equipment life can be used either to augment existing steady-state information or, in cases where such information is unavailable, as a primary means of developing prognostic models.
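The "quantify and trend" idea can be made concrete with a deliberately simple deviation metric. The abstract does not specify one, so the indicator below, a normalized RMS deviation of a monitored start-up transient from a healthy baseline of the same length, is only an illustrative assumption:

```python
import numpy as np

def transient_health_index(transient, baseline):
    """Illustrative health indicator: RMS deviation of a monitored
    start-up transient from a healthy baseline transient of equal
    length, normalized by the baseline RMS. 0 means identical;
    growth across successive start-ups can be trended by a
    prognostic algorithm."""
    t = np.asarray(transient, dtype=float)
    b = np.asarray(baseline, dtype=float)
    return np.sqrt(np.mean((t - b) ** 2)) / np.sqrt(np.mean(b ** 2))
```

In practice the transients would first need alignment (e.g., resampling to a common start-up profile) before any point-wise comparison like this is meaningful.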