
    Signal De-noising method based on particle swarm algorithm and Wavelet transform

    Wavelet analysis is a time-frequency analysis tool developed on the basis of Fourier analysis, with good time-frequency localization and multi-resolution properties, and it is used in a wide range of signal processing applications. This paper studies the application of the wavelet transform to signal filtering and, using an improved particle swarm optimization, proposes an intelligent signal de-noising method based on wavelet analysis. The method uses a Center Based Particle Swarm Algorithm (CBPSO) to select the optimal threshold for each sub-band at each scale, learning the type of noise from the signal itself, so that no prior knowledge of the noise is required. The improved particle swarm algorithm enhances the choice of thresholds across the scales of the wavelet domain, which enables de-noising under different types of background noise, improves the speed of the wavelet transform and wavelet construction, and offers greater flexibility. Experimental results showed that the CBPSO algorithm achieves a better de-noising effect.
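
    The general idea can be sketched as follows: decompose the noisy signal with a discrete wavelet transform, then let a particle swarm search for one soft threshold per detail sub-band. This is a minimal illustration, not the paper's CBPSO variant; for the demo the fitness is the MSE against a known clean signal (a synthetic-experiment shortcut, since the paper's signal-derived fitness is not given here).

```python
# Sketch: per-sub-band wavelet thresholding with thresholds chosen by a
# plain global-best PSO. Not the paper's Center Based PSO (CBPSO); the
# MSE-to-clean fitness is a synthetic-experiment assumption.
import numpy as np
import pywt

rng = np.random.default_rng(0)

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 7 * t) + np.sign(np.sin(2 * np.pi * 3 * t))
noisy = clean + 0.4 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)  # [cA4, cD4, cD3, cD2, cD1]

def denoise(thresholds):
    """Soft-threshold each detail sub-band with its own threshold."""
    new = [coeffs[0]] + [
        pywt.threshold(c, th, mode="soft")
        for c, th in zip(coeffs[1:], thresholds)
    ]
    return pywt.waverec(new, "db4")[: noisy.size]

def fitness(thresholds):
    return np.mean((denoise(thresholds) - clean) ** 2)

# Minimal global-best PSO over one threshold per detail sub-band.
dim, n_particles = len(coeffs) - 1, 20
pos = rng.uniform(0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, None)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("per-sub-band thresholds:", np.round(gbest, 3))
print("MSE before -> after: %.4f -> %.4f" % (fitness(np.zeros(dim)), fitness(gbest)))
```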

    SoFiA: a flexible source finder for 3D spectral line data

    We introduce SoFiA, a flexible software application for the detection and parameterization of sources in 3D spectral-line datasets. SoFiA combines for the first time in a single piece of software a set of new source-finding and parameterization algorithms developed on the way to future HI surveys with ASKAP (WALLABY, DINGO) and APERTIF. It is designed to enable the general use of these new algorithms by the community on a broad range of datasets. The key advantages of SoFiA are the ability to: search for line emission on multiple scales to detect 3D sources in a complete and reliable way, taking into account noise level variations and the presence of artefacts in a data cube; estimate the reliability of individual detections; look for signal in arbitrarily large data cubes using a catalogue of 3D coordinates as a prior; provide a wide range of source parameters and output products which facilitate further analysis by the user. We highlight the modularity of SoFiA, which makes it a flexible package allowing users to select and apply only the algorithms useful for their data and science questions. This modularity makes it also possible to easily expand SoFiA in order to include additional methods as they become available. The full SoFiA distribution, including a dedicated graphical user interface, is publicly available for download. Comment: MNRAS, accepted. SoFiA is registered at the Astrophysics Source Code Library with ID ascl:1412.001. Download SoFiA at https://github.com/SoFiA-Admin/SoFi
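
    The multi-scale "smooth and clip" search at the heart of such finders can be illustrated with a toy example: smooth a 3D cube at several scales, threshold each smoothed version against a robust noise estimate, and merge the surviving voxels into labeled sources. This is a deliberately simplified sketch in the spirit of SoFiA, not its actual implementation; see the repository for the real algorithms.

```python
# Toy "smooth + clip" source finding in a 3D cube, far simpler than
# SoFiA's real multi-scale search; all parameters are illustrative.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
cube = rng.standard_normal((64, 64, 64))          # noise-only cube
cube[30:34, 20:24, 40:44] += 2.0                  # inject a faint 3D source

detections = np.zeros(cube.shape, dtype=bool)
for sigma in (0, 1, 2):                           # search on multiple scales
    sm = ndimage.gaussian_filter(cube, sigma) if sigma else cube
    noise = 1.4826 * np.median(np.abs(sm - np.median(sm)))  # robust MAD estimate
    detections |= sm > 4.0 * noise                # clip at 4-sigma per scale

labels, n = ndimage.label(detections)             # merge voxels into sources
print(f"{n} candidate source(s)")
for sl in ndimage.find_objects(labels):
    print("bounding box:", sl)
```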

    Statistical Properties and Applications of Empirical Mode Decomposition

    Signal analysis is key to extracting information buried in noise. The decomposition of a signal is a data analysis tool for determining the underlying physical components of a processed data set. However, conventional signal decomposition approaches such as wavelet analysis, Wigner-Ville, and various short-time Fourier spectrograms are inadequate for processing real-world signals. Moreover, most of these techniques require a priori knowledge of the processed signal to select the proper decomposition basis, which makes them unsuitable for a wide range of practical applications. Empirical Mode Decomposition (EMD) is a non-parametric, adaptive, data-driven decomposition that is capable of breaking non-linear, non-stationary signals down into a finite set of intrinsic components called Intrinsic Mode Functions (IMFs). In addition, EMD approximates a dyadic filter bank that isolates high-frequency components, e.g. noise, in the first, low-index IMFs. Despite being widely used in different applications, EMD is an ad hoc solution: its adaptive performance comes at the expense of a rigorous theoretical foundation, so numerical analysis is usually adopted in the literature to interpret its behavior. This dissertation involves investigating statistical properties of EMD and utilizing the outcome to enhance the performance of signal de-noising and spectrum sensing systems. The novel contributions can be broadly summarized in three categories: a statistical analysis of the probability distributions of the IMFs and a suggestion of the Generalized Gaussian distribution (GGD) as a best-fit distribution; a de-noising scheme based on a null hypothesis for IMFs utilizing the unique filter behavior of EMD; and a novel noise estimation approach, based on the first IMF, that is used to shift semi-blind spectrum sensing techniques into fully blind ones. These contributions are justified statistically and analytically and include comparison with other state-of-the-art techniques.
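
    The basic de-noising pattern this implies can be sketched briefly: decompose the signal with EMD, estimate the noise level from the first (noise-dominated) IMF, and reconstruct from the remaining IMFs. This is a minimal sketch assuming the PyEMD package; the crude "drop IMF 1" rule stands in for the dissertation's GGD-based null-hypothesis test, which is not reproduced here.

```python
# Minimal EMD partial-reconstruction de-noising sketch. Assumes the PyEMD
# package; dropping IMF 1 is a crude stand-in for the dissertation's
# GGD-based null-hypothesis test on the IMFs.
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

imfs = EMD()(noisy)              # rows: IMF 1 (highest frequency) ... residue

# Robust noise estimate from IMF 1, as in first-IMF noise estimation.
sigma_hat = 1.4826 * np.median(np.abs(imfs[0]))
print(f"estimated noise sigma: {sigma_hat:.3f}")

denoised = imfs[1:].sum(axis=0)  # partial reconstruction without IMF 1
print("MSE noisy:    %.4f" % np.mean((noisy - clean) ** 2))
print("MSE denoised: %.4f" % np.mean((denoised - clean) ** 2))
```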

    Improved anti-noise attack ability of image encryption algorithm using de-noising technique

    Information security is one of the important issues of the information age, used to preserve secret information throughout transmission in practical applications. With regard to image encryption, many schemes related to information security have been applied. Such approaches may be categorized into two domains: the frequency domain and the spatial domain. The presented work develops an encryption technique on the basis of a conventional watermarking system that uses singular value decomposition (SVD), the discrete cosine transform (DCT), and the discrete wavelet transform (DWT) together. The suggested DWT-DCT-SVD method has high robustness in comparison to other conventional approaches, and the enhanced approach achieves high robustness against Gaussian noise attacks by using a DWT-based de-noising step. The mean squared error (MSE) and the peak signal-to-noise ratio (PSNR) are the performance measures on which this study's results are based, and they show that the algorithm has high robustness against Gaussian noise attacks.
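
    A hybrid DWT-DCT-SVD embedding step of this kind can be sketched as: take a one-level DWT of the host image, apply a 2-D DCT to the LL band, and add the watermark to its singular values. The embedding strength `alpha` and the choice of the LL band below are illustrative assumptions, not the paper's exact parameters.

```python
# Hedged sketch of hybrid DWT-DCT-SVD watermark embedding. `alpha` and the
# LL-band choice are assumptions, not the paper's exact configuration.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(host, watermark, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(host, "haar")        # 1-level DWT
    D = dctn(LL, norm="ortho")                        # DCT of the LL band
    U, S, Vt = np.linalg.svd(D, full_matrices=False)  # SVD of the DCT block
    S_marked = S + alpha * watermark                  # additive embedding
    D_marked = U @ np.diag(S_marked) @ Vt
    LL_marked = idctn(D_marked, norm="ortho")
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

rng = np.random.default_rng(3)
host = rng.random((128, 128))
wm = rng.random(64)                 # length matches the 64 singular values
marked = embed(host, wm)
print("embedding distortion (PSNR): %.1f dB"
      % (10 * np.log10(1.0 / np.mean((marked - host) ** 2))))
```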

    Wavelet Analysis and Neural Networks for Bearing Fault Diagnosis


    A Demand-Controlled Application of Deep Brain Stimulation with a Portable Neurostimulator

    Deep brain stimulation (DBS) is an electrical therapy for several advanced neurological disorders such as Parkinson’s disease (PD). Improving the current DBS technique has gained importance as a way to increase the therapeutic benefits and reduce the side effects. The electrical stimulation of the deep brain is administered at standard high frequency (HF) or by using dedicated patterns intended to modulate pathological neuronal activity. Apart from the development of new stimulation protocols such as the desynchronizing coordinated reset (CR) DBS protocol, a continuous and appropriate adjustment of the stimulation parameters might increase the efficacy of DBS. Therapeutic benefits could be maximized by a system that automatically detects the demand for further stimulation and continuously modifies the stimulation parameters. For instance, such a system can help the clinician find the optimal parameters easily, without lengthy test procedures. In this thesis, the technical realization of a demand-controlled application of CR DBS for Parkinson’s disease with a portable neurostimulator is investigated. The applicability of such an autonomic system is studied retrospectively using local field potential (LFP) and resting tremor recordings from PD patients, as well as LFP recordings from parkinsonian non-human primates. A demand-controlled application of DBS requires a real-time analysis of the ongoing pathological activity. LFP recordings during DBS are normally contaminated by strong artifacts caused by technical drawbacks, and these artifacts inhibit the examination of the presence of pathological activity. Therefore, software-based technical solutions for artifact reduction were developed and implemented to obtain clean feedback signals from the recordings. Additional tests were performed on LFP recordings in saline solution to evaluate the performance of the implemented algorithms. Results obtained from these tests indicated the efficacy of the algorithms in removing most of the artifacts. Furthermore, the biological recordings were analyzed to find biomarkers of the pathological activity that can be used as a criterion to quantify the demand for tuning the stimulation parameters. The proposed approach aims to analyze and monitor the variation in the strength of such pathological activities. A demonstration of the tuning of several HF and CR DBS parameters was performed in real time with a digital signal processor (DSP) board using tremor-like recordings collected from a healthy subject. This thesis explores and demonstrates the design and implementation of a demand-controlled application of DBS that is adapted according to the pathological fluctuations of patients and can be more efficacious than the standard continuous DBS technique.
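
    The feedback principle can be illustrated with a small sketch: estimate the beta-band (13-30 Hz) power of the LFP, a commonly used PD biomarker, and flag stimulation demand when it exceeds a calibrated threshold. The band edges, threshold, and sampling rate below are illustrative assumptions; the thesis's artifact-reduction algorithms and CR parameter tuning are not reproduced here.

```python
# Hedged sketch of demand detection from an LFP biomarker. Band edges,
# threshold calibration, and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 1000.0                                   # assumed LFP sampling rate (Hz)

def beta_power(lfp):
    f, psd = welch(lfp, fs=FS, nperseg=512)
    band = (f >= 13) & (f <= 30)
    return np.sum(psd[band]) * (f[1] - f[0])  # integrated beta-band power

def stimulation_demand(lfp, threshold):
    """True when pathological beta activity exceeds the calibrated level."""
    return beta_power(lfp) > threshold

rng = np.random.default_rng(4)
t = np.arange(0, 2, 1 / FS)
baseline = rng.standard_normal(t.size)
pathological = baseline + 1.5 * np.sin(2 * np.pi * 20 * t)  # strong 20 Hz activity

thr = 3.0 * beta_power(baseline)              # toy calibration on baseline data
print("demand (baseline):    ", stimulation_demand(baseline, thr))
print("demand (pathological):", stimulation_demand(pathological, thr))
```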

    Towards the text compression based feature extraction in high impedance fault detection

    High impedance faults on medium voltage overhead lines with covered conductors can be identified by the presence of partial discharges. Although this has been a subject of research for more than 60 years, online partial discharge detection remains a challenge, especially in environments with heavy background noise. In this paper, a new approach to partial discharge pattern recognition is presented. All results were obtained on data acquired from a real 22 kV medium voltage overhead power line with covered conductors. The proposed method is based on a text compression algorithm and serves as a signal similarity estimator, applied for the first time to partial discharge patterns. Its relevance is examined with three different variations of a classification model. The improvement gained on an already deployed model demonstrates its quality.
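
    Compression-based similarity is commonly formalized as the Normalized Compression Distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed size. The sketch below uses zlib as a stand-in for whichever text compressor the paper actually employs, and toy byte strings in place of quantized discharge patterns.

```python
# Sketch of compression-based signal similarity via the standard Normalized
# Compression Distance (NCD). zlib is a stand-in compressor; values near 0
# indicate similar inputs, values near 1 unrelated ones.
import zlib

def C(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy quantized "signals": two near-copies of a pattern vs. an unrelated one.
a = bytes([i % 17 for i in range(4000)])
b = bytes([(i % 17 + (i % 97 == 0)) % 17 for i in range(4000)])
c = bytes([(i * 31 + 7) % 251 for i in range(4000)])

print("NCD(a, b) = %.3f  (similar patterns)" % ncd(a, b))
print("NCD(a, c) = %.3f  (dissimilar)" % ncd(a, c))
```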

    A data analytics approach to gas turbine prognostics and health management

    As a consequence of the recent deregulation in the electrical power production industry, there has been a shift in the traditional ownership of power plants and the way they are operated. To hedge their business risks, the many new private entrepreneurs enter into long-term service agreements (LTSA) with third parties for their operation and maintenance activities. As the major LTSA providers, original equipment manufacturers have invested huge amounts of money to develop preventive maintenance strategies to minimize the occurrence of costly unplanned outages resulting from failures of the equipment covered under LTSA contracts. As a matter of fact, a recent study by the Electric Power Research Institute estimates the cost benefit of preventing a failure of a General Electric 7FA or 9FA technology compressor at $10 to $20 million. Therefore, in this dissertation, a two-phase data analytics approach is proposed to use the existing gas path and vibration monitoring sensor data to first develop a proactive strategy that systematically detects and validates catastrophic failure precursors so as to avoid the failure, and secondly to estimate the residual time to failure of the unhealthy items. For the first part of this work, the time-frequency technique of the wavelet packet transform is used to de-noise the noisy sensor data. Next, the time-series signal of each sensor is decomposed in a multi-resolution analysis to extract its features. After that, probabilistic principal component analysis is applied as a data fusion technique to reduce the potentially correlated multi-sensor measurements to a few uncorrelated principal components. The last step of the failure precursor detection methodology, the anomaly detection decision, is in itself a multi-stage process. The principal components obtained from the data fusion step are first combined into a one-dimensional reconstructed signal representing the overall health assessment of the monitored systems. Then, two damage indicators of the reconstructed signal are defined and monitored for defects using a statistical process control approach. Finally, the Bayesian evaluation method for hypothesis testing is applied to a computed threshold to test for deviations from the healthy band. To model the residual time to failure, the anomaly severity index and the anomaly duration index are defined as defect characteristics. Two modeling techniques are investigated for the prognostication of the survival time after an anomaly is detected: the deterministic regression approach, and parametric approximation of the non-parametric Kaplan-Meier plot estimator. It is established that deterministic regression provides poor prediction estimates. The non-parametric survival data analysis technique of the Kaplan-Meier estimator provides the empirical survivor function of a data set comprised of both non-censored and right-censored data. Though powerful because no a priori lifetime distribution is assumed, the Kaplan-Meier result lacks the flexibility to be transplanted to other units of a given fleet. The parametric analysis of survival data is performed with two popular failure analysis distributions: the exponential distribution and the Weibull distribution. The conclusion from the parametric analysis of the Kaplan-Meier plot is that the larger the data set, the more accurate is the prognostication ability of the residual time to failure model. PhD dissertation. Committee Chair: Mavris, Dimitri; Committee Members: Jiang, Xiaomo; Kumar, Virendra; Saleh, Joseph; Vittal, Sameer; Volovoi, Vital
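
    The detection phase of such a pipeline can be condensed into a short sketch: wavelet de-noise each sensor stream, fuse the sensors into a one-dimensional health signal, and flag anomalies with a control-chart limit. Standard PCA stands in below for the dissertation's probabilistic PCA, and the injected fault, thresholds, and data sizes are illustrative assumptions.

```python
# Condensed sketch of the detection phase: wavelet de-noising per sensor,
# PCA fusion (standing in for probabilistic PCA), and a 3-sigma control
# limit on the fused health signal. All parameters are illustrative.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n, sensors = 2000, 6
X = rng.standard_normal((n, sensors))
X[1500:] += np.linspace(0, 3, 500)[:, None] * rng.random(sensors)  # drifting fault

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = 1.4826 * np.median(np.abs(coeffs[-1]))     # noise from finest scale
    uth = sigma * np.sqrt(2 * np.log(x.size))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, uth, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: x.size]

Xd = np.column_stack([wavelet_denoise(X[:, j]) for j in range(sensors)])

pca = PCA(n_components=2).fit(Xd[:1000])               # fit on healthy data only
health = pca.transform(Xd)[:, 0]                       # 1-D fused health signal

mu, sd = health[:1000].mean(), health[:1000].std()     # healthy band
alarms = np.abs(health - mu) > 3 * sd                  # 3-sigma control chart
print("first alarm at sample:", int(np.argmax(alarms)) if alarms.any() else None)
```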