
    Rotor fault classification technique and precision analysis with kernel principal component analysis and multi-support vector machines

    To address the classification of faults when aero-engine vibration exceeds the standard during testing, a fault diagnosis approach based on kernel principal component analysis (KPCA) feature extraction and multiple support vector machines (SVMs) is proposed. The method extracts features from the test cell's standard fault samples by exploiting KPCA's capability for nonlinear feature extraction. By computing inner-product kernel functions of the original feature space, the rotor vibration signal is mapped nonlinearly from the low-dimensional input space to a high-dimensional feature space. The nonlinear principal components of the original low-dimensional space are then obtained by performing PCA in that high-dimensional feature space. During multi-SVM training, the nonlinear principal components serve as feature vectors and are split into a training set and a test set, and the penalty parameter and kernel function parameter are optimized with a genetic algorithm. High classification accuracy is maintained on both the training and test sets, while over-fitting and under-fitting are avoided. Experimental results indicate that the method performs well in distinguishing different aero-engine fault modes and is suitable for fault recognition in high-speed rotors.
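    The KPCA-plus-multi-SVM pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data standing in for rotor vibration features; an exhaustive grid search replaces the paper's genetic optimization of the penalty and kernel parameters, and all data and parameter values are hypothetical.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.svm import SVC
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    # Synthetic stand-in for vibration feature vectors: three fault classes.
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 8)) for c in (0.0, 1.0, 2.0)])
    y = np.repeat([0, 1, 2], 40)

    # KPCA extracts nonlinear principal components; a multi-class SVM classifies them.
    pipe = Pipeline([
        ("kpca", KernelPCA(n_components=4, kernel="rbf", gamma=0.5)),
        ("svm", SVC(kernel="rbf")),
    ])
    # Grid search stands in for the paper's genetic optimization of the
    # penalty parameter C and the kernel parameter gamma.
    grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": [0.1, 1.0]}, cv=3)
    grid.fit(X, y)
    print(grid.best_params_, round(grid.best_score_, 2))
    ```

    Cross-validating the parameter choice, as here, is what keeps training-set and test-set accuracy close and guards against over- and under-fitting.
    
    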

    Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    Feature extraction and dimensionality reduction are important tasks in many fields of science dealing with signal processing and analysis. The relevance of these techniques is increasing as current sensory devices are developed with ever higher resolution, and problems involving multimodal data sources become more common. A plethora of feature extraction methods are available in the literature, collectively grouped under the field of Multivariate Analysis (MVA). This paper provides a uniform treatment of several methods: Principal Component Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis (CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions derived by means of the theory of reproducing kernel Hilbert spaces. We also review their connections to other methods for classification and statistical dependence estimation, and introduce some recent developments to deal with the extreme cases of large-scale and small-sample problems. To illustrate the wide applicability of these methods in both classification and regression problems, we analyze their performance in a benchmark of publicly available data sets, and pay special attention to specific real applications involving audio processing for music genre prediction and hyperspectral satellite images for Earth and climate monitoring.
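    As a concrete instance of one MVA method from this family, linear CCA can be computed in closed form as the SVD of the whitened cross-covariance between the two views. The sketch below uses synthetic two-view data with one shared latent signal; all names and dimensions are illustrative.

    ```python
    import numpy as np

    def cca(X, Y, k=1):
        """First k canonical correlations via SVD of the whitened cross-covariance."""
        X = X - X.mean(0)
        Y = Y - Y.mean(0)
        n = len(X)
        Cxx = X.T @ X / n + 1e-8 * np.eye(X.shape[1])   # small ridge for stability
        Cyy = Y.T @ Y / n + 1e-8 * np.eye(Y.shape[1])
        Cxy = X.T @ Y / n
        # Whiten each view; the singular values of the whitened cross-covariance
        # are exactly the canonical correlations.
        Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
        Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
        s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
        return s[:k]

    rng = np.random.default_rng(1)
    z = rng.normal(size=(500, 1))                        # shared latent signal
    X = np.hstack([z, rng.normal(size=(500, 2))])        # view 1: signal + noise dims
    Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),  # view 2: noisy copy + noise
                   rng.normal(size=(500, 2))])
    print(cca(X, Y)[0])   # close to 1: the shared component is recovered
    ```

    The kernel variants in the paper follow the same pattern with Gram matrices replacing the covariance blocks.
    
    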

    Vibration Monitoring of Gas Turbine Engines: Machine-Learning Approaches and Their Challenges

    In this study, condition monitoring strategies are examined for gas turbine engines using vibration data. Because the focus is on data-driven approaches, a novelty detection framework is considered for the development of reliable data-driven models that can describe the underlying relationships of the processes taking place during an engine's operation. From a data analysis perspective, the high dimensionality of the extracted features and the complexity of the data are two problems that need to be dealt with throughout analyses of this type. The latter refers to the fact that the healthy engine state data can be non-stationary. To address this, the wavelet transform is applied to extract a set of features from the vibration signals that describe their non-stationary parts. The high dimensionality of the features is addressed by "compressing" them with kernel principal component analysis, so that more meaningful, lower-dimensional features can be used to train the pattern recognition algorithms. For feature discrimination, a novelty detection scheme based on the one-class support vector machine (OCSVM) algorithm is chosen for investigation. Its main advantage, compared to other pattern recognition algorithms, is that the learning problem is cast as a quadratic program. The developed condition monitoring strategy can be applied to detect excessive vibration levels that can lead to engine component failure. Here, we demonstrate its performance on vibration data from an experimental gas turbine engine operating under different conditions. Engine vibration data designated as belonging to the engine's "normal" condition correspond to fuel and air-to-fuel-ratio combinations in which the engine experienced low levels of vibration.
    Results demonstrate that such novelty detection schemes can achieve satisfactory validation accuracy through appropriate selection of two parameters of the OCSVM, the kernel width γ and the optimization penalty parameter ν. This selection was made by searching along a fixed grid of values and choosing the combination that provided the highest cross-validation accuracy. Nevertheless, challenges remain; these are discussed along with suggestions for future work that can be used to enhance similar novelty detection schemes.
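    The fixed-grid selection of the OCSVM parameters γ and ν described above can be sketched as follows. The data here are a synthetic stand-in for the engine vibration features (a "normal" cluster plus clearly abnormal points), and the grid values are illustrative, not those of the study.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # Hypothetical vibration features: normal operation vs. excessive vibration.
    normal = rng.normal(0.0, 1.0, size=(300, 5))
    faults = rng.normal(4.0, 1.0, size=(50, 5))

    train, val_norm = normal[:200], normal[200:]          # OCSVM trains on normal only
    X_val = np.vstack([val_norm, faults])
    y_val = np.hstack([np.ones(len(val_norm)), -np.ones(len(faults))])

    # Fixed grid over the kernel width gamma and the penalty nu, keeping the
    # combination with the highest validation accuracy.
    best_params, best_acc = None, -1.0
    for gamma in [0.01, 0.1, 1.0]:
        for nu in [0.01, 0.05, 0.1]:
            clf = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(train)
            acc = (clf.predict(X_val) == y_val).mean()    # +1 normal, -1 novelty
            if acc > best_acc:
                best_params, best_acc = (gamma, nu), acc
    print(best_params, round(best_acc, 2))
    ```

    In practice the validation split would be replaced by proper cross-validation folds, as the abstract describes.
    
    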

    Improved RBF Network Intrusion Detection Model Based on Edge Computing with Multi-algorithm Fusion

    Due to its distributed computing architecture, inherent device heterogeneity and limited resources, it is difficult to deploy a complete and reliable security strategy on edge computing platforms, and the losses from malicious attacks can be immeasurable. The RBF neural network has strong nonlinear representation ability and fast learning convergence, which makes it suitable for intrusion detection in edge-deployed industrial control networks. In this paper, an improved RBF network intrusion detection model based on multi-algorithm fusion is proposed. Kernel principal component analysis (KPCA) is used to reduce the data dimensionality and simplify the data representation. The subtractive clustering method (SCM) and the grey wolf optimizer (GWO) are then used to jointly optimize the RBF network parameters, avoiding local optima, reducing the computation required for model training and improving detection accuracy. The algorithm is well suited to edge computing platforms with weak computing power and load capacity, and enables real-time data analysis. Experimental results on the BATADAL and Gas data sets show that the accuracy of the algorithm exceeds 99%, and that training time on larger samples is reduced by a factor of 50 for the BATADAL data set. The results show that the improved RBF network is effective in improving both convergence speed and accuracy in intrusion detection.
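    A bare-bones RBF network of the kind being optimized here can be written in a few lines: Gaussian basis functions around a set of centers, with output weights fitted in closed form by least squares. This sketch uses toy two-class data and randomly sampled centers; the paper instead tunes the centers and widths jointly with subtractive clustering and the grey wolf optimizer, and would feed in KPCA-reduced features.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rbf_design(X, centers, width):
        """Gaussian RBF activations of each sample against each center."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    # Toy two-class "intrusion" data; real inputs would be KPCA-reduced features.
    X = np.vstack([rng.normal(0, 1, size=(100, 3)),      # normal traffic
                   rng.normal(3, 1, size=(100, 3))])     # attack traffic
    y = np.repeat([0.0, 1.0], 100)

    # Centers picked by random sampling here; SCM + GWO would choose them
    # (and the width) far more carefully in the paper's method.
    centers = X[rng.choice(len(X), 10, replace=False)]
    H = rbf_design(X, centers, width=2.0)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)            # output weights, closed form
    pred = (H @ w > 0.5).astype(float)
    print((pred == y).mean())
    ```

    The closed-form output layer is what gives RBF networks their fast training; the hard part, which the fused algorithms address, is placing the centers well.
    
    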

    Multivariate statistical process monitoring

    Demands regarding production efficiency, product quality, safety levels and environmental protection are continuously increasing in the process industry. The way to meet these demands is to introduce ever more complex automatic control systems, which require more process variables to be measured and more advanced measurement systems. Quality, reliable measurement of process variables is the basis for good process control. Process equipment failures can significantly deteriorate the production process and even cause production outages, resulting in high additional costs.
    This paper analyzes automatic fault detection and identification in process measurement equipment, i.e. sensors. Various statistical methods can serve this purpose by analyzing the continuously acquired measurements. In this paper, PCA and ICA methods are used to model the relationships between process variables, while Hotelling's T², I² and Q (SPE) statistics are used for fault detection, because they indicate unusual variability within and outside the normal process operating region. Contribution plots are used for fault identification. Statistical process monitoring algorithms based on the PCA and ICA methods are derived and applied to two processes of different complexity, and their fault detection abilities are compared.
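    The PCA-based monitoring statistics mentioned above can be sketched directly: T² measures variation within the retained principal subspace, Q (SPE) measures the residual outside it, and per-variable residual contributions point toward the faulty sensor. The process data and the injected sensor bias below are synthetic and purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Normal operating data: 4 process variables driven by 2 latent factors.
    latent = rng.normal(size=(500, 2))
    A = rng.normal(size=(2, 4))
    X = latent @ A + 0.05 * rng.normal(size=(500, 4))

    mu, sd = X.mean(0), X.std(0)
    Xs = (X - mu) / sd
    _, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:2].T                          # retained loadings (2 PCs)
    lam = (S[:2] ** 2) / (len(X) - 1)     # retained eigenvalues

    def t2_spe(x):
        """Hotelling's T^2 and Q (SPE) statistics for one sample."""
        xs = (x - mu) / sd
        t = xs @ P                        # scores in the PC subspace
        resid = xs - t @ P.T              # part not explained by the model
        return (t ** 2 / lam).sum(), (resid ** 2).sum()

    x_fault = X[0].copy()
    x_fault[2] += 5.0                     # bias fault injected on sensor 3
    print(t2_spe(X[0]), t2_spe(x_fault))  # SPE jumps for the faulty sample

    # Contribution plot: squared residual per sensor; the faulty sensor
    # typically dominates, identifying the fault location.
    xs = (x_fault - mu) / sd
    contrib = (xs - (xs @ P) @ P.T) ** 2
    print(contrib)
    ```

    In practice the control limits for T² and SPE are set from the training data (e.g. via their empirical distributions) rather than by eye.
    
    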

    Person Re-Identification Techniques for Intelligent Video Surveillance Systems

    Nowadays, intelligent video surveillance is one of the most active research fields in computer vision and machine learning, providing useful tools for surveillance operators and forensic video investigators. Person re-identification is among these tools: it consists of recognizing whether an individual has already been observed over a network of cameras. It can be employed in various applications, e.g., off-line retrieval of all the video sequences showing an individual of interest whose image is given as a query, or on-line pedestrian tracking over multiple cameras. In off-line retrieval applications, one goal of person re-identification systems is to help video surveillance operators and forensic investigators find an individual of interest in videos acquired by a network of non-overlapping cameras. This is attained by sorting images of previously observed individuals by decreasing similarity to a given probe individual. The task is typically addressed by exploiting clothing appearance, since classical biometric methods like face recognition are impractical in real-world video surveillance scenarios because of the low quality of the acquired images. Existing clothing appearance descriptors, together with their similarity measures, are mostly aimed at improving ranking quality. They usually employ a part-based body model to extract an image signature that can be treated independently for different body parts (e.g., torso and legs). While a re-identification model must be robust and discriminative in recognizing individuals of interest, processing time can also be crucial for tackling this task in real-world scenarios.
    This issue can be viewed from two perspectives: the time to construct a model (descriptor generation), which can usually be done off-line, and the time to find the correct individual among the acquired video frames (descriptor matching), which is the real-time part of a re-identification system. This thesis addresses the processing time of descriptor matching, rather than ranking quality, which is also relevant in practical applications involving interaction with human operators. It shows how a trade-off between processing time and ranking quality can be achieved, for any given descriptor, through a multi-stage ranking approach inspired by multi-stage approaches to classification problems in the pattern recognition literature, adapted here to re-identification as a ranking problem. Design criteria for such multi-stage re-identification systems are discussed, and the proposed approach is evaluated on three benchmark data sets using four state-of-the-art descriptors. In addition, with regard to processing time, typical dimensionality reduction methods are studied as a means of reducing the matching time of descriptors that generate high-dimensional feature spaces. Experimental results are presented for three well-known feature reduction methods applied to two state-of-the-art descriptors on two benchmark data sets.
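    The processing-time/ranking-quality trade-off behind a multi-stage ranking scheme can be sketched with a simple two-stage example: a cheap low-dimensional descriptor ranks the whole gallery, and the expensive full descriptor re-ranks only a shortlist. The descriptors here are hypothetical random vectors, with a random projection standing in for a real dimensionality reduction method.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_gallery, d_fast, d_full = 1000, 8, 128

    # Hypothetical gallery descriptors: the full descriptor and a cheap
    # low-dimensional version obtained by random projection.
    full = rng.normal(size=(n_gallery, d_full))
    proj = rng.normal(size=(d_full, d_fast)) / np.sqrt(d_fast)
    fast = full @ proj

    probe_full = full[42] + 0.1 * rng.normal(size=d_full)  # noisy view of item 42
    probe_fast = probe_full @ proj

    # Stage 1: rank the entire gallery with the cheap descriptor (fast).
    d1 = np.linalg.norm(fast - probe_fast, axis=1)
    shortlist = np.argsort(d1)[:50]
    # Stage 2: re-rank only the shortlist with the full descriptor (slow but accurate).
    d2 = np.linalg.norm(full[shortlist] - probe_full, axis=1)
    ranking = shortlist[np.argsort(d2)]
    print(ranking[0])   # 42, provided the true match survives stage 1
    ```

    Only 50 expensive distance computations are needed instead of 1000; the design question studied in the thesis is how to choose the stages and shortlist sizes so that the true match rarely drops out early.
    
    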