9 research outputs found

    Information theoretic novelty detection

    We present a novel approach to online change detection problems when the training sample size is small. The proposed approach is based on estimating the expected information content of a new data point and allows accurate control of the false positive rate even for small data sets. In the case of the Gaussian distribution, our approach is analytically tractable and closely related to classical statistical tests. We then propose an approximation scheme to extend our approach to the case of a mixture of Gaussians. We evaluate our approach extensively on synthetic data and on three real benchmark data sets. The experimental validation shows that our method maintains good overall accuracy while significantly improving control over the false positive rate.
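
    The Gaussian case lends itself to a compact illustration: a new point is scored by its information content (negative log-density) under a Gaussian fitted to the small training sample, and points whose score exceeds a threshold calibrated for a target false positive rate are flagged. The sketch below illustrates this general idea only; it does not reproduce the paper's estimator of expected information content or its small-sample calibration.

```python
# Minimal sketch: surprisal-based novelty scoring under a fitted Gaussian.
# Illustration of the general idea only; the paper's estimator and its
# small-sample correction are not reproduced. The chi-squared-style
# threshold via the normal quantile is an assumption for the 1-D case.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=20)   # small training sample
mu, sigma = train.mean(), train.std(ddof=1)

def surprisal(x):
    """Information content -log p(x) under the fitted Gaussian."""
    return -stats.norm.logpdf(x, loc=mu, scale=sigma)

# Calibrate a threshold for a 1% false positive rate: for a Gaussian,
# thresholding the surprisal is equivalent to thresholding |x - mu| / sigma.
alpha = 0.01
z = stats.norm.ppf(1 - alpha / 2)
threshold = -stats.norm.logpdf(mu + z * sigma, loc=mu, scale=sigma)

new_points = np.array([0.3, 4.2, -0.8])
flags = surprisal(new_points) > threshold
print(list(zip(new_points, flags)))   # only the outlying point is flagged
```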

    Book reports


    Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    In the emerging field of acoustic novelty detection, most research efforts have been devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies have introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No previous study has compared these efforts to automatically recognize novel events from audio signals or provided a broad, in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to evaluate our recent novel approaches consistently, filling this gap in the literature, and to provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows that RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% in average F-measure over the three databases.
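
    The detection mechanism can be sketched compactly: a recurrent autoencoder is trained on spectral frames of normal audio only, and at test time the frame-wise reconstruction error serves as the novelty signal. The sketch below is a simplified illustration; the framework, layer sizes, feature dimensionality, and threshold rule are assumptions rather than the authors' configuration.

```python
# Sketch: LSTM autoencoder whose reconstruction error acts as a novelty signal.
# Architecture, feature dimensionality, and the threshold rule are
# illustrative assumptions; they do not reproduce the paper's exact setup.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=64, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.output = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, time, n_features)
        z, _ = self.encoder(x)
        y, _ = self.decoder(z)
        return self.output(y)                   # reconstructed frames

model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on spectral frames of "normal" audio only (placeholder batch here).
normal_batch = torch.randn(8, 100, 64)
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(normal_batch), normal_batch)
    loss.backward()
    opt.step()

# At test time, a frame-wise reconstruction error above a threshold
# (e.g. a high percentile of errors on held-out normal data) marks novelty.
test_clip = torch.randn(1, 100, 64)
with torch.no_grad():
    err = ((model(test_clip) - test_clip) ** 2).mean(dim=2).squeeze(0)
novel_frames = err > err.mean() + 3 * err.std()    # illustrative rule only
```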

    Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra

    A system has been developed to extract diagnostic information from jet engine carcass vibration data. Support Vector Machines applied to novelty detection provide a measure of how unusual the shape of a vibration signature is by learning a representation of normality. We describe a novel method for including information from a second class in Support Vector Machine novelty detection and give results from its application to jet engine vibration analysis.
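
    A standard one-class SVM gives a rough feel for the approach: a boundary around "normal" vibration spectra is learned, and test spectra that fall outside it are flagged as novel. The sketch below uses scikit-learn with synthetic spectra; the paper's method of including information from a second class and the engine data themselves are not reproduced.

```python
# Sketch: one-class SVM novelty detection on vibration-spectrum-like vectors.
# Synthetic data and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
normal_spectra = rng.normal(1.0, 0.1, size=(200, 32))   # "normal" signatures
test_spectra = np.vstack([
    rng.normal(1.0, 0.1, size=(5, 32)),                  # normal-looking
    rng.normal(1.6, 0.3, size=(5, 32)),                  # unusual shape
])

scaler = StandardScaler().fit(normal_spectra)
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
clf.fit(scaler.transform(normal_spectra))

# decision_function > 0: consistent with normality; < 0: novel.
scores = clf.decision_function(scaler.transform(test_spectra))
print(scores)
```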

    Detecting abnormalities in aircraft flight data and ranking their impact on the flight

    To the best of the author’s knowledge, this is one of the first times that a large quantity of flight data has been studied in order to improve safety. A two-phase novelty detection approach to locating abnormalities in the descent phase of aircraft flight data is presented. It can model normal time series data by analysing snapshots at chosen heights in the descent, weight individual abnormalities, and quantitatively assess the overall level of abnormality of a flight during the descent. The approach expands on a recommendation by the UK Air Accident Investigation Branch to the UK Civil Aviation Authority. The first phase identifies and quantifies abnormalities at certain heights in a flight. The second phase ranks all flights to identify the most abnormal; each phase uses a one-class classifier. For both phases, the Support Vector Machine (SVM), Mixture of Gaussians, and K-means one-class classifiers are compared. The method is tested on a dataset containing manually labelled abnormal flights. The results show that the SVM provides the best detection rates and that the approach identifies unseen abnormalities with a high rate of accuracy. Furthermore, the method outperforms the event-based approach currently in use. The feature selection tool F-score is used to identify differences between the abnormal and normal datasets; it identifies the heights where the discrimination between the two sets is largest and the aircraft parameters most responsible for these variations.
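
    The two-phase structure can be illustrated with a small sketch: phase one fits a one-class model of normality at each snapshot height and scores a flight's abnormality there, and phase two builds a flight-level ranking from the vector of per-height scores. The data shapes, synthetic descents, and the Gaussian mixture used as a stand-in for the thesis's one-class classifiers are illustrative assumptions.

```python
# Sketch of the two-phase idea: per-height abnormality scores from a one-class
# model (phase 1), then a flight-level ranking built from those score vectors
# (phase 2). Shapes, features, and the Gaussian-mixture stand-in for the
# thesis's classifiers are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n_flights, n_heights, n_params = 300, 10, 6
# flights x snapshot heights x aircraft parameters (synthetic placeholder)
descents = rng.normal(size=(n_flights, n_heights, n_params))

# Phase 1: one model of "normal" per height; score = negative log-likelihood.
height_models = []
for h in range(n_heights):
    gm = GaussianMixture(n_components=2, random_state=0)
    gm.fit(descents[:, h, :])
    height_models.append(gm)

def phase1_scores(flight):
    """Abnormality score of one flight at each snapshot height."""
    return np.array([-height_models[h].score_samples(flight[h][None, :])[0]
                     for h in range(n_heights)])

# Phase 2: a second one-class model over the per-height score vectors
# ranks whole flights by overall abnormality.
score_vectors = np.array([phase1_scores(f) for f in descents])
ranker = GaussianMixture(n_components=1, random_state=0).fit(score_vectors)
flight_abnormality = -ranker.score_samples(score_vectors)
most_abnormal = np.argsort(flight_abnormality)[::-1][:5]
print(most_abnormal)
```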

    Kernel learning approaches for image classification

    This thesis extends the use of kernel learning techniques to specific problems of image classification. Kernel learning is a paradigm in the field of machine learning that generalizes the use of inner products to compute similarities between arbitrary objects. In image classification one aims to separate images based on their visual content. We address two important problems that arise in this context: learning with weak label information and the combination of heterogeneous data sources. The contributions we report are not unique to image classification and apply to a more general class of problems. We study the problem of learning with label ambiguity in the multiple instance learning framework. We discuss several different image classification scenarios that arise in this context and argue that standard multiple instance learning requires a more detailed disambiguation. Finally, we review kernel learning approaches proposed for this problem and derive a more efficient algorithm to solve them. The multiple kernel learning framework is an approach to automatically selecting kernel parameters. We extend it to its infinite limit and present an algorithm to solve the resulting problem. This result is then applied in two directions. We show how to learn kernels that adapt to the special structure of images. Finally, we compare different ways of combining image features for object classification and present significant improvements compared to previous methods.
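
    The kernel-combination idea can be illustrated briefly: kernels computed from different feature channels are merged into a single kernel for a standard SVM. The fixed, uniform weights in the sketch below are an assumption; multiple kernel learning, as studied in the thesis, would learn such weights (and, in its infinite extension, continuous kernel parameters) from the data.

```python
# Sketch: combining heterogeneous feature channels via a weighted sum of
# kernels and classifying with a precomputed-kernel SVM. Uniform weights are
# an illustrative assumption; MKL would learn them from the training data.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel

rng = np.random.default_rng(3)
n = 100
color_hist = rng.random((n, 16))           # e.g. color histograms
texture_feat = rng.normal(size=(n, 32))    # e.g. texture descriptors
y = rng.integers(0, 2, size=n)             # placeholder class labels

kernels = [
    chi2_kernel(color_hist, color_hist),        # suits histogram features
    rbf_kernel(texture_feat, texture_feat),     # generic similarity
]
weights = np.ones(len(kernels)) / len(kernels)  # fixed; MKL would learn these
K = sum(w * k for w, k in zip(weights, kernels))

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                          # training accuracy of the sketch
```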

    Remote maintenance of real time controller software over the internet

    The aim of the work reported in this thesis is to investigate how to establish a standard platform for remote maintenance of controller software, providing remote monitoring, remote fault identification and remote performance recovery services for geographically distributed controller software over the Internet. A Linear Quadratic Gaussian (LQG) controller is used as the benchmark for control performance assessment; the LQG benchmark variances are estimated from the Lyapunov equation and subspace matrices. The LQG controller is also used as the reference model of the actual controller to detect controller failures. Discrepancies between the control signals of the LQG and the actual controller are fed to a Generalized Likelihood Ratio (GLR) test, and controller failure detection is formulated as detecting sudden jumps in the mean or variance of the discrepancies. To restore control performance degraded by controller failures, a compensator is designed and inserted into the post-fault control loop; it is connected in series with the faulty controller and recovers the degraded control performance to an acceptable range. Techniques of controller performance monitoring, controller failure detection and maintenance are extended to the Internet environment. An Internet-based maintenance system for controller software is developed, which provides remote control performance assessment and recovery services and remote fault identification over the Internet for geographically distributed controller software. The integration of mobile agent technology with controller software maintenance is investigated. A mobile agent based controller software maintenance system is established; the mobile agent structure is designed to be flexible and the travelling agents can be remotely updated over the Internet. The issue of heavy data processing and transfer over the Internet is also examined, and a novel data processing and transfer scheme is introduced. All the proposed techniques are tested in simulations or on a process control unit. Simulation and experimental results illustrate the effectiveness of the proposed techniques.
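
    The failure-detection step can be illustrated with a generalized likelihood ratio test for a jump in the mean of the discrepancy sequence between the reference and actual control signals. The sketch below covers the mean-jump case only, with a known nominal mean and variance; the window handling and threshold are illustrative assumptions.

```python
# Sketch: generalized likelihood ratio (GLR) test for a jump in the mean of
# the LQG/actual-controller discrepancy sequence. The nominal mean and
# variance, window handling, and threshold are illustrative assumptions.
import numpy as np

def glr_mean_jump(d, mu0, sigma, threshold):
    """Return (alarm, best_change_index) for a mean-jump GLR over window d."""
    n = len(d)
    best_stat, best_k = 0.0, None
    for k in range(n - 1):                       # candidate change points
        seg = d[k:]                              # samples after the jump
        stat = len(seg) * (seg.mean() - mu0) ** 2 / (2.0 * sigma ** 2)
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat > threshold, best_k

rng = np.random.default_rng(4)
# Discrepancies: zero-mean noise, then a sustained offset after sample 60
# (standing in for a controller failure).
d = np.concatenate([rng.normal(0.0, 0.5, 60), rng.normal(1.2, 0.5, 40)])
alarm, k_hat = glr_mean_jump(d, mu0=0.0, sigma=0.5, threshold=10.0)
print(alarm, k_hat)    # expected: alarm True, k_hat near 60
```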