534 research outputs found

    Improved training of generative models

    Get PDF
    This thesis explores ideas along two different directions. — Improved training of recurrent neural networks. Recurrent neural networks are commonly trained with teacher forcing, which supplies the observed sequence values as inputs during training, while the network's own one-step-ahead predictions are fed back in to do multi-step sampling. We introduce the Professor Forcing algorithm, which uses adversarial domain adaptation to encourage the dynamics of the recurrent network to be the same when training the network and when sampling from it over multiple time steps. This work was accepted as a conference poster at NIPS 2016. — Training iterative generative models. A recognized obstacle to training undirected graphical models with latent variables, such as Boltzmann machines, is that the maximum likelihood training procedure requires sampling from Markov chains that may mix poorly in the inner loop of training, for each example. In this thesis, we first propose the idea that it is sufficient to locally carve the energy function everywhere so that its gradient points in the right direction (i.e., towards generating the data). This corresponds to a new learning procedure that first walks away from data points by following the model's transition operator and then trains that operator to walk backwards for each of these steps, back towards the training example. This work was accepted as a conference poster at NIPS 2017. Chapter 1 provides background on generative models, covering directed and undirected graphical models and how the method proposed in Chapter 3 relates to them. Chapter 2 describes our proposed method, Professor Forcing [Goyal et al., 2016], for improving the training of recurrent neural networks. Chapter 3 describes the Variational Walkback algorithm [Goyal et al., 2017a], which trains an iterative generative model by directly learning a parameterized transition operator.
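    To make the Professor Forcing idea above concrete, below is a minimal, hedged sketch of one training step in PyTorch: the generator RNN is run once with teacher forcing and once free-running, a discriminator is trained to tell the two hidden-state sequences apart, and the generator is additionally trained to fool it. All sizes, the GRU/MLP architectures, the argmax feedback, and the mean-pooled discriminator input are illustrative simplifications, not the paper's exact configuration.

# Sketch only: a toy Professor-Forcing-style training step. Sizes and
# architectures are illustrative assumptions, not the thesis's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID = 50, 32, 64

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRUCell(EMB, HID)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens, teacher_forcing=True):
        # tokens: (batch, time) integer sequences
        B, T = tokens.shape
        h = tokens.new_zeros(B, HID, dtype=torch.float)
        inp = tokens[:, 0]
        logits, hiddens = [], []
        for t in range(T - 1):
            h = self.rnn(self.emb(inp), h)
            step_logits = self.out(h)
            logits.append(step_logits)
            hiddens.append(h)
            if teacher_forcing:
                inp = tokens[:, t + 1]        # feed the observed next token
            else:
                inp = step_logits.argmax(-1)  # feed the model's own prediction
        return torch.stack(logits, 1), torch.stack(hiddens, 1)

class Discriminator(nn.Module):
    # Classifies a hidden-state sequence as teacher-forced (1) or free-running (0).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(HID, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, hiddens):
        return self.net(hiddens.mean(dim=1)).squeeze(-1)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

tokens = torch.randint(0, VOCAB, (8, 20))  # toy batch standing in for real sequences

# Generator step: teacher-forced likelihood plus an adversarial term pushing
# the free-running hidden-state dynamics to look teacher-forced.
logits_tf, h_tf = gen(tokens, teacher_forcing=True)
_, h_fr = gen(tokens, teacher_forcing=False)
nll = F.cross_entropy(logits_tf.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
adv = F.binary_cross_entropy_with_logits(disc(h_fr), torch.ones(tokens.size(0)))
(nll + adv).backward()
opt_g.step()
opt_g.zero_grad()
opt_d.zero_grad()  # drop discriminator grads picked up through the adversarial term

# Discriminator step: separate teacher-forced from free-running hidden states.
d_real = disc(h_tf.detach())
d_fake = disc(h_fr.detach())
d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
       + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
d_loss.backward()
opt_d.step()
opt_d.zero_grad()

    In the paper's setting the free-running branch samples from the output distribution rather than taking the argmax, and the discriminator sees a richer summary of the network's behavior; the point of the sketch is only the structure of the two passes and the adversarial term.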

    Multiple-Aspect Analysis of Semantic Trajectories

    Get PDF
    This open access book constitutes the refereed post-conference proceedings of the First International Workshop on Multiple-Aspect Analysis of Semantic Trajectories, MASTER 2019, held in conjunction with the 19th European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2019, in Würzburg, Germany, in September 2019. The 8 full papers presented were carefully reviewed and selected from 12 submissions. They represent an interesting mix of techniques to solve recurring as well as new problems in the semantic trajectory domain, such as data representation models, data management systems, machine learning approaches for anomaly detection, and common-pathway identification.

    End-to-end anomaly detection in stream data

    Get PDF
    Nowadays, huge volumes of data are generated with increasing velocity by various systems, applications, and activities. This increases the demand for stream and time series analysis that can react to changing conditions in real time, for greater efficiency and quality of service delivery as well as improved safety and security in the private and public sectors. Despite its very rich history, time series anomaly detection is still one of the vital topics in machine learning research and is receiving increasing attention. Identifying hidden patterns and selecting an appropriate model that fits the observed data well and also carries over to unobserved data is not a trivial task. Due to the increasing diversity of data sources and the associated stochastic processes, this pivotal data analysis topic is loaded with challenges such as complex latent patterns, concept drift, and overfitting that may mislead the model and cause a high false alarm rate. Handling these challenges leads advanced anomaly detection methods to develop sophisticated decision logic, which turns them into opaque and inexplicable black boxes. Contrary to this trend, end users expect transparency and verifiability in order to trust a model and the outcomes it produces. Moreover, pointing users to the most anomalous or malicious regions of a time series and to the causal features could save them time, energy, and money. For these reasons, this thesis addresses the crucial challenges in an end-to-end pipeline for stream-based anomaly detection through three essential phases: behavior prediction, inference, and interpretation. The first step is devising a time series model that yields high average accuracy as well as small error deviation. On this basis, we propose higher-quality anomaly detection and scoring techniques that use the related context to reclassify observations and post-prune unjustified events. Last but not least, we make the predictive process transparent and verifiable by providing meaningful reasoning behind its results, grounded in concepts understandable to a human. The provided insight can pinpoint the anomalous regions of a time series and explain why the current status of a system has been flagged as anomalous. Stream-based anomaly detection research is a principal area of innovation to support our economy, security, and even the safety and health of societies worldwide. We believe the proposed analysis techniques can contribute to building a situational awareness platform and open new perspectives in a variety of domains such as cybersecurity and health.
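    As a purely generic illustration of the predict / score / prune pipeline sketched above (and not of the techniques proposed in this thesis), the following Python sketch fits a simple exponentially weighted predictor to a stream, scores the residuals, flags large deviations, and then post-prunes flags that lack support in their local context. Every threshold, window size, and the EWMA predictor itself are arbitrary stand-ins, and the interpretation phase is not shown.

# Generic residual-based stream anomaly detection sketch (illustrative only).
import numpy as np

def detect_anomalies(stream, alpha=0.1, z_thresh=4.0, context=5, min_support=2):
    """Flag points whose residual against an EWMA prediction is unusually large,
    then keep only flags supported by other flags in their context window."""
    stream = np.asarray(stream, dtype=float)
    pred = np.zeros_like(stream)
    flags = np.zeros(len(stream), dtype=bool)

    level, spread = stream[0], 1.0
    for t, x in enumerate(stream):
        pred[t] = level                        # one-step-ahead prediction
        resid = x - level
        score = abs(resid) / np.sqrt(spread + 1e-8)
        flags[t] = score > z_thresh            # raw anomaly decision
        # update the running model (behavior prediction phase)
        level = (1 - alpha) * level + alpha * x
        spread = (1 - alpha) * spread + alpha * resid ** 2

    # contextual post-pruning: drop isolated flags with little support nearby
    pruned = flags.copy()
    for t in np.where(flags)[0]:
        lo, hi = max(0, t - context), min(len(stream), t + context + 1)
        if flags[lo:hi].sum() < min_support:
            pruned[t] = False
    return pred, flags, pruned

# toy usage: a noisy sine wave with an injected anomalous burst
t = np.linspace(0, 20, 2000)
x = np.sin(t) + 0.05 * np.random.randn(len(t))
x[1200:1210] += 3.0
_, raw, kept = detect_anomalies(x)
print("raw flags:", raw.sum(), "after pruning:", kept.sum())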

    Self-Adapting Noise-Contrastive Estimation for Energy-Based Models

    Full text link
    Training energy-based models (EBMs) with noise-contrastive estimation (NCE) is theoretically feasible but practically challenging. Effective learning requires the noise distribution to be close to the target distribution, especially in high-dimensional domains. Previous works have explored modelling the noise distribution as a separate generative model and training this noise model concurrently with the EBM. While this approach allows for more effective noise-contrastive estimation, it comes at the cost of extra memory and training complexity. Instead, this thesis proposes a self-adapting NCE algorithm which uses static instances of the EBM along its training trajectory as the noise distribution. During training, these static instances progressively converge to the target distribution, thereby circumventing the need to simultaneously train an auxiliary noise model. Moreover, we express this self-adapting NCE algorithm in the framework of Bregman divergences and show that it is a generalization of maximum likelihood learning for EBMs. The performance of our algorithm is evaluated across a range of noise update intervals, and experimental results show that shorter update intervals are conducive to higher synthesis quality. Comment: MSc thesis submitted to Tsinghua University in July 202
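    For reference, the standard NCE classification objective that this self-adapting scheme plugs into can be written as below, with the noise density taken to be a frozen snapshot of the EBM; the notation follows the usual NCE formulation and is not necessarily the exact variant used in the thesis:

\[
\mathcal{J}(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log \sigma(G_\theta(x))\big]
 + \nu\, \mathbb{E}_{x \sim p_n}\big[\log\big(1 - \sigma(G_\theta(x))\big)\big],
\qquad
G_\theta(x) = \log p_\theta(x) - \log p_n(x) - \log \nu ,
\]

where \(\sigma\) is the logistic sigmoid, \(\nu\) is the noise-to-data sample ratio, and the noise distribution is set to \(p_n = p_{\theta_k}\), a static copy of the EBM taken earlier on its training trajectory; every few updates \(\theta_k\) is refreshed to the current \(\theta\), which is the self-adapting part.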

    Contributions to improve the technologies supporting unmanned aircraft operations

    Get PDF
    International Mention in the doctoral degree. Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. Perception of the environment relies on sensors that have errors, and the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows extending the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system. This system provides the engine's propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller includes a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the ecosystem of the controller, so the study of each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered, and to be estimated in the absence of observations. State-space models (SSMs) are developed from a set of hypotheses for modeling the world: the models must be linear and Markovian, and their error must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state; there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then, estimation and filtering problems are addressed from a deep learning perspective; finally, classification with deep learning over trajectories is studied. The first case of the collection studies the consequences of error propagation in a machine vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where the error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data.
This work covers the two main fields in deep learning, regression and classification, where the error is treated as a probability of class membership. I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including an international research internship. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: Antonio Berlanga de Jesús. Secretary: Daniel Arias Medina. Committee member: Alejandro Martínez Cav
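    Since the abstract above leans on the Kalman filter's role of minimizing error covariance and estimating states when observations are missing, here is a minimal, self-contained sketch of the linear-Gaussian predict/update cycle. The constant-velocity model, the noise matrices, and every numeric value are illustrative assumptions, not taken from the thesis.

# Minimal linear-Gaussian Kalman filter sketch (predict/update cycle), with
# handling of missing observations. All model matrices and values are toy choices.
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """zs: list of measurements (None when no observation is available)."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for z in zs:
        # predict: propagate the state and its error covariance through the model
        x = F @ x
        P = F @ P @ F.T + Q
        if z is not None:
            # update: fuse the measurement, minimizing the posterior error covariance
            S = H @ P @ H.T + R                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return estimates

# toy 1-D constant-velocity example: state = [position, velocity], position measured
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x0, P0 = np.zeros(2), np.eye(2)
zs = [np.array([1.0]), np.array([2.1]), None, np.array([4.2])]  # one dropped measurement
for x in kalman_filter(zs, F, H, Q, R, x0, P0):
    print(x)

    When a measurement is missing, the update step is skipped and only the model's prediction (with its grown covariance) is carried forward, which is the "estimate in the absence of observations" behavior described in the abstract.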