
    Estimating the probability of a fleet vehicle accident : a deep learning approach using conditional variational auto-encoders

    Risk is the possibility of a negative or undesired outcome. In our work, we evaluate the risk of a fleet vehicle accident using the 1998 and 1999 records from the files of the Société d'assurance automobiles du Québec (SAAQ), where each observation in the data set corresponds to a truck carrying merchandise and for which the number of accidents it had during the following year is known. For each vehicle, we have useful information such as the number and type of violations it had, as well as some of its characteristics, like the number of axles or the number of cylinders. With our objective in mind, we propose a new approach using conditional variational auto-encoders (CVAE), considering two distributional assumptions, Negative Binomial and Poisson, to model the distribution of fleet vehicle accidents. Our main motivation for using a CVAE is to capture the joint distribution between the number of accidents of a fleet vehicle and the predictor variables of such accidents, and to extract latent features that help reconstruct the distribution of the number of fleet vehicle accidents. We compare the CVAE with other probabilistic methods, such as a simple MLP model that learns the distribution of the number of fleet vehicle accidents without extracting meaningful latent representations. We found that the CVAE marginally outperforms the MLP model, which suggests that a model able to learn latent features has added value over one that does not.
    We also compared the CVAE with another basic probabilistic model, the generalized linear model (GLM), as well as with classification models. We found that the CVAE and the GLM using the Negative Binomial distribution tend to show better results. Moreover, we provide a feature engineering scheme that incorporates features related to the whole fleet in addition to individual features for each vehicle, which translates into improved performance for all the models implemented in our work to evaluate the probability of a fleet vehicle accident.
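The two count likelihoods the abstract compares can be written down directly. A minimal sketch, not the paper's CVAE code: the function names and the NB2 dispersion parameterization (variance mu + alpha*mu^2) are our own illustrative choices.

```python
import math

def poisson_nll(y, rate):
    # Negative log-likelihood of observing count y under Poisson(rate).
    return rate - y * math.log(rate) + math.lgamma(y + 1)

def neg_binomial_nll(y, mu, alpha):
    # NB2 parameterization: mean mu, dispersion alpha (illustrative choice).
    # As alpha -> 0 this converges to the Poisson negative log-likelihood.
    r = 1.0 / alpha
    p = r / (r + mu)
    return -(math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
             + r * math.log(p) + y * math.log(1.0 - p))
```

Either function can serve as a per-vehicle loss when fitting a model that outputs an accident rate; the extra dispersion parameter is what lets the Negative Binomial handle overdispersed accident counts.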

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. For what destination and for what use? To be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file).
    Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. What format is to be used? Examples are: MIDI, piano roll or text. How will the representation be encoded? Examples are: scalar, one-hot or many-hot.
    Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks.
    Challenge - What are the limitations and open challenges? Examples are: variability, interactivity and creativity.
    Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation.
    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
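The one-hot versus many-hot encoding choice in the Representation dimension can be made concrete with a few lines. A minimal sketch under our own assumptions (128 MIDI pitches, plain Python lists): a melody step activates one pitch, a chord step activates several, as in a piano-roll representation.

```python
NUM_PITCHES = 128  # MIDI pitch range, assumed here for illustration

def one_hot(pitch, size=NUM_PITCHES):
    # Single active pitch per time step (melody encoding).
    vec = [0] * size
    vec[pitch] = 1
    return vec

def many_hot(chord, size=NUM_PITCHES):
    # Several simultaneous pitches per time step (piano-roll / chord encoding).
    vec = [0] * size
    for pitch in chord:
        vec[pitch] = 1
    return vec

melody_step = one_hot(60)            # middle C
chord_step = many_hot([60, 64, 67])  # C major triad
```

Stacking such vectors over time yields the input matrix a feedforward or recurrent generation model consumes.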

    Towards Autoencoding Variational Inference for Aspect-based Opinion Summary

    Aspect-based Opinion Summary (AOS), consisting of aspect discovery and sentiment classification steps, has recently emerged as one of the most crucial data mining tasks in e-commerce systems. Along this direction, the LDA-based model is considered a notably suitable approach, since it offers both topic modeling and sentiment classification. However, unlike traditional topic modeling, aspect discovery often requires some initial seed words, whose prior knowledge is not easily incorporated into LDA models. Moreover, LDA approaches rely on sampling methods, which need to load the whole corpus into memory, making them hardly scalable. In this research, we study an alternative approach to the AOS problem, based on Autoencoding Variational Inference (AVI). First, we introduce the Autoencoding Variational Inference for Aspect Discovery (AVIAD) model, which extends the previous work on Autoencoding Variational Inference for Topic Models (AVITM) to embed prior knowledge of seed words. This work includes an enhancement of the previous AVI architecture and a modification of the loss function. Ultimately, we present the Autoencoding Variational Inference for Joint Sentiment/Topic (AVIJST) model, in which we substantially extend the AVI model to support the JST model, which performs topic modeling for the corresponding sentiment. The experimental results show that our proposed models achieve higher topic coherence, faster convergence and better sentiment classification accuracy than their LDA-based counterparts.
    Comment: 20 pages, 11 figures
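One simple way to encode seed-word prior knowledge, sketched here under our own assumptions (this is not the AVIAD loss modification itself, and all names and values are illustrative): build an asymmetric topic-word prior matrix where each topic's seed words receive a larger prior weight than the rest of the vocabulary.

```python
def seed_prior(vocab, seeds_per_topic, base=0.1, boost=5.0):
    # Return a topics x vocab prior matrix: seed words for a topic get
    # the larger `boost` prior, all other words the small `base` prior.
    prior = []
    for seeds in seeds_per_topic:
        prior.append([boost if word in seeds else base for word in vocab])
    return prior

# Toy e-commerce vocabulary with two aspects seeded by two words each.
vocab = ["battery", "screen", "price", "cheap"]
prior = seed_prior(vocab, [{"battery", "screen"}, {"price", "cheap"}])
```

Such a matrix can bias the topic-word distributions a variational topic model learns, nudging each topic toward its intended aspect.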

    Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
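The compression ratios quoted above, and the PCA baseline the VAE is compared against, can be sketched in a few lines. The grid size and latent dimensions here are our own toy choices, not the study's models: a 100x100 grid reduced to a 20- to 50-dimensional code gives ratios in the reported 200-500 range.

```python
import numpy as np

def compression_ratio(grid_shape, latent_dim):
    # Number of grid cells per latent dimension.
    cells = grid_shape[0] * grid_shape[1]
    return cells / latent_dim

def pca_project(X, k):
    # PCA baseline via SVD: project n flattened model realizations
    # (rows of X) onto their first k principal components.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # n x k score matrix
```

The VAE plays the same role as `pca_project` here, mapping each realization to a short latent code, but with a nonlinear decoder that better preserves channelized patterns.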

    A Comprehensive Overview and Comparative Analysis on Deep Learning Models: CNN, RNN, LSTM, GRU

    Deep learning (DL) has emerged as a powerful subset of machine learning (ML) and artificial intelligence (AI), outperforming traditional ML methods, especially in handling unstructured and large datasets. Its impact spans various domains, including speech recognition, healthcare, autonomous vehicles, cybersecurity, predictive analytics, and more. However, the complexity and dynamic nature of real-world problems present challenges in designing effective deep learning models. Consequently, several deep learning models have been developed to address different problems and applications. In this article, we conduct a comprehensive survey of various deep learning models, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Models, Deep Reinforcement Learning (DRL), and Deep Transfer Learning. We examine the structure, applications, benefits, and limitations of each model. Furthermore, we perform an analysis using three publicly available datasets: IMDB, ARAS, and Fruit-360. We compare the performance of six renowned deep learning models: CNN, Simple RNN, Long Short-Term Memory (LSTM), Bidirectional LSTM, Gated Recurrent Unit (GRU), and Bidirectional GRU.
    Comment: 16 pages, 29 figures
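The gating mechanism that distinguishes a GRU from a simple RNN can be illustrated with a single scalar unit. A minimal sketch under our own simplifications (one hidden unit, six scalar weights, no biases), not a full library implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, W):
    # One GRU time step for a single unit; W holds six scalar weights.
    z = sigmoid(W["wz"] * x + W["uz"] * h)                 # update gate
    r = sigmoid(W["wr"] * x + W["ur"] * h)                 # reset gate
    h_tilde = math.tanh(W["wh"] * x + W["uh"] * (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde                     # new hidden state
```

The update gate z interpolates between keeping the old state and adopting the candidate, which is what lets GRUs (and, with a separate cell state, LSTMs) carry information across long sequences where a simple RNN's gradients vanish.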