Deep Generative Modeling of LiDAR Data
Building models capable of generating structured output is a key challenge
for AI and robotics. While generative models have been explored on many types
of data, little work has been done on synthesizing lidar scans, which play a
key role in robot mapping and localization. In this work, we show that one can
adapt deep generative models for this task by unravelling lidar scans into a 2D
point map. Our approach can generate high quality samples, while simultaneously
learning a meaningful latent representation of the data. We demonstrate
significant improvements against state-of-the-art point cloud generation
methods. Furthermore, we propose a novel data representation that augments the
2D signal with absolute positional information. We show that this helps
robustness to noisy and imputed input; the learned model can recover the
underlying lidar scan from seemingly uninformative data.
Comment: Presented at IROS 201
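The "unravelling" of a lidar scan into a 2D point map can be understood as a spherical projection: each 3D point is mapped to a (row, column) cell by its elevation and azimuth angles, with the cell storing the measured range. A minimal sketch of this idea, assuming an illustrative 64x512 grid and a +/-25 degree vertical field of view (the function name, grid shape, and angle limits are assumptions, not the paper's exact configuration):

```python
import numpy as np

def lidar_to_2d_grid(points, n_rows=64, n_cols=512):
    """Project an (N, 3) lidar point cloud onto a 2D grid of ranges.

    Rows correspond to elevation angle, columns to azimuth; each cell
    stores the range (distance) of a point that falls into it.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)

    azimuth = np.arctan2(y, x)                       # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-8))   # in [-pi/2, pi/2]

    # Map angles to integer grid indices (assumed +/-25 deg vertical FOV).
    fov_up, fov_down = np.radians(25.0), np.radians(-25.0)
    col = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    row = ((fov_up - elevation) / (fov_up - fov_down) * n_rows).astype(int)
    row = np.clip(row, 0, n_rows - 1)

    grid = np.zeros((n_rows, n_cols), dtype=np.float32)
    grid[row, col] = r
    return grid
```

Once the scan is in this 2D form, standard image-based generative architectures (convolutional encoders and decoders) can be applied to it directly.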
Adversarial Learned Molecular Graph Inference and Generation
Recent methods for generating novel molecules use graph representations of
molecules and employ various forms of graph convolutional neural networks for
inference. However, training requires solving an expensive graph isomorphism
problem, which previous approaches do not address or solve only approximately.
In this work, we propose ALMGIG, a likelihood-free adversarial learning
framework for inference and de novo molecule generation that avoids explicitly
computing a reconstruction loss. Our approach extends generative adversarial
networks by including an adversarial cycle-consistency loss to implicitly
enforce the reconstruction property. To capture properties unique to molecules,
such as valence, we extend the Graph Isomorphism Network to multi-graphs. To
quantify the performance of models, we propose to compute the distance between
distributions of physicochemical properties with the 1-Wasserstein distance. We
demonstrate that ALMGIG more accurately learns the distribution over the space
of molecules than all baselines. Moreover, it can be utilized for drug
discovery by efficiently searching the space of molecules using molecules'
continuous latent representation. Our code is available at
https://github.com/ai-med/almgig
Comment: Accepted at The European Conference on Machine Learning and
Principles and Practice of Knowledge Discovery in Databases (ECML PKDD); Code
at https://github.com/ai-med/almgi
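The proposed evaluation metric compares the empirical distributions of a physicochemical property (e.g. logP) over real and generated molecules via the 1-Wasserstein distance, which for 1D distributions has a closed-form solution available in SciPy. A minimal sketch, with synthetic Gaussian samples standing in for the property values (the distributions and sample sizes are purely illustrative):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical property values (e.g. logP) for molecules from the
# training set vs. molecules sampled from a generative model.
rng = np.random.default_rng(0)
real_props = rng.normal(loc=2.5, scale=1.0, size=1000)
gen_props = rng.normal(loc=2.8, scale=1.2, size=1000)

# 1-Wasserstein distance between the two empirical 1D distributions;
# smaller values mean the generated molecules match the data better.
d = wasserstein_distance(real_props, gen_props)
print(f"1-Wasserstein distance: {d:.3f}")
```

Because it compares whole distributions rather than summary statistics, this metric penalizes a model that matches the mean of a property but not its spread.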
Differentially Private Mixture of Generative Neural Networks
Generative models are used in a wide range of applications building on large
amounts of contextually rich information. Due to possible privacy violations of
the individuals whose data is used to train these models, however, publishing
or sharing generative models is not always viable. In this paper, we present a
novel technique for privately releasing generative models and entire
high-dimensional datasets produced by these models. We model the generator
distribution of the training data with a mixture of generative neural
networks. These are trained together and collectively learn the generator
distribution of a dataset. Data is divided into clusters using a novel
differentially private kernel k-means; each cluster is then assigned to a
separate generative neural network, such as a Restricted Boltzmann Machine or
Variational Autoencoder, trained only on its own cluster with
differentially private gradient descent. We evaluate our approach using the
MNIST dataset, as well as call detail records and transit datasets, showing
that it produces realistic synthetic samples, which can also be used to
that it produces realistic synthetic samples, which can also be used to
accurately answer an arbitrary number of counting queries.
Comment: A shorter version of this paper appeared at the 17th IEEE
International Conference on Data Mining (ICDM 2017). This is the full
version, published in IEEE Transactions on Knowledge and Data Engineering
(TKDE).
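The counting queries mentioned in the evaluation are the canonical use case for the Laplace mechanism: a count changes by at most 1 when one record is added or removed (sensitivity 1), so adding Laplace noise of scale 1/epsilon yields an epsilon-differentially-private answer. A minimal sketch of answering one such query on raw data (the function and parameter names are illustrative; the paper instead answers queries against synthetic data released by the trained models):

```python
import numpy as np

def dp_count(data, predicate, epsilon):
    """Answer a counting query under epsilon-differential privacy.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices (Laplace mechanism).
    """
    true_count = sum(1 for x in data if predicate(x))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many records have a value below 50?
records = list(range(100))
answer = dp_count(records, lambda x: x < 50, epsilon=1.0)
```

The appeal of releasing a private generative model instead is that, once trained, arbitrarily many such queries can be answered from its synthetic samples without consuming additional privacy budget.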
