Adaptive Conditional Quantile Neural Processes
Neural processes are a family of probabilistic models that inherit the
flexibility of neural networks to parameterize stochastic processes. Despite
providing well-calibrated predictions, especially in regression problems, and
quick adaptation to new tasks, the Gaussian assumption that is commonly used to
represent the predictive likelihood fails to capture more complicated
distributions such as multimodal ones. To overcome this limitation, we propose
Conditional Quantile Neural Processes (CQNPs), a new member of the neural
processes family, which exploits the attractive properties of quantile
regression in modeling distributions irrespective of their form. By
introducing an extension of quantile regression where the model learns to focus
on estimating informative quantiles, we show that the sampling efficiency and
prediction accuracy can be further enhanced. Our experiments with real and
synthetic datasets demonstrate substantial improvements in predictive
performance compared to the baselines, and better modeling of heterogeneous
distributions' characteristics such as multimodality.
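The quantile-regression objective that CQNPs build on is the pinball (quantile) loss. A minimal NumPy sketch, with the function name and array shapes chosen for illustration rather than taken from the paper's code:

```python
import numpy as np

def pinball_loss(y_pred, y_true, tau):
    # Asymmetric penalty: under-predictions are weighted by tau and
    # over-predictions by (1 - tau), so the minimizer over a dataset is
    # the tau-th conditional quantile rather than the conditional mean.
    err = y_true - y_pred
    return float(np.mean(np.maximum(tau * err, (tau - 1.0) * err)))
```

Training one output head per quantile level (or a tau-conditioned head) yields a predictive distribution described by its quantiles, with no Gaussian assumption, which is what allows multimodal targets to be represented.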
Autoencoding Conditional Neural Processes for Representation Learning
Conditional neural processes (CNPs) are a flexible and efficient family of
models that learn to learn a stochastic process from observations. In the
visual domain, they have seen particular application in contextual image
completion - observing pixel values at some locations to predict a distribution
over values at other unobserved locations. However, the choice of pixels in
learning such a CNP is typically either random or derived from a simple
statistical measure (e.g. pixel variance). Here, we turn the problem on its
head and ask: which pixels would a CNP like to observe? That is, which pixels
allow fitting a CNP, and do such pixels tell us something about the underlying
image? Viewing the context provided to the CNP as fixed-size latent
representations, we construct an amortised variational framework, Partial Pixel
Space Variational Autoencoder (PPS-VAE), for predicting this context
simultaneously with learning a CNP. We evaluate PPS-VAE on a set of vision
datasets, and find that not only is it possible to learn context points while
also fitting CNPs, but that their spatial arrangement and values provide a
strong signal for the information contained in the image - evaluated through
the lens of classification. We believe the PPS-VAE provides a promising avenue
to explore learning interpretable and effective visual representations.
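The CNP mechanism that image completion relies on can be sketched as a permutation-invariant encode–aggregate–decode forward pass. Everything below (weight matrices, dimensions, function names) is an illustrative toy, not PPS-VAE itself:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_repr = 3, 16          # (x, y) pixel coords + grey value; repr size
w_enc = rng.normal(size=(d_in, d_repr)) * 0.1
w_dec = rng.normal(size=(d_repr + 2, 2)) * 0.1

def cnp_forward(ctx_xy, ctx_v, tgt_xy):
    # Encode each observed (location, value) pair, then mean-aggregate
    # into one fixed-size representation: the order of context pixels
    # cannot matter, which gives the CNP its permutation invariance.
    h = np.tanh(np.concatenate([ctx_xy, ctx_v], axis=1) @ w_enc)
    r = h.mean(axis=0)
    # Decode: condition every target location on the shared
    # representation and emit a Gaussian mean and scale per pixel.
    inp = np.concatenate([np.tile(r, (len(tgt_xy), 1)), tgt_xy], axis=1)
    out = inp @ w_dec
    return out[:, 0], np.exp(out[:, 1])   # mean, sigma > 0

# Predict 8 unobserved pixels from 5 observed ones:
mu, sigma = cnp_forward(rng.random((5, 2)), rng.random((5, 1)),
                        rng.random((8, 2)))
```

The fixed-size aggregated representation `r` is precisely what PPS-VAE treats as a latent code when asking which context pixels are most informative to select.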
Learning Social Navigation from Demonstrations with Conditional Neural Processes
Sociability is essential for modern robots to increase their acceptability in
human environments. Traditional techniques use manually engineered utility
functions inspired by observing pedestrian behaviors to achieve social
navigation. However, social aspects of navigation are diverse, changing across
different types of environments, societies, and population densities, making it
unrealistic to use hand-crafted techniques in each domain. This paper presents
a data-driven navigation architecture that uses state-of-the-art neural
architectures, namely Conditional Neural Processes, to learn global and local
controllers of the mobile robot from observations. Additionally, we leverage a
state-of-the-art deep prediction mechanism to detect situations dissimilar to
those seen during training, where reactive controllers step in to ensure safe
navigation.
Our results demonstrate that the proposed framework can successfully carry out
navigation tasks in line with the social norms present in the data. Further, we
show that our system produces fewer personal-zone violations, causing less
discomfort.
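The safety layer described above amounts to an uncertainty-gated dispatch between the learned and reactive controllers. A hypothetical sketch, where the threshold, names, and novelty score are assumptions rather than the paper's implementation:

```python
def select_action(obs, novelty_score, learned_ctrl, reactive_ctrl,
                  threshold=0.5):
    # When the prediction mechanism flags the situation as unlike the
    # training demonstrations, hand control to the reactive controller
    # to keep navigation safe; otherwise use the learned CNP policy.
    if novelty_score(obs) > threshold:
        return reactive_ctrl(obs)
    return learned_ctrl(obs)

# Toy usage with stand-in controllers:
act = select_action({"pedestrian_dist": 0.2},
                    novelty_score=lambda o: 0.9,      # novel situation
                    learned_ctrl=lambda o: "cnp_action",
                    reactive_ctrl=lambda o: "stop")
# act == "stop": the reactive controller took over
```

The design choice is deliberately conservative: the learned controller is trusted only inside the region of observation space covered by the demonstrations.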