3 research outputs found
Learning Extremal Representations with Deep Archetypal Analysis
Archetypes are typical population representatives in an extremal sense, where
typicality is understood as the most extreme manifestation of a trait or
feature. In linear feature space, archetypes approximate the convex hull of the
data, allowing all data points to be expressed as convex mixtures of archetypes.
However, it might not always be possible to identify meaningful archetypes in a
given feature space. Learning an appropriate feature space and identifying
suitable archetypes simultaneously addresses this problem. This paper
introduces a generative formulation of the linear archetype model,
parameterized by neural networks. By introducing the distance-dependent
archetype loss, the linear archetype model can be integrated into the latent
space of a variational autoencoder, and an optimal representation with respect
to the unknown archetypes can be learned end-to-end. Reformulating linear
Archetypal Analysis as a deep variational information bottleneck allows
the incorporation of arbitrarily complex side information during training.
Furthermore, an alternative prior, based on a modified Dirichlet distribution,
is proposed. The real-world applicability of the proposed method is
demonstrated by exploring archetypes of female facial expressions while using
multi-rater based emotion scores of these expressions as side information. A
second application illustrates the exploration of the chemical space of small
organic molecules. In this experiment, it is demonstrated that exchanging the
side information while keeping the same set of molecules, e.g. using the heat
capacity of each molecule as side information instead of the band gap energy,
will result in the identification of different archetypes. As an application,
these learned representations of chemical space might reveal distinct starting
points for de novo molecular design.

Comment: Under review for publication at the International Journal of Computer
Vision (IJCV). Extended version of our GCPR2019 paper "Deep Archetypal
Analysis".
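The convex-mixture idea at the heart of linear Archetypal Analysis can be sketched in a few lines. This is a minimal NumPy illustration with a synthetic archetype matrix, not the paper's learned model: mixture weights drawn from a Dirichlet distribution lie on the simplex, so the resulting points are convex combinations of the archetypes and stay inside their convex hull.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D example: three archetypes spanning a triangle.
Z = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])  # (k, d) archetype matrix

# Convex mixture weights: non-negative, each row sums to 1 (on the simplex).
A = rng.dirichlet(alpha=np.ones(len(Z)), size=5)  # (n, k)

# Every generated data point is a convex combination of the archetypes.
X = A @ Z  # (n, d)

# Points built this way necessarily lie inside the archetypes' convex hull.
assert np.all(X >= 0) and np.all(X.sum(axis=1) <= 1 + 1e-9)
print(X.round(3))
```

The deep variant replaces this fixed feature space with a latent space learned by the encoder, but the simplex-weighted mixing of archetypes is the same.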
Learning Extremal Representations with Deep Archetypal Analysis
Archetypes represent extreme manifestations of a population with respect to specific characteristic traits or features. In linear feature space, archetypes approximate the convex hull of the data, allowing all data points to be expressed as convex mixtures of archetypes. As mixing of archetypes is performed directly on the input data, linear Archetypal Analysis requires additivity of the input, which is a strong assumption unlikely to hold, e.g. in the case of image data. To address this problem, we propose learning an appropriate latent feature space while simultaneously identifying suitable archetypes. We thus introduce a generative formulation of the linear archetype model, parameterized by neural networks. By introducing the distance-dependent archetype loss, the linear archetype model can be integrated into the latent space of a deep variational information bottleneck, and an optimal representation, together with the archetypes, can be learned end-to-end. Moreover, the information bottleneck framework allows for a natural incorporation of arbitrarily complex side information during training. As a consequence, learned archetypes become easily interpretable, as they derive their meaning directly from the included side information. The applicability of the proposed method is demonstrated by exploring archetypes of female facial expressions while using multi-rater based emotion scores of these expressions as side information. A second application illustrates the exploration of the chemical space of small organic molecules. By using different kinds of side information, we demonstrate how the identified archetypes, along with their interpretation, largely depend on the side information provided.
Truly Mesh-free Physics-Informed Neural Networks
Physics-informed Neural Networks (PINNs) have recently emerged as a
principled way to include prior physical knowledge in form of partial
differential equations (PDEs) into neural networks. Although generally viewed
as being mesh-free, current approaches still rely on collocation points
obtained within a bounded region, even in settings with spatially sparse
signals. Furthermore, if the boundaries are not known, the selection of such a
region may be arbitrary, resulting in a large proportion of collocation points
being selected in areas of low relevance. To resolve this, we present a
mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is
inspired by the microscopic viewpoint of fluid dynamics. Instead of sampling
from a bounded region, we propose to sample directly from the distribution over
the (fluid's) particle positions, eliminating the need to introduce boundaries
while adaptively focusing on the most relevant regions. This is achieved by
reformulating the modeled fluid density as an unnormalized probability
distribution from which we sample with dynamic Monte Carlo methods. We further
generalize pdPINNs to different settings that allow interpreting a positive
scalar quantity as a particle density, such as the evolution of the temperature
in the heat equation. The utility of our approach is demonstrated on
experiments for modeling (non-steady) compressible fluids in up to three
dimensions and a two-dimensional diffusion problem, illustrating the high
flexibility and sample efficiency compared to existing refinement methods for
PINNs.

Comment: Preprint.
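The core sampling idea of pdPINNs, drawing collocation points in proportion to an unnormalized density rather than uniformly from a bounding box, can be sketched with a random-walk Metropolis sampler. Here `rho` is a placeholder Gaussian bump standing in for the network's predicted fluid density; the function and step size are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(x):
    """Unnormalized 'density' (placeholder for the PINN's predicted fluid density)."""
    return np.exp(-0.5 * np.sum(x**2))

def metropolis_sample(n_samples, dim=2, step=0.5):
    """Draw collocation points with probability proportional to rho
    via random-walk Metropolis; no normalizing constant is required."""
    x = np.zeros(dim)
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal(dim)
        # Accept with probability min(1, rho(proposal) / rho(x)).
        if rng.random() < rho(proposal) / rho(x):
            x = proposal
        samples.append(x.copy())
    return np.array(samples)

pts = metropolis_sample(2000)
# Samples concentrate where the density is high -- no bounding box required.
print(pts.mean(axis=0).round(2), pts.std(axis=0).round(2))
```

Because acceptance depends only on density ratios, collocation points automatically track the regions where the modeled quantity is non-negligible, which is what makes the approach adaptive and mesh-free.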