Information geometric methods for complexity
Research on the use of information geometry (IG) in modern physics has
witnessed significant advances recently. In this review article, we report on
the utilization of IG methods to define measures of complexity in both
classical and, whenever available, quantum physical settings. A paradigmatic
example of a dramatic change in complexity is given by phase transitions (PTs).
Hence we review both global and local aspects of PTs described in terms of the
scalar curvature of the parameter manifold and the components of the metric
tensor, respectively. We also report on the behavior of geodesic paths on the
parameter manifold used to gain insight into the dynamics of PTs. Going
further, we survey measures of complexity arising in the geometric framework.
In particular, we quantify complexity of networks in terms of the Riemannian
volume of the parameter space of a statistical manifold associated with a given
network. We are also concerned with complexity measures that account for the
interactions of a given number of parts of a system that cannot be described in
terms of a smaller number of parts of the system. Finally, we investigate
complexity measures of entropic motion on curved statistical manifolds that
arise from a probabilistic description of physical systems in the presence of
limited information. The Kullback-Leibler divergence, the distance to an
exponential family and volumes of curved parameter manifolds, are examples of
essential IG notions exploited in our discussion of complexity. We conclude by
discussing strengths, limits, and possible future applications of IG methods to
the physics of complexity.
Comment: review article, 60 pages, no figures
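The Kullback-Leibler divergence mentioned above is the basic building block of most information-geometric constructions (the Fisher metric arises from its second-order expansion). A minimal sketch of its computation for discrete distributions, assuming probabilities given as plain lists:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) in nats for two discrete
    distributions given as lists of probabilities. Terms with p_i = 0
    contribute zero by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example: divergence of a fair coin from a heavily biased one.
d = kl_divergence([0.5, 0.5], [0.9, 0.1])  # = ln(5/3) ~ 0.5108 nats
```

Note the asymmetry: D(P || Q) generally differs from D(Q || P), which is why IG treats it as a divergence rather than a distance.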
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems in computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.
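A common way such models generalize convolution to graphs is a normalized neighborhood-averaging step, H = D^{-1/2}(A + I)D^{-1/2}X, popularized by graph convolutional networks. A minimal pure-Python sketch of this propagation (learned weight matrices and nonlinearities omitted; function name and list-of-lists representation are illustrative assumptions):

```python
import math

def gcn_propagate(adj, feats):
    """One graph-convolution propagation step H = D^{-1/2}(A+I)D^{-1/2} X.

    adj:   n x n adjacency matrix (0/1 entries, no self-loops)
    feats: n x d node feature matrix
    Self-loops are added, then each edge is normalized by the square
    roots of the endpoint degrees before features are averaged."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    d = len(feats[0])
    return [[sum(norm[i][k] * feats[k][j] for k in range(n))
             for j in range(d)] for i in range(n)]

# Two connected nodes with features 1.0 and 3.0: each ends up with
# the symmetric average 2.0 after one propagation step.
out = gcn_propagate([[0, 1], [1, 0]], [[1.0], [3.0]])
```

Stacking such steps with learned linear maps and nonlinearities between them yields the deep graph models the survey discusses.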