Image Segmentation with Eigenfunctions of an Anisotropic Diffusion Operator
We propose the eigenvalue problem of an anisotropic diffusion operator for
image segmentation. The diffusion matrix is defined based on the input image.
The eigenfunctions and the projection of the input image in some eigenspace
capture key features of the input image. An important property of the model is
that for many input images, the first few eigenfunctions are close to being
piecewise constant, which makes them useful as the basis for a variety of
applications such as image segmentation and edge detection. The eigenvalue
problem is shown to be related to the algebraic eigenvalue problems resulting
from several commonly used discrete spectral clustering models. This relation
provides a better understanding of those methods and helps in developing more
efficient numerical implementations and rigorous numerical analysis for
discrete spectral segmentation methods. The new continuous model also differs
from energy-minimization methods such as geodesic active contours in that no
initial guess is required. The multi-scale feature is a
natural consequence of the anisotropic diffusion operator so there is no need
to solve the eigenvalue problem at multiple levels. A numerical implementation
based on a finite element method with an anisotropic mesh adaptation strategy
is presented. It is shown that the numerical scheme gives much more accurate
results on eigenfunctions than uniform meshes. Several interesting features of
the model are examined in numerical examples and possible applications are
discussed.
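The connection to discrete spectral clustering can be illustrated with a small sketch: build a pixel graph whose edge weights are defined from the input image, form the graph Laplacian, and inspect its first eigenvectors. This is a standard discrete analogue (in the spirit of normalized cuts), not the paper's exact anisotropic diffusion operator; the toy image, the Gaussian weight function, and the parameter `beta` are illustrative assumptions.

```python
import numpy as np

def segmentation_eigenvectors(img, beta=10.0, k=3):
    """Return the first k eigenpairs of an image-weighted graph Laplacian."""
    h, w = img.shape
    n = h * w
    idx = lambda i, j: i * w + j
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):  # 4-neighbour pixel grid
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    # image-dependent weight: small across strong edges
                    wgt = np.exp(-beta * (img[i, j] - img[ni, nj]) ** 2)
                    W[idx(i, j), idx(ni, nj)] = wgt
                    W[idx(ni, nj), idx(i, j)] = wgt
    D = np.diag(W.sum(axis=1))
    L = D - W                       # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return vals[:k], vecs[:, :k]

# A 6x6 image with two flat regions separated by a sharp edge.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
vals, vecs = segmentation_eigenvectors(img)

# The second eigenvector (the Fiedler vector) is close to piecewise
# constant here: thresholding its sign splits the two regions.
labels = (vecs[:, 1] > 0).reshape(6, 6)
```

In this toy case the first eigenvalue is (numerically) zero, and the sign pattern of the second eigenvector recovers the two-region segmentation, matching the abstract's observation that the leading eigenfunctions are close to piecewise constant.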
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems in computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.
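A minimal example of generalizing a neural layer to a graph domain is a single graph-convolution step: average each node's features over its (self-loop-augmented) neighbourhood with symmetric degree normalization, then apply a learned linear map and a nonlinearity. This follows the widely used GCN propagation rule of Kipf and Welling, which the survey covers; the toy path graph, feature matrix, and random weights below are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: ReLU(D^{-1/2} (A+I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    D_inv_sqrt = np.diag(d_inv_sqrt)
    # normalized neighbourhood averaging, then a learned linear map
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

# A 4-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
W = np.random.default_rng(1).normal(size=(2, 3))

H = gcn_layer(A, X, W)   # new 3-dimensional feature per node
```

The key point is that the layer is defined purely through the graph's adjacency structure, so the same weights `W` apply to graphs of any size and connectivity, which is the sense in which such models generalize convolution beyond Euclidean grids.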
Entropy of eigenfunctions on quantum graphs
We consider families of finite quantum graphs of increasing size and we are
interested in how eigenfunctions are distributed over the graph. As a measure
for the distribution of an eigenfunction on a graph we introduce its entropy,
which has the property that a large entropy implies that the eigenfunction
cannot be localised on a small subset of the graph. We then derive
lower bounds for the entropy of eigenfunctions which depend on the topology of
the graph and the boundary conditions at the vertices. The optimal bounds are
obtained for expanders with large girth; they are similar to the ones
obtained by Anantharaman et al. for eigenfunctions on manifolds of negative
curvature, and are based on the entropic uncertainty principle. For comparison
we also compute the average behaviour of the entropy on Neumann star graphs,
where the entropies are much smaller. Finally, we compare our lower bounds
with numerical results for regular graphs and star graphs with different
boundary conditions.
Comment: 28 pages, 3 figures
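The entropy in question can be sketched in a discrete analogue: take the probability distribution |ψ|² that a normalised eigenfunction induces on the graph and compute its Shannon entropy, which is maximal (log n) for a fully delocalised state and zero for a state concentrated on a single vertex. The cycle graph below is an illustrative stand-in for a quantum graph, not one of the paper's models.

```python
import numpy as np

def entropy(psi):
    """Shannon entropy of the distribution |psi|^2 induced by psi."""
    p = np.abs(psi) ** 2
    p = p / p.sum()          # normalise to a probability distribution
    p = p[p > 0]             # drop zero entries (0 log 0 := 0)
    return -np.sum(p * np.log(p))

# Laplacian of a cycle graph on n vertices; its eigenvectors are
# discrete Fourier modes, which are spread over the whole graph.
n = 16
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = 2 * np.eye(n) - A
vals, vecs = np.linalg.eigh(L)

# A delocalised mode has entropy of order log(n); a vector
# concentrated on one vertex has entropy exactly 0.
delocalised = entropy(vecs[:, 1])
localised = entropy(np.eye(n)[0])
```

This makes the abstract's statement concrete: a lower bound on the entropy of an eigenfunction rules out concentration on a small subset, since any such concentration would force the entropy toward zero.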