Entropy of eigenfunctions on quantum graphs
We consider families of finite quantum graphs of increasing size and we are
interested in how eigenfunctions are distributed over the graph. As a measure
for the distribution of an eigenfunction on a graph we introduce its entropy,
which has the property that a large value of the entropy of an eigenfunction
implies that it cannot be localised on a small subset of the graph. We then derive
lower bounds for the entropy of eigenfunctions which depend on the topology of
the graph and the boundary conditions at the vertices. The optimal bounds are
obtained for expanders with large girth; these bounds are similar to the ones
obtained by Anantharaman et al. for eigenfunctions on manifolds of negative
curvature, and are based on the entropic uncertainty principle. For comparison
we also compute the average behaviour of entropies on Neumann star graphs,
where the entropies are much smaller. Finally we compare our lower bounds with
numerical results for regular graphs and star graphs with different boundary
conditions.
Comment: 28 pages, 3 figures
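As a rough discrete analogue of the quantity discussed above (the paper itself works with eigenfunctions of metric quantum graphs, and its precise definition may differ), one can compute the Shannon entropy of the squared amplitudes of a normalized Laplacian eigenvector of a combinatorial graph: a uniformly spread vector attains the maximal value log(n), while a vector supported on k vertices has entropy about log(k). A minimal sketch in Python, assuming numpy and networkx are available:

    # Minimal sketch (assumed analogue, not the paper's construction): Shannon
    # entropy of the squared amplitudes of a normalized Laplacian eigenvector.
    import numpy as np
    import networkx as nx

    def eigenvector_entropy(psi, eps=1e-15):
        """Entropy of the probability vector |psi_j|^2 (psi is L2-normalized)."""
        p = np.abs(psi) ** 2
        p = p / p.sum()
        return float(-np.sum(p * np.log(p + eps)))

    G = nx.random_regular_graph(d=4, n=200, seed=0)      # a 4-regular graph
    L = nx.laplacian_matrix(G).toarray().astype(float)
    evals, evecs = np.linalg.eigh(L)

    # Compare a few eigenvector entropies with the maximal value log(n).
    n = G.number_of_nodes()
    for k in (1, 100, 199):
        print(k, round(eigenvector_entropy(evecs[:, k]), 3), round(np.log(n), 3))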
Sparse random graphs: regularization and concentration of the Laplacian
We study random graphs with possibly different edge probabilities in the
challenging sparse regime of bounded expected degrees. Unlike in the dense
case, neither the graph adjacency matrix nor its Laplacian concentrate around
their expectations due to the highly irregular distribution of node degrees. It
has been empirically observed that simply adding a constant of order 1/n to
each entry of the adjacency matrix substantially improves the behavior of the
Laplacian. Here we prove that this regularization indeed forces the Laplacian to
concentrate even in sparse graphs. As an immediate consequence in network
analysis, we establish the validity of one of the simplest and fastest
approaches to community detection -- regularized spectral clustering, under the
stochastic block model. Our proof of concentration of the regularized Laplacian is
based on Grothendieck's inequality and factorization, combined with paving
arguments.
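A minimal sketch of the regularization described above, applied to spectral clustering on a sparse stochastic block model: add tau/n to every entry of the adjacency matrix, form the normalized Laplacian, and cluster the leading eigenvectors with k-means. The choice tau = average degree and the SBM parameters below are illustrative assumptions, not necessarily the constants analysed in the paper.

    # Illustrative sketch of regularized spectral clustering on a sparse SBM;
    # tau = average degree is one common heuristic and an assumption here.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n, k = 1000, 2
    labels = rng.integers(0, k, size=n)
    P = np.where(labels[:, None] == labels[None, :], 8.0 / n, 2.0 / n)  # bounded expected degrees
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                       # symmetric adjacency, no self-loops

    tau = A.sum() / n                                 # regularization constant ~ average degree
    A_tau = A + tau / n                               # add tau/n to every entry
    d = A_tau.sum(axis=1)
    L_tau = np.eye(n) - (A_tau / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]  # normalized Laplacian

    evals, evecs = np.linalg.eigh(L_tau)
    U = evecs[:, :k]                                  # eigenvectors of the k smallest eigenvalues
    pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
    agreement = max(np.mean(pred == labels), np.mean(pred != labels))  # up to label swap (k = 2)
    print("clustering agreement:", agreement)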
Propagation Kernels
We introduce propagation kernels, a general graph-kernel framework for
efficiently measuring the similarity of structured data. Propagation kernels
are based on monitoring how information spreads through a set of given graphs.
They leverage early-stage distributions from propagation schemes such as random
walks to capture structural information encoded in node labels, attributes, and
edge information. This has two benefits. First, off-the-shelf propagation
schemes can be used to naturally construct kernels for many graph types,
including labeled, partially labeled, unlabeled, directed, and attributed
graphs. Second, by leveraging existing efficient and informative propagation
schemes, propagation kernels can be considerably faster than state-of-the-art
approaches without sacrificing predictive performance. We will also show that
if the graphs at hand have a regular structure, for instance when modeling
image or video data, one can exploit this regularity to scale the kernel
computation to large databases of graphs with thousands of nodes. We support
our contributions by exhaustive experiments on a number of real-world graphs
from a variety of application domains.
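A much-simplified sketch of the propagation-kernel idea for labeled graphs, to make the mechanics concrete: propagate node label distributions by random-walk steps, bin the per-node distributions at each iteration (coarse rounding stands in here for the locality-sensitive hashing used by the authors), and take a linear kernel between the resulting count vectors. All function names and parameters are illustrative.

    # Much-simplified sketch of the propagation-kernel idea: propagate node
    # label distributions by random-walk steps, bin the distributions at each
    # iteration, and take a linear kernel between the resulting count vectors.
    import numpy as np
    from collections import Counter

    def propagation_features(A, labels, n_labels, t_max=3, decimals=1):
        """Counter of (iteration, binned node distribution) -> count."""
        A = np.asarray(A, dtype=float)
        deg = np.maximum(A.sum(axis=1), 1.0)
        T = A / deg[:, None]                  # random-walk transition matrix
        P = np.eye(n_labels)[labels]          # one-hot initial label distributions
        feats = Counter()
        for t in range(t_max):
            for row in np.round(P, decimals): # rounding stands in for LSH binning
                feats[(t, tuple(row))] += 1
            P = T @ P                         # one propagation step
        return feats

    def propagation_kernel(f1, f2):
        """Linear kernel on the shared bins of two feature Counters."""
        return sum(f1[key] * f2[key] for key in f1.keys() & f2.keys())

    # Tiny usage example on two 3-node labeled graphs (illustrative only).
    A1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
    A2 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
    f1 = propagation_features(A1, labels=[0, 1, 0], n_labels=2)
    f2 = propagation_features(A2, labels=[0, 1, 1], n_labels=2)
    print("k(G1, G2) =", propagation_kernel(f1, f2))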
No-gaps delocalization for general random matrices
We prove that with high probability, every eigenvector of a random matrix is
delocalized in the sense that any subset of its coordinates carries a
non-negligible portion of its norm. Our results pertain to a wide
class of random matrices, including matrices with independent entries,
symmetric and skew-symmetric matrices, as well as some other naturally arising
ensembles. The matrices can be real or complex; in the latter case we assume
that the real and imaginary parts of the entries are independent.
Comment: 45 pages
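A quick numerical illustration of the statement (with an illustrative ensemble, and of course not a proof): for a symmetric Gaussian matrix, even the coordinate subset of size eps*n that carries the least mass of an eigenvector still holds a noticeable fraction of its squared norm.

    # Quick numerical check: for each eigenvector, take the eps*n coordinates
    # with the smallest absolute entries (the "worst" subset of that size) and
    # record the squared mass they carry; it stays bounded away from zero.
    import numpy as np

    rng = np.random.default_rng(1)
    n, eps = 500, 0.1
    G = rng.standard_normal((n, n))
    A = (G + G.T) / np.sqrt(2 * n)            # symmetric Gaussian matrix
    _, vecs = np.linalg.eigh(A)

    k = int(eps * n)
    mass = np.sort(np.abs(vecs) ** 2, axis=0)[:k].sum(axis=0)
    print("minimum over all eigenvectors:", mass.min())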
Partitioning Graph Drawings and Triangulated Simple Polygons into Greedily Routable Regions
A greedily routable region (GRR) is a closed subset of the plane in which
each destination point can be reached from each starting point by moving, at
each point of the path, in the direction that maximally reduces the distance
to the destination.
Recently, Tan and Kermarrec proposed a geographic routing protocol for dense
wireless sensor networks based on decomposing the network area into a small
number of interior-disjoint GRRs. They showed that minimum decomposition is
NP-hard for polygons with holes.
We consider minimum GRR decomposition for plane straight-line drawings of
graphs. Here, GRRs coincide with self-approaching drawings of trees, a drawing
style which has become a popular research topic in graph drawing. We show that
minimum decomposition is still NP-hard for graphs with cycles, but can be
solved optimally for trees in polynomial time. Additionally, we give a
2-approximation for simple polygons, if a given triangulation has to be
respected.
Comment: full version of a paper appearing in ISAAC 201
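A minimal sketch of the greedy routing rule underlying GRRs, in its discrete form on a plane straight-line drawing: from the current vertex, step to the neighbor that most reduces the Euclidean distance to the destination, and fail at a local minimum where no neighbor is closer. The graph, positions, and function names below are illustrative; the GRR definition above is the continuous analogue of this rule.

    # Minimal sketch of discrete greedy routing on a plane straight-line
    # drawing: step to the neighbor that most reduces the Euclidean distance
    # to the destination; fail at a local minimum where no neighbor is closer.
    import math

    def greedy_route(adj, pos, s, t):
        """adj: vertex -> neighbors, pos: vertex -> (x, y); returns path or None."""
        dist = lambda u, v: math.dist(pos[u], pos[v])
        path = [s]
        while path[-1] != t:
            u = path[-1]
            best = min(adj[u], key=lambda v: dist(v, t))
            if dist(best, t) >= dist(u, t):   # stuck: greedy routing fails here
                return None
            path.append(best)
        return path

    # Tiny usage example on a 4-vertex drawing (illustrative only).
    pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (2, 1)}
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(greedy_route(adj, pos, 0, 3))       # [0, 1, 2, 3]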
Four lectures on probabilistic methods for data science
Methods of high-dimensional probability play a central role in applications
to statistics, signal processing, theoretical computer science, and related
fields. These lectures present a sample of particularly useful tools of
high-dimensional probability, focusing on the classical and matrix Bernstein's
inequality and the uniform matrix deviation inequality. We illustrate these
tools with applications for dimension reduction, network analysis, covariance
estimation, matrix completion and sparse signal recovery. The lectures are
geared towards beginning graduate students who have taken a rigorous course in
probability but may not have any experience in data science applications.
Comment: Lectures given at the 2016 PCMI Graduate Summer School in Mathematics
of Data. Some typos and inaccuracies fixed.
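For reference, one standard formulation of the matrix Bernstein inequality mentioned above, in the form found in Tropp's and Vershynin's expositions (the lecture notes themselves may state the constants slightly differently): let $X_1, \dots, X_N$ be independent, mean-zero, $n \times n$ symmetric random matrices with $\|X_i\| \le K$ almost surely. Then, for every $t \ge 0$,

\[
  \mathbb{P}\Big\{ \Big\| \sum_{i=1}^N X_i \Big\| \ge t \Big\}
  \le 2n \exp\!\Big( -\frac{t^2/2}{\sigma^2 + K t/3} \Big),
  \qquad \text{where } \sigma^2 = \Big\| \sum_{i=1}^N \mathbb{E}\, X_i^2 \Big\|.
\]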