Distributed Averaging via Lifted Markov Chains
Motivated by applications of distributed linear estimation, distributed
control and distributed optimization, we consider the question of designing
linear iterative algorithms for computing the average of numbers in a network.
Specifically, our interest is in designing such an algorithm with the fastest
rate of convergence given the topological constraints of the network. As the
main result of this paper, we design an algorithm with the fastest possible
rate of convergence using a non-reversible Markov chain on the given network
graph. We construct such a Markov chain by transforming the standard Markov
chain, which is obtained using the Metropolis-Hastings method. We call this
novel transformation pseudo-lifting. We apply our method to graphs with
geometry, or graphs with doubling dimension. Specifically, the convergence time
of our algorithm (equivalently, the mixing time of our Markov chain) is
proportional to the diameter of the network graph and hence optimal. As a
byproduct, our result provides the fastest mixing Markov chain given the
network topological constraints, and should naturally find applications
in the context of distributed optimization, estimation and control.
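The linear iteration the abstract describes can be made concrete with the standard reversible baseline it starts from: a Metropolis-Hastings weight matrix on the graph, iterated until all nodes hold the average. This is a minimal sketch of that baseline, not the paper's pseudo-lifted (non-reversible) chain; the path graph and values are illustrative choices.

```python
# Sketch: distributed averaging x <- W x with Metropolis-Hastings weights.
# Illustrates the reversible baseline chain, not the pseudo-lifted chain
# that achieves diameter-proportional mixing time.
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic weight matrix from a 0/1 adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # self-loop mass keeps rows summing to 1
    return W

# Path graph on 5 nodes; each node starts with one number.
adj = np.zeros((5, 5))
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1
W = metropolis_weights(adj)

x = np.array([10.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(500):
    x = W @ x                        # each node averages with its neighbours
print(x)                             # every entry converges to the mean, 2.0
```

Because W is doubly stochastic and the graph is connected, the iteration converges to the all-ones direction, i.e. every node's value tends to the network average; the paper's contribution is making that convergence as fast as the graph diameter allows.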
Fractal dimensions of the sagittal (interparietal) sutures in humans
Traditional studies of cranial suture morphology have focused mostly on visual estimation and linear measurements to evaluate suture complexity. This paper presents a new look at cranial sutures as curves that can be analysed by their fractal dimension. This new measure seems to be a much better method of expressing the properties of sutural patterns than traditional methods.
Our findings suggest that the fractal dimension of non-complicated interparietal sutures slightly exceeds the topological dimension of the line, that is 1.0, whereas the fractal dimension of complicated sutures may reach a value of 1.4 or even more. The difference between the minimum and maximum decimal fraction of the fractal dimension indicates a three-fold increase in complexity in the investigated sutures.
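The fractal dimension of a suture traced as a planar curve is typically estimated by box counting: cover the curve with boxes of shrinking size s and fit the slope of log N(s) against log s. A minimal sketch follows; the test curve and box sizes are illustrative choices, not the study's actual suture data.

```python
# Sketch: box-counting estimate of the fractal dimension of a planar curve.
import numpy as np

def box_counting_dimension(points, sizes):
    """Fit log N(s) ~ -D log s over a range of box sizes s; return D."""
    counts = []
    for s in sizes:
        # count the distinct grid cells of side s that the curve touches
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# A straight line segment should score close to its topological dimension, 1.0.
t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, t])
sizes = [0.1, 0.05, 0.025, 0.0125]
print(box_counting_dimension(line, sizes))   # close to 1.0
```

A smooth curve scores near 1.0 on this estimator, while a highly convoluted suture fills the plane more densely at small scales and scores higher, which is exactly the 1.0-versus-1.4 contrast the abstract reports.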
Persistence Flamelets: multiscale Persistent Homology for kernel density exploration
In recent years there has been noticeable interest in the study of the "shape
of data". Among the many ways a "shape" could be defined, topology is the most
general one, as it describes an object in terms of its connectivity structure:
connected components (topological features of dimension 0), cycles (features of
dimension 1) and so on. There is a growing number of techniques, generally
denoted as Topological Data Analysis, aimed at estimating topological
invariants of a fixed object; when we allow this object to change, however,
little has been done to investigate the evolution in its topology. In this work
we define the Persistence Flamelets, a multiscale version of one of the most
popular tools in TDA, the Persistence Landscape. We examine its theoretical
properties and show how it can be used to gain insight into the bandwidth
parameter of kernel density estimators (KDEs).
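The phenomenon this multiscale summary tracks can be shown in miniature: the 0-dimensional persistence (modes and their prominence) of a kernel density estimate changes as the bandwidth varies. The sketch below hand-rolls superlevel-set persistence for a sampled 1D function; it illustrates the underlying idea, not the authors' Persistence Flamelet construction, and the data, grid, and bandwidths are illustrative choices.

```python
# Sketch: 0-dim persistence of a 1D KDE across bandwidths.
import numpy as np

def superlevel_persistence_1d(f):
    """(birth, death) pairs for modes of the sampled function f; the global
    maximum never dies and is left unpaired."""
    order = sorted(range(len(f)), key=lambda i: -f[i])
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in order:                        # sweep the superlevel sets downwards
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if len(roots) == 2:                # two modes merge: the younger dies
            a, b = sorted(roots, key=lambda r: -f[r])
            pairs.append((f[b], f[i]))
            parent[b] = parent[i] = a
        elif len(roots) == 1:
            parent[i] = roots.pop()
    return pairs

def gaussian_kde(grid, data, h):
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

# Two well-separated clusters of points on the line.
data = np.concatenate([np.linspace(-2.5, -1.5, 60), np.linspace(1.5, 2.5, 60)])
grid = np.linspace(-5.0, 5.0, 400)
mode_counts = {}
for h in (0.3, 3.0):
    pairs = superlevel_persistence_1d(gaussian_kde(grid, data, h))
    mode_counts[h] = 1 + sum(1 for b, d in pairs if b - d > 0.01)
    print(f"bandwidth {h}: {mode_counts[h]} significant mode(s)")
```

At the small bandwidth the two clusters show up as two high-persistence modes; oversmoothing merges them into one. Summarizing how such persistence diagrams evolve across all bandwidths at once is what the multiscale construction is for.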
Dimension Detection with Local Homology
Detecting the dimension of a hidden manifold from a point sample has become
an important problem in the current data-driven era. Indeed, estimating the
shape dimension is often the first step in studying the processes or phenomena
associated to the data. Among the many dimension detection algorithms proposed
in various fields, a few can provide theoretical guarantee on the correctness
of the estimated dimension. However, the correctness usually requires certain
regularity of the input: the input points are either uniformly randomly sampled
in a statistical setting, or they form a so-called $\varepsilon$-sample,
which can be neither too dense nor too sparse.
Here, we propose a purely topological technique to detect dimensions. Our
algorithm is provably correct and works under a more relaxed sampling
condition: we do not require uniformity, and we also allow Hausdorff noise. Our
approach detects dimension by determining local homology. The computation of
this topological structure is much less sensitive to the local distribution of
points, which leads to the relaxation of the sampling conditions. Furthermore,
by leveraging various developments in computational topology, we show that this
local homology at a point can be computed \emph{exactly} for manifolds
using Vietoris-Rips complexes whose vertices are confined within a local
neighborhood of that point. We implement our algorithm and demonstrate the
accuracy and robustness of our method using both synthetic and real data sets.
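A faithful implementation of local homology needs a computational-topology library, but the overall pipeline (pick a point, restrict to a neighborhood, read off a local dimension) can be sketched with a simpler, non-topological stand-in: local PCA. This is explicitly a different estimator than the paper's, shown only to make the task concrete; the curve, radius, and variance threshold are illustrative choices.

```python
# Sketch: local intrinsic-dimension detection via local PCA (a non-topological
# stand-in for the paper's local-homology computation).
import numpy as np

def local_pca_dimension(points, center, radius, var_threshold=0.95):
    """Smallest k such that the top-k principal components of the
    neighborhood around `center` explain var_threshold of its variance."""
    nbrs = points[np.linalg.norm(points - center, axis=1) < radius]
    nbrs = nbrs - nbrs.mean(axis=0)
    svals = np.linalg.svd(nbrs, compute_uv=False)
    var = svals**2 / (svals**2).sum()
    return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)

# A circle embedded in R^3 is a 1-manifold: locally it looks like a segment.
t = np.linspace(0.0, 2 * np.pi, 500, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
print(local_pca_dimension(circle, circle[0], radius=0.3))   # estimates 1
```

Such spectral estimators are sensitive to the local point distribution; replacing this step with local homology is precisely what buys the relaxed sampling conditions the abstract claims.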
Interpretable statistics for complex modelling: quantile and topological learning
As the complexity of our data has increased exponentially in recent decades, so has our
need for interpretable features. This thesis revolves around two paradigms for approaching
this quest for insights.
In the first part we focus on parametric models, where the problem of interpretability
can be seen as a “parametrization selection”. We introduce a quantile-centric
parametrization and we show the advantages of our proposal in the context of regression,
where it allows us to bridge the gap between classical generalized linear (mixed)
models and increasingly popular quantile methods.
The second part of the thesis, concerned with topological learning, tackles the
problem from a non-parametric perspective. As topology can be thought of as a way
of characterizing data in terms of their connectivity structure, it allows us to represent
complex and possibly high-dimensional data through a few features, such as the number of
connected components, loops and voids. We illustrate how the emerging branch of
statistics devoted to recovering topological structures in the data, Topological Data
Analysis, can be exploited both for exploratory and inferential purposes with a special
emphasis on kernels that preserve the topological information in the data.
Finally, we show with an application how these two approaches can borrow strength
from one another in the identification and description of brain activity, using fMRI
data from the ABIDE project.
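The quantile methods the first part of the thesis connects to rest on one workhorse: the pinball (check) loss, whose minimizer over a constant is the empirical tau-quantile. A generic sketch of that fact follows; it illustrates quantile methods in general, not the thesis's quantile-centric parametrization, and the data are an illustrative choice.

```python
# Sketch: minimizing the pinball loss recovers the empirical quantile.
import numpy as np

def pinball_loss(y, q, tau):
    """Mean check loss of predicting the constant q at quantile level tau."""
    r = y - q
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

y = np.arange(1.0, 101.0)              # data: 1, 2, ..., 100
estimates = {}
for tau in (0.25, 0.5, 0.75):
    grid = np.linspace(0.0, 101.0, 2021)
    losses = [pinball_loss(y, q, tau) for q in grid]
    estimates[tau] = grid[int(np.argmin(losses))]
    print(tau, estimates[tau])         # lands near the tau-quantile of y
```

Replacing the squared error of a generalized linear model with this loss is the standard route from mean regression to quantile regression, which is the gap the parametric part of the thesis bridges.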