Graph Topology Learning Under Privacy Constraints
We consider the problem of inferring the underlying graph topology from
smooth graph signals in a novel but practical scenario where data are located
in distributed clients and are privacy-sensitive. The main difficulty of this
task lies in how to utilize the potentially heterogeneous data of all isolated
clients under privacy constraints. Towards this end, we propose a framework
where personalized graphs for local clients as well as a consensus graph are
jointly learned. The personalized graphs match local data distributions,
thereby mitigating data heterogeneity, while the consensus graph captures the
global information. We next devise a tailored algorithm to solve the induced
problem without violating privacy constraints, i.e., all private data are
processed locally. To further enhance privacy protection, we introduce
differential privacy (DP) into the proposed algorithm to resist privacy attacks
when transmitting model updates. Theoretically, we establish provable
convergence guarantees for the proposed algorithms, including the variant with DP.
Finally, extensive experiments on both synthetic and real-world data are
carried out to validate the proposed framework. Experimental results illustrate
that our approach can learn graphs effectively in the target scenario.
Comment: 17 pages
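The abstract mentions protecting transmitted model updates with differential privacy. A minimal sketch of how that step is commonly realized is the Gaussian mechanism: clip each client's update to a bounded norm, then add calibrated Gaussian noise before transmission. The function name and the `clip_norm`/`noise_mult` parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip a local model update and add Gaussian noise (Gaussian mechanism).

    This is a generic DP building block, sketched here for illustration;
    `clip_norm` and `noise_mult` are hypothetical parameters, not taken
    from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Rescale the update so its L2 norm is at most `clip_norm`.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale proportional to the clipping bound (the L2 sensitivity).
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise
```

Only the noisy, clipped vector would leave the client, so raw data stays local while each transmitted update carries a bounded privacy cost.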
Chebyshev Polynomial Approximation for Distributed Signal Processing
Unions of graph Fourier multipliers are an important class of linear
operators for processing signals defined on graphs. We present a novel method
to efficiently distribute the application of these operators to the
high-dimensional signals collected by sensor networks. The proposed method
features approximations of the graph Fourier multipliers by shifted Chebyshev
polynomials, whose recurrence relations make them readily amenable to
distributed computation. We demonstrate how the proposed method can be used in
a distributed denoising task, and show that the communication requirements of
the method scale gracefully with the size of the network.
Comment: 8 pages, 5 figures, to appear in the Proceedings of the IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS), June 2011, Barcelona, Spain
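The key property the abstract exploits is that Chebyshev polynomials satisfy a three-term recurrence, so applying a polynomial of the (shifted) Laplacian to a signal needs only repeated matrix-vector products with L, i.e., only neighbour-to-neighbour communication. A sketch of this standard construction, with illustrative choices of filter `h`, `order`, and spectral bound `lmax`:

```python
import numpy as np

def chebyshev_filter(L, x, h, order=20, lmax=2.0):
    """Approximate h(L) @ x with shifted Chebyshev polynomials.

    Each iteration uses one product with L, which a sensor network can
    compute with local exchanges only. `h`, `order`, and `lmax` are
    illustrative parameters, not the paper's.
    """
    n = L.shape[0]
    K = order + 1
    # Chebyshev coefficients of h on [0, lmax] via the cosine quadrature
    # rule at the K Chebyshev points.
    theta = (np.arange(K) + 0.5) * np.pi / K
    pts = lmax / 2.0 * (np.cos(theta) + 1.0)        # map [-1, 1] -> [0, lmax]
    c = np.array([2.0 / K * np.sum(h(pts) * np.cos(k * theta))
                  for k in range(K)])
    # Shifted operator with spectrum in [-1, 1].
    Ls = (2.0 / lmax) * L - np.eye(n)
    # Three-term recurrence: T_k = 2 Ls T_{k-1} - T_{k-2}, applied to x.
    t_prev, t_curr = x, Ls @ x
    y = 0.5 * c[0] * t_prev + c[1] * t_curr
    for k in range(2, K):
        t_prev, t_curr = t_curr, 2.0 * Ls @ t_curr - t_prev
        y = y + c[k] * t_curr
    return y
```

Because each term reuses the previous two, memory and communication per node stay constant in the approximation order, which is what makes the scheme scale gracefully with network size.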
Graph learning under sparsity priors
Graph signals offer a very generic and natural representation for data that
lives on networks or irregular structures. The actual data structure is,
however, often unknown a priori; it can sometimes be estimated from knowledge
of the application domain. If this is not possible, the structure has to be
inferred from the signal observations alone. This is exactly the problem that we
address in this paper, under the assumption that the graph signals can be
represented as a sparse linear combination of a few atoms of a structured graph
dictionary. The dictionary is constructed on polynomials of the graph
Laplacian, which can sparsely represent a general class of graph signals
composed of localized patterns on the graph. We formulate a graph learning
problem, whose solution provides an ideal fit between the signal observations
and the sparse graph signal model. As the problem is non-convex, we propose to
solve it by alternating between a signal sparse coding and a graph update step.
We provide experimental results that demonstrate the good graph recovery
performance of our method, which generally compares favourably to other recent
network inference algorithms.
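The abstract's alternating scheme pairs a sparse coding step over a polynomial dictionary with a graph update step. A simplified sketch of the two ingredients on the sparse-coding side: a stand-in dictionary built from powers of the Laplacian (the paper's structured dictionary is more refined), and an ISTA solver for the sparse coding subproblem. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def polynomial_dictionary(L, degree=3):
    """Stack normalized powers of the graph Laplacian as dictionary blocks.

    A simplified stand-in for a structured polynomial dictionary: each
    column is a pattern localized around one node.
    """
    n = L.shape[0]
    blocks, P = [], np.eye(n)
    for _ in range(degree + 1):
        blocks.append(P.copy())
        P = P @ L
    D = np.hstack(blocks)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)

def ista(D, y, lam=0.1, steps=500):
    """Sparse coding by ISTA: min_a 0.5*||D a - y||^2 + lam*||a||_1.

    This is one half of the alternating scheme; the graph update step
    would re-estimate L with the codes held fixed.
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = a - step * D.T @ (D @ a - y)     # gradient step on the fit term
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return a
```

Alternating this sparse coding step with a graph (Laplacian) update is the usual way to handle the joint problem's non-convexity: each subproblem is tractable on its own.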
Recovery Conditions and Sampling Strategies for Network Lasso
The network Lasso is a recently proposed convex optimization method for
machine learning from massive network structured datasets, i.e., big data over
networks. It is a variant of the well-known least absolute shrinkage and
selection operator (Lasso), which underlies many methods in learning and
signal processing involving sparse models. Highly scalable implementations of
the network Lasso can be obtained with state-of-the-art proximal methods, e.g.,
the alternating direction method of multipliers (ADMM). By generalizing the
concept of the compatibility condition put forward by van de Geer and Buehlmann
as a powerful tool for the analysis of the plain Lasso, we derive a sufficient
condition, the network compatibility condition (NCC), on the underlying network
topology such that network Lasso accurately learns a clustered underlying graph
signal. This network compatibility condition relates the location of the
sampled nodes with the clustering structure of the network. In particular, the
NCC informs the choice of which nodes to sample or, in machine learning terms,
which data points provide the most information if labeled.
Comment: Nominated as student paper award finalist at Asilomar 2017. arXiv admin note: substantial text overlap with arXiv:1704.0210
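To make the object being analyzed concrete: the network Lasso combines a squared-error fit on the sampled (labeled) nodes with a weighted total-variation penalty over the edges, which encourages clustered, piecewise-constant signals. A small sketch that evaluates this objective for scalar node signals; variable names are illustrative, and the scalable solvers mentioned in the abstract (e.g., ADMM) minimize this same function.

```python
import numpy as np

def network_lasso_objective(x, y, sampled, edges, weights, lam):
    """Evaluate the network Lasso objective for scalar node signals:

        sum_{i in sampled} (x[i] - y[i])^2
        + lam * sum_{(i,j) in edges} w_ij * |x[i] - x[j]|

    `x` holds the signal estimate at every node; `y` holds labels for
    the sampled nodes only. Names are illustrative.
    """
    fit = sum((x[i] - y[i]) ** 2 for i in sampled)
    tv = sum(w * abs(x[i] - x[j]) for (i, j), w in zip(edges, weights))
    return fit + lam * tv
```

The interplay the NCC captures is visible in this objective: the fit term anchors only the sampled nodes, and the edge penalty propagates those labels across clusters, so recovery depends on where the samples sit relative to the cluster boundaries.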