Local Tomography of Large Networks under the Low-Observability Regime
This article studies the problem of reconstructing the topology of a network
of interacting agents via observations of the state-evolution of the agents. We
focus on the large-scale network setting with the additional constraint of
partial observations, where only a small fraction of the agents can be
feasibly observed. The goal is to infer the underlying subnetwork of
interactions, and we refer to this problem as local tomography. In order to
study the large-scale setting, we adopt a proper stochastic formulation where
the unobserved part of the network is modeled as an Erdős–Rényi random
graph, while the observable subnetwork is left arbitrary. The main result of
this work is establishing that, under this setting, local tomography is
actually possible with high probability, provided that certain conditions on
the network model are met (such as stability and symmetry of the network
combination matrix). Remarkably, this conclusion is established under the
low-observability regime, where the cardinality of the observable
subnetwork is fixed, while the size of the overall network scales to infinity.
Comment: To appear in IEEE Transactions on Information Theory
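As an illustrative sketch only (the network sizes, the ring structure of the observable block, and all parameter values below are assumptions, not taken from the paper), the stochastic model described above — an Erdős–Rényi unobserved part, an arbitrary observable subnetwork, and a symmetric, stable combination matrix driving the state evolution — might be simulated as:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, p = 200, 10, 0.1   # total agents, observed agents, ER edge probability

# Unobserved part: Erdős–Rényi random graph (symmetric, no self-loops).
A = np.triu((rng.random((N, N)) < p).astype(float), 1)
A = A + A.T

# Observable k-agent subnetwork: left arbitrary in the paper; a ring here.
ring = np.zeros((k, k))
for i in range(k):
    ring[i, (i + 1) % k] = ring[(i + 1) % k, i] = 1.0
A[:k, :k] = ring

# Symmetric combination matrix, scaled to be stable (spectral radius < 1).
C = 0.5 * A / np.linalg.norm(A, 2)

# Linear state evolution driven by i.i.d. noise; only agents 0..k-1 observed.
x = np.zeros(N)
observed = []
for _ in range(1000):
    x = C @ x + rng.standard_normal(N)
    observed.append(x[:k].copy())
observed = np.asarray(observed)   # (1000, k) trace available for tomography
```

The tomography question is then whether `ring` (the observable interaction pattern) can be recovered from `observed` alone as N grows with k fixed.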
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services as well as scenarios of future wireless
networks.
Comment: 46 pages, 22 figures
On Collaboration in Distributed Parameter Estimation with Resource Constraints
We study sensor/agent data collection and collaboration policies for
parameter estimation, accounting for resource constraints and correlation
between observations collected by distinct sensors/agents. Specifically, we
consider a group of sensors/agents, each of which samples different variables of a
multivariate Gaussian distribution and has different estimation objectives, and
we formulate each sensor/agent's data collection and collaboration policy design
problem as a Fisher information maximization (or Cramér–Rao bound minimization)
problem. When knowledge of the correlation between variables is available, we
analytically identify two particular scenarios: (1) where the knowledge of the
correlation between samples cannot be leveraged for collaborative estimation
purposes and (2) where the optimal data collection policy involves investing
scarce resources to collaboratively sample and transfer information that is not
of immediate interest and whose statistics are already known, with the sole
goal of increasing the confidence on the estimate of the parameter of interest.
When knowledge of certain correlations is unavailable but collaboration may
still be worthwhile, we propose novel ways to apply multi-armed bandit
algorithms to learn the optimal data collection and collaboration policy in our
distributed parameter estimation problem and demonstrate that the proposed
algorithms, DOUBLE-F, DOUBLE-Z, UCB-F, UCB-Z, are effective through
simulations.
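Scenario (2) above can be made concrete with a minimal numerical sketch (an assumed two-variable setup, not the paper's own formulation): even when a second variable's statistics are fully known, jointly observing it increases the Fisher information about the mean of interest whenever the variables are correlated.

```python
import numpy as np

def fisher_info_mu1(cov, joint=True):
    """Fisher information about the mean mu1 of X1 under known covariance.

    Observing X1 alone gives 1/var(X1); jointly observing (X1, X2) gives
    the (0, 0) entry of the inverse covariance, 1 / (var1 * (1 - rho^2)).
    """
    if joint:
        return np.linalg.inv(cov)[0, 0]
    return 1.0 / cov[0, 0]

rho, var1, var2 = 0.8, 1.0, 1.0
cov = np.array([[var1, rho * np.sqrt(var1 * var2)],
                [rho * np.sqrt(var1 * var2), var2]])

alone = fisher_info_mu1(cov, joint=False)   # information from X1 only
shared = fisher_info_mu1(cov, joint=True)   # information after collaboration
```

Here `shared` exceeds `alone` by a factor of 1/(1 - rho²), which is the sense in which spending resources to transfer samples "not of immediate interest" can still tighten the Cramér–Rao bound on the parameter of interest.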
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to make decisions pertaining
to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.