Slotted Aloha for Networked Base Stations
We study multi-base-station, multi-access systems in which the user-base-station
adjacency is induced by geographical proximity. At each slot, each user
transmits (is active) with a certain probability, independently of the other
users, and is heard by all base stations within a given distance. Both the users and
base stations are placed uniformly at random over the (unit) area. We first
consider a non-cooperative decoding where base stations work in isolation, but
a user is decoded as soon as one of its nearby base stations reads a clean
signal from it. We find the decoding probability and quantify the gains
introduced by multiple base stations. Specifically, the peak throughput
increases linearly with the number of base stations and is larger than the
throughput of a single base station that uses standard slotted Aloha. Next, we
propose a cooperative decoding, where mutually close base
stations inform each other whenever they decode a user inside their coverage
overlap. At each base station, the messages received from the nearby stations
help resolve collisions by the interference cancellation mechanism. Building
from our exact formulas for the non-cooperative case, we provide a heuristic
formula for the cooperative decoding probability that reflects well the actual
performance. Finally, we demonstrate by simulation significant gains of
cooperation with respect to the non-cooperative decoding.
Comment: conference; submitted on Dec 15, 201
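The non-cooperative decoding model above lends itself to a short Monte Carlo sketch. The snippet below is an illustrative simulation, not the paper's code: the user and base-station counts, the activation probability `p`, and the hearing radius `r` are placeholder values chosen for the example.

```python
import numpy as np

def simulate_decoding(n_users=50, n_bs=4, p=0.1, r=0.3, n_slots=2000, seed=0):
    """Monte Carlo estimate of the non-cooperative decoding probability:
    an active user is decoded if at least one base station within
    distance r hears no other active user in that slot.
    All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    users = rng.random((n_users, 2))   # uniform placement in the unit square
    bs = rng.random((n_bs, 2))
    # adjacency[i, j] = True if user i is within distance r of base station j
    adjacency = np.linalg.norm(users[:, None, :] - bs[None, :, :], axis=2) <= r
    decoded = active_total = 0
    for _ in range(n_slots):
        active = rng.random(n_users) < p       # independent activations
        heard = adjacency[active]              # adjacency rows of active users
        load = heard.sum(axis=0)               # active users heard per station
        # a station reads a clean signal iff exactly one active user is in range
        clean = heard & (load == 1)
        decoded += clean.any(axis=1).sum()
        active_total += active.sum()
    return decoded / max(active_total, 1)
```

Sweeping `n_bs` in such a simulation is one way to visualize the multi-base-station throughput gain described above.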
Large deviations rates for stochastic gradient descent with strongly convex functions
Recent works have shown that high-probability metrics for stochastic
gradient descent (SGD) are informative and in some cases advantageous over
the commonly adopted mean-square-error-based ones. In this work we provide a
formal framework for the study of general high probability bounds with SGD,
based on the theory of large deviations. The framework allows for a generic
(not necessarily bounded) gradient noise satisfying mild technical assumptions,
and permits the noise distribution to depend on the current iterate.
Under the preceding assumptions, we find an upper large deviations bound for
SGD with strongly convex functions. The corresponding rate function captures
analytical dependence on the noise distribution and other problem parameters.
This is in contrast with conventional mean-square error analysis that captures
only the noise dependence through the variance and does not capture the effect
of higher order moments nor interplay between the noise geometry and the shape
of the cost function. We also derive exact large deviation rates for the case
when the objective function is quadratic, and show that the obtained rate
function matches the one from the general upper bound, thereby establishing the
tightness of the general upper bound. Numerical examples illustrate and corroborate theoretical
findings.
Comment: 32 pages, 2 figures
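The high-probability behavior discussed above can be illustrated empirically. The sketch below runs many independent SGD trajectories on a one-dimensional strongly convex quadratic with additive Gaussian gradient noise and tracks the empirical tail probability over iterations; the objective, the step-size schedule, and every parameter value are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def sgd_tail_probability(a=1.0, sigma=0.5, eps=0.5, n_iters=200,
                         n_runs=2000, seed=0):
    """Empirical tail probability P(|x_t - x*| > eps) for SGD on the
    strongly convex quadratic f(x) = a x^2 / 2 (minimizer x* = 0), with
    additive Gaussian gradient noise and step sizes alpha_t = 1/(a(t+1)).
    All values here are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    x = np.full(n_runs, 5.0)                  # common initial point
    tails = []
    for t in range(n_iters):
        alpha = 1.0 / (a * (t + 1))
        grad = a * x + sigma * rng.standard_normal(n_runs)  # noisy gradient
        x = x - alpha * grad
        tails.append(np.mean(np.abs(x) > eps))  # fraction of runs outside eps
    return tails
```

Plotting `-log(tails[t]) / t` against `t` in such an experiment is one way to visualize an empirical large deviations rate.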
Distributed Detection over Random Networks: Large Deviations Performance Analysis
We study the large deviations performance, i.e., the exponential decay rate
of the error probability, of distributed detection algorithms over random
networks. At each time step, each sensor: 1) averages its decision variable
with its neighbors' decision variables; and 2) accounts on-the-fly for its new
observation. We show that distributed detection exhibits a "phase change"
behavior. When the rate of network information flow (the speed of averaging) is
above a threshold, then distributed detection is asymptotically equivalent to
the optimal centralized detection, i.e., the exponential decay rate of the
error probability for distributed detection equals the Chernoff information.
When the rate of information flow is below this threshold, distributed
detection achieves only a fraction of the Chernoff information rate; we
quantify this achievable rate as a function of the network's rate of information flow.
Simulation examples demonstrate our theoretical findings on the behavior of
distributed detection over random networks.
Comment: 30 pages, journal, submitted on December 3rd, 201
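A minimal sketch of the two-step recursion described above (averaging plus on-the-fly incorporation of new observations), assuming a ring network with randomly failing links, unit-variance Gaussian observations, and illustrative parameter values, none of which are taken from the paper:

```python
import numpy as np

def distributed_detection(n=10, n_steps=300, link_p=0.5, seed=0):
    """Consensus + innovation sketch: at each step every sensor (1)
    averages its decision variable over the currently active links of a
    random ring network, and (2) folds in the log-likelihood ratio of a
    fresh observation.  Hypothesis H1 (Gaussian mean mu vs mean 0) is
    assumed true; all numbers are illustrative."""
    rng = np.random.default_rng(seed)
    mu = 0.5                                   # signal mean under H1
    x = np.zeros(n)                            # decision variables
    for t in range(n_steps):
        # symmetric doubly stochastic weights over this slot's active links
        W = np.eye(n)
        for i in range(n):
            j = (i + 1) % n
            if rng.random() < link_p:          # ring link (i, j) is on
                W[i, i] -= 1/3; W[j, j] -= 1/3
                W[i, j] += 1/3; W[j, i] += 1/3
        obs = mu + rng.standard_normal(n)      # observations under H1
        llr = mu * obs - mu**2 / 2             # Gaussian log-likelihood ratio
        x = (t * (W @ x) + llr) / (t + 1)      # running-average recursion
    return x
```

Each sensor decides H1 when its decision variable is positive; under H1 the variables concentrate around the positive mean of the log-likelihood ratio, so the per-sensor error probability decays with `n_steps`.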
A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments
The paper proposes a family of communication-efficient methods for
distributed learning in heterogeneous environments in which users obtain data
from one of several different distributions. In the proposed setup, the grouping of
users (based on the data distributions they sample from), as well as the underlying
statistical properties of the distributions, are a priori unknown. A family of
One-shot Distributed Clustered Learning methods (ODCL-) is
proposed, parametrized by the set of admissible clustering algorithms,
with the objective of learning the true model at each user. The
admissible clustering methods include K-means (KM) and convex clustering
(CC), giving rise to various one-shot methods within the proposed family, such
as ODCL-KM and ODCL-CC. The proposed one-shot approach, based on local
computations at the users and a clustering-based aggregation step at the server,
is shown to provide strong learning guarantees. In particular, for strongly
convex problems it is shown that, as long as the number of data points per user
is above a threshold, the proposed approach achieves order-optimal mean-squared
error (MSE) rates in terms of the sample size. An explicit characterization of
the threshold is provided in terms of problem parameters. The trade-offs with
respect to selecting various clustering methods (ODCL-CC, ODCL-KM) are
discussed and significant improvements over state-of-the-art are demonstrated.
Numerical experiments illustrate the findings and corroborate the performance
of the proposed methods.
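The server-side aggregation of the one-shot approach can be sketched as follows, here in the spirit of ODCL-KM on a toy one-dimensional problem where the local empirical risk minimizers reduce to sample means. The cluster means, sample sizes, and the simple Lloyd-style k-means are all illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def odcl_km_server(local_estimates, k=2, n_iters=50, seed=0):
    """Server-side step of an ODCL-KM-style sketch: cluster the users'
    locally computed models with 1-D k-means, then return the
    within-cluster averages as the learned cluster models."""
    rng = np.random.default_rng(seed)
    x = np.asarray(local_estimates, dtype=float)
    centers = rng.choice(x, size=k, replace=False)   # random initialization
    for _ in range(n_iters):
        # assign each local model to its nearest current center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):                  # update non-empty clusters
                centers[c] = x[labels == c].mean()
    return centers, labels

# each user holds data from one of two distributions (toy means -2 and +2);
# its local estimate is the sample mean of its own data
rng = np.random.default_rng(1)
local_models = np.concatenate([
    rng.normal(-2.0, 0.5, size=10),   # users sampling distribution A
    rng.normal(+2.0, 0.5, size=10),   # users sampling distribution B
])
models, labels = odcl_km_server(local_models)
```

After this single round, the server broadcasts to each user the averaged model of its assigned cluster, which is the one-shot aspect of the approach: one upload of local models, one clustering pass, one download.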