34,180 research outputs found
An Improved Distributed Algorithm for Maximal Independent Set
The Maximal Independent Set (MIS) problem is one of the basics in the study
of locality in distributed graph algorithms. This paper presents an extremely
simple randomized algorithm providing a near-optimal local complexity for this
problem, which incidentally, when combined with some recent techniques, also
leads to a near-optimal global complexity.
Classical algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86]
provide the global complexity guarantee that, with high probability, all nodes
terminate after $O(\log n)$ rounds. In contrast, our initial focus is on the
local complexity, and our main contribution is to provide a very simple
algorithm guaranteeing that each particular node $v$ terminates after
$O(\log \deg(v) + \log 1/\epsilon)$ rounds, with probability at least
$1-\epsilon$. The guarantee holds even if the randomness outside the 2-hop
neighborhood of $v$ is determined adversarially. This degree-dependency is
optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04].
Interestingly, this local complexity smoothly transitions to a global
complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider
[FOCS'12, arXiv:1202.1983v3], we get a randomized MIS algorithm with a high
probability global complexity of $O(\log \Delta) + 2^{O(\sqrt{\log\log n})}$,
where $\Delta$ denotes the maximum degree. This improves over the
$O(\log^2 \Delta) + 2^{O(\sqrt{\log\log n})}$ result of Barenboim et al., and
gets close to the $\Omega\bigl(\min\{\log \Delta, \sqrt{\log n}\}\bigr)$ lower
bound of Kuhn et al.
Corollaries include improved algorithms for MIS in graphs of upper-bounded
arboricity, or lower-bounded girth, for Ruling Sets, for MIS in the Local
Computation Algorithms (LCA) model, and a faster distributed algorithm for the
Lovász Local Lemma.
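For orientation, the sketch below simulates rounds of a classical randomized MIS scheme in the style of Luby [STOC'85], the kind of algorithm this paper improves on; the marking probability 1/(2·deg(v)) and the degree-then-identifier tie-breaking are illustrative choices, and the paper's own algorithm with its $O(\log \deg(v) + \log 1/\epsilon)$ local guarantee is not reproduced here.

```python
# Minimal sequential simulation of Luby-style randomized MIS rounds.
# Not the paper's improved algorithm; an assumed illustrative baseline.
import random

def luby_mis(adj):
    """adj: dict mapping each node (comparable, e.g. int) to a set of neighbours."""
    mis = set()
    alive = set(adj)                      # nodes still undecided
    while alive:
        # Each alive node marks itself with probability 1/(2*deg(v));
        # nodes with no alive neighbours join the MIS immediately.
        marked = set()
        for v in alive:
            deg = len(adj[v] & alive)
            if deg == 0 or random.random() < 1.0 / (2 * deg):
                marked.add(v)
        # A marked node enters the MIS unless a marked neighbour beats it
        # (larger alive-degree, ties broken by node identifier).
        def rank(v):
            return (len(adj[v] & alive), v)
        winners = {v for v in marked
                   if all(rank(v) > rank(u) for u in adj[v] & marked)}
        mis |= winners
        # Remove the winners and their neighbours from further consideration.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & alive
        alive -= removed
    return mis

# Example: adj = {0: {1, 2}, 1: {0}, 2: {0}, 3: set()}
# luby_mis(adj) returns a maximal independent set such as {1, 2, 3} or {0, 3}.
```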
A Distributed Method for Trust-Aware Recommendation in Social Networks
This paper contains the details of a distributed trust-aware recommendation
system. Trust-based recommenders have received a lot of attention recently. The
main aim of trust-based recommendation is to deal with the problems of
traditional Collaborative Filtering recommenders, such as cold-start users and
vulnerability to attacks. Our proposed method is a distributed approach and can
be easily deployed on social networks or real-life networks such as sensor
networks or peer-to-peer networks.
Reflexivity revisited
We study some aspects of reflexive modules. For example, we search for
conditions under which reflexive modules are free or very close to free modules.
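For context, the underlying notion is standard (a background fact, not quoted from this paper): a module $M$ over a ring $R$ is reflexive when the natural evaluation map into its double dual is an isomorphism, a property that finitely generated free modules always have.

```latex
% Standard definition of a reflexive module (background, not specific to the paper):
% M is an R-module, M^* = Hom_R(M, R) its dual, and \varphi_M the evaluation map.
\[
  \varphi_M \colon M \longrightarrow M^{**}
    = \operatorname{Hom}_R\bigl(\operatorname{Hom}_R(M,R),\,R\bigr),
  \qquad \varphi_M(m)(f) = f(m).
\]
\[
  M \ \text{is reflexive} \iff \varphi_M \ \text{is an isomorphism};
  \quad \text{every finitely generated projective (in particular free) module is reflexive.}
\]
```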
Covariance Estimation: The GLM and Regularization Perspectives
Finding an unconstrained and statistically interpretable reparameterization
of a covariance matrix is still an open problem in statistics. Its solution is
of central importance in covariance estimation, particularly in the recent
high-dimensional data environment where enforcing the positive-definiteness
constraint could be computationally expensive. We provide a survey of the
progress made in modeling covariance matrices from two relatively complementary
perspectives: (1) generalized linear models (GLM) or parsimony and use of
covariates in low dimensions, and (2) regularization or sparsity for
high-dimensional data. An emerging, unifying and powerful trend in both
perspectives is that of reducing a covariance estimation problem to that of
estimating a sequence of regression problems. We point out several instances of
the regression-based formulation. A notable case is in sparse estimation of a
precision matrix or a Gaussian graphical model leading to the fast graphical
LASSO algorithm. Some advantages and limitations of the regression-based
Cholesky decomposition relative to the classical spectral (eigenvalue) and
variance-correlation decompositions are highlighted. The former provides an
unconstrained and statistically interpretable reparameterization, and
guarantees the positive-definiteness of the estimated covariance matrix. It
reduces the unintuitive task of covariance estimation to that of modeling a
sequence of regressions at the cost of imposing an a priori order among the
variables. Elementwise regularization of the sample covariance matrix such as
banding, tapering and thresholding has desirable asymptotic properties and the
sparse estimated covariance matrix is positive definite with probability
tending to one for large samples and dimensions.
Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the
Institute of Mathematical Statistics (http://www.imstat.org) at
http://dx.doi.org/10.1214/11-STS358.
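The regression-based Cholesky route mentioned above can be made concrete with a short sketch: each variable is regressed on its predecessors in a fixed ordering, and the fitted coefficients and residual variances assemble into a covariance estimate that is positive definite by construction. The use of ordinary least squares and the function name below are illustrative assumptions, not the survey's specific estimators.

```python
# Sketch of covariance estimation via the modified Cholesky decomposition,
# i.e. a sequence of regressions of each variable on its predecessors.
import numpy as np

def cholesky_regression_cov(X):
    """X: (n, p) data matrix; returns an estimated p x p covariance matrix."""
    X = X - X.mean(axis=0)                # centre the data
    n, p = X.shape
    T = np.eye(p)                         # unit lower-triangular matrix
    d = np.empty(p)                       # innovation (residual) variances
    d[0] = X[:, 0].var()
    for j in range(1, p):
        # Regress X_j on X_1, ..., X_{j-1}; store the negated coefficients in T.
        phi, *_ = np.linalg.lstsq(X[:, :j], X[:, j], rcond=None)
        T[j, :j] = -phi
        d[j] = (X[:, j] - X[:, :j] @ phi).var()
    # T Sigma T' = D  =>  Sigma = T^{-1} D T^{-T}, positive definite when d > 0.
    Tinv = np.linalg.inv(T)
    return Tinv @ np.diag(d) @ Tinv.T

# Example: the estimate has strictly positive eigenvalues for non-degenerate data.
# rng = np.random.default_rng(0)
# S = cholesky_regression_cov(rng.standard_normal((200, 5)))
# assert np.all(np.linalg.eigvalsh(S) > 0)
```

The price of this construction, as the abstract notes, is that the estimate depends on the a priori ordering of the variables.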