Locality of not-so-weak coloring
Many graph problems are locally checkable: a solution is globally feasible if
it looks valid in all constant-radius neighborhoods. This idea is formalized in
the concept of locally checkable labelings (LCLs), introduced by Naor and
Stockmeyer (1995). Recently, Chang et al. (2016) showed that in bounded-degree
graphs, every LCL problem belongs to one of the following classes:
- "Easy": solvable in rounds with both deterministic and
randomized distributed algorithms.
- "Hard": requires at least rounds with deterministic and
rounds with randomized distributed algorithms.
Hence for any parameterized LCL problem, when we move from local problems
towards global problems, there is some point at which complexity suddenly jumps
from easy to hard. For example, for vertex coloring in d-regular graphs it is
now known that this jump happens between d + 1 and d colors: coloring with d + 1
colors is easy, while coloring with d colors is hard.
However, it is currently poorly understood where this jump takes place when
one looks at defective colorings. To study this question, we define k-partial
c-coloring as follows: nodes are labeled with numbers between 1 and c,
and every node is incident to at least k properly colored edges.
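This condition is locally checkable in exactly the sense above: each node only needs to look at its own incident edges. As a minimal illustration (our Python sketch, not from the paper; graph maps each node to its set of neighbours and color maps each node to its label), a centralized checker could read:

def is_k_partial_c_coloring(graph, color, k, c):
    # Every label must lie in {1, ..., c} and every node must have at least
    # k incident edges whose two endpoints receive different colors.
    for v, neighbors in graph.items():
        if not 1 <= color[v] <= c:
            return False
        properly_colored = sum(1 for u in neighbors if color[u] != color[v])
        if properly_colored < k:
            return False
    return True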
It is known that 1-partial 2-coloring (a.k.a. weak 2-coloring) is easy
for any d ≥ 1. As our main result, we show that k-partial 2-coloring
becomes hard as soon as k ≥ 2, no matter how large a d we have.
We also show that this is fundamentally different from k-partial
3-coloring: no matter which k we choose, the problem is always hard
for d = k but it becomes easy when d ≫ k. The same was known previously
for partial c-coloring with c ≥ 4, but the case of c = 3 was open.
How Long It Takes for an Ordinary Node with an Ordinary ID to Output?
In the context of distributed synchronous computing, processors perform in
rounds, and the time-complexity of a distributed algorithm is classically
defined as the number of rounds before all computing nodes have output. Hence,
this complexity measure captures the running time of the slowest node(s). In
this paper, we are interested in the running time of the ordinary nodes, to be
compared with the running time of the slowest nodes. The node-averaged
time-complexity of a distributed algorithm on a given instance is defined as
the average, taken over every node of the instance, of the number of rounds
before that node outputs. We compare the node-averaged time-complexity with the
classical one in the standard LOCAL model for distributed network computing. We
show that there can be an exponential gap between the node-averaged
time-complexity and the classical time-complexity, as witnessed by, e.g.,
leader election. Our first main result is a positive one, stating that, in
fact, the two time-complexities behave the same for a large class of problems
on very sparse graphs. In particular, we show that, for LCL problems on cycles,
the node-averaged time complexity is of the same order of magnitude as the
slowest node time-complexity.
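In symbols (our notation, not the paper's): if T_v(G, id) denotes the round in which node v outputs when the algorithm runs on instance G with identity assignment id, then

\[
\overline{T}_{\mathrm{node}}(G,\mathrm{id}) \;=\; \frac{1}{|V(G)|}\sum_{v\in V(G)} T_v(G,\mathrm{id}),
\qquad
T_{\max}(G,\mathrm{id}) \;=\; \max_{v\in V(G)} T_v(G,\mathrm{id}),
\]

and the comparison above is between the worst cases, over instances and identity assignments, of these two quantities.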
In addition, in the LOCAL model, the time-complexity is computed as a worst
case over all possible identity assignments to the nodes of the network. In
this paper, we also investigate the ID-averaged time-complexity, when the
number of rounds is averaged over all possible identity assignments. Our second
main result is that the ID-averaged time-complexity is essentially the same as
the expected time-complexity of randomized algorithms (where the expectation is
taken over all possible random bits used by the nodes, and the number of rounds
is measured for the worst-case identity assignment).
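With the same notation, and assuming identifiers are drawn uniformly from the finite set \mathcal{I} of admissible identity assignments (e.g. distinct identifiers from a polynomial-size range), the ID-averaged time-complexity on an instance G can be written as

\[
\overline{T}_{\mathrm{ID}}(G) \;=\; \frac{1}{|\mathcal{I}|}\sum_{\mathrm{id}\in\mathcal{I}} T_{\max}(G,\mathrm{id}),
\]

which the second main result relates to the expected running time of randomized algorithms under a worst-case identity assignment.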
Finally, we study the node-averaged ID-averaged time-complexity.
Random geometric complexes
We study the expected topological properties of Čech and Vietoris-Rips
complexes built on i.i.d. random points in R^d. We find higher dimensional
analogues of known results for connectivity and component counts for random
geometric graphs. However, higher homology H_k is not monotone when k > 0. In
particular for every k > 0 we exhibit two thresholds, one where homology passes
from vanishing to nonvanishing, and another where it passes back to vanishing.
We give asymptotic formulas for the expectation of the Betti numbers in the
sparser regimes, and bounds in the denser regimes. The main technical
contribution of the article is in the application of discrete Morse theory in
geometric probability.
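As an illustrative sketch only (our Python code, not the authors'; assumes networkx), the k = 0 part of the picture, i.e. the component count of the underlying random geometric graph, can be simulated directly; the higher Betti numbers discussed above would require building the Čech or Vietoris-Rips complex with a computational-topology library.

import networkx as nx

# n i.i.d. uniform points in [0,1]^2, edges between pairs at distance <= radius.
n, radius = 500, 0.05
G = nx.random_geometric_graph(n, radius, dim=2)
b0 = nx.number_connected_components(G)  # Betti number b_0 of the 1-skeleton
print(f"n={n}, radius={radius}: b_0 = {b0}")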
Fast Distributed Approximation for Max-Cut
Finding a maximum cut is a fundamental task in many computational settings.
Surprisingly, it has been insufficiently studied in the classic distributed
settings, where vertices communicate by synchronously sending messages to their
neighbors according to the underlying graph, known as the LOCAL or CONGEST
models. We amend this by obtaining almost optimal algorithms for Max-Cut on a
wide class of graphs in these models. In particular, for any ε > 0, we develop
randomized approximation algorithms achieving a ratio of (1 − ε) to the optimum
for Max-Cut on bipartite graphs in the CONGEST model, and on general graphs in
the LOCAL model.
We further present efficient deterministic algorithms, including a
1/3-approximation for Max-Dicut in our models, thus improving the best known
(randomized) ratio of 1/4. Our algorithms make non-trivial use of the greedy
approach of Buchbinder et al. (SIAM Journal on Computing, 2015) for maximizing
an unconstrained (non-monotone) submodular function, which may be of
independent interest.
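For reference, here is a minimal centralized sketch of the double-greedy approach of Buchbinder et al. mentioned above (our Python code; the distributed adaptations are the paper's contribution and are not reproduced here). The function f is assumed non-negative and submodular, e.g. the Max-Dicut objective in the toy usage below.

def double_greedy(elements, f):
    # Deterministic double greedy: a 1/3-approximation for maximizing an
    # unconstrained non-negative submodular function f over subsets of elements.
    X, Y = set(), set(elements)
    for e in elements:
        gain_add = f(X | {e}) - f(X)    # marginal gain of adding e to X
        gain_drop = f(Y - {e}) - f(Y)   # marginal gain of dropping e from Y
        if gain_add >= gain_drop:
            X.add(e)
        else:
            Y.remove(e)
    return X                            # X == Y once every element is decided

# Toy usage: Max-Dicut, where f(S) counts the edges leaving S.
edges = {(1, 2), (2, 3), (3, 1), (1, 3)}
nodes = [1, 2, 3]
cut = double_greedy(nodes, lambda S: sum(1 for (u, v) in edges if u in S and v not in S))
print(cut)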
Population dynamics of rhesus macaques and associated foamy virus in Bangladesh.
Foamy viruses are complex retroviruses that have been shown to be transmitted from nonhuman primates to humans. In Bangladesh, infection with simian foamy virus (SFV) is ubiquitous among rhesus macaques, which come into contact with humans in diverse locations and contexts throughout the country. We analyzed microsatellite DNA from 126 macaques at six sites in Bangladesh in order to characterize geographic patterns of macaque population structure. We also included in this study 38 macaques owned by nomadic people who train them to perform for audiences. PCR was used to analyze a portion of the proviral gag gene from all SFV-positive macaques, and multiple clones were sequenced. Phylogenetic analysis was used to infer long-term patterns of viral transmission. Analyses of SFV gag gene sequences indicated that macaque populations from different areas harbor genetically distinct strains of SFV, suggesting that geographic features such as forest cover play a role in determining the dispersal of macaques and SFV. We also found evidence suggesting that humans traveling the region with performing macaques likely play a role in the translocation of macaques and SFV. Our studies found that individual animals can harbor more than one strain of SFV and that presence of more than one SFV strain is more common among older animals. Some macaques are infected with SFV that appears to be recombinant. These findings paint a more detailed picture of how geographic and sociocultural factors influence the spectrum of simian-borne retroviruses
Efficient quantum algorithms for simulating sparse Hamiltonians
We present an efficient quantum algorithm for simulating the evolution of a
sparse Hamiltonian H for a given time t in terms of a procedure for computing
the matrix entries of H. In particular, when H acts on n qubits, has at most a
constant number of nonzero entries in each row/column, and |H| is bounded by a
constant, we may select any positive integer k such that the simulation
requires O((\log^*n)t^{1+1/2k}) accesses to matrix entries of H. We show that
the temporal scaling cannot be significantly improved beyond this, because
sublinear time scaling is not possible.
Large violation of Bell inequalities with low entanglement
In this paper we obtain violations of general bipartite Bell inequalities of
order √n/log n with n inputs, n outputs and n-dimensional Hilbert spaces.
Moreover, we construct explicitly, up to a
random choice of signs, all the elements involved in such violations: the
coefficients of the Bell inequalities, POVM measurements and quantum states.
Analyzing this construction we find that, even though entanglement is necessary
to obtain violation of Bell inequalities, the Entropy of entanglement of the
underlying state is essentially irrelevant in obtaining large violation. We
also indicate why the maximally entangled state is a rather poor candidate in
producing large violations with arbitrary coefficients. However, we also show
that for Bell inequalities with positive coefficients (in particular, games)
the maximally entangled state achieves the largest violation up to a
logarithmic factor.
Mol. Cell. Proteomics
The term "proteomics" encompasses the large-scale detection and analysis of proteins and their post-translational modifications. Driven by major improvements in mass spectrometric instrumentation, methodology, and data analysis, the proteomics field has burgeoned in recent years. It now provides a range of sensitive and quantitative approaches for measuring protein structures and dynamics that promise to revolutionize our understanding of cell biology and molecular mechanisms in both human cells and model organisms. The Proteomics Specification in Time and Space (PROSPECTS) Network is a unique EU-funded project that brings together leading European research groups, spanning from instrumentation to biomedicine, in a collaborative five-year initiative to develop new methods and applications for the functional analysis of cellular proteins. This special issue of Molecular and Cellular Proteomics presents 16 research papers reporting major recent progress by the PROSPECTS groups, including improvements to the resolution and sensitivity of the Orbitrap family of mass spectrometers, systematic detection of proteins using highly characterized antibody collections, and new methods for absolute as well as relative quantification of protein levels. Manuscripts in this issue exemplify approaches for performing quantitative measurements of cell proteomes and for studying their dynamic responses to perturbation, both during normal cellular responses and in disease mechanisms. Here we present a perspective on how the proteomics field is moving beyond simply identifying proteins with high sensitivity toward providing a powerful and versatile set of assay systems for characterizing proteome dynamics and thereby creating a new "third generation" proteomics strategy that offers an indispensable tool for cell biology and molecular medicine.
Locality in Distributed Graph Algorithms
Survey of core results in the context of locality in distributed graph algorithms.
Eigenvectors of the discrete Laplacian on regular graphs - a statistical approach
In an attempt to characterize the structure of eigenvectors of random regular
graphs, we investigate the correlations between the components of the
eigenvectors associated to different vertices. In addition, we provide
numerical observations, suggesting that the eigenvectors follow a Gaussian
distribution. Following this assumption, we reconstruct some properties of the
nodal structure which were observed in numerical simulations, but were not
explained so far. We also show that some statistical properties of the nodal
pattern cannot be described in terms of a percolation model, as opposed to the
suggested correspondence for eigenvectors of 2-dimensional manifolds.
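As a minimal numerical sketch in the spirit of these observations (our Python code, not the authors'; assumes numpy and networkx), one can sample a random d-regular graph, diagonalize its discrete Laplacian, and inspect the statistics of the components of a bulk eigenvector:

import networkx as nx
import numpy as np

n, d = 2000, 3
G = nx.random_regular_graph(d, n, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, eigvecs = np.linalg.eigh(L)      # columns of eigvecs are eigenvectors
v = eigvecs[:, n // 2] * np.sqrt(n)       # bulk eigenvector, rescaled to O(1) entries
excess_kurtosis = ((v - v.mean()) ** 4).mean() / v.std() ** 4 - 3
print(f"mean={v.mean():.3f}, std={v.std():.3f}, excess kurtosis={excess_kurtosis:.3f}")

For a Gaussian distribution of the components, the excess kurtosis would be close to 0.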