60 research outputs found

    Evacuating Two Robots from a Disk: A Second Cut

    Full text link
    We present an improved algorithm for the problem of evacuating two robots from the unit disk via an unknown exit on the boundary. Robots start at the center of the disk, move at unit speed, and can only communicate locally. Our algorithm improves previous results by Brandt et al. [CIAC'17] by introducing a second detour through the interior of the disk. This allows for an improved evacuation time of 5.6234. The best known lower bound of 5.255 was shown by Czyzowicz et al. [CIAC'15].
    Comment: 19 pages, 5 figures. This is the full version of the paper with the same title accepted in the 26th International Colloquium on Structural Information and Communication Complexity (SIROCCO'19).
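
    As an illustration of the model (not the paper's two-detour algorithm), the following minimal Python sketch evaluates the baseline detour-free strategy: both robots walk together to the boundary, search in opposite directions, and the finder cuts straight through the interior to intercept its partner. All function names and the grid resolution are our own choices; the printed worst case should come out near 5.74, the kind of baseline these papers improve on.

```python
import math

def partner_pos(t):
    """Partner's position at time t >= 1, walking counterclockwise from angle 0."""
    a = t - 1.0
    return (math.cos(a), math.sin(a))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def evacuation_time(theta):
    """Evacuation time of the baseline strategy when the exit sits at
    arc distance theta from the landing point (finder walks clockwise)."""
    t_found = 1.0 + theta                       # finder reaches the exit
    exit_pt = (math.cos(theta), -math.sin(theta))
    # Earliest interception: smallest t with t - t_found >= dist(exit, partner(t)).
    # The left side grows at rate 1 and the right side at rate <= 1, so the
    # difference is monotone and bisection applies; a chord never exceeds 2.
    lo, hi = t_found, t_found + 2.0 + 1e-9
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid - t_found >= dist(exit_pt, partner_pos(mid)):
            hi = mid
        else:
            lo = mid
    # After meeting, both robots walk the chord back to the exit.
    return hi + dist(exit_pt, partner_pos(hi))

worst = max(evacuation_time(k * math.pi / 20000) for k in range(1, 20001))
print(f"worst-case evacuation time, baseline strategy: {worst:.4f}")
```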

    Noise-Resilient Group Testing: Limitations and Constructions

    Full text link
    We study combinatorial group testing schemes for learning d-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information-theoretic lower bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of d-sparse vectors of length n via non-adaptive measurements, by a multiplicative factor $\tilde{\Omega}(d)$. Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient reconstruction of d-sparse vectors up to O(d) false positives even in the presence of $\delta m$ false positives and O(m/d) false negatives within the measurement outcomes, for any constant $\delta < 1$. We show that, information-theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using $m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions that allow fast reconstruction in time poly(m), which would be sublinear in n for sufficiently sparse vectors. The main tool used in our construction is the list-decoding view of randomness condensers and extractors.
    Comment: Full version. A preliminary summary of this work appears (under the same title) in proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT 2009).
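
    A hedged sketch in the spirit of the randomized construction: a Bernoulli design with O(d log n) tests and a simple thresholding decoder that tolerates flipped outcomes at the price of some false positives. This is an illustration of the trade-off the abstract describes, not the paper's scheme; the constants, the decoder, and all names are our own.

```python
import random

def design(m, n, d, rng):
    """Bernoulli design: each of n items joins each of m tests with prob. 1/d."""
    p = 1.0 / d
    return [[j for j in range(n) if rng.random() < p] for _ in range(m)]

def run_tests(tests, support, flip, rng):
    """Disjunctive (OR) outcomes, each flipped independently with prob. `flip`."""
    return [any(j in support for j in t) ^ (rng.random() < flip) for t in tests]

def decode(tests, outcomes, n, thresh=0.2):
    """Keep item j unless too many of its tests came back negative.
    Exact recovery is impossible under adversarial noise (see the abstract),
    so the decoder settles for approximate recovery with false positives."""
    neg = [0] * n
    cnt = [0] * n
    for t, y in zip(tests, outcomes):
        for j in t:
            cnt[j] += 1
            neg[j] += (not y)
    return {j for j in range(n) if cnt[j] and neg[j] / cnt[j] < thresh}

rng = random.Random(1)
n, d = 500, 8
m = 40 * d * n.bit_length()                 # m = O(d log n), crude constant
support = set(rng.sample(range(n), d))
tests = design(m, n, d, rng)
outcomes = run_tests(tests, support, flip=0.05, rng=rng)
est = decode(tests, outcomes, n)
print("missed:", len(support - est), "| false positives:", len(est - support))
```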

    Gathering in Dynamic Rings

    Full text link
    The gathering problem requires a set of mobile agents, arbitrarily positioned at different nodes of a network, to group within finite time at the same location, not fixed in advance. The extensive existing literature on this problem shares the same fundamental assumption: the topological structure does not change during the rendezvous or the gathering; this is true also for those investigations that consider faulty nodes. In other words, they only consider static graphs. In this paper we start the investigation of gathering in dynamic graphs, that is, networks where the topology changes continuously and at unpredictable locations. We study the feasibility of gathering mobile agents, identical and without explicit communication capabilities, in a dynamic ring of anonymous nodes; the class of dynamics we consider is the classic 1-interval-connectivity. We focus on the impact that factors such as chirality (i.e., a common sense of orientation) and cross detection (i.e., the ability to detect, when traversing an edge, whether some agent is traversing it in the other direction) have on the solvability of the problem. We provide a complete characterization of the classes of initial configurations from which the gathering problem is solvable in the presence and in the absence of cross detection and chirality. The feasibility results of the characterization are all constructive: we provide distributed algorithms that allow the agents to gather. In particular, the protocols for gathering with cross detection are time optimal. We also show that cross detection is a powerful computational element. We prove that, without chirality, knowledge of the ring size is strictly more powerful than knowledge of the number of agents; on the other hand, with chirality, knowledge of n can be substituted by knowledge of k, yielding the same classes of feasible initial configurations.
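
    To make the dynamics concrete, here is a small Python toy of the 1-interval-connectivity adversary on a ring (a ring minus one edge is still connected): each round one edge may be missing, agents naively try to move clockwise, and a blocked agent stays put. It illustrates the adversary model only; it is not one of the paper's gathering protocols, and all names and parameters are ours.

```python
import random

def simulate(n=8, k=3, rounds=300, seed=0):
    """Toy 1-interval-connected dynamic ring with a naive clockwise rule."""
    rng = random.Random(seed)
    agents = rng.sample(range(n), k)           # distinct starting nodes
    for r in range(rounds):
        missing = rng.randrange(n)             # edge (missing, missing+1) absent
        # An agent at node `missing` would cross the missing edge: it is blocked.
        agents = [pos if pos == missing else (pos + 1) % n for pos in agents]
        if len(set(agents)) == 1:
            print(f"all agents on node {agents[0]} after round {r + 1}")
            return
    print("not gathered; final nodes:", sorted(set(agents)))

simulate()
```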

    Can One Trust Quantum Simulators?

    Full text link
    Various fundamental phenomena of strongly-correlated quantum systems such as high-$T_c$ superconductivity, the fractional quantum-Hall effect, and quark confinement are still awaiting a universally accepted explanation. The main obstacle is the computational complexity of solving even the most simplified theoretical models that are designed to capture the relevant quantum correlations of the many-body system of interest. In his seminal 1982 paper [Int. J. Theor. Phys. 21, 467], Richard Feynman suggested that such models might be solved by "simulation" with a new type of computer whose constituent parts are effectively governed by a desired quantum many-body dynamics. Measurements on this engineered machine, now known as a "quantum simulator," would reveal some unknown or difficult-to-compute properties of a model of interest. We argue that a useful quantum simulator must satisfy four conditions: relevance, controllability, reliability, and efficiency. We review the current state of the art of digital and analog quantum simulators. Whereas so far the majority of the focus, both theoretically and experimentally, has been on controllability of relevant models, we emphasize here the need for a careful analysis of reliability and efficiency in the presence of imperfections. We discuss how disorder and noise can impact these conditions, and illustrate our concerns with novel numerical simulations of a paradigmatic example: a disordered quantum spin chain governed by the Ising model in a transverse magnetic field. We find that disorder can decrease the reliability of an analog quantum simulator of this model, although large errors in local observables are introduced only for strong levels of disorder. We conclude that the answer to the question "Can we trust quantum simulators?" is... to some extent.
    Comment: 20 pages. Minor changes with respect to version 2 (some additional explanations, added references...).
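
    The closing example invites a toy reproduction. The sketch below (ours, not the paper's numerics) exactly diagonalizes a small disordered transverse-field Ising chain with numpy and prints how a simple local observable, the mean transverse magnetization, drifts as coupling disorder grows; the chain length, disorder model, and seed are arbitrary choices.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-spin chain."""
    out = op if site == 0 else I2
    for i in range(1, n):
        out = np.kron(out, op if i == site else I2)
    return out

def tfim_hamiltonian(n, J, h):
    """H = -sum_i J_i sz_i sz_{i+1} - h sum_i sx_i (open chain)."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J[i] * op_at(sz, i, n) @ op_at(sz, i + 1, n)
    for i in range(n):
        H -= h * op_at(sx, i, n)
    return H

def ground_state_mag(n, J, h):
    """Mean transverse magnetization <sx> in the ground state."""
    w, v = np.linalg.eigh(tfim_hamiltonian(n, J, h))
    gs = v[:, 0]
    return sum(gs @ op_at(sx, i, n) @ gs for i in range(n)) / n

rng = np.random.default_rng(0)
n, h = 8, 1.0
for eps in (0.0, 0.2, 0.8):              # disorder strength in the couplings
    J = 1.0 + eps * rng.uniform(-1, 1, size=n - 1)
    print(f"disorder eps={eps}: <sx> = {ground_state_mag(n, J, h):.4f}")
```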

    Biallelic loss-of-function variants in PLD1 cause congenital right-sided cardiac valve defects and neonatal cardiomyopathy

    Get PDF
    Congenital heart disease is the most common type of birth defect, accounting for one-third of all congenital anomalies. Using whole-exome sequencing of 2718 patients with congenital heart disease and a search in GeneMatcher, we identified 30 patients from 21 unrelated families of different ancestries with biallelic phospholipase D1 (PLD1) variants who presented predominantly with congenital cardiac valve defects. We also associated recessive PLD1 variants with isolated neonatal cardiomyopathy. Furthermore, we established that p.I668F is a founder variant among Ashkenazi Jews (allele frequency of ~2%) and describe the phenotypic spectrum of PLD1-associated congenital heart defects. PLD1 missense variants were overrepresented in regions of the protein critical for catalytic activity, and, correspondingly, we observed a strong reduction in enzymatic activity for most of the mutant proteins in an enzymatic assay. Finally, we demonstrate that PLD1 inhibition decreased endothelial-mesenchymal transition, an established pivotal early step in valvulogenesis. In conclusion, our study provides a more detailed understanding of disease mechanisms and phenotypic expression associated with PLD1 loss of function.
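
    A back-of-envelope consequence of the reported ~2% founder-allele frequency, assuming Hardy-Weinberg equilibrium and random mating (our calculation, not the paper's):

```python
q = 0.02                     # reported founder-allele frequency (~2%)
hom = q ** 2                 # expected frequency of biallelic (homozygous) individuals
het = 2 * q * (1 - q)        # expected carrier (heterozygote) frequency
print(f"expected homozygotes: {hom:.4%}  (~1 in {round(1 / hom):,})")
print(f"expected carriers:    {het:.2%}  (~1 in {round(1 / het)})")
```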

    Non-Malleability against Polynomial Tampering

    Get PDF
    We present the first explicit construction of a non-malleable code that can handle tampering functions that are bounded-degree polynomials. Prior to our work, this was only known for degree-1 polynomials (affine tampering functions), due to Chattopadhyay and Li (STOC 2017). As a direct corollary, we obtain an explicit non-malleable code that is secure against tampering by bounded-size arithmetic circuits. We show applications of our non-malleable code in constructing non-malleable secret sharing schemes that are robust against bounded-degree polynomial tampering. In fact, our result is stronger: we can handle adversaries that can adaptively choose the polynomial tampering function based on initial leakage of a bounded number of shares. Our results are derived from explicit constructions of seedless non-malleable extractors that can handle bounded-degree polynomial tampering functions. Prior to our work, no such result was known even for degree-2 (quadratic) polynomials.
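
    To fix the tampering model in code: a codeword is a tuple of field elements, and the adversary applies a bounded-degree polynomial to the shares. The toy additive encoding below is deliberately *not* non-malleable; it only shows the syntax of the experiment, and it visibly fails already against affine (degree-1) tampering. The prime, the names, and the encoding are illustrative choices of ours.

```python
import random

P = 2**31 - 1                                  # a Mersenne prime; work mod P

def poly_eval(coeffs, x):
    """Evaluate a tampering polynomial (coefficients low-to-high) over GF(P)."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def encode(msg, rng):
    """Toy additive split msg -> (r, msg - r): hiding, but NOT non-malleable."""
    r = rng.randrange(P)
    return (r, (msg - r) % P)

def decode(cw):
    return sum(cw) % P

def tampered_decode(msg, f_coeffs, rng):
    """The non-malleability experiment: encode, tamper each share with f, decode."""
    cw = encode(msg, rng)
    return decode(tuple(poly_eval(f_coeffs, s) for s in cw))

rng = random.Random(7)
msg = 42
# Affine tampering f(x) = x + 1 shifts the decoded message by exactly 2,
# independently of the encoding randomness -- a successful mauling attack,
# which a non-malleable code must rule out (or render simulatable).
outcomes = {tampered_decode(msg, [1, 1], rng) for _ in range(100)}
print("always decodes to msg + 2:", outcomes == {(msg + 2) % P})
```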

    Quantum key distribution based on orthogonal states allows secure quantum bit commitment

    Full text link
    For more than a decade, it was believed that unconditionally secure quantum bit commitment (QBC) is impossible. But building on a previously proposed quantum key distribution scheme using orthogonal states, here we construct a QBC protocol in which the density matrices of the quantum states encoding the commitment do not satisfy a crucial condition on which the no-go proofs of QBC are based. Thus the no-go proofs could be evaded. Our protocol is fault-tolerant and very feasible with currently available technology. It reopens the avenue for other "post-cold-war" multi-party cryptographic protocols, e.g., quantum bit string commitment and quantum strong coin tossing with an arbitrarily small bias. This result also has a strong influence on the Clifton-Bub-Halvorson theorem, which suggests that quantum theory could be characterized in terms of information-theoretic constraints.
    Comment: Published version plus an appendix showing how to defeat the counterfactual attack, more references [76,77,90,118-120] cited, and other minor changes.
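
    For context, the "crucial condition" the abstract alludes to is, in the standard Mayers-Lo-Chau formulation (our summary of the textbook argument, not the paper's notation): perfect concealment forces Bob's reduced states for bit 0 and bit 1 to coincide, and Uhlmann's theorem then hands Alice a local unitary connecting the two purifications.

```latex
% Concealment: Bob's reduced density matrices carry no information about the bit,
\rho_0^{B} \,=\, \mathrm{Tr}_A\,|\psi_0\rangle\!\langle\psi_0|
        \,=\, \mathrm{Tr}_A\,|\psi_1\rangle\!\langle\psi_1| \,=\, \rho_1^{B}.
% Uhlmann's theorem: equal reduced states admit purifications related by a
% unitary acting on the purifying (Alice's) system alone,
(U_A \otimes I_B)\,|\psi_0\rangle \,=\, |\psi_1\rangle,
% so Alice could switch her commitment after the commit phase undetected.
% The protocol above is designed so that this equality fails to hold.
```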

    List Decoding and Pseudorandom Constructions

    No full text

    An Improved Analysis of Mergers

    No full text
    Mergers are functions that transform k (possibly dependent) random sources into a single random source, in a way that ensures that if one of the input sources has min-entropy rate ÎŽ then the output has min-entropy rate close to ÎŽ. Mergers have proven to be a very useful tool in explicit constructions of extractors and condensers, and are also interesting objects in their own right. In this work we present a new analysis of the merger construction of [LRVW03]. Our analysis shows that the min-entropy rate of this merger’s output is actually 0.52 · ÎŽ instead of 0.5 · ÎŽ, where ÎŽ is the min-entropy rate of one of the inputs. To obtain this result we deviate from the usual linear algebra methods that were used by [LRVW03] and introduce a new technique that involves results from additive number theory.
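
    For reference, the quantities in this abstract in standard notation (our restatement of the usual definitions, informal about error terms):

```latex
% Min-entropy of a source X over \{0,1\}^n, and its rate:
H_\infty(X) \;=\; -\log_2 \max_x \Pr[X = x], \qquad
\mathrm{rate}(X) \;=\; H_\infty(X)/n .
% A merger M combines k (possibly dependent) sources, at least one of which
% has min-entropy rate \delta, so that the output rate is close to \delta:
\mathrm{rate}\bigl(M(X_1,\dots,X_k)\bigr) \;\ge\; c\,\delta
  \quad\text{(up to a small error)},
% and the analysis above raises c from 0.5 to 0.52 for the [LRVW03] merger.
```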
    • 

    corecore