
    Vertex Ramsey problems in the hypercube

    If we 2-color the vertices of a large hypercube what monochromatic substructures are we guaranteed to find? Call a set S of vertices from Q_d, the d-dimensional hypercube, Ramsey if any 2-coloring of the vertices of Q_n, for n sufficiently large, contains a monochromatic copy of S. Ramsey's theorem tells us that for any r \geq 1 every 2-coloring of a sufficiently large r-uniform hypergraph will contain a large monochromatic clique (a complete subhypergraph): hence any set of vertices from Q_d that all have the same weight is Ramsey. A natural question to ask is: which sets S corresponding to unions of cliques of different weights from Q_d are Ramsey? The answer to this question depends on the number of cliques involved. In particular we determine which unions of 2 or 3 cliques are Ramsey and then show, using a probabilistic argument, that any non-trivial union of 39 or more cliques of different weights cannot be Ramsey. A key tool is a lemma which reduces questions concerning monochromatic configurations in the hypercube to questions about monochromatic translates of sets of integers. Comment: 26 pages, 3 figures.
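
    For reference, the notion of a Ramsey vertex set used in this abstract can be written out symbolically as follows (a restatement of the definition given above, not a formula quoted from the paper):

```latex
% S \subseteq V(Q_d) is Ramsey: every 2-coloring of a large enough hypercube
% contains a monochromatic copy of S.
\[
  S \text{ is Ramsey}
  \iff
  \exists\, n_0 \;\; \forall n \ge n_0 \;\;
  \forall\, c : V(Q_n) \to \{0,1\} \;\;
  \exists\, \widetilde{S} \subseteq V(Q_n)
  \text{ a copy of } S
  \text{ with } c \text{ constant on } \widetilde{S}.
\]
```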

    Improving Natural Language Interaction with Robots Using Advice

    Over the last few years, there has been growing interest in learning models for physically grounded language understanding tasks, such as the popular blocks world domain. These works typically view this problem as a single-step process, in which a human operator gives an instruction and an automated agent is evaluated on its ability to execute it. In this paper we take the first step towards increasing the bandwidth of this interaction, and suggest a protocol for including advice, high-level observations about the task, which can help constrain the agent's prediction. We evaluate our approach on the blocks world task, and show that even simple advice can lead to significant performance improvements. To help reduce the effort involved in supplying the advice, we also explore model self-generated advice, which can still improve results. Comment: Accepted as a short paper at NAACL 2019 (8 pages).
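
    As a rough illustration of the idea of advice constraining an agent's prediction, the sketch below treats advice as a predicate that filters candidate predictions before the model's score is applied (all names and the toy scoring model are illustrative placeholders, not the paper's system):

```python
# Toy sketch: advice as a hard constraint over candidate target positions in a
# blocks-world-style task. Everything here is an illustrative placeholder.
from typing import Callable, List, Tuple

Position = Tuple[int, int]

def predict_with_advice(
    candidates: List[Position],
    model_score: Callable[[Position], float],
    advice: Callable[[Position], bool],
) -> Position:
    """Return the highest-scoring candidate that is consistent with the advice."""
    allowed = [p for p in candidates if advice(p)]
    if not allowed:           # if the advice rules out everything, fall back to the model alone
        allowed = candidates
    return max(allowed, key=model_score)

# Example: the model prefers (5, 5), but the advice "the target is in the
# left half of the board" (x < 4) redirects the prediction to (2, 3).
candidates = [(1, 2), (5, 5), (2, 3)]
scores = {(1, 2): 0.2, (5, 5): 0.7, (2, 3): 0.4}
print(predict_with_advice(candidates, lambda p: scores[p], lambda p: p[0] < 4))  # -> (2, 3)
```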

    Applications of a tunable dye laser for atomic fluorescence spectrometry

    Imperial Users only.

    Ask, and shall you receive?: Understanding Desire Fulfillment in Natural Language Text

    The ability to comprehend wishes or desires and their fulfillment is important to Natural Language Understanding. This paper introduces the task of identifying whether a desire expressed by a subject in a given short piece of text was fulfilled. We propose various unstructured and structured models that capture fulfillment cues such as the subject's emotional state and actions. Our experiments with two different datasets demonstrate the importance of understanding the narrative and discourse structure to address this task.

    Adaptively Secure Coin-Flipping, Revisited

    The full-information model was introduced by Ben-Or and Linial in 1985 to study collective coin-flipping: the problem of generating a common bounded-bias bit in a network of $n$ players with $t = t(n)$ faults. They showed that the majority protocol can tolerate $t = O(\sqrt{n})$ adaptive corruptions, and conjectured that this is optimal in the adaptive setting. Lichtenstein, Linial, and Saks proved that the conjecture holds for protocols in which each player sends a single bit. Their result has been the main progress on the conjecture in the last 30 years. In this work we revisit this question and ask: what about protocols involving longer messages? Can increased communication allow for a larger fraction of faulty players? We introduce a model of strong adaptive corruptions, where in each round the adversary sees all messages sent by honest parties and, based on the message content, decides whether to corrupt a party (and intercept his message) or not. We prove that any one-round coin-flipping protocol, regardless of message length, is secure against at most $\tilde{O}(\sqrt{n})$ strong adaptive corruptions. Thus, increased message length does not help in this setting. We then shed light on the connection between adaptive and strongly adaptive adversaries by proving that for any symmetric one-round coin-flipping protocol secure against $t$ adaptive corruptions, there is a symmetric one-round coin-flipping protocol secure against $t$ strongly adaptive corruptions. Returning to the standard adaptive model, we can now prove that any symmetric one-round protocol with arbitrarily long messages can tolerate at most $\tilde{O}(\sqrt{n})$ adaptive corruptions. At the heart of our results lies a novel use of the Minimax Theorem and a new technique for converting any one-round secure protocol into a protocol with messages of $\mathrm{polylog}(n)$ bits. This technique may be of independent interest.
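
    A minimal sketch of the majority protocol discussed above, together with a crude adaptive adversary that corrupts up to t parties after seeing the honest bits (an illustration of why the $\sqrt{n}$ threshold is natural, not the paper's formal model):

```python
import random

def majority_coin_flip(n, t, seed=None):
    """One-round collective coin flip: every player broadcasts a uniformly random
    bit and the output is the majority bit. A simple adaptive adversary, after
    seeing the honest bits, flips up to t of them toward its preferred outcome (1)."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    corrupted = 0
    for i in range(n):
        if corrupted == t:
            break
        if bits[i] == 0:          # corrupt a player whose bit hurts the adversary
            bits[i] = 1
            corrupted += 1
    return int(2 * sum(bits) > n)

# With t around sqrt(n) the non-preferred outcome still occurs with probability
# bounded away from 0 (independently of n); with t much larger than sqrt(n)
# the adversary fixes the coin almost surely.
n, runs = 10_000, 200
for t in (0, int(n ** 0.5), 10 * int(n ** 0.5)):
    forced = sum(majority_coin_flip(n, t, seed=r) for r in range(runs))
    print(f"t={t:5d}: fraction of outcomes equal to 1 = {forced / runs:.2f}")
```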

    Pseudo-Deterministic Streaming

    A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, ℓ_2 approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm A, there exists a stream x so that running A twice on x (using different randomness) would with high probability result in two different entries as the output. In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for ℓ_2 norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix. We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a few options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs where there are many valid outputs, all with a significant probability of being output.
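
    As a hedged illustration of the phenomenon described above (not the paper's lower-bound argument), a natural low-space routine for reporting a nonzero coordinate of a turnstile stream, such as tracking a few randomly sampled coordinates, returns different answers on different executions over the same input:

```python
import random

def find_nonzero_entry(stream, dim, sample_size=4, rng=None):
    """Toy turnstile-stream routine: track a few randomly chosen coordinates and
    report one that ends up nonzero. It uses little memory, but the answer depends
    on the random choice of coordinates, so it is not pseudo-deterministic."""
    rng = rng or random.Random()
    tracked = rng.sample(range(dim), sample_size)
    counts = {i: 0 for i in tracked}
    for index, delta in stream:          # turnstile updates: (coordinate, +/- delta)
        if index in counts:
            counts[index] += delta
    for i in tracked:
        if counts[i] != 0:
            return i
    return None                          # the sample may miss every nonzero coordinate

# Same stream, two executions, typically two different answers.
stream = [(i, 1) for i in range(100)]    # every coordinate of the vector becomes nonzero
print(find_nonzero_entry(stream, 100, rng=random.Random(1)))
print(find_nonzero_entry(stream, 100, rng=random.Random(2)))
```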

    Students Have Their Own Minds. A Response to “Beyond the Catch-22 of School-Based Social Action Programs: Toward a More Pragmatic Approach for Dealing with Power”

    In response to the authors’ work on finding a more pragmatic approach to dealing with power, this commentary calls into question the possibility of a preestablished agenda by the researchers, who struggled to engage high school students. There might have been a case of overly ambitious expectations at work; also, the authors confess to being in the school only once a week and that their students were themselves struggling to find their place in a new charter school with an emphasis on social action. This response challenges the authors to reexamine their wish to engage students with institutional power by suggesting that they consider their own positions of power inside the school and classroom. Lastly, the response posits that rather than focusing on the limitations of service-learning and/or public achievement, which may make them appear less desirable models for social action, we should consider such approaches as providing the very thing the authors sought (small wins), and that educators should prepare their students for more substantial engagements with power.

    Near-Linear Time Insertion-Deletion Codes and (1+ε)-Approximating Edit Distance via Indexing

    We introduce fast-decodable indexing schemes for edit distance which can be used to speed up edit distance computations to near-linear time if one of the strings is indexed by an indexing string $I$. In particular, for every length $n$ and every $\varepsilon > 0$, one can in near-linear time construct a string $I \in \Sigma'^n$ with $|\Sigma'| = O_{\varepsilon}(1)$, such that indexing any string $S \in \Sigma^n$, symbol-by-symbol, with $I$ results in a string $S' \in \Sigma''^n$, where $\Sigma'' = \Sigma \times \Sigma'$, for which edit distance computations are easy, i.e., one can compute a $(1+\varepsilon)$-approximation of the edit distance between $S'$ and any other string in $O(n\,\mathrm{poly}(\log n))$ time. Our indexing schemes can be used to improve the decoding complexity of state-of-the-art error correcting codes for insertions and deletions. In particular, they lead to near-linear time decoding algorithms for the insertion-deletion codes of [Haeupler, Shahrasbi; STOC '17] and faster decoding algorithms for the list-decodable insertion-deletion codes of [Haeupler, Shahrasbi, Sudan; ICALP '18]. Interestingly, the latter codes are a crucial ingredient in the construction of fast-decodable indexing schemes.
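
    The symbol-by-symbol indexing operation itself is simple to state; the sketch below forms S'[i] = (S[i], I[i]) over the product alphabet (the construction of a suitable indexing string I and the near-linear-time (1+ε)-approximation algorithm are the substance of the paper and are not reproduced here):

```python
def index_string(s, indexing_string):
    """Symbol-by-symbol indexing: S'[i] = (S[i], I[i]) over the product alphabet
    Sigma'' = Sigma x Sigma'. Requires len(indexing_string) >= len(s)."""
    assert len(indexing_string) >= len(s)
    return [(a, b) for a, b in zip(s, indexing_string)]

# Illustrative only: any string over a small alphabet can serve syntactically as I,
# but the paper constructs I with specific properties that make (1+eps)-approximate
# edit distance on the indexed strings computable in near-linear time.
S = "banana"
I = "012012"
print(index_string(S, I))   # [('b', '0'), ('a', '1'), ('n', '2'), ...]
```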