
    Lower Bounds for Structuring Unreliable Radio Networks

    In this paper, we study lower bounds for randomized solutions to the maximal independent set (MIS) and connected dominating set (CDS) problems in the dual graph model of radio networks---a generalization of the standard graph-based model that includes unreliable links controlled by an adversary. We begin by proving that a natural geographic constraint on the network topology is required to solve these problems efficiently (i.e., in time polylogarithmic in the network size). We then prove the importance of the assumption that nodes are provided advance knowledge of their reliable neighbors (i.e., neighbors connected by reliable links). Combined, these results answer an open question by proving that the efficient MIS and CDS algorithms from [Censor-Hillel, PODC 2011] are optimal with respect to their dual graph model assumptions. They also provide insight into what properties of an unreliable network enable efficient local computation. Comment: An extended abstract of this work appears in the 2014 proceedings of the International Symposium on Distributed Computing (DISC).
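For context, the MIS problem these lower bounds concern can be illustrated with a simple sequential greedy construction. This is a minimal sketch of the problem definition only; the graph and the fixed node order are illustrative assumptions, not the paper's randomized distributed algorithms or its dual graph model:

```python
# Sequential greedy maximal independent set (MIS): repeatedly pick an
# unexcluded node, add it to the set, and exclude all of its neighbors.
# The result is independent (no two members are adjacent) and maximal
# (every non-member has a neighbor in the set).

def greedy_mis(adj):
    """adj: dict mapping node -> set of neighboring nodes."""
    mis, excluded = set(), set()
    for v in sorted(adj):            # fixed order for determinism
        if v not in excluded:
            mis.add(v)
            excluded.add(v)
            excluded.update(adj[v])  # neighbors can never join
    return mis

# A 5-cycle: nodes 0..4, each adjacent to its two cyclic neighbors.
cycle = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
print(greedy_mis(cycle))  # → {0, 2}
```

The distributed versions studied in the paper must reach such a configuration with only local communication, which is exactly where the unreliable links make efficiency hard.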

    Computing in Additive Networks with Bounded-Information Codes

    This paper studies the theory of the additive wireless network model, in which the received signal is abstracted as an addition of the transmitted signals. Our central observation is that the crucial challenge for computing in this model is not high contention, as assumed previously, but rather guaranteeing a bounded amount of \emph{information} in each neighborhood per round, a property that we show is achievable using a new random coding technique. Technically, we provide efficient algorithms for fundamental distributed tasks in additive networks, such as solving various symmetry breaking problems, approximating network parameters, and solving an \emph{asymmetry revealing} problem such as computing a maximal input. The key method used is a novel random coding technique that allows a node to successfully decode the received information, as long as it does not contain too many distinct values. We then design our algorithms to produce a limited amount of information in each neighborhood in order to leverage our enriched toolbox for computing in additive networks.
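The flavor of random coding over an additive channel can be sketched as follows. This is a generic superimposed-code toy, not the paper's construction: each value is mapped to a random sparse 0/1 codeword, the channel delivers the coordinatewise sum of all transmitted codewords, and the receiver tests every candidate value against that sum. The code length, density, and universe size are illustrative choices:

```python
# Toy additive-channel decoding via random sparse codewords. A value is
# declared present iff every 1-position of its codeword is "hit" in the
# received sum: there are no false negatives, and false positives are
# unlikely when only a few distinct values were transmitted.
import random

L = 256          # codeword length (illustrative)
DENSITY = 0.25   # fraction of 1-bits per codeword (illustrative)

def codeword(value):
    rng = random.Random(1_000_003 * value + 12345)  # deterministic per value
    return [1 if rng.random() < DENSITY else 0 for _ in range(L)]

def transmit(values):
    """Additive channel: coordinatewise sum of the senders' codewords."""
    received = [0] * L
    for v in values:
        received = [r + c for r, c in zip(received, codeword(v))]
    return received

def decode(received, universe):
    present = set()
    for v in universe:
        cw = codeword(v)
        if all(r > 0 for r, c in zip(received, cw) if c == 1):
            present.add(v)
    return present

sent = {3, 7, 11}
print(decode(transmit(sent), range(20)))  # recovers {3, 7, 11}
```

With these parameters the chance of a false positive is negligible; as the abstract notes, decoding breaks down once too many distinct values share the channel, which is why the paper's algorithms bound the information per neighborhood.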

    What Affects Social Attention? Social Presence, Eye Contact and Autistic Traits

    Social understanding is facilitated by effectively attending to other people and the subtle social cues they generate. In order to more fully appreciate the nature of social attention and what drives people to attend to social aspects of the world, one must investigate the factors that influence social attention. This is especially important when attempting to create models of disordered social attention, e.g. a model of social attention in autism. Here we analysed participants' viewing behaviour during one-to-one social interactions with an experimenter. Interactions were conducted either live or via video (social presence manipulation). Participants were asked questions and then gave their answers. Experimenter eye contact was either direct or averted. Additionally, the influence of participants' self-reported autistic traits was investigated. We found that regardless of whether the interaction was conducted live or via a video, participants frequently looked at the experimenter's face, and they did this more often when being asked a question than when answering. Critical differences in social attention between the live and video interactions were also observed. Modifications of experimenter eye contact influenced participants' eye movements in the live interaction only, and increased autistic traits were associated with less looking at the experimenter for video interactions only. We conclude that analysing patterns of eye movements in response to strictly controlled video stimuli and natural real-world stimuli furthers the field's understanding of the factors that influence social attention. © 2013 Freeth et al.

    Compressed Membership for NFA (DFA) with Compressed Labels is in NP (P)

    In this paper, a compressed membership problem for finite automata, both deterministic and non-deterministic, with compressed transition labels is studied. The compression is represented by straight-line programs (SLPs), i.e. context-free grammars generating exactly one string. A novel technique of dealing with SLPs is introduced: the SLPs are recompressed, so that substrings of the input text are encoded in the SLPs labelling the transitions of the NFA (DFA) in the same way as in the SLP representing the input text. To this end, the SLPs are locally decompressed and then recompressed in a uniform way. Furthermore, such recompression induces only small changes in the automaton; in particular, the size of the automaton remains polynomial. Using this technique it is shown that compressed membership for NFAs with compressed labels is in NP, thus confirming the conjecture of Plandowski and Rytter and extending the partial result of Lohrey and Mathissen; since this problem is already known to be NP-hard, we settle its exact computational complexity. Moreover, the same technique applied to compressed membership for DFAs with compressed labels yields that this problem is in P; for this problem, only the trivial PSPACE upper bound was previously known.
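The SLP representation central to this result can be sketched in a few lines. This is a minimal illustration of what an SLP is, with an assumed rule encoding (each nonterminal derives a pair of symbols); it does not implement the paper's recompression technique:

```python
# A straight-line program (SLP) is a context-free grammar deriving
# exactly one string, and it can be exponentially smaller than that
# string. Here a symbol is either a terminal character or a nonterminal
# mapped by `rules` to a pair of symbols.

def expand(symbol, rules):
    """Expand an SLP symbol into the unique string it derives."""
    if symbol not in rules:        # terminal character
        return symbol
    left, right = rules[symbol]
    return expand(left, rules) + expand(right, rules)

# X_k derives 'a' repeated 2**(k+1) times: 10 rules encode 1024 chars.
rules = {"X0": ("a", "a")}
for k in range(1, 10):
    rules[f"X{k}"] = (f"X{k-1}", f"X{k-1}")

print(len(expand("X9", rules)))  # → 1024
```

The exponential gap between grammar size and derived string is exactly why compressed membership is nontrivial: algorithms must work on the rules without ever expanding the text.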

    Artificial Sequences and Complexity Measures

    In this paper we exploit concepts of information theory to address the fundamental problem of identifying and defining the most suitable tools to extract, in an automatic and agnostic way, information from a generic string of characters. In particular, we introduce a class of methods which use data compression techniques in a crucial way in order to define a measure of remoteness and distance between pairs of sequences of characters (e.g. texts) based on their relative information content. We also discuss in detail how specific features of data compression techniques could be used to introduce the notion of the dictionary of a given sequence and of an Artificial Text, and we show how these new tools can be used for information extraction purposes. We point out the versatility and generality of our method, which applies to any kind of corpora of character strings independently of the type of coding behind them. As a case study we consider linguistically motivated problems, and we present results for automatic language recognition, authorship attribution and self-consistent classification. Comment: Revised version, with major changes, of the previous "Data Compression approach to Information Extraction and Classification" by A. Baronchelli and V. Loreto. 15 pages; 5 figures
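The idea of a compression-based distance between texts can be sketched with the normalized compression distance (NCD), using zlib as an off-the-shelf compressor. This is a generic illustration of the relative-information-content idea, not the authors' exact dictionary-based measure:

```python
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(s)
# is the compressed size of s. If y shares structure with x, compressing
# the concatenation costs little more than compressing x alone, so the
# distance is small; unrelated data yields a distance near 1.
import zlib

def csize(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

english = b"the quick brown fox jumps over the lazy dog " * 20
more_english = b"the lazy dog sleeps while the quick fox runs " * 20
digits = b"3141592653589793238462643383279502884197169399" * 20

# Texts sharing vocabulary compress well against each other, so their
# mutual distance is smaller than their distance to unrelated data.
print(ncd(english, more_english) < ncd(english, digits))  # → True
```

Applied pairwise over a corpus, such a distance matrix supports exactly the tasks listed in the abstract: language recognition, authorship attribution, and classification.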

    Double-beta decay of ^{130}Te to the first 0^{+} excited state of ^{130}Xe with CUORICINO

    The CUORICINO experiment was an array of 62 TeO_2 single-crystal bolometers with a total ^{130}Te mass of 11.3 kg. The experiment finished in 2008 after more than 3 years of active operating time. Searches for both 0ν and 2ν double-beta decay to the first excited 0^{+} state in ^{130}Xe were performed by studying different coincidence scenarios. The analysis was based on data representing a total exposure of N(^{130}Te)·t = 9.5×10^{25} y. No evidence for a signal was found. The resulting lower limits on the half-lives are T^{2ν}_{1/2}(^{130}Te → ^{130}Xe^{*}) > 1.3×10^{23} y (90% C.L.) and T^{0ν}_{1/2}(^{130}Te → ^{130}Xe^{*}) > 9.4×10^{23} y (90% C.L.). Comment: 6 pages, 4 figures

    First results from the CERN Axion Solar Telescope (CAST)

    Hypothetical axion-like particles with a two-photon interaction would be produced in the Sun by the Primakoff process. In a laboratory magnetic field (an "axion helioscope") they would be transformed into X-rays with energies of a few keV. Using a decommissioned LHC test magnet, CAST has been running for about 6 months during 2003. The first results from the analysis of these data are presented here. No signal above background was observed, implying an upper limit on the axion-photon coupling of < 1.16×10^{-10} GeV^{-1} at 95% CL for m_a <~ 0.02 eV. This limit is comparable to the limit from stellar energy-loss arguments and considerably more restrictive than that of any previous experiment in this axion mass range. Comment: 4 pages, accepted by PRL. Final version after the referees' comments

    CUORE and beyond: bolometric techniques to explore inverted neutrino mass hierarchy

    The CUORE (Cryogenic Underground Observatory for Rare Events) experiment will search for neutrinoless double beta decay of ^{130}Te. With 741 kg of TeO_2 crystals and an excellent energy resolution of 5 keV (0.2%) at the region of interest, CUORE will be one of the most competitive neutrinoless double beta decay experiments on the horizon. With five years of live time, CUORE's projected neutrinoless double beta decay half-life sensitivity is 1.6×10^{26} y at 1σ (9.5×10^{25} y at the 90% confidence level), which corresponds to an upper limit on the effective Majorana mass in the range 40--100 meV (50--130 meV). Further background rejection with auxiliary light detectors can significantly improve the search sensitivity and competitiveness of bolometric detectors to fully explore the inverted neutrino mass hierarchy with ^{130}Te and possibly other double beta decay candidate nuclei. Comment: Submitted to the Proceedings of TAUP 2013 Conference