
    Quantum Measurement Theory in Gravitational-Wave Detectors

    The fast progress in improving the sensitivity of gravitational-wave (GW) detectors that we have all witnessed in recent years has brought the scientific community to the point where the quantum behaviour of such immense measurement devices as kilometre-long interferometers starts to matter. The time when their sensitivity will be limited mainly by the quantum noise of light is around the corner, and finding ways to reduce it will become a necessity. The primary goal of this review is therefore to familiarize a broad spectrum of readers with the theory of quantum measurements in the very form in which it finds application in gravitational-wave detection. We focus on how quantum noise arises in gravitational-wave interferometers and what limitations it imposes on the achievable sensitivity. We start from the very basic concepts and gradually advance to the general linear quantum measurement theory and its application to the calculation of quantum noise in contemporary and planned first- and second-generation interferometric detectors of gravitational radiation. Special attention is paid to the concept of the Standard Quantum Limit and the methods of surmounting it. Comment: 147 pages, 46 figures, 1 table. Published in Living Reviews in Relativity.
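
    As background for the Standard Quantum Limit (SQL) mentioned in this abstract: for a free test mass of mass m, the SQL on the displacement noise spectral density at observation frequency Omega takes the standard textbook form (supplied here as a reminder, not quoted from the review itself):

        S_x^{\mathrm{SQL}}(\Omega) = \frac{2\hbar}{m\,\Omega^{2}}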

    Probabilistic DCS: An RFID reader-to-reader anti-collision protocol

    The wide adoption of radio frequency identification (RFID) for applications requiring a large number of tags and readers makes the reader-to-reader collision problem critical. Various anti-collision protocols have been proposed, but the majority require considerable additional resources and costs. The distributed color system (DCS) is a state-of-the-art protocol based on time division, without noteworthy additional requirements. This paper presents the probabilistic DCS (PDCS) reader-to-reader anti-collision protocol, which employs probabilistic collision resolution. Unlike previous time-division protocols, PDCS allows multichannel transmissions, in accordance with international RFID regulations. A theoretical analysis is provided to characterize the behavior of the additional parameter representing the probability. The proposed protocol maintains the features of DCS while achieving higher efficiency. The theoretical analysis demonstrates that the number of reader-to-reader collisions after a slot change is decreased by over 30%. The simulation analysis validates the theoretical results and shows that PDCS reaches better performance than state-of-the-art reader-to-reader anti-collision protocols.
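
    The abstract does not spell out the PDCS mechanics, so the following is only a hypothetical sketch of the core idea of probabilistic collision resolution: on a reader-to-reader collision, a reader re-selects its time slot ("color") with probability p instead of always switching. The function name, parameters, and the uniform re-selection rule are illustrative assumptions, not the published protocol.

        # Hypothetical sketch of probabilistic slot (color) re-selection; the real
        # PDCS protocol, its parameters, and its multichannel handling may differ.
        import random

        def reselect_color(current_color, num_colors, p):
            """On a reader-to-reader collision, switch slot with probability p."""
            if random.random() < p:
                # choose a new color uniformly among the remaining ones
                return random.choice([c for c in range(num_colors) if c != current_color])
            return current_color  # keep the current slot with probability 1 - p

        # usage: a colliding reader currently on color 3 of 8, with p = 0.4
        new_color = reselect_color(3, 8, 0.4)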

    Precision and mathematical form in first and subsequent mentions of numerical facts and their relation to document structure

    In a corpus study we found that authors vary both mathematical form and precision when expressing numerical quantities. Indeed, within the same document, a quantity is often described vaguely in some places and more accurately in others. Vague descriptions tend to occur early in a document and to be expressed in simpler mathematical forms (e.g., fractions or ratios), whereas more accurate descriptions of the same proportions tend to occur later, often expressed in more complex forms (e.g., decimal percentages). Our results can be used in Natural Language Generation (1) to generate repeat descriptions within the same document, and (2) to generate descriptions of numerical quantities for different audiences according to mathematical ability.
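
    As an illustration of how this corpus finding could be used in a generator (a toy sketch, not the authors' system; the function and its form-selection rule are invented for the example), a proportion can be rendered in a simpler form on first mention and as a precise decimal percentage on later mentions:

        # Toy sketch: vaguer, simpler form on first mention; precise form later.
        from fractions import Fraction

        def describe_proportion(value, first_mention=True):
            if first_mention:
                approx = Fraction(value).limit_denominator(10)   # e.g. 0.52 -> 1/2
                return f"about {approx.numerator} in {approx.denominator}"
            return f"{value * 100:.1f}%"                         # e.g. 52.0%

        print(describe_proportion(0.52, first_mention=True))     # about 1 in 2
        print(describe_proportion(0.52, first_mention=False))    # 52.0%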

    Causal Consistency and Latency Optimality: Friend or Foe?

    Causal consistency is an attractive consistency model for replicated data stores. It is provably the strongest model that tolerates partitions, it avoids the long latencies associated with strong consistency, and, especially when using read-only transactions, it prevents many of the anomalies of weaker consistency models. Recent work has shown that causal consistency allows "latency-optimal" read-only transactions, which are nonblocking, single-version, and single-round in terms of communication. On the surface, this latency optimality is very appealing, as the vast majority of applications are assumed to have read-dominated workloads. In this paper, we show that such "latency-optimal" read-only transactions induce an extra overhead on writes that is so high that performance is actually jeopardized, even in read-dominated workloads. We show this result from a practical and a theoretical angle. First, we present a protocol that implements "almost latency-optimal" ROTs but does not impose on writes any of the overhead of latency-optimal protocols. In this protocol, ROTs are nonblocking and single-version, and they can be configured to use either two or one and a half rounds of client-server communication. We experimentally show that this protocol not only provides better throughput, as expected, but also, surprisingly, better latencies for all but the lowest loads and most read-heavy workloads. Then, we prove that the extra overhead imposed on writes by latency-optimal read-only transactions is inherent, i.e., it is not an artifact of the design we consider and cannot be avoided by any implementation of latency-optimal read-only transactions. We show in particular that this overhead grows linearly with the number of clients.
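
    To make the "rounds of client-server communication" concrete, here is an illustrative, heavily simplified sketch of a client-side read-only transaction that finishes in one round when all partitions answer from the same snapshot and otherwise re-reads at a common snapshot in a second round. It is not the protocol from the paper; read_latest and read_at are hypothetical partition RPCs, and the single-timestamp consistency check is a deliberate simplification.

        # Illustrative only: one round if all replies share a snapshot timestamp,
        # otherwise a second round of reads at a common snapshot.
        def read_only_transaction(partitions, keys):
            # Round 1: each partition returns (value, snapshot_ts) for its key.
            first = {k: partitions[k].read_latest(k) for k in keys}
            timestamps = {ts for _, ts in first.values()}
            if len(timestamps) == 1:
                # All replies come from the same snapshot: one round suffices.
                return {k: v for k, (v, _) in first.items()}
            # Round 2: re-read every key at the highest snapshot seen so far.
            snapshot = max(timestamps)
            return {k: partitions[k].read_at(k, snapshot) for k in keys}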

    Feasibility Study of RFID Technology for Construction Load Tracking

    INE/AUTC 10.0

    Translationese and post-editese : how comparable is comparable quality?

    Whereas post-edited texts have been shown to be either of comparable quality to human translations or better, one study shows that people still seem to prefer human-translated texts. The idea of texts being inherently different despite being of high quality is not new. Translated texts, for example, are also different from original texts, a phenomenon referred to as ‘Translationese’. Research into Translationese has shown that, whereas humans cannot distinguish between translated and original text, computers have been trained to detect Translationese successfully. It remains to be seen whether the same can be done for what we call Post-editese. We first establish whether humans are capable of distinguishing post-edited texts from human translations, and then establish whether it is possible to build a supervised machine-learning model that can distinguish between translated and post-edited text.
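
    As a sketch of what "a supervised machine-learning model that can distinguish between translated and post-edited text" could look like in practice (illustrative only; the authors' features, learner, and data are not described in the abstract), a character n-gram classifier built with scikit-learn:

        # Illustrative baseline, not the authors' model. Toy data stands in for a
        # real corpus of human-translated (label 0) and post-edited (label 1) texts.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = ["a human-translated toy sentence", "a post-edited toy sentence"]
        labels = [0, 1]

        model = make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
            LogisticRegression(max_iter=1000),
        )
        model.fit(texts, labels)
        print(model.predict(["another toy sentence to classify"]))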