
    Better text compression from fewer lexical n-grams

    Word-based context models for text compression have the capacity to outperform simpler character-based models, but are generally unattractive because of inherent problems with exponential model growth and the corresponding data sparseness. These ill effects can be mitigated in an adaptive lossless compression scheme by modelling syntactic and semantic lexical dependencies independently.
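    A minimal sketch of the core trade-off, not the scheme proposed in the paper: the snippet below estimates the adaptive (sequentially updated) code length of a tokenized text under word-unigram and word-bigram models with Laplace smoothing, on a toy corpus. The sharper bigram context wins once enough text has been seen, but the number of distinct contexts grows with the vocabulary, which is the model-growth and sparseness problem the abstract describes.

        import math
        from collections import defaultdict

        def adaptive_cost_bits(tokens, context_of):
            """Code length if each token is coded with the currently
            observed Laplace-smoothed distribution for its context."""
            vocab = set(tokens)                    # assume vocabulary known up front
            counts = defaultdict(lambda: defaultdict(int))
            totals = defaultdict(int)
            bits = 0.0
            for i, tok in enumerate(tokens):
                ctx = context_of(tokens, i)
                p = (counts[ctx][tok] + 1) / (totals[ctx] + len(vocab))
                bits += -math.log2(p)
                counts[ctx][tok] += 1              # adapt the model after coding
                totals[ctx] += 1
            return bits

        tokens = ("the cat sat on the mat and the dog sat on the rug " * 50).split()
        uni = adaptive_cost_bits(tokens, lambda t, i: "")                    # no context
        bi = adaptive_cost_bits(tokens, lambda t, i: t[i - 1] if i else "^")
        print(f"unigram: {uni / len(tokens):.2f} bits/word")
        print(f"bigram:  {bi / len(tokens):.2f} bits/word")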

    The relationship between measurement uncertainty and reporting interval

    Background: Measurement uncertainty (MU) estimates can be used by clinicians in result interpretation for diagnosis and monitoring, and by laboratories in assessing assay fitness for use and in analytical troubleshooting. However, MU is not routinely used to assess the appropriateness of the analyte reporting interval. We describe the relationship between MU and the analyte reporting interval. Methods and results: The reporting interval R is the smallest unit of measurement chosen for clinical reporting. When choosing the appropriate value for R, it is necessary that the reference change values and expanded MU values can be meaningfully calculated. Expanded MU provides the tighter criterion for defining an upper limit for R. This limit can be determined as R ≤ k·SD_a/1.9, where SD_a is the analytical standard deviation and k is the coverage factor (usually 2). Conclusion: Using MU estimates to determine the reporting interval for quantitative laboratory results ensures that reporting practices match local analytical performance and recognizes the inherent error of the measurement process.
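    A worked example of the abstract's criterion R ≤ k·SD_a/1.9, with illustrative numbers that are not from the paper (serum sodium, an assumed SD_a of 1.0 mmol/L, coverage factor k = 2):

        # Upper limit on the reporting interval R given the analytical SD.
        k = 2.0       # coverage factor for expanded MU (~95% level)
        sd_a = 1.0    # analytical standard deviation, mmol/L (assumed)

        r_max = k * sd_a / 1.9
        print(f"reporting interval upper limit: {r_max:.2f} mmol/L")
        # -> about 1.05 mmol/L: reporting sodium to 1 mmol/L is supported,
        #    while a 0.1 mmol/L interval would imply precision the assay lacks.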

    Pattern formation of quantum jumps with Rydberg atoms

    We study the nonequilibrium dynamics of quantum jumps in a one-dimensional chain of atoms. Each atom is driven on a strong transition to a short-lived state and on a weak transition to a metastable state. We choose the metastable state to be a Rydberg state so that when an atom jumps to the Rydberg state, it inhibits or enhances jumps in the neighboring atoms. This leads to rich spatiotemporal dynamics that are visible in the fluorescence of the strong transition. It also allows one to dissipatively prepare Rydberg crystals.
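    A toy discrete-time Monte Carlo of this kind of dynamics, as a sketch rather than the paper's actual model: each atom is "bright" (cycling on the strong transition) or "dark" (shelved in the Rydberg state), and a Rydberg neighbour rescales an atom's jump-up probability by a facilitation factor f (f > 1 enhances jumps, f < 1 inhibits them). All parameter values are illustrative.

        import random

        N, STEPS = 40, 2001
        P_UP, P_DOWN, F = 0.001, 0.02, 30.0   # illustrative rates, not from the paper
        state = [0] * N                       # 0 = bright, 1 = dark (Rydberg)

        for t in range(STEPS):
            new = state[:]
            for i in range(N):
                n_ryd = (state[i - 1] if i > 0 else 0) + (state[i + 1] if i < N - 1 else 0)
                if state[i] == 0:
                    p = min(1.0, P_UP * F ** n_ryd)   # facilitated jump to Rydberg
                    if random.random() < p:
                        new[i] = 1
                elif random.random() < P_DOWN:        # jump back to the bright state
                    new[i] = 0
            state = new
            if t % 200 == 0:                          # fluorescence snapshot: dark atoms as '#'
                print("".join("#" if s else "." for s in state))

    With strong facilitation, dark regions nucleate and spread along the chain, the kind of spatiotemporal pattern visible in the fluorescence.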

    Hidden Variables or Positive Probabilities?

    Despite claims that Bell's inequalities are based on the Einstein locality condition, or equivalent, all derivations make an identical mathematical assumption: that local hidden-variable theories produce a set of positive-definite probabilities for detecting a particle with a given spin orientation. The standard argument is that because quantum mechanics assumes that particles are emitted in a superposition of states, the theory cannot produce such a set of probabilities. We examine a paper by Eberhard, and several similar papers, which claim to show that a generalized Bell inequality, the CHSH inequality, can be derived solely on the basis of the locality condition, without recourse to hidden variables. We point out that these authors nonetheless assume a set of positive-definite probabilities, which supports the claim that neither hidden variables nor "locality" is at issue here; positive-definite probabilities are. We demonstrate that quantum mechanics does predict a set of probabilities that violate the CHSH inequality; however, these probabilities are not positive-definite. Nevertheless, they are physically meaningful in that they give the usual quantum-mechanical predictions in physical situations. We also discuss in what sense our results are related to the Wigner distribution.
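    The quantum-mechanical violation mentioned above can be checked directly (a standard textbook calculation, not this paper's derivation): for the singlet state the correlation is E(a, b) = -cos(a - b), and at the usual analyzer settings the CHSH combination reaches Tsirelson's bound 2√2 > 2.

        import math

        E = lambda a, b: -math.cos(a - b)      # singlet-state correlation

        a, a2 = 0.0, math.pi / 2               # Alice's two analyzer settings
        b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two analyzer settings

        S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
        print(f"|S| = {abs(S):.4f}  (local bound 2, Tsirelson bound {2 * math.sqrt(2):.4f})")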

    Automatically linking MEDLINE abstracts to the Gene Ontology

    Much has been written recently about the need for effective tools and methods for mining the wealth of information present in the biomedical literature, the activity of conceptual biology (Mack and Hehenberger, 2002; Blagosklonny and Pardee, 2001; Rindflesch et al., 2002). Keyword search engines operating over large electronic document stores (such as PubMed and the PNAS) offer some help, but there are fundamental obstacles that limit their effectiveness. First, there is no general consensus among scientists about the vernacular to be used when describing research about genes, proteins, drugs, diseases, tissues and therapies, making it very difficult to formulate a search query that retrieves the right documents. Second, finding relevant articles is just one aspect of the investigative process. A more fundamental goal is to establish links and relationships between facts existing in the published literature in order to "validate current hypotheses or to generate new ones" (Barnes and Robertson, 2002), something keyword search engines do little to support.
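    A naive baseline for the linking task named in the title, included only to make the vocabulary obstacle concrete (the term list is illustrative, and this is not the paper's method): tagging an abstract with every Gene Ontology term whose name occurs verbatim in its text misses any paraphrase, which is exactly the lack-of-consensus problem described above.

        go_terms = {
            "GO:0006915": "apoptotic process",
            "GO:0006281": "DNA repair",
            "GO:0007165": "signal transduction",
        }

        def link_abstract(text):
            text = text.lower()
            return [go_id for go_id, name in go_terms.items() if name in text]

        abstract = "BRCA1 participates in DNA repair but also in programmed cell death."
        print(link_abstract(abstract))   # -> ['GO:0006281']; "programmed cell death"
                                         #    is missed although it means apoptosis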

    Multi-argument classification for semantic role labeling

    This paper describes a Multi-Argument Classification (MAC) approach to Semantic Role Labeling. The goal is to exploit dependencies between semantic roles by simultaneously classifying all arguments as a pattern. Argument identification, as a pre-processing stage, is carried out using the improved Predicate-Argument Recognition Algorithm (PARA) developed by Lin and Smith (2006). Results using standard evaluation metrics show that multi-argument classification, achieving an F₁ score of 76.60 on WSJ section 23, outperforms existing systems that use a single parse tree on the CoNLL 2005 shared task data. This paper also describes ways to significantly increase the speed of multi-argument classification, making it suitable for real-time language processing tasks that require semantic role labeling.
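    A sketch of the idea behind classifying all arguments as a pattern (the roles, scores and constraint below are hypothetical, not the paper's model): an independent per-argument argmax can emit an invalid pattern such as two A0s for one predicate, while a joint classifier scores whole patterns and keeps only those that respect dependencies between roles.

        from itertools import product

        ROLES = ["A0", "A1", "AM-TMP"]
        # per-argument log-scores from some base classifier (made-up numbers)
        scores = [
            {"A0": -0.2, "A1": -1.8, "AM-TMP": -2.5},   # argument 1
            {"A0": -0.4, "A1": -0.5, "AM-TMP": -3.0},   # argument 2
        ]

        def best_pattern(scores):
            best, best_s = None, float("-inf")
            for pattern in product(ROLES, repeat=len(scores)):
                core = [r for r in pattern if not r.startswith("AM")]
                if len(core) != len(set(core)):          # core roles must be unique
                    continue
                s = sum(sc[r] for sc, r in zip(scores, pattern))
                if s > best_s:
                    best, best_s = pattern, s
            return best

        print(best_pattern(scores))   # -> ('A0', 'A1'); independent argmax
                                      #    would pick the invalid ('A0', 'A0')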

    The Badhwar-O'Neill 2020 Model

    The Badhwar-O'Neill (BON) model has been used for some time to describe the galactic cosmic ray (GCR) environment encountered in deep space by astronauts and sensitive electronics. The most recent version of the model, BON2014, was calibrated to available measurements to reduce model errors for the particles and energies of significance to astronaut exposure. Although subsequent studies showed the model to be reasonably accurate for such applications, modifications to the sunspot number (SSN) classification system and a large number of new high-precision measurements suggested the need to develop an improved and more capable model. In this work, the BON2020 model is described. The new model relies on daily integral flux from the Advanced Composition Explorer Cosmic Ray Isotope Spectrometer (ACE/CRIS) to describe solar activity. For time periods not covered by ACE/CRIS, the updated international SSN database is used. Parameters in the new model are calibrated to available data, including the new high-precision measurements from the Alpha Magnetic Spectrometer (AMS-02) and the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA). BON2020 is found to be a significant improvement over BON2014: the systematic errors associated with BON2014 have been removed, the average relative error of BON2020 compared to all available measurements is <1%, and BON2020 falls within 15% of a large fraction of the available measurements (26,269 of 27,646, or 95%).
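    A sketch of the validation statistics quoted above, run on synthetic data rather than the actual BON2020 comparison set: the two figures of merit are the average (signed) relative error and the fraction of points falling within 15% of the measurement.

        import numpy as np

        rng = np.random.default_rng(0)
        measured = rng.uniform(1.0, 10.0, 1000)             # synthetic "measurements"
        modeled = measured * rng.normal(1.0, 0.07, 1000)    # synthetic "model", ~7% scatter

        rel_err = (modeled - measured) / measured
        print(f"average relative error: {rel_err.mean():+.2%}")
        print(f"fraction within 15%:    {(np.abs(rel_err) <= 0.15).mean():.1%}")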