17 research outputs found

    Laminar-turbulent transition in Raman fiber lasers: a first passage statistics based analysis

    Get PDF
    Loss of coherence with increasing excitation amplitudes and spatial size modulation is a fundamental problem in designing Raman fiber lasers. While ramping up the laser pump power is known to increase the amplitude of stochastic excitations, such higher energy inputs can also drive a transition from a linearly stable, coherent laminar regime to an undesirable disordered turbulent state. This report presents a new statistical methodology, based on first passage statistics, that classifies lasing regimes in Raman fiber lasers, enabling fast and highly accurate identification of the strong instability behind the laminar-turbulent phase transition through a self-consistently defined order parameter. The results are consistent across a wide range of pump power values, heralding a breakthrough in the non-invasive analysis of fiber laser dynamics.
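    As a rough illustration of a first-passage-based classifier, the sketch below labels an intensity trace "laminar" or "turbulent" from the spread of its first-passage times. The threshold, the coefficient-of-variation criterion, and the synthetic trace are all invented here for illustration; they are not the paper's self-consistently defined order parameter.

```python
import statistics

def first_passage_times(signal, threshold):
    """Intervals between successive upward crossings of `threshold`."""
    crossings = [i for i in range(1, len(signal))
                 if signal[i - 1] < threshold <= signal[i]]
    return [b - a for a, b in zip(crossings, crossings[1:])]

def classify_regime(signal, threshold, cv_cutoff=1.0):
    """Label a trace by the coefficient of variation of its
    first-passage times (a hypothetical criterion)."""
    fpt = first_passage_times(signal, threshold)
    if len(fpt) < 2:
        return "laminar"  # too few excursions to call it turbulent
    cv = statistics.stdev(fpt) / statistics.mean(fpt)
    return "turbulent" if cv > cv_cutoff else "laminar"

# A perfectly periodic (laminar-like) trace: crossings every 10 samples.
laminar = [1.0 if i % 10 < 5 else -1.0 for i in range(200)]
print(classify_regime(laminar, 0.0))  # regular spacing -> "laminar"
```

    The intuition matches the abstract: a coherent laminar regime yields narrowly distributed passage times, while a turbulent regime spreads them out.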

    An efficient record linkage scheme using graphical analysis for identifier error detection

    Get PDF
    Integration of information on individuals (record linkage) is a key problem in healthcare delivery, epidemiology, and "business intelligence" applications. It is now common to be required to link very large numbers of records, often containing various combinations of theoretically unique identifiers, such as NHS numbers, which are both incomplete and error-prone.
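    A minimal sketch of the core idea of using shared identifiers to surface suspect values: group records by an identifier (here a made-up "nhs" field) and flag values whose records disagree on another field. The real scheme analyses the full linkage graph; the records, field names, and conflict rule below are purely illustrative.

```python
from collections import defaultdict

# Hypothetical records; the identifier values are invented.
records = [
    {"rec": "r1", "nhs": "111", "dob": "1980-01-01"},
    {"rec": "r2", "nhs": "111", "dob": "1980-01-01"},
    {"rec": "r3", "nhs": "111", "dob": "1975-06-30"},  # same NHS number, different DOB
    {"rec": "r4", "nhs": "222", "dob": "1990-12-12"},
]

def suspect_identifiers(records, key="nhs", check="dob"):
    """Flag identifier values shared by records whose other fields
    disagree - a toy stand-in for graph-based error detection."""
    groups = defaultdict(set)
    for r in records:
        groups[r[key]].add(r[check])
    return sorted(k for k, vals in groups.items() if len(vals) > 1)

print(suspect_identifiers(records))  # ['111']
```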

    Some results on blow up for semilinear parabolic problems

    Get PDF
    The authors describe the asymptotic behavior of blow-up for the semilinear heat equation u_t = u_xx + f(u) in R×(0,T), with initial data u_0(x) > 0 in R, where f(u) = u^p, p > 1, or f(u) = e^u. A complete description of the types of blow-up patterns and of the corresponding blow-up final-time profiles is given. In the rescaled variables, both are governed by the structure of the Hermite polynomials H_{2m}(y). The H_2-behavior is shown to be stable and generic. The existence of H_4-behavior is proved. A nontrivial blow-up pattern with a blow-up set of nonzero measure is constructed. Similar results for the absorption equation u_t = u_xx − u^p, 0 < p < 1, are discussed.
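    For context, the self-similar rescaling behind the Hermite-polynomial classification can be sketched as follows (standard notation for f(u) = u^p about a blow-up point x_0 and blow-up time T; not necessarily the authors' exact conventions):

```latex
\[
  y = \frac{x - x_0}{\sqrt{T - t}}, \qquad
  s = -\log(T - t), \qquad
  \theta(y, s) = (T - t)^{1/(p-1)}\, u(x, t),
\]
\[
  \theta_s = \theta_{yy} - \frac{y}{2}\,\theta_y
             - \frac{\theta}{p-1} + \theta^{p},
  \qquad \kappa = (p-1)^{-1/(p-1)}.
\]
```

    The linearization of the rescaled equation about the constant solution κ is a Hermite operator, which is why the approach to κ is organized by the eigenfunctions H_{2m}(y).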

    Discovering gene annotations in biomedical text databases

    Get PDF
    Background: Genes and gene products are frequently annotated with Gene Ontology concepts based on the evidence provided in genomics articles. Manually locating and curating information about a genomic entity from the biomedical literature requires vast amounts of human effort. Hence, there is a clear need for automated computational tools that annotate genes and gene products with Gene Ontology concepts by computationally capturing the related knowledge embedded in textual data.
    Results: In this article, we present an automated genomic entity annotation system, GEANN, which extracts information about the characteristics of genes and gene products from PubMed article abstracts and translates the discovered knowledge into Gene Ontology (GO) concepts, a widely used standardized vocabulary of genomic traits. GEANN utilizes textual "extraction patterns" and a semantic matching framework to locate phrases matching a pattern and to produce Gene Ontology annotations for genes and gene products. In our experiments, GEANN reached a precision of 78% at a recall of 61%. On a select set of Gene Ontology concepts, GEANN either outperforms or is comparable to two other automated annotation studies. Use of WordNet for semantic pattern matching improves precision and recall by 24% and 15%, respectively, and the improvement due to semantic pattern matching becomes more apparent as the Gene Ontology terms become more general.
    Conclusion: GEANN is useful for two distinct purposes: (i) automating the annotation of genomic entities with Gene Ontology concepts, and (ii) providing existing annotations with additional "evidence articles" from the literature. The use of textual extraction patterns constructed from the existing annotations achieves high precision. The semantic pattern matching framework provides a more flexible matching scheme than "exact matching", with the advantage of locating approximate pattern occurrences with similar semantics. The relatively low recall of our pattern-based approach may be enhanced either by employing a probabilistic annotation framework based on annotation neighbourhoods in textual data, or by lowering the statistical enrichment threshold for applications that place more value on achieving higher recall.
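    The flavor of pattern-based annotation with semantic matching can be sketched as below: a toy extraction pattern `<gene> <verb> <process>` where the verb may be any synonym of "regulates". The synonym set is a crude stand-in for WordNet expansion, and the gene name and pattern are invented, not GEANN's actual patterns.

```python
import re

# Toy synonym expansion standing in for WordNet-based semantic matching.
SYNONYMS = {"regulates": {"regulates", "controls", "modulates"}}

def match_pattern(sentence, gene):
    """Return a GO-style annotation triple if `<gene> <verb> <process>`
    matches, where <verb> may be any semantic variant of 'regulates'."""
    m = re.search(rf"{gene}\s+(\w+)\s+([\w\s]+)", sentence)
    if m and m.group(1) in SYNONYMS["regulates"]:
        return (gene, "regulates", m.group(2).strip())
    return None

print(match_pattern("geneX modulates cell cycle", "geneX"))
# -> ('geneX', 'regulates', 'cell cycle')
```

    Exact matching would miss the "modulates" phrasing; the semantic expansion recovers it, at the cost of occasional approximate matches.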

    The Hubbard model within the equations of motion approach

    Full text link
    The Hubbard model has a special role in Condensed Matter Theory, as it is considered the simplest Hamiltonian model one can write down in order to describe the anomalous physical properties of some classes of real materials. Unfortunately, this model is not exactly solvable except in certain limits, and one must therefore resort to analytical methods, like the Equations of Motion Approach, or to numerical techniques in order to obtain a description of its relevant features in the whole range of physical parameters (interaction, filling and temperature). In this manuscript, the Composite Operator Method, which exploits the above-mentioned analytical technique, is presented and systematically applied in order to obtain information about the behavior of all relevant properties of the model (local, thermodynamic, single- and two-particle ones), in comparison with many other analytical techniques, the known limits cited above, and numerical simulations. Within this approach, the Hubbard model is also shown to be capable of describing some anomalous behaviors of the cuprate superconductors. Comment: 232 pages, more than 300 figures, more than 500 references.
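    For reference, the model's Hamiltonian in one standard form (hopping amplitude t over nearest-neighbor pairs, on-site repulsion U, chemical potential μ):

```latex
\[
  H = -t \sum_{\langle i,j \rangle, \sigma}
        c^{\dagger}_{i\sigma} c_{j\sigma}
      + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
      - \mu \sum_{i,\sigma} n_{i\sigma},
  \qquad n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}.
\]
```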

    Evaluation of probabilistic queries over imprecise data in constantly-evolving environments

    No full text
    Sensors are often employed to monitor continuously changing entities like locations of moving objects and temperature. The sensor readings are reported to a database system, and are subsequently used to answer queries. Due to continuous changes in these values and limited resources (e.g., network bandwidth and battery power), the database may not be able to keep track of the actual values of the entities. Queries that use these old values may produce incorrect answers. However, if the degree of uncertainty between the actual data value and the database value is limited, one can place more confidence in the answers to the queries. More generally, query answers can be augmented with probabilistic guarantees of the validity of the answers. In this paper, we study probabilistic query evaluation based on uncertain data. A classification of queries is made based upon the nature of the result set. For each class, we develop algorithms for computing probabilistic answers, and provide efficient indexing and numeric solutions. We address the important issue of measuring the quality of the answers to these queries, and provide algorithms for efficiently pulling data from relevant sensors or moving objects in order to improve the quality of the executing queries. Extensive experiments are performed to examine the effectiveness of several data update policies. © 2005 Elsevier B.V. All rights reserved.
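    As a minimal illustration of a probabilistic answer, the sketch below computes the probability that an attribute falls inside a query range, assuming the true value is uniformly distributed in an interval around the last reported reading. The uniform interval is an illustrative uncertainty model only; the paper treats generic models.

```python
def prob_in_range(low, high, reading, bound):
    """Probability that the true value lies in [low, high], given the
    last reported `reading` and a uniform uncertainty interval
    [reading - bound, reading + bound] (an assumed model)."""
    lo, hi = reading - bound, reading + bound
    overlap = max(0.0, min(high, hi) - max(low, lo))
    return overlap / (hi - lo)

# Reading 24 with uncertainty +/-2: true value uniform in [22, 26].
# The query range [20, 25] covers [22, 25], i.e. 3 of the 4 units.
print(prob_in_range(20.0, 25.0, 24.0, 2.0))  # 0.75
```

    Rather than a yes/no answer that may already be stale, the query returns the tuple together with this probability, which is the sense in which answers are "augmented with probabilistic guarantees".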

    Evaluating Probabilistic Queries over Imprecise Data

    No full text
    Many applications employ sensors for monitoring entities such as temperature and wind speed. A centralized database tracks these entities to enable query processing. Due to continuous changes in these values and limited resources (e.g., network bandwidth and battery power), it is often infeasible to store the exact values at all times. A similar situation exists for moving object environments that track the constantly changing locations of objects. In this environment, it is possible for database queries to produce incorrect or invalid results based upon old data. However, if the degree of error (or uncertainty) between the actual value and the database value is controlled, one can place more confidence in the answers to queries. More generally, query answers can be augmented with probabilistic estimates of the validity of the answers. In this paper we study probabilistic query evaluation based upon uncertain data. A classification of queries is made based upon the nature of the result set. For each class, we develop algorithms for computing probabilistic answers. We address the important issue of measuring the quality of the answers to these queries, and provide algorithms for efficiently pulling data from relevant sensors or moving objects in order to improve the quality of the executing queries. Extensive experiments are performed to examine the effectiveness of several data update policies.

    Querying imprecise data in moving object environments

    No full text
    In moving object environments, it is infeasible for the database tracking the movement of objects to store the exact locations of objects at all times. Typically, the location of an object is known with certainty only at the time of the update. The uncertainty in its location increases until the next update. In this environment, it is possible for queries to produce incorrect results based upon old data. However, if the degree of uncertainty is controlled, then the error in the answers to queries can be reduced. More generally, query answers can be augmented with probabilistic estimates of the validity of the answer. In this paper, we study the execution of probabilistic range and nearest-neighbor queries. The imprecision in answers to queries is an inherent property of these applications due to uncertainty in the data, unlike techniques for approximate nearest-neighbor processing that trade accuracy for performance. Algorithms for computing these queries are presented for a generic object movement model, and detailed solutions are discussed for two common models of uncertainty in moving object databases. We study the performance of these queries through extensive simulations.
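    The semantics of a probabilistic nearest-neighbor query can be sketched by Monte-Carlo simulation over 1-D objects whose true locations are uniform in an interval. This is an illustrative uncertainty model and estimation method only; the paper derives exact algorithms for common uncertainty models.

```python
import random

def pnn_probabilities(query, objects, trials=20000, seed=1):
    """Estimate each object's probability of being the nearest
    neighbour of `query`. Each object is (name, lo, hi) with its true
    location assumed uniform in [lo, hi] (an illustrative model)."""
    rng = random.Random(seed)
    wins = {name: 0 for name, _, _ in objects}
    for _ in range(trials):
        best, best_d = None, float("inf")
        for name, lo, hi in objects:
            d = abs(rng.uniform(lo, hi) - query)
            if d < best_d:
                best, best_d = name, d
        wins[best] += 1
    return {name: wins[name] / trials for name in wins}

# 'a' straddles the query point, 'b' is always at least distance 2 away.
probs = pnn_probabilities(0.0, [("a", -1.0, 1.0), ("b", 2.0, 4.0)])
print(probs)
```

    Each object gets a qualification probability rather than a binary in/out answer, mirroring how the paper treats imprecision as inherent to the query semantics rather than a performance trade-off.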

    Diversity in Similarity Joins

    No full text