
    Feature-based and Model-based Semantics for English, French and German Verb Phrases

    This paper considers the relative merits of using features and formal event models to characterise the semantics of English, French and German verb phrases, and considers the application of such semantics in machine translation. The feature-based approach represents the semantics in terms of feature systems, which have been widely used in computational linguistics for representing complex syntactic structures. The paper shows how a simple intuitive semantics of verb phrases may be encoded as a feature system, and how this can be used to support modular construction of automatic translation systems through feature look-up tables. This is illustrated by automated translation of English into either French or German. The paper then formalises the feature-based approach via a model-based Montague semantics, which extends previous work on the semantics of English verb phrases. In so doing, the repercussions both of and for this framework in conducting a contrastive semantic study are considered. The model-based approach also promises to provide support for a more sophisticated approach to translation through logical proof; the paper indicates the further work required for the fulfilment of this promise.
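
    As a rough illustration of the look-up-table idea (the feature bundle, table entries and verb forms below are invented for this sketch, not taken from the paper), translating a verb phrase can reduce to indexing a per-language table by the features assigned to the source verb phrase:

```python
# A hypothetical feature bundle and look-up tables (invented for illustration;
# real feature systems are far richer): the source analysis yields a bundle of
# semantic features, and translation of the verb phrase is a table look-up.
FRENCH = {
    ("write", "present", "simple"):   "écrit",
    ("write", "past", "perfective"):  "a écrit",
    ("write", "past", "progressive"): "écrivait",
}
GERMAN = {
    ("write", "present", "simple"):   "schreibt",
    ("write", "past", "perfective"):  "hat geschrieben",
    ("write", "past", "progressive"): "schrieb",
}

def translate_vp(features, table):
    """Map a (verb, tense, aspect) feature bundle to a target verb-phrase form."""
    return table[features]

bundle = ("write", "past", "perfective")   # e.g. from analysing "has written"
print(translate_vp(bundle, FRENCH))        # a écrit
print(translate_vp(bundle, GERMAN))        # hat geschrieben
```

    Because the source analysis and the tables are independent, adding a new target language only requires a new table, which is the modularity the abstract points to.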

    First-order Goedel logics

    First-order Goedel logics are a family of infinite-valued logics where the sets of truth values V are closed subsets of [0, 1] containing both 0 and 1. Different such sets V in general determine different Goedel logics G_V (sets of those formulas which evaluate to 1 in every interpretation into V). It is shown that G_V is axiomatizable iff V is finite, V is uncountable with 0 isolated in V, or every neighborhood of 0 in V is uncountable. Complete axiomatizations for each of these cases are given. The r.e. prenex, negation-free, and existential fragments of all first-order Goedel logics are also characterized. Comment: 37 pages
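
    For reference, the sketch below spells out the standard Goedel truth functions on [0, 1] that such interpretations use (min for conjunction, max for disjunction, the Goedel residuum for implication); quantifiers are evaluated here over finite domains only, whereas the axiomatizability results above turn on infima over infinite domains and the topology of V:

```python
# Standard Goedel truth functions on [0, 1]; quantifiers shown for finite domains only.
def g_and(a, b):
    return min(a, b)

def g_or(a, b):
    return max(a, b)

def g_implies(a, b):
    # Goedel implication (residuum of min): 1 if a <= b, else b.
    return 1.0 if a <= b else b

def g_not(a):
    # Negation is definable as a -> 0: maps 0 to 1 and everything else to 0.
    return g_implies(a, 0.0)

def g_forall(values):
    # Over a finite domain the infimum is simply a minimum.
    return min(values, default=1.0)

def g_exists(values):
    return max(values, default=0.0)

# Example: an implication can take an intermediate value, unlike in classical logic.
print(g_implies(0.7, 0.4), g_not(0.4), g_forall([0.9, 0.5, 1.0]))  # 0.4 0.0 0.5
```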

    Cooperative answers in database systems

    A major concern of researchers who seek to improve human-computer communication involves how to move beyond literal interpretations of queries to a level of responsiveness that takes the user's misconceptions, expectations, desires, and interests into consideration. At Maryland, we are investigating how to better meet a user's needs within the framework of the cooperative answering system of Gal and Minker. We have been exploring how to use semantic information about the database to formulate coherent and informative answers. The work has two main thrusts: (1) the construction of a logic formula which embodies the content of a cooperative answer; and (2) the presentation of the logic formula to the user in a natural language form. The information that is available in a deductive database system for building cooperative answers includes integrity constraints, user constraints, the search tree for answers to the query, and false presuppositions that are present in the query. The basic cooperative answering theory of Gal and Minker forms the foundation of a cooperative answering system that integrates the new construction and presentation methods. This paper provides an overview of the cooperative answering strategies used in the CARMIN cooperative answering system, an ongoing research effort at Maryland. Section 2 gives some useful background definitions. Section 3 describes techniques for collecting cooperative logical formulae. Section 4 discusses which natural language generation techniques are useful for presenting the logic formula in natural language text. Section 5 presents a diagram of the system.
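
    A minimal sketch of the behaviour described above (the relation, constants and wording are invented for illustration; this is not the CARMIN implementation): when the literal answer is empty, a presupposition of the query is checked against what is known about the database, and the failure is explained rather than silently returning the empty set:

```python
# Illustrative only (not the CARMIN system): a tiny deductive-database flavour of
# cooperative answering, where an empty answer triggers a check for a false
# presupposition before the reply is rendered in natural-language form.
enrolled = {("ann", "cs420"), ("bob", "cs420"), ("ann", "cs131")}
courses = {"cs420", "cs131", "cs330"}          # hypothetical integrity knowledge

def who_is_enrolled(course):
    answers = sorted(s for (s, c) in enrolled if c == course)
    if answers:
        return ", ".join(answers)
    # Cooperative behaviour: explain the empty answer instead of just returning it.
    if course not in courses:
        return f"Nobody: '{course}' is not an offered course (the query presupposes it is)."
    return f"Nobody is enrolled in {course}, although the course does exist."

print(who_is_enrolled("cs420"))   # ann, bob
print(who_is_enrolled("cs999"))   # flags the false presupposition
print(who_is_enrolled("cs330"))   # empty answer, but the presupposition holds
```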

    An interval logic for higher-level temporal reasoning

    Prior work explored temporal logics, based on classical modal logics, as a framework for specifying and reasoning about concurrent programs, distributed systems, and communications protocols, and reported on efforts using temporal reasoning primitives to express very high-level abstract requirements that a program or system is to satisfy. Based on experience with those primitives, this report describes an Interval Logic that is more suitable for expressing such higher-level temporal properties. The report provides a formal semantics for the Interval Logic, and several examples of its use. A description of decision procedures for the logic is also included.
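
    The following toy evaluator only conveys the flavour of interval-bounded properties over a finite trace (it is not the report's Interval Logic or its formal semantics; the trace, events and property are invented): it locates the interval opened by one event and closed by the next occurrence of another, and checks that a state property holds throughout:

```python
# Toy interval-bounded check over a finite trace (illustrative only, not the
# report's Interval Logic): require `prop` at every state from the first state
# satisfying `start` up to, but not including, the next state satisfying `end`.
def holds_on_interval(trace, start, end, prop):
    opened = [k for k, s in enumerate(trace) if start(s)]
    if not opened:
        return True                      # the interval never forms: vacuously true
    i = opened[0]
    closed = [k for k in range(i, len(trace)) if end(trace[k])]
    if not closed:
        return True
    j = closed[0]
    return all(prop(s) for s in trace[i:j])   # half-open interval [i, j)

trace = [{"req": False, "busy": False},
         {"req": True,  "busy": True},
         {"req": True,  "busy": True},
         {"ack": True,  "busy": False}]

# "Between a request and its acknowledgement, the server stays busy."
print(holds_on_interval(trace,
                        start=lambda s: s.get("req", False),
                        end=lambda s: s.get("ack", False),
                        prop=lambda s: s.get("busy", False)))   # True
```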

    Using temporal abduction for biosignal interpretation: A case study on QRS detection

    In this work, we propose an abductive framework for biosignal interpretation, based on the concept of Temporal Abstraction Patterns. A temporal abstraction pattern defines an abstraction relation between an observation hypothesis and a set of observations constituting its evidence support. New observations are generated abductively from any subset of the evidence of a pattern, building an abstraction hierarchy of observations in which higher levels contain those observations with greater interpretative value for the physiological processes underlying a given signal. Non-monotonic reasoning techniques have been applied to this model in order to find the best interpretation of a set of initial observations, even permitting these observations to be corrected by removing, adding or modifying them to make them consistent with the available domain knowledge. Some preliminary experiments have been conducted to apply this framework to a well-known and bounded problem: QRS detection on ECG signals. The objective is not to provide a new, better QRS detector, but to test the validity of an abductive paradigm. These experiments show that a knowledge base comprising just a few very simple rhythm abstraction patterns can enhance the results of a state-of-the-art algorithm by significantly improving its detection F1-score, besides proving the ability of the abductive framework to correct both sensitivity and specificity failures. Comment: 7 pages, Healthcare Informatics (ICHI), 2014 IEEE International Conference on
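
    As a toy illustration of the abductive correction step (a single invented "regular rhythm" pattern, far simpler than the paper's knowledge base, with an arbitrary tolerance): when one RR interval is roughly twice the local median, the pattern explains the gap by hypothesising a missed beat near its midpoint:

```python
# Deliberately tiny illustration (not the paper's knowledge base): one invented
# "regular rhythm" abstraction pattern used abductively to revise a detector's
# output, adding a hypothesised beat where an RR interval is about twice the
# local median. The tolerance `tol` is an arbitrary choice for the sketch.
from statistics import median

def revise_qrs(times, tol=0.2):
    """times: sorted QRS detection times in seconds. Returns revised times."""
    if len(times) < 3:
        return list(times)
    rr = [b - a for a, b in zip(times, times[1:])]
    m = median(rr)
    revised = [times[0]]
    for t_prev, t_next, interval in zip(times, times[1:], rr):
        if abs(interval - 2 * m) < tol * 2 * m:
            # Abduce a missed QRS: the pattern explains the long gap by an
            # unobserved beat near the midpoint of the interval.
            revised.append(t_prev + interval / 2)
        revised.append(t_next)
    return revised

beats = [0.0, 0.8, 1.6, 3.2, 4.0]   # the detector missed the beat near 2.4 s
print(revise_qrs(beats))            # [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]
```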

    Learning Tuple Probabilities

    Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning and has been studied intensively in the subfield of Statistical Relational Learning (SRL), but it remains, so far, an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from labeled lineage formulas. The resulting learning problem can be viewed as the inverse of confidence computation in PDBs: given a set of labeled query answers, learn the probability values of the base tuples such that the marginal probabilities of the query answers again yield the assigned probability labels. We analyze the learning problem from a theoretical perspective, cast it into an optimization problem, and provide an algorithm based on stochastic gradient descent. Finally, we conclude with an experimental evaluation on three real-world and one synthetic dataset, comparing our approach to various techniques from SRL, reasoning in information extraction, and optimization.
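
    A heavily simplified sketch of this learning setup (not the paper's algorithm): lineage is restricted here to conjunctions of independent base tuples, so each labeled formula's marginal is a product of tuple probabilities, and those probabilities are fitted to the labels by stochastic gradient descent through a sigmoid reparameterisation:

```python
# Simplified sketch: labeled lineage formulas restricted to independent-AND, so the
# marginal of a formula is the product of its tuples' probabilities; fit by SGD.
import math, random

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def learn_tuple_probs(examples, tuple_ids, epochs=2000, lr=0.5, seed=0):
    """examples: list of (set_of_tuple_ids, label_probability)."""
    rng = random.Random(seed)
    w = {t: 0.0 for t in tuple_ids}            # p_t = sigmoid(w_t), starts at 0.5
    for _ in range(epochs):
        tuples, label = rng.choice(examples)
        p = {t: sigmoid(w[t]) for t in tuples}
        marginal = math.prod(p.values())       # independent-AND lineage
        err = marginal - label
        for t in tuples:
            # d marginal / d w_t = marginal * (1 - p_t) under the sigmoid reparameterisation
            w[t] -= lr * 2 * err * marginal * (1 - p[t])
    return {t: sigmoid(w[t]) for t in tuple_ids}

examples = [({"t1", "t2"}, 0.35), ({"t1"}, 0.7), ({"t2", "t3"}, 0.4)]
print(learn_tuple_probs(examples, {"t1", "t2", "t3"}))
# expect roughly t1~0.7, t2~0.5, t3~0.8 (the labels above are consistent with these)
```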

    An Integrated First-Order Theory of Points and Intervals over Linear Orders (Part I)

    There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as a particular case of duration-less intervals. A recent result by Balbiani, Goranko, and Sciavicco presented an explicit two-sorted point-interval temporal framework in which time instants (points) and time periods (intervals) are considered on a par, allowing the perspective to shift between these within the formal discourse. We consider here two-sorted first-order languages based on the same principle, and therefore including relations between points, between intervals, and across the two sorts, as first studied by Reich, among others. We give a complete classification of the resulting sub-languages in terms of relative expressive power, thus determining how many, and which, intrinsically different extensions of two-sorted first-order logic with one or more such relations there are. This approach roots out the classical problem of whether or not points should be included in an interval-based semantics.
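
    To fix intuitions about the two sorts (the concrete representation below is an invented illustration, not the paper's formal apparatus): points and intervals over a linear order, with a few point-point, interval-interval and inter-sort relations given their intended meaning:

```python
# Illustrative two-sorted setting over a linear order: points and proper intervals
# as distinct sorts, plus sample relations of each kind (not the paper's language).
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float

@dataclass(frozen=True)
class Interval:
    begin: float
    end: float
    def __post_init__(self):
        assert self.begin < self.end, "proper (non-point) interval"

# point-point
def before_pp(p, q):
    return p.x < q.x

# interval-interval (two of Allen's relations)
def meets(i, j):
    return i.end == j.begin

def during(i, j):
    return j.begin < i.begin and i.end < j.end

# inter-sort
def begins(p, i):
    return p.x == i.begin

def inside(p, i):
    return i.begin < p.x < i.end

print(meets(Interval(0, 1), Interval(1, 2)), inside(Point(0.5), Interval(0, 1)))  # True True
```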

    Abductive and Consistency-Based Diagnosis Revisited: a Modeling Perspective

    Diagnostic reasoning has been characterized logically as consistency-based reasoning or abductive reasoning. Previous analyses in the literature have shown, on the one hand, that choosing the (in general more restrictive) abductive definition may or may not be appropriate, depending on the content of the knowledge base [Console&Torasso91], and, on the other hand, that, depending on the choice of definition, the same knowledge should be expressed in a different form [Poole94]. Since a major problem in Model-Based Diagnosis is finding the right way of abstracting the behavior of the system to be modeled, this paper discusses the relation between modeling, and in particular abstraction in the model, and the notion of diagnosis. Comment: 5 pages, 8th Int. Workshop on Nonmonotonic Reasoning, 200
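
    A toy propositional example of the contrast (invented here, not taken from the paper): with a weak model that only describes correct behaviour, consistency-based diagnosis accepts any candidate that does not contradict the observation, while abductive diagnosis finds nothing until fault models are added, which is exactly the kind of modeling choice at issue:

```python
# Toy illustration of the two definitions: a battery and a bulb, observation "light off".
# A diagnosis candidate is a set of ab(normal) components; the "weak" model only
# describes how OK components behave, optional fault models describe faulty ones.
from itertools import product

COMPONENTS = ["battery", "bulb"]

def predict(ab, fault_models=False):
    """Observations entailed by assuming exactly these components are abnormal."""
    if not ab["battery"] and not ab["bulb"]:
        return {"light": "on"}
    if fault_models:
        return {"light": "off"}   # fault models: a broken part keeps the light off
    return {}                     # weak model: faulty behaviour left unspecified

def diagnoses(obs, mode, fault_models=False):
    result = []
    for bits in product([False, True], repeat=len(COMPONENTS)):
        ab = dict(zip(COMPONENTS, bits))
        pred = predict(ab, fault_models)
        consistent = all(pred.get(k, v) == v for k, v in obs.items())  # no contradiction
        covers = all(pred.get(k) == v for k, v in obs.items())         # obs entailed
        if (mode == "consistency" and consistent) or (mode == "abductive" and covers):
            result.append({c for c, a in ab.items() if a})
    return result

obs = {"light": "off"}
print(diagnoses(obs, "consistency"))                   # every non-empty candidate is consistent
print(diagnoses(obs, "abductive"))                     # [] : the weak model entails nothing
print(diagnoses(obs, "abductive", fault_models=True))  # abduction needs the fault models
```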