
    Inheritance and complementation : a case study of easy adjectives and related nouns

    Mechanisms for representing lexically the bulk of syntactic and semantic information for a language have been under active development, as is evident in the recent studies contained in this volume. Our study serves to highlight some of the most useful tools available for structured lexical representation, in particular (multiple) inheritance, default specification, and lexical rules. It then illustrates the value of these mechanisms in illuminating one corner of the lexicon involving an unusual kind of complementation among a group of adjectives exemplified by easy. The virtues of the structured lexicon are its succinctness and its tendency to highlight significant clusters of linguistic properties. From its succinctness follow two practical advantages, namely ease of maintenance and modifiability. To suggest how important these may be in practice, we extend the analysis of adjectival complementation in several directions. These extensions further illustrate how the use of inheritance in lexical representation permits exact and explicit characterizations of phenomena in the language under study. We demonstrate how the mechanisms employed in the analysis of easy enable us to give a unified account of related phenomena featuring nouns like pleasure, and even the adverbs (adjectival specifiers) too and enough. Along the way we motivate some elaborations of the Head-Driven Phrase Structure Grammar (HPSG) framework in which we couch our analysis, and offer several avenues for further study of this part of the English lexicon.
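
    To make the inheritance-and-defaults idea concrete, here is a minimal sketch in Python (my own illustrative construction; the type names, feature names, and the complement notation are assumptions, not the paper's HPSG formalism) of how a structured lexicon lets a subclass of adjectives override an inherited default:

```python
# A minimal sketch of typed lexical entries with (multiple) inheritance and
# default specification, loosely in the spirit of structured-lexicon work.
# All type and feature names here are illustrative, not the paper's notation.

def inherit(*parents, **overrides):
    """Merge feature defaults from parent types; later sources win."""
    entry = {}
    for parent in parents:
        entry.update(parent)
    entry.update(overrides)
    return entry

# Supertype: ordinary adjectives take no complement by default.
ADJECTIVE = {"cat": "adj", "comps": []}

# Subtype for easy-class adjectives: the default is overridden so the
# adjective selects an infinitival VP complement with an object gap.
TOUGH_ADJ = inherit(ADJECTIVE, comps=["VP[inf, gap:NP]"])

easy = inherit(TOUGH_ADJ, orth="easy")
tall = inherit(ADJECTIVE, orth="tall")

print(easy["comps"])  # ['VP[inf, gap:NP]'] -- inherited from TOUGH_ADJ
print(tall["comps"])  # [] -- the supertype default survives
```

    The succinctness claimed for the structured lexicon shows up even here: the complementation pattern is stated once, on the subtype, and every easy-class entry picks it up for free.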

    Queries with Guarded Negation (full version)

    A well-established and fundamental insight in database theory is that negation (also known as complementation) tends to make queries difficult to process and difficult to reason about. Many basic problems are decidable and admit practical algorithms in the case of unions of conjunctive queries, but become difficult or even undecidable when queries are allowed to contain negation. Inspired by recent results in finite model theory, we consider a restricted form of negation, guarded negation. We introduce a fragment of SQL, called GN-SQL, as well as a fragment of Datalog with stratified negation, called GN-Datalog, that allow only guarded negation, and we show that these query languages are computationally well behaved, in terms of testing query containment, query evaluation, open-world query answering, and boundedness. GN-SQL and GN-Datalog subsume a number of well-known query languages and constraint languages, such as unions of conjunctive queries, monadic Datalog, and frontier-guarded tgds. In addition, an analysis of standard benchmark workloads shows that most usage of negation in SQL in practice is guarded negation.
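
    As a rough illustration of what guardedness buys, the following Python sketch (toy relations of my own; GN-SQL itself is an SQL fragment, not shown here) contrasts a guarded negated atom, whose variables are all covered by a positive guard atom, with an unguarded one:

```python
# A minimal sketch of the guardedness idea behind GN queries: a negated atom
# "not S(x, y)" is allowed only if some positive atom in the query (the
# guard) contains all of its variables. Relation names R and S are
# illustrative toy data, not from the paper's benchmarks.

R = {("a", 1), ("a", 2), ("b", 1)}
S = {("a", 2)}

# Guarded: "R(x, y) AND NOT S(x, y)" -- R guards both variables of the
# negated atom, so negation only ranges over tuples already bound by R.
guarded = {(x, y) for (x, y) in R if (x, y) not in S}
print(guarded)  # {('a', 1), ('b', 1)}

# Unguarded (and thus outside GN): "R(x, y) AND NOT S(x, z)" negates an atom
# whose variables x and z are not bound together by any single positive
# atom; patterns like this are what drive the undecidability results that
# guarded negation is designed to avoid.
```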

    Possibilistic functional dependencies and their relationship to possibility theory

    This paper introduces possibilistic functional dependencies. These dependencies are associated with a particular possibility distribution over possible worlds of a classical database. The possibility distribution reflects a layered view of the database. The highest layer of the (classical) database consists of those tuples that certainly belong to it, while the other layers add tuples that only possibly belong to the database, with different levels of possibility. The relation between the confidence levels associated with the tuples and the possibility distribution over possible database worlds is discussed in detail in the setting of possibility theory. A possibilistic functional dependency is a classical functional dependency associated with a certainty level that reflects the highest confidence level at which the functional dependency no longer holds in the layered database. Moreover, the relationship between possibilistic functional dependencies and possibilistic logic formulas is established. Related work is reviewed, and the intended use of possibilistic functional dependencies is discussed in the conclusion.
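
    The layered view can be made concrete with a small worked example in Python (the data and the threshold interpretation are my own toy construction, sketched from the abstract's description rather than the paper's formal definitions):

```python
# A worked toy example of a layered database: each tuple carries a
# confidence level in (0, 1], and the layer at threshold t contains every
# tuple with confidence >= t. We then check a classical FD per layer.

rows = [  # (A, B, confidence)
    ("a1", "b1", 1.0),   # certain tuple
    ("a2", "b2", 1.0),
    ("a1", "b2", 0.4),   # only somewhat possible -- clashes with ("a1","b1")
]

def fd_holds(tuples, lhs, rhs):
    """Classical check of the functional dependency lhs -> rhs."""
    seen = {}
    for t in tuples:
        key, val = t[lhs], t[rhs]
        if seen.setdefault(key, val) != val:
            return False
    return True

for t in sorted({c for (_, _, c) in rows}, reverse=True):
    layer = [(a, b) for (a, b, c) in rows if c >= t]
    print(f"layer >= {t}: A -> B holds? {fd_holds(layer, 0, 1)}")
# The FD holds in the fully certain layer (t = 1.0) but fails once the
# 0.4-possible tuple enters, so A -> B is a possibilistic FD whose certainty
# level is tied to that threshold.
```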

    A History-based Approach for Model Repair Recommendations in Software Engineering

    Software is an everyday companion in today's technological society that needs to be evolved and maintained over long time periods. To manage the complexity of software projects, there has always been an effort to increase the level of abstraction during software development. Model-Driven Engineering (MDE) has been shown to be a suitable method to raise abstraction levels during software development. Models are primary development artifacts in MDE that describe complex software systems from different viewpoints. In MDE software projects, models are heavily edited through all stages of the development process. During this editing process, the models can become inconsistent due to uncertainties in the software design or various editing mistakes. While most inconsistencies can be tolerated temporarily, they need to be resolved eventually. The resolution of an inconsistency affecting a model's design is typically a creative process that requires a developer's expertise. Model repair recommendation tools can guide the developer through this process and propose a ranked list of repairs to resolve the inconsistency. However, such tools will only be accepted in practice if the list of recommendations is plausible and understandable to a developer. Current approaches mainly focus on exhaustive search strategies to generate improved versions of an inconsistent model. Such resolutions might not be understandable to developers, may not reflect the original intentions of an editing process, or may simply undo former work. Moreover, those tools typically resolve multiple inconsistencies at a time, which might lead to an incomprehensible composition of repair proposals. This thesis proposes a history-based approach for model repair recommendations. The approach focuses on the detection and complementation of incomplete edit steps, which can be located in the editing history of a model. Edit steps are defined by consistency-preserving edit operations (CPEOs), which formally capture complex and error-prone modifications of a specific modeling language. A recognized incomplete edit step can either be undone or extended to a full execution of a CPEO. The final inconsistency resolution depends on the developer's approval. The proposed recommendation approach is fully implemented and supported by our interactive repair tool called ReVision. The tool also includes configuration support to generate CPEOs by a semi-automated process. The approach is evaluated using histories of real-world models obtained from popular open-source modeling projects hosted in the Eclipse Git repository. Our experimental results confirm our hypothesis that most of the inconsistencies, namely 93.4%, can be resolved by complementing incomplete edits. 92.6% of the generated repair proposals are relevant in the sense that their effect can be observed in the models' histories. 94.9% of the relevant repair proposals are ranked at the topmost position. Our empirical results show that the presented history-based model repair recommendation approach allows developers to repair model inconsistencies efficiently and effectively.
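
    The core recommendation step can be pictured with a small Python sketch (all operation and action names below are hypothetical; this is not the ReVision implementation): an incomplete edit step is a strict prefix of a consistency-preserving edit operation found at the end of the history, and each match yields both a "complete" and an "undo" proposal:

```python
# Illustrative sketch of history-based repair recommendation. A CPEO is
# modeled as an ordered list of primitive change actions; a strict prefix of
# it at the end of the recorded history suggests two repairs: complete the
# operation or undo the partial edit. Names are invented for illustration.

CPEOS = {
    "MoveFeature": ["delete-feature-from-src", "add-feature-to-dst",
                    "update-references"],
}

history = ["create-class", "delete-feature-from-src", "add-feature-to-dst"]

def recommend(history):
    repairs = []
    for name, actions in CPEOS.items():
        for k in range(1, len(actions)):          # strict prefixes only
            prefix = actions[:k]
            if history[-k:] == prefix:
                # Proposal 1: extend the edit to a full CPEO execution.
                repairs.append(("complete " + name, actions[k:]))
                # Proposal 2: roll the partial edit back.
                repairs.append(("undo partial " + name,
                                ["undo " + a for a in reversed(prefix)]))
    return repairs

for label, steps in recommend(history):
    print(label, "->", steps)
# complete MoveFeature -> ['update-references']
# undo partial MoveFeature -> ['undo add-feature-to-dst',
#                              'undo delete-feature-from-src']
```

    As in the thesis, the final choice between completing and undoing is left to the developer; the tool only ranks and proposes.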

    Lexical information from a minimalist point of view

    Simplicity as a methodological orientation applies to linguistic theory just as to any other field of research: ‘Occam’s razor’ is the label for the basic heuristic maxim according to which an adequate analysis must ultimately be reduced to indispensable specifications. In this sense, conceptual economy has been a strict and stimulating guideline in the development of Generative Grammar from the very beginning. Halle’s (1959) argument discarding the level of taxonomic phonemics in order to unify two otherwise separate phonological processes is an early characteristic example; a more general notion is that of an evaluation metric introduced in Chomsky (1957, 1975), which relates the relative simplicity of alternative linguistic descriptions systematically to the quest for explanatory adequacy of the theory underlying the descriptions to be evaluated. Further proposals along these lines include the theory of markedness developed in Chomsky and Halle (1968), Kean (1975, 1981), and others, the notion of underspecification proposed e.g. in Archangeli (1984) and Farkas (1990), and the concept of default values and related notions. An important step promoting this general orientation was the idea of Principles and Parameters developed in Chomsky (1981, 1986), which reduced the notion of language-particular rule systems to universal principles, subject merely to parametrization with restricted options, largely related to properties of particular lexical items. On this account, the notion of a simplicity metric is to be dispensed with, as competing analyses of relevant data are now supposed to be essentially excluded by the restrictive system of principles.

    Neurosymbolic AI for Reasoning on Graph Structures: A Survey

    Neurosymbolic AI is an increasingly active area of research which aims to combine symbolic reasoning methods with deep learning to generate models with both high predictive performance and some degree of human-level comprehensibility. As knowledge graphs are becoming a popular way to represent heterogeneous and multi-relational data, methods for reasoning on graph structures have attempted to follow this neurosymbolic paradigm. Traditionally, such approaches have utilized either rule-based inference or generated representative numerical embeddings from which patterns could be extracted. However, several recent studies have attempted to bridge this dichotomy in ways that facilitate interpretability, maintain performance, and integrate expert knowledge. Within this article, we survey a breadth of methods that perform neurosymbolic reasoning tasks on graph structures. To better compare the various methods, we propose a novel taxonomy by which we can classify them. Specifically, we propose three major categories: (1) logically-informed embedding approaches, (2) embedding approaches with logical constraints, and (3) rule-learning approaches. Alongside the taxonomy, we provide a tabular overview of the approaches and links to their source code, if available, for more direct comparison. Finally, we discuss the applications on which these methods were primarily used and propose several prospective directions toward which this new field of research could evolve.
    Comment: 21 pages, 8 figures, 1 table, currently under review. Corresponding GitHub page here: https://github.com/NeSymGraph
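
    To illustrate category (2), here is a minimal Python sketch (a toy construction of my own, not drawn from any surveyed system) of an embedding model whose training loss would include a penalty for violating a logical rule:

```python
# A minimal sketch of an embedding approach with logical constraints: a soft
# rule r1(x, y) => r2(x, y) is enforced by penalizing triples where the
# premise scores higher than the conclusion. The DistMult-style scoring
# function and all relation names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
dim, n_entities = 8, 5
E = rng.normal(size=(n_entities, dim))          # entity embeddings
R = {r: rng.normal(size=dim) for r in ("born_in", "lives_in")}

def score(h, r, t):
    """Plausibility of the triple (h, r, t); higher means more plausible."""
    return float(E[h] @ (R[r] * E[t]))

def rule_penalty(pairs, premise="born_in", conclusion="lives_in"):
    """Hinge penalty for violations of premise(x, y) => conclusion(x, y)."""
    return sum(max(0.0, score(h, premise, t) - score(h, conclusion, t))
               for h, t in pairs)

pairs = [(0, 1), (2, 3)]
print("logical-constraint loss term:", rule_penalty(pairs))
# In training, this term would be added to the usual link-prediction loss so
# that gradients push the embeddings toward rule-consistent scores.
```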

    Conditional independence on semiring relations

    Conditional independence plays a foundational role in database theory, probability theory, information theory, and graphical models. In databases, conditional independence appears in database normalization and is known as the (embedded) multivalued dependency. Many properties of conditional independence are shared across various domains, and to some extent these commonalities can be studied through a measure-theoretic approach. The present paper proposes an alternative approach via semiring relations, defined by extending database relations with tuple annotations from some commutative semiring. Integrating various interpretations of conditional independence in this context, we investigate how the choice of the underlying semiring impacts the corresponding axiomatic and decomposition properties. We specifically identify positivity and multiplicative cancellativity as the key semiring properties that enable extending results from the relational context to the broader semiring framework. Additionally, we explore the relationships between different conditional independence notions through model theory, and consider how methods to test logical consequence and validity generalize from database theory and information theory to semiring relations.
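
    The decomposition property at the heart of this setup can be checked directly on a small annotated relation. The following Python sketch (my own toy example; the paper's axiomatic treatment is far more general) tests the semiring analogue of conditional independence, r(xyz) * r(x) = r(xy) * r(xz), where the marginals are semiring sums:

```python
# Toy check of conditional independence over a semiring-annotated relation.
# With (+, *) over the non-negative reals this is the usual probabilistic
# factorization; with the Boolean semiring it specializes to the (embedded)
# multivalued dependency from database normalization.

from collections import defaultdict
from itertools import product

# Annotated relation: (X, Y, Z) -> annotation in the counting semiring.
r = {("x", "y1", "z1"): 1, ("x", "y1", "z2"): 2,
     ("x", "y2", "z1"): 2, ("x", "y2", "z2"): 4}

def marginal(keys):
    """Project onto the attributes in `keys`, summing annotations."""
    m = defaultdict(int)
    for (x, y, z), a in r.items():
        proj = tuple(v for k, v in zip("XYZ", (x, y, z)) if k in keys)
        m[proj] += a          # semiring addition
    return m

rx, rxy, rxz = marginal("X"), marginal("XY"), marginal("XZ")

xs = {x for (x, _, _) in r}
ys = {y for (_, y, _) in r}
zs = {z for (_, _, z) in r}

# Y is independent of Z given X iff the factorization holds everywhere,
# including on tuples annotated with the semiring zero (here: 0).
ci = all(r.get((x, y, z), 0) * rx[(x,)]
         == rxy.get((x, y), 0) * rxz.get((x, z), 0)   # semiring product
         for x, y, z in product(xs, ys, zs))
print("Y independent of Z given X:", ci)  # True for this relation
```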

    Profiling relational data: a survey

    Profiling data to determine metadata about a given dataset is an important and frequent activity for IT professionals and researchers alike, and is necessary for various use cases. It encompasses a vast array of methods to examine datasets and produce metadata. Among the simpler results are statistics, such as the number of null values and distinct values in a column, its data type, or the most frequent patterns of its data values. Metadata that are more difficult to compute involve multiple columns, namely correlations, unique column combinations, functional dependencies, and inclusion dependencies. Further techniques detect conditional properties of the dataset at hand. This survey provides a classification of data profiling tasks and comprehensively reviews the state of the art for each class. In addition, we review data profiling tools and systems from research and industry. We conclude with an outlook on the future of data profiling beyond traditional profiling tasks and beyond relational databases.
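
    Two of the simpler profiling tasks mentioned above are easy to sketch in Python (an illustrative toy of my own, not one of the surveyed tools): single-column statistics and a naive search for unique column combinations:

```python
# Illustrative sketch of two data profiling tasks: per-column statistics
# (null count, distinct count) and a brute-force scan for unique column
# combinations. Real profilers use pruning strategies; this toy does not.

from itertools import combinations

rows = [{"id": 1, "city": "Berlin", "zip": "10115"},
        {"id": 2, "city": "Berlin", "zip": "10115"},
        {"id": 3, "city": "Potsdam", "zip": None}]

def column_stats(rows, col):
    """Single-column profiling: nulls and distinct non-null values."""
    values = [r[col] for r in rows]
    non_null = [v for v in values if v is not None]
    return {"nulls": len(values) - len(non_null),
            "distinct": len(set(non_null))}

def unique_column_combinations(rows, max_size=2):
    """Yield column sets whose value combinations are duplicate-free."""
    cols = list(rows[0])
    for size in range(1, max_size + 1):
        for combo in combinations(cols, size):
            keys = [tuple(r[c] for c in combo) for r in rows]
            if len(set(keys)) == len(keys):
                yield combo

print(column_stats(rows, "zip"))               # {'nulls': 1, 'distinct': 1}
print(list(unique_column_combinations(rows)))  # [('id',), ('id', 'city'), ...]
```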

    What Are Analytic Narratives?

    The recently coined expression "analytic narratives" refers to studies that have appeared at the boundaries of history, political science, and economics. These studies purport to explain specific historical events by combining the usual narrative approach of historians with the analytic tools that economists and political scientists find in rational choice theory. Game theory is prominent among these tools. The paper explains what analytic narratives are by sampling from the eponymous book Analytic Narratives by Bates, Greif, Levi, Rosenthal and Weingast, and by covering one outside study by Mongin (2008). It first evaluates the explanatory performance of the new genre, using some philosophy of historical explanation, and then checks its discursive consistency, using some narratology. The paper concludes that analytic narratives can usefully complement standard narratives in historical explanation, provided they specialize in the gaps that these narratives reveal, and that they are discursively consistent, despite the tension that combining a formal model with a narration creates. Two expository modes, called alternation and local supplementation, emerge from the discussion as the most appropriate ones for resolving this tension.