
    A Type Language for Calendars

    Time and calendars play an important role in databases, on the Semantic Web, and in mobile computing. Temporal data and calendars require specific modeling and processing tools. CaTTS is a type language for calendar definitions with which one can model and process temporal and calendric data. CaTTS is based on a "theory reasoning" approach for efficiency reasons. This article addresses type checking of temporal and calendric data and constraints. A thesis underlying CaTTS is that types and type checking are as useful and desirable with calendric data types as with other data types. Types enable meaningful annotation of data, and type checking enhances the efficiency and consistency of programming and modeling languages such as database and Web query languages.
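
    The following minimal Python sketch illustrates the underlying idea, namely that calendric values carry types and that type checking rejects unit confusion before any computation takes place; the names CalendricValue and shift are invented for the example and are not CaTTS syntax.

        # Illustrative sketch only, not CaTTS: calendric values carry a type
        # ("day", "week", ...) and operations are checked against it.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class CalendricValue:
            kind: str   # e.g. "day" or "week"
            index: int  # position in the calendar, e.g. days since an epoch

        def shift(value: CalendricValue, amount: int, kind: str) -> CalendricValue:
            """Shift a calendric value by `amount` units of the given type."""
            if value.kind != kind:
                # The type check catches unit confusion before any arithmetic.
                raise TypeError(f"cannot shift a {value.kind} by {kind} units")
            return CalendricValue(value.kind, value.index + amount)

        d = CalendricValue("day", 738000)
        print(shift(d, 7, "day"))    # fine: days shifted by days
        # shift(d, 1, "week")        # rejected: days and weeks are distinct types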

    Set-Based Pre-Processing for Points-To Analysis

    We present set-based pre-analysis: a virtually universal optimization technique for flow-insensitive points-to analysis. Points-to analysis computes a static abstraction of how object values flow through a program’s variables. Set-based pre-analysis relies on the observation that much of this reasoning can take place at the set level rather than the value level. Computing constraints at the set level results in significant optimization opportunities: we can rewrite the input program into a simplified form with the same essential points-to properties. This rewrite results in removing both local variables and instructions, thus simplifying the subsequent value-based points-to computation. Effectively, set-based pre-analysis puts the program in a normal form optimized for points-to analysis. Compared to other techniques for off-line optimization of points-to analyses in the literature, the new elements of our approach are the ability to eliminate statements, and not just variables, as well as its modularity: set-based pre-analysis can be performed on the input just once, e.g., allowing the pre-optimization of libraries that are subsequently reused many times and for different analyses. In experiments with Java programs, set-based pre-analysis eliminates 30% of the program’s local variables and 30% or more of computed context-sensitive points-to facts, over a wide set of benchmarks and analyses, resulting in a 20% average speedup (max: 110%, median: 18%).
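
    As a much-simplified stand-in for this idea (not the paper's actual algorithm), the Python sketch below runs a set-level pre-pass that merges a variable into its source whenever its only definition is a single copy, eliminating both the variable and the statement, and then runs a naive flow-insensitive points-to fixpoint on the smaller program; the statement encoding is invented for the example.

        # Programs are lists of statements: ("new", x, site) for x = new O,
        # ("copy", x, y) for x = y. No fields or calls, to keep the sketch small.
        from collections import defaultdict

        def pre_analyze(stmts):
            defs = defaultdict(list)
            for s in stmts:
                defs[s[1]].append(s)
            # A variable whose only definition is one copy has exactly its
            # source's points-to set (flow-insensitively), so merge it away.
            alias = {v: ds[0][2] for v, ds in defs.items()
                     if len(ds) == 1 and ds[0][0] == "copy"}
            def find(v):
                seen = set()
                while v in alias and v not in seen:  # guard against copy cycles
                    seen.add(v)
                    v = alias[v]
                return v
            simplified = []
            for s in stmts:
                if s[0] == "copy" and s[1] in alias:
                    continue                         # statement eliminated
                if s[0] == "new":
                    simplified.append(("new", find(s[1]), s[2]))
                else:
                    simplified.append(("copy", find(s[1]), find(s[2])))
            return simplified, find

        def points_to(stmts):
            pts = defaultdict(set)                   # naive value-level fixpoint
            changed = True
            while changed:
                changed = False
                for s in stmts:
                    before = len(pts[s[1]])
                    if s[0] == "new":
                        pts[s[1]].add(s[2])
                    else:
                        pts[s[1]] |= pts[s[2]]
                    changed |= len(pts[s[1]]) != before
            return pts

        prog = [("new", "a", "o1"), ("copy", "b", "a"), ("copy", "c", "b")]
        small, rep = pre_analyze(prog)               # b and c merge into a
        print(len(small))                            # 1 statement remains
        print(points_to(small)[rep("c")])            # {'o1'}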

    How to take into account general and contextual knowledge for interactive aiding design: Towards the coupling of CSP and CBR approaches

    The goal of this paper is to show how design decisions can be supported by two different tools relying on two kinds of knowledge: case-based reasoning, which operates on contextual knowledge embodied in past cases, and constraint filtering, which operates on general knowledge formalized as constraints. Our goals are, firstly, to survey existing work analysing the various ways of associating these two kinds of decision-aiding tools, essentially in a sequential way, and secondly, to propose an approach that allows them to be used simultaneously in order to support design decisions with both kinds of knowledge. The paper is organized as follows. In the first section, we define the goal of the paper and recall the background of case-based reasoning and constraint filtering. In the second section, the industrial problem which led us to consider these two kinds of knowledge is presented. In the third section, an overview of the various possibilities of using these two decision-aiding tools in a sequential way is drawn up. In the fourth section, we propose an approach that allows us to use both decision-aiding tools in a simultaneous and iterative way, according to the availability of knowledge. An example dealing with helicopter maintenance illustrates our proposals.
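
    A hedged Python sketch of such a combination (the candidate designs, the constraint, the past cases, and the similarity measure are all invented for illustration): the general knowledge filters the candidates first, and the past cases then rank the survivors.

        # Candidate design choices: (material, thickness_mm).
        candidates = [("steel", 2), ("steel", 4), ("alu", 2), ("alu", 4)]

        # General knowledge as a constraint: aluminium needs thickness >= 4 mm.
        def satisfies_constraints(c):
            material, thickness = c
            return material != "alu" or thickness >= 4

        # Contextual knowledge as past cases: (material, thickness_mm, outcome).
        cases = [("steel", 4, 0.9), ("alu", 4, 0.6), ("steel", 2, 0.4)]

        def similarity(c, case):
            material, thickness = c
            m, t, _ = case
            return (m == material) + 1.0 / (1.0 + abs(t - thickness))

        def rank_by_cases(cs):
            def score(c):
                best = max(cases, key=lambda k: similarity(c, k))
                return best[2] * similarity(c, best)  # outcome weighted by fit
            return sorted(cs, key=score, reverse=True)

        filtered = [c for c in candidates if satisfies_constraints(c)]  # CSP step
        print(rank_by_cases(filtered))                                  # CBR step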

    Knowledge-based support in Non-Destructive Testing for health monitoring of aircraft structures

    Maintenance manuals include general methods and procedures for industrial maintenance and contain information about the principles of maintenance methods. In particular, Non-Destructive Testing (NDT) methods are important for the detection of aeronautical defects and can be used for various kinds of material and in different environments. Conventional non-destructive evaluation inspections are done at periodic maintenance checks. Usually, the list of tools used in a maintenance program is simply given in the introduction of the manuals, without any detail regarding their characteristics beyond a short description of the manufacturer and the tasks in which they are employed. Better identification of maintenance tools is needed to manage the set of equipment and establish a system of equivalence: the maintenance conceptualization must be consistent and flexible enough to fit all current equipment, as well as any likely to be added or used in the future. Our contribution is the formal specification of a system of functional equivalences that facilitates maintenance activities by providing the means to determine whether one tool can be substituted for another, by comparing the key parameters among their identified characteristics. The reasoning mechanisms of conceptual graphs constitute the baseline elements for measuring the fit or unfit between an equipment model and a maintenance activity model. Graph operations are used to process answers to a query, and this graph-based search method is in line with the logical view of information retrieval. The methodology described supports knowledge formalization and the capitalization of experienced NDT practitioners' know-how. As a result, it enables the selection of an NDT technique and outlines its capabilities along with acceptable alternatives.
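
    As a loose illustration of the fit test (a flat-dictionary stand-in for conceptual-graph projection; the parameter names and values are invented), a tool fits an inspection activity when it meets every key parameter the activity requires:

        def fits(tool: dict, activity: dict) -> bool:
            """True if the tool satisfies every requirement of the activity."""
            for param, required in activity.items():
                value = tool.get(param)
                if value is None:
                    return False                    # missing key parameter
                if isinstance(required, tuple):     # (min, max) numeric range
                    lo, hi = required
                    if not lo <= value <= hi:
                        return False
                elif value != required:
                    return False
            return True

        probe_a = {"method": "ultrasonic", "freq_mhz": 5.0, "portable": True}
        probe_b = {"method": "ultrasonic", "freq_mhz": 2.0, "portable": True}
        inspection = {"method": "ultrasonic", "freq_mhz": (4.0, 10.0)}

        print(fits(probe_a, inspection))   # True: an acceptable alternative
        print(fits(probe_b, inspection))   # False: frequency out of range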

    Semantic Ambiguity and Perceived Ambiguity

    I explore some of the issues that arise when trying to establish a connection between the underspecification hypothesis pursued in the NLP literature and work on ambiguity in semantics and in the psychological literature. A theory of underspecification is developed from first principles, i.e., starting from a definition of what it means for a sentence to be semantically ambiguous and from what we know about the way humans deal with ambiguity. An underspecified language is specified as the translation language of a grammar covering sentences that display three classes of semantic ambiguity: lexical ambiguity, scopal ambiguity, and referential ambiguity. The expressions of this language denote sets of senses. A formalization of defeasible reasoning with underspecified representations is presented, based on Default Logic. Some issues to be confronted by such a formalization are discussed.
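
    A toy Python rendering of "expressions denote sets of senses" (the lexicon and the enumeration strategy are invented, not the paper's formal language): lexically ambiguous words map to several senses, and the readings an underspecified sentence denotes are enumerated only on demand.

        from itertools import product

        lexicon = {
            "bank":  ["financial_institution", "river_bank"],
            "match": ["sports_match", "fire_match"],
        }

        def senses(word):
            return lexicon.get(word, [word])  # unambiguous words denote themselves

        def readings(sentence):
            """Enumerate the set of senses an underspecified sentence denotes."""
            return product(*(senses(w) for w in sentence.split()))

        for reading in readings("the bank match"):
            print(reading)   # four readings, one per combination of word senses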

    Robust Processing of Natural Language

    Previous approaches to robustness in natural language processing usually treat deviant input by relaxing grammatical constraints whenever a successful analysis cannot be provided by "normal" means. This schema implies that error detection always comes prior to error handling, a behaviour which can hardly compete with its human model, where many erroneous situations are treated without even being noticed. The paper analyses the necessary preconditions for achieving a higher degree of robustness in natural language processing and suggests a quite different approach based on a procedure for structural disambiguation. It not only offers the possibility to cope with robustness issues in a more natural way but might eventually be suited to accommodate quite different aspects of robust behaviour within a single framework.

    Designing Software Architectures As a Composition of Specializations of Knowledge Domains

    This paper summarizes our experimental research and software development activities in designing robust, adaptable and reusable software architectures. Several years ago, based on our previous experience in object-oriented software development, we made the following assumption: 'A software architecture should be a composition of specializations of knowledge domains'. To verify this assumption we carried out three pilot projects. In addition to applying some popular domain analysis techniques such as use cases, we identified the invariant compositional structures of the software architectures and the related knowledge domains. Knowledge domains define the boundaries of the adaptability and reusability capabilities of software systems. Next, knowledge domains were mapped to object-oriented concepts. We found that some aspects of knowledge could not be directly modeled in terms of object-oriented concepts. In this paper we describe our approach, the pilot projects, the problems encountered, and the solutions adopted for realizing the software architectures. We conclude the paper with the lessons learned from this experience.

    TDL - A Type Description Language for Constraint-Based Grammars

    This paper presents TDL, a typed feature-based representation language and inference system. Type definitions in TDL consist of type and feature constraints over the Boolean connectives. TDL supports open- and closed-world reasoning over types and allows for partitions and incompatible types. Working with partially as well as with fully expanded types is possible. Efficient reasoning in TDL is accomplished through specialized modules.
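
    The Python sketch below conveys the flavour of such a system only loosely (it is not TDL syntax or semantics; the type and feature names are invented): types carry feature constraints, subtypes inherit and refine them, and declared incompatibilities rule out type combinations.

        parents = {"verb": "sign", "noun": "sign", "sign": None}
        constraints = {                    # feature constraints per type
            "sign": {"phon": "string"},
            "verb": {"tense": "tense_value"},
            "noun": {"case": "case_value"},
        }
        incompatible = {frozenset({"verb", "noun"})}   # a declared partition

        def all_constraints(t):
            """Collect inherited feature constraints along the hierarchy."""
            acc = {}
            while t is not None:
                acc = {**constraints.get(t, {}), **acc}  # subtype wins
                t = parents[t]
            return acc

        def unifiable(t1, t2):
            return frozenset({t1, t2}) not in incompatible

        print(all_constraints("verb"))    # inherits phon from sign, adds tense
        print(unifiable("verb", "noun"))  # False: the partition forbids it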

    Controlled Natural Language Processing as Answer Set Programming: an Experiment

    Most controlled natural languages (CNLs) are processed with the help of a pipeline architecture that relies on different software components. In this paper we investigate experimentally how well answer set programming (ASP) is suited as a unifying framework for parsing a CNL, deriving a formal representation for the resulting syntax trees, and reasoning with that representation. We start from a list of input tokens in ASP notation and show how this input can be transformed into a syntax tree using an ASP grammar and then into reified ASP rules in the form of a set of facts. These facts are then processed by an ASP meta-interpreter that allows us to infer new knowledge.
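
    The Python sketch below stands in for the reification idea only, not for ASP itself (real ASP handles variables and negation; the grammar atoms here are invented): rules are represented as data, and a tiny meta-interpreter forward-chains over them to a fixpoint.

        facts = {("token", "every"), ("token", "man")}

        # Reified rules as data: (head, [body atoms]); ground and negation-free.
        rules = [
            (("det", "every"), [("token", "every")]),
            (("noun", "man"), [("token", "man")]),
            (("np", "every man"), [("det", "every"), ("noun", "man")]),
        ]

        def meta_interpret(facts, rules):
            """Derive all consequences of the reified rules (naive fixpoint)."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for head, body in rules:
                    if head not in derived and all(b in derived for b in body):
                        derived.add(head)
                        changed = True
            return derived

        print(("np", "every man") in meta_interpret(facts, rules))   # True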

    A Reasoner for Calendric and Temporal Data

    Calendric and temporal data are omnipresent in countless Web and Semantic Web applications and Web services. Calendric and temporal data are, probably more than any other kind of data, subject to interpretation, in almost any case depending on some cultural, legal, professional, and/or locational context. On the current Web, calendric and temporal data can hardly be interpreted by computers. This article contributes to the Semantic Web, an endeavor aiming at enhancing the current Web with well-defined meaning and at enabling computers to meaningfully process data. The contribution is a reasoner for calendric and temporal data. This reasoner is part of CaTTS, a type language for calendar definitions. The reasoner is based on a "theory reasoning" approach using constraint solving techniques. It complements general-purpose "axiomatic reasoning" approaches for the Semantic Web, as widely used with ontology languages like OWL or RDF.
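
    As a tiny, hedged illustration of constraint solving over calendric data (not the CaTTS reasoner itself; the epoch-based encoding and the function names are invented), the sketch below narrows a day-level domain against a constraint stated at week level:

        # Values are day numbers since an epoch; a week is 7 consecutive days.
        # Domains are integer intervals, solved by interval narrowing.

        def narrow_day_domain(day_lo, day_hi, week_lo, week_hi):
            """Intersect a day-level domain with a week-level constraint."""
            lo = max(day_lo, week_lo * 7)       # first day of the earliest week
            hi = min(day_hi, week_hi * 7 + 6)   # last day of the latest week
            if lo > hi:
                return None                     # the constraints are inconsistent
            return (lo, hi)

        # "Some day between day 100 and day 130, within weeks 15 to 16":
        print(narrow_day_domain(100, 130, 15, 16))   # (105, 118)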