
    Bounded Rationality and Heuristics in Humans and in Artificial Cognitive Systems

    In this paper I will present an analysis of the impact that the notion of “bounded rationality”, introduced by Herbert Simon in his book “Administrative Behavior”, has had on the field of Artificial Intelligence (AI). In particular, by focusing on the field of Automated Decision Making (ADM), I will show how the introduction of the cognitive dimension into the study of choice by a rational (natural) agent indirectly led, in the AI field, to the development of a line of research aimed at realising artificial systems whose decisions rely on powerful shortcut strategies (known as heuristics) that produce “satisficing” - i.e. non-optimal - solutions to problems. I will show how this “heuristic approach” to problem solving has allowed AI to tackle problems of combinatorial complexity in real-life situations, and how it still represents an important strategy for the design and implementation of intelligent systems.
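    The following Python sketch (not from the paper) illustrates the satisficing idea described above: instead of searching a combinatorially large space for an optimal solution, the search stops at the first candidate that meets an aspiration level. The candidate generator, scoring function and threshold are all hypothetical.

    import random

    def satisficing_search(candidates, score, aspiration, max_tries=1000):
        """Return the first randomly sampled candidate whose score reaches
        the aspiration level, falling back to the best candidate seen.

        The point is the stop rule: the search settles for a 'good enough'
        (satisficing) solution instead of exhaustively optimising, which is
        what makes it usable on combinatorially large candidate spaces."""
        best, best_score = None, float("-inf")
        for _ in range(max_tries):
            c = random.choice(candidates)
            s = score(c)
            if s >= aspiration:          # satisficing stop rule
                return c
            if s > best_score:           # remember the best seen so far
                best, best_score = c, s
        return best

    # Hypothetical usage: find a subset whose sum is 'close enough' to a target.
    items = [12, 7, 31, 4, 18, 25, 9]
    target = 50
    candidates = [[x for x in items if random.random() < 0.5] for _ in range(500)]
    solution = satisficing_search(candidates, lambda s: -abs(sum(s) - target), aspiration=-3)
    print(solution, sum(solution))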

    Expressive probabilistic description logics

    The work in this paper is directed towards sophisticated formalisms for reasoning under probabilistic uncertainty in ontologies in the Semantic Web. Ontologies play a central role in the development of the Semantic Web, since they provide a precise definition of shared terms in web resources. They are expressed in the standardized web ontology language OWL, which consists of the three increasingly expressive sublanguages OWL Lite, OWL DL, and OWL Full. The sublanguages OWL Lite and OWL DL have a formal semantics and reasoning support through a mapping to the expressive description logics SHIF(D) and SHOIN(D), respectively. In this paper, we present the expressive probabilistic description logics P-SHIF(D) and P-SHOIN(D), which are probabilistic extensions of these description logics. They allow for expressing rich terminological probabilistic knowledge about concepts and roles as well as assertional probabilistic knowledge about instances of concepts and roles. They are semantically based on the notion of probabilistic lexicographic entailment from probabilistic default reasoning, which naturally interprets this terminological and assertional probabilistic knowledge as knowledge about random and concrete instances, respectively. As an important additional feature, they also allow for expressing terminological default knowledge, which is semantically interpreted as in Lehmann's lexicographic entailment in default reasoning from conditional knowledge bases. Another important feature of this extension of SHIF(D) and SHOIN(D) by probabilistic uncertainty is that it can be applied to other classical description logics as well. We then present sound and complete algorithms for the main reasoning problems in the new probabilistic description logics, which are based on reductions to reasoning in their classical counterparts, and to solving linear optimization problems. In particular, this shows the important result that reasoning in the new probabilistic description logics is decidable/computable. Furthermore, we also analyze the computational complexity of the main reasoning problems in the new probabilistic description logics in the general as well as restricted cases.
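    As a loose illustration of the reduction strategy mentioned above (classical reasoning plus linear optimization), the following Python sketch checks whether a single probabilistic conditional constraint admits a satisfying probability distribution over an explicitly enumerated set of possible worlds, using scipy's linear programming solver. The worlds, concept names and bounds are hypothetical toy data, far simpler than the actual P-SHIF(D)/P-SHOIN(D) procedures.

    import numpy as np
    from scipy.optimize import linprog

    # Possible worlds over two concepts, Bird and Flies; each world is a truth
    # assignment. In the real algorithms the worlds come from classical DL reasoning.
    worlds = [
        {"Bird": True,  "Flies": True},
        {"Bird": True,  "Flies": False},
        {"Bird": False, "Flies": True},
        {"Bird": False, "Flies": False},
    ]

    # Conditional constraints (D | C)[l, u]: l <= P(C and D) / P(C) <= u.
    # Hypothetical example: "birds fly with probability between 0.9 and 1".
    constraints = [("Flies", "Bird", 0.9, 1.0)]

    n = len(worlds)
    A_ub, b_ub = [], []
    for D, C, l, u in constraints:
        in_C = np.array([1.0 if w[C] else 0.0 for w in worlds])
        in_CD = np.array([1.0 if (w[C] and w[D]) else 0.0 for w in worlds])
        A_ub.append(l * in_C - in_CD)    # l * P(C) - P(C and D) <= 0
        A_ub.append(in_CD - u * in_C)    # P(C and D) - u * P(C) <= 0
        b_ub += [0.0, 0.0]

    # Feasibility check only (zero objective); probabilities are non-negative and sum to 1.
    res = linprog(c=np.zeros(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, 1)] * n)
    print("satisfiable:", res.success)
    print("example distribution:", res.x)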

    Converting Instance Checking to Subsumption: A Rethink for Object Queries over Practical Ontologies

    Efficiently querying Description Logic (DL) ontologies is becoming a vital task in various data-intensive DL applications. Considered as a basic service for answering object queries over DL ontologies, instance checking can be realized by using the most specific concept (MSC) method, which converts instance checking into subsumption problems. This method, however, loses its simplicity and efficiency when applied to large and complex ontologies, as it tends to generate very large MSCs that could lead to intractable reasoning. In this paper, we propose a revision to this MSC method for the DL SHI, allowing it to generate much simpler and smaller concepts that are specific enough to answer a given query. Because the computed MSCs are independent of one another, query answering can also be scaled by distributing and parallelizing the computations. An empirical evaluation shows the efficacy of our revised MSC method and the significant efficiency gains achieved when using it to answer object queries.
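    The following sketch (a simplification, not the paper's SHI algorithm) illustrates the basic MSC idea: roll an individual's ABox assertions up into a concept of bounded depth, so that the instance check "is a an instance of Q?" becomes the subsumption test MSC(a) ⊑ Q. The data structures, the toy ABox and the structural subsumption stand-in are hypothetical; a real implementation would delegate the subsumption test to a DL reasoner.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Concept:
        names: frozenset        # named concepts, e.g. frozenset({"Person"})
        exists: tuple = ()      # (role, Concept) existential restrictions

    def msc(individual, concept_assertions, role_assertions, depth):
        """Roll up an individual's assertions into a bounded-depth concept."""
        names = frozenset(concept_assertions.get(individual, set()))
        exists = ()
        if depth > 0:
            exists = tuple(
                (role, msc(succ, concept_assertions, role_assertions, depth - 1))
                for (ind, role, succ) in role_assertions if ind == individual
            )
        return Concept(names, exists)

    def subsumed_by(concept, query_names):
        """Trivial structural check standing in for a DL reasoner's
        subsumption service: does the concept entail all query names?"""
        return query_names <= concept.names

    # Toy ABox (hypothetical individuals and roles).
    concept_assertions = {"alice": {"Person", "Employee"}, "acme": {"Company"}}
    role_assertions = [("alice", "worksFor", "acme")]

    msc_alice = msc("alice", concept_assertions, role_assertions, depth=1)
    print(subsumed_by(msc_alice, {"Person"}))   # True: the query Person(alice) succeeds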

    Progress Report : 1991 - 1994


    Incorporating generalized quantifiers into description logic for representing data source contents

    Title from cover. "January 1998." Includes bibliographical references (p. 19-21). Steven Yi-cheng Tu and Stuart E. Madnick.

    A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web

    Over the past decade, rapid advances in web technologies, coupled with innovative models of spatial data collection and consumption, have generated a robust growth in geo-referenced information, resulting in spatial information overload. Increasing 'geographic intelligence' in traditional text-based information retrieval has become a prominent approach to respond to this issue and to fulfill users' spatial information needs. Numerous efforts in the Semantic Geospatial Web, Volunteered Geographic Information (VGI), and the Linking Open Data initiative have converged in a constellation of open knowledge bases, freely available online. In this article, we survey these open knowledge bases, focusing on their geospatial dimension. Particular attention is devoted to the crucial issue of the quality of geo-knowledge bases, as well as of crowdsourced data. A new knowledge base, the OpenStreetMap Semantic Network, is outlined as our contribution to this area. Research directions in information integration and Geographic Information Retrieval (GIR) are then reviewed, with a critical discussion of their current limitations and future prospects

    Abductive speech act recognition, corporate agents and the COSMA system

    This chapter presents an overview of the DISCO project's solutions to several problems in natural language pragmatics. Its central focus is on relating utterances to intentions through speech act recognition. Subproblems include the incorporation of linguistic cues into the speech act recognition process, precise and efficient multiagent belief attribution models (Corporate Agents), and speech act representation and processing using Corporate Agents. These ideas are being tested within the COSMA appointment scheduling system, one application of the DISCO natural language interface. Abductive speech act processing in this environment is not far from realizing its potential for fully bidirectional implementation.

    A Lightweight Defeasible Description Logic in Depth: Quantification in Rational Reasoning and Beyond

    Description Logics (DLs) are increasingly successful knowledge representation formalisms, useful for any application requiring the implicit derivation of knowledge from explicitly known facts. A prominent example domain that has benefited from these formalisms since the 1990s is the biomedical field. This area contributes a vast number of facts and relations between low- and high-level concepts, such as the constitution of cells or the interactions between studied illnesses, their symptoms and their remedies. DLs are well suited to handling large formal knowledge repositories and computing the inferences entailed by such data, relying on their well-founded first-order semantics. In particular, DLs of reduced expressivity have proven tremendously valuable for handling large ontologies due to their computational tractability. In spite of these assets and their prevailing influence, classical DLs are not well suited to adequately modelling some of the most intuitive forms of reasoning. The capability for abductive reasoning is imperative for any field subject to incomplete knowledge and the motivation to complete it with typical expectations. When such default expectations receive contradicting evidence, an abductive formalism is able to retract previously drawn, conflicting conclusions. Common examples include human reasoning or the default characterisation of properties in biology, such as the normal arrangement of organs in the human body. Treatment of such defeasible knowledge must be aware of exceptional cases - such as a human suffering from the congenital condition situs inversus - and must therefore accommodate the ability to retract defeasible conclusions in a non-monotonic fashion. Specifically tailored non-monotonic semantics have been investigated for DLs over the past 30 years. A particularly promising approach is rooted in the research by Kraus, Lehmann and Magidor on preferential (propositional) logics and Rational Closure (RC). The biggest advantages of RC are its good behaviour with respect to formal inference postulates and the efficient computation of defeasible entailments, relying on a tractable reduction to classical reasoning in the underlying formalism. A major contribution of this work is a reorganisation of the core of this reasoning method into an abstract framework formalisation. This framework is then easily instantiated to provide the reduction method for RC in DLs as well as for more advanced closure operators, such as Relevant or Lexicographic Closure. In spite of their practical aptitude, we discovered that all reduction approaches fail to provide any defeasible conclusions for elements that occur only in the relational neighbourhood of the inspected elements. More explicitly, a distinguishing advantage of DLs over propositional logic is the capability to model binary relations and to describe aspects of a related concept in terms of existential and universal quantification. Previous approaches to RC (and to more advanced closures) are not able to derive typical behaviour for the concepts that occur within such quantification. The main contribution of this work is to introduce stronger semantics for the lightweight DL EL_bot with the capability to infer the expected entailments, while maintaining a close relation to the reduction method. We achieve this by introducing a new kind of first-order interpretation that allocates defeasible information directly on its elements. This makes it possible to compare the level of typicality of such interpretations in terms of the defeasible information satisfied at elements in the relational neighbourhood. A typicality preference relation then provides the means to single out those sets of models with maximal typicality. Based on this notion, we introduce two types of nested rational semantics, a sceptical and a selective variant, each capable of deriving the entailments that are missing under RC for arbitrarily nested quantified concepts. As a proof of the versatility of our new semantics, we also show that the stronger Relevant Closure can be imbued with typical information in the successors of binary relations. An extensive investigation into the computational complexity of our new semantics shows that the sceptical nested variant comes at considerable additional computational cost, while the selective semantics remain within the complexity of classical reasoning in the underlying DL, which stays tractable in our case.
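    The ranking construction at the heart of Rational Closure, mentioned above, can be sketched in a few lines for the propositional case. The following Python sketch (not the thesis's DL algorithms) ranks defeasible rules by exceptionality, using a brute-force classical entailment check; the knowledge base is the usual hypothetical birds/penguins example.

    from itertools import product

    # Propositional formulas as nested tuples: ("atom", "x"), ("not", f),
    # ("and", f, g), ("or", f, g), ("implies", f, g).
    def holds(f, v):
        op = f[0]
        if op == "atom":    return v[f[1]]
        if op == "not":     return not holds(f[1], v)
        if op == "and":     return holds(f[1], v) and holds(f[2], v)
        if op == "or":      return holds(f[1], v) or holds(f[2], v)
        if op == "implies": return (not holds(f[1], v)) or holds(f[2], v)
        raise ValueError(op)

    def atoms(f, acc):
        if f[0] == "atom":
            acc.add(f[1])
        else:
            for g in f[1:]:
                atoms(g, acc)
        return acc

    def entails(premises, goal):
        """Brute-force classical entailment over all truth assignments."""
        syms = set()
        for p in premises:
            atoms(p, syms)
        atoms(goal, syms)
        syms = sorted(syms)
        for bits in product([False, True], repeat=len(syms)):
            v = dict(zip(syms, bits))
            if all(holds(p, v) for p in premises) and not holds(goal, v):
                return False
        return True

    def rc_ranks(rules):
        """Rational Closure ranking: a defeasible rule (C, D) is exceptional at a
        level if the materialisations C -> D of that level classically entail
        not-C; exceptional rules move up to the next rank."""
        ranks, current = [], list(rules)
        while current:
            material = [("implies", c, d) for c, d in current]
            exceptional = [(c, d) for c, d in current if entails(material, ("not", c))]
            if len(exceptional) == len(current):   # totally exceptional: rank "infinity"
                ranks.append(current)
                return ranks
            ranks.append([r for r in current if r not in exceptional])
            current = exceptional
        return ranks

    # Hypothetical KB: birds fly, penguins are birds, penguins do not fly.
    bird, penguin, flies = ("atom", "bird"), ("atom", "penguin"), ("atom", "flies")
    kb = [(bird, flies), (penguin, bird), (penguin, ("not", flies))]
    for i, level in enumerate(rc_ranks(kb)):
        print("rank", i, ":", level)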

    Dwelling on ontology - semantic reasoning over topographic maps

    The thesis builds upon the hypothesis that the spatial arrangement of topographic features, such as buildings, roads and other land cover parcels, indicates how land is used. The aim is to make this kind of high-level semantic information explicit within topographic data. There is an increasing need to share and use data for a wider range of purposes, and to make data more definitive, intelligent and accessible. Unfortunately, we still encounter a gap between low-level data representations and the high-level concepts that typify human qualitative spatial reasoning. The thesis adopts an ontological approach to bridge this gap and to derive functional information by using the standard reasoning mechanisms offered by logic-based knowledge representation formalisms. It formulates a framework for the processes involved in interpreting land use information from topographic maps. Land use is a high-level abstract concept, but it is also an observable fact intimately tied to geography. By decomposing this relationship, the thesis establishes a one-to-one mapping between high-level conceptualisations drawn from human knowledge and real-world entities represented in the data. Based on a middle-out approach, it develops a conceptual model that incrementally links different levels of detail, and thereby derives coarser, more meaningful descriptions from more detailed ones. The thesis verifies the proposed ideas by implementing an ontology describing the land use ‘residential area’ in the ontology editor Protégé. By asserting knowledge about high-level concepts such as types of dwellings, urban blocks and residential districts, as well as individuals that link directly to topographic features stored in the database, the reasoner successfully infers instances of the defined classes. Despite current technological limitations, ontologies are a promising way forward for how we handle and integrate geographic data, especially with respect to how humans conceptualise geographic space.
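    A minimal sketch of the kind of classification described above, using the owlready2 Python library rather than the thesis's actual Protégé ontology: a hypothetical ResidentialBlock class is defined by a restriction over dwelling features, and a DL reasoner (HermiT, which owlready2 invokes via a Java runtime) is asked to infer its instances. All class, property and individual names are invented for illustration.

    from owlready2 import *   # assumes owlready2 is installed and Java is available for HermiT

    onto = get_ontology("http://example.org/landuse.owl")

    with onto:
        class TopographicFeature(Thing): pass
        class Building(TopographicFeature): pass
        class Dwelling(Building): pass
        class UrbanBlock(TopographicFeature): pass

        class contains(ObjectProperty):
            domain = [UrbanBlock]
            range = [TopographicFeature]

        # Hypothetical definition: a residential block is an urban block
        # that contains at least one dwelling.
        class ResidentialBlock(UrbanBlock):
            equivalent_to = [UrbanBlock & contains.some(Dwelling)]

    # Toy individuals standing in for topographic features from the database.
    house = Dwelling("house_42")
    block = UrbanBlock("block_7")
    block.contains = [house]

    # Run the DL reasoner; block_7 should be reclassified as a ResidentialBlock.
    with onto:
        sync_reasoner()

    print(block.is_a)   # expected to include landuse.ResidentialBlock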