11 research outputs found

    Intelligent query answering in rule based systems

    Abstract: We propose that in large knowledge bases consisting of atomic facts and general rules (Horn clauses), the rules themselves should be allowed to occur in the answer to a query. We introduce a new concept of answer that includes both atomic facts and general rules. We provide a method of transforming rules by relational algebra expressions built from projection, join, and selection, and demonstrate how answers consisting of both facts and general rules can be generated.
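As an illustrative sketch (not the paper's transformation method), the idea of an answer containing both facts and a rule can be shown in a few lines of Python; the `employee`/`lab_access` predicates are invented for the example:

```python
facts = {("alice", "cs"), ("bob", "math")}                  # employee(name, dept)
rule = ("lab_access", ("X",), ("employee", ("X", "cs")))    # lab_access(X) :- employee(X, cs)

def answer(query_pred):
    """An answer may contain ground facts and, when applicable, a rule."""
    result = {"facts": set(), "rules": []}
    if query_pred == "employee":
        result["facts"] = set(facts)
    if rule[0] == query_pred:
        # The rule itself is part of the answer: it describes, intensionally,
        # everyone with lab access, including future cs employees.
        result["rules"].append(rule)
        # Selection/join against the stored facts yields the extensional part.
        result["facts"] = {(name,) for (name, dept) in facts if dept == "cs"}
    return result

ans = answer("lab_access")
```

The rule in the answer stays valid as the fact base changes, which is the point of returning it alongside the ground tuples.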

    Conditional Answer Computation in SOL as Speculative Computation in Multi-Agent Environments (research supported in part by a Grant-in-Aid from the Ministry of Education, Science and Culture of Japan)

    Abstract: In this paper, we study speculative computation in a master-slave multi-agent system where reply messages sent from slave agents to the master are always tentative and may change from time to time. In this system, default values used in speculative computation are only partially determined in advance. Inoue et al. [8] formalized speculative computation in such an environment with tentative replies, using the framework of the first-order consequence-finding procedure SOL together with the well-known answer literal method. We further refine the SOL calculus using conditional answer computation and skip-preference in SOL. The conditional answer format has the great advantage of explicitly representing how a conclusion depends on the tentative replies and defaults used to derive it; this dependency representation is essential for avoiding unnecessary recomputation of tentative conclusions. Skip-preference prevents irrational and redundant derivations.
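The dependency-tracking idea behind conditional answers can be sketched as follows (a deliberate simplification for illustration, not the SOL calculus; the questions and replies are invented):

```python
conclusions = {}   # conclusion -> frozenset of (question, assumed_reply) pairs

def conclude(conclusion, used_replies):
    """Record a tentative conclusion together with the replies it depends on."""
    conclusions[conclusion] = frozenset(used_replies)

def revise(question, new_reply):
    """A revised reply invalidates only conclusions that assumed a different one."""
    stale = [c for c, deps in conclusions.items()
             if any(q == question and r != new_reply for q, r in deps)]
    for c in stale:
        del conclusions[c]
    return stale

conclude("ship_via_air", [("weather_ok", True), ("budget_ok", True)])
conclude("invoice_total_ok", [("budget_ok", True)])
revise("weather_ok", False)   # only the shipping conclusion is discarded
```

Because each conclusion carries its assumptions explicitly, a changed reply triggers recomputation only where the dependency record says it must.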

    Context interchange : new features and formalisms for the intelligent integration of information

    Cover title. Includes bibliographical references (p. 22-24). Supported in part by the International Financial Services Research Center (IFSRC), the PROductivity From Information Technology (PROFIT) project at MIT, ARPA, and USAF/Rome Laboratory under contract F30602-93-C-0160. By Cheng Hian Goh ... [et al.]

    Partial Computation in Real-Time Database Systems: A Research Plan

    State-of-the-art database management systems are inappropriate for real-time applications due to their lack of speed and predictability of response. To combat these problems, the scheduler needs to be able to take advantage of the vast quantity of semantic and timing information that is typically available in such systems. Furthermore, to improve predictability of response, the system should be capable of providing a partial, but correct, response in a timely manner. We therefore propose to develop a semantics for real-time database systems that incorporates temporal knowledge of data objects, their validity, and computation using their values. This temporal knowledge should include not just historical information but also future knowledge of when to expect values to appear. This semantics will be used to develop a notion of approximate or partial computation, and to develop schedulers appropriate for real-time transactions.
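The "partial, but correct, response in a timely manner" idea resembles anytime computation; a minimal Python sketch, under the assumption that a partial aggregate plus a coverage measure is an acceptable answer format:

```python
import time

def partial_sum(values, deadline_s):
    """Sum as much as the deadline allows; report the fraction of data covered."""
    start, total, seen = time.monotonic(), 0, 0
    for v in values:
        if time.monotonic() - start > deadline_s:
            break   # timely and partial, but correct for the data seen so far
        total += v
        seen += 1
    return total, seen / len(values)

total, coverage = partial_sum(range(1000), deadline_s=1.0)
```

Reporting coverage alongside the result is one way a scheduler could make a partial answer interpretable to the requesting transaction.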

    On abduction and answer generation through constrained resolution

    Recently, extensions of constrained logic programming and constrained resolution for theorem proving have been introduced that consider constraints interpreted under an open-world assumption. We discuss relationships between applications of these approaches to query answering in knowledge base systems on the one hand and abduction-based hypothetical reasoning on the other. We show both that constrained resolution can be used as an operationalization of (some limited form of) abduction, and that abduction is the logical status of the answer generation process through constrained resolution, i.e., it is an abductive rather than a deductive form of reasoning.
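A toy sketch of abduction as answer generation (the rule base is invented, and explanations are enumerated directly rather than via constrained resolution):

```python
rules = {"wet_grass": [["rained"], ["sprinkler_on"]]}   # head -> alternative bodies
abducibles = {"rained", "sprinkler_on"}

def abduce(goal):
    """Return the sets of abducible assumptions under which the goal follows."""
    if goal in abducibles:
        return [{goal}]
    explanations = []
    for body in rules.get(goal, []):
        partial = [set()]
        for subgoal in body:
            # Combine every explanation so far with every way to explain the subgoal.
            partial = [e | s for e in partial for s in abduce(subgoal)]
        explanations.extend(partial)
    return explanations

explanations = abduce("wet_grass")   # two alternative hypotheses explain the goal
```

Each returned set is a hypothesis, not a theorem: the goal follows only if the assumptions in the set hold, which is exactly the abductive reading of such answers.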

    Perspectives in deductive databases

    Abstract: I discuss my experiences, some of the work that I have done, and related work that influenced me, concerning deductive databases, over the last 30 years. I divide this time period into three roughly equal parts: 1957–1968, 1969–1978, and 1979–present. For the first, I describe how my interest in deductive databases started in 1957, at a time when the field of databases did not even exist. I describe work in the beginning years, leading to the start of deductive databases around 1968 with the work of Cordell Green and Bertram Raphael. The second period saw a great deal of work in theorem proving as well as the introduction of logic programming. The existence and importance of deductive databases as a formal and viable discipline received its impetus at a workshop held in Toulouse, France, in 1977, which culminated in the book Logic and Data Bases. The relationship between deductive databases and logic programming was recognized at that time. During the third period we have seen formal theories of databases come about as an outgrowth of that work, and the recognition that artificial intelligence and deductive databases are closely related, at least through the so-called expert database systems. I expect that the relationships between techniques from formal logic, databases, logic programming, and artificial intelligence will continue to be explored, and that the field of deductive databases will become a more prominent area of computer science in coming years.

    COOPERATIVE QUERY ANSWERING FOR APPROXIMATE ANSWERS WITH NEARNESS MEASURE IN HIERARCHICAL STRUCTURE INFORMATION SYSTEMS

    Cooperative query answering for approximate answers has been utilized in various problem domains. Many challenges in manufacturing information retrieval, such as classifying parts into families in group technology implementation, choosing the closest alternatives or substitutions for an out-of-stock part, or finding similar existing parts for rapid prototyping, could be alleviated using the concept of cooperative query answering. Most cooperative query answering techniques proposed by researchers so far concentrate on simple queries or single-table information retrieval, and query relaxations in searching for approximate answers are mostly limited to attribute-value substitutions. Many hierarchical structure information systems, such as manufacturing information systems, store their data in multiple tables that are connected to each other by hierarchical relationships: "aggregation", "generalization/specialization", "classification", and "category". Due to the nature of such systems, information retrieval in these domains usually involves nested or joined queries. In addition, searching for approximate answers in hierarchical structure databases must consider not only attribute-value substitutions but also attribute or relation substitutions (e.g., WIDTH to DIAMETER, HOLE to GROOVE). For example, shape transformations of parts or features are possible and commonly practiced: a bar could be transformed into a rod. Given these characteristics of hierarchical information systems, the simple-query or single-relation query relaxation techniques used in most cooperative query answering systems are not adequate. In this research, we proposed techniques for neighbor knowledge construction and complex query relaxation. We enhanced the original Pattern-based Knowledge Induction (PKI) and Distribution Sensitive Clustering (DISC) so that they can be used in neighbor hierarchy construction at both the tuple and attribute levels.
We developed a cooperative query answering model to facilitate approximate answer searching for complex queries. Our model comprises algorithms for determining the causes of a null answer, expanding the qualified tuple set, expanding the intersected tuple set, and relaxing multiple conditions simultaneously. To calculate the semantic nearness between exact-match answers and approximate answers, we also proposed a nearness measuring function, called "Block Nearness", that is appropriate for the query relaxation methods proposed in this research.
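A minimal sketch of attribute-value relaxation over a concept hierarchy with a shared-ancestor nearness score; the part hierarchy and the nearness formula here are assumptions for illustration, not the PKI/DISC constructions or the dissertation's "Block Nearness" function:

```python
# Invented part-type hierarchy: child -> parent.
parent = {"bar": "cylindrical_stock", "rod": "cylindrical_stock",
          "cylindrical_stock": "raw_material", "plate": "raw_material"}

def ancestors_of(node):
    result = []
    while node in parent:
        node = parent[node]
        result.append(node)
    return result

def depth(node):
    return len(ancestors_of(node))   # edges from node up to the root

def nearness(a, b):
    """Deeper shared ancestor => nearer substitute; value in [0, 1]."""
    b_anc = set(ancestors_of(b))
    common = next((n for n in ancestors_of(a) if n in b_anc), None)
    if common is None:
        return 0.0
    return (depth(common) + 1) / (max(depth(a), depth(b)) + 1)

stock = {"rod", "plate"}

def relax(wanted):
    """Exact match if available; otherwise rank in-stock substitutes by nearness."""
    if wanted in stock:
        return [(wanted, 1.0)]
    return sorted(((s, nearness(wanted, s)) for s in stock), key=lambda p: -p[1])

ranked = relax("bar")   # 'rod' (shares cylindrical_stock) ranks above 'plate'
```

A null answer for "bar" is thus relaxed to a ranked list of substitutes, which is the cooperative behavior the abstract describes.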

    Intensional Query Processing in Deductive Database Systems.

    This dissertation addresses the problem of deriving a set of non-ground first-order logic formulas (intensional answers) as the answer set to a given query, rather than a set of facts (extensional answers), in deductive database (DDB) systems based on non-recursive Horn clauses. A strategy in previous work in this area is to use resolution to derive intensional answers. However, it leaves several important problems open: no specific resolution strategy is given; no specific methodology for formalizing meaningful intensional answers is given; no solution is given for handling large fact sets in extensional databases (EDBs); and no strategy is given for avoiding the derivation of meaningless intensional answers. As a solution, a three-stage formalization process (pre-resolution, resolution, and post-resolution) for the derivation of meaningful intensional answers is proposed, which solves all of the problems mentioned above. A specific resolution strategy called SLD-RC resolution is proposed, which derives a set of meaningful intensional answers. The notions of relevant literals and relevant clauses are introduced to avoid deriving meaningless intensional answers. The soundness and completeness of SLD-RC resolution for intensional query processing are proved. An algorithm for the three-stage formalization process is presented, and its correctness is proved. Furthermore, it is shown that there are two relationships between intensional and extensional answers: syntactically, intensional answers are sufficient conditions for deriving extensional answers; semantically, they are sufficient and necessary conditions. Based on these relationships, the notions of global and local completeness of an intensional database (IDB) are defined.
It is proved that all incomplete IDBs can be transformed into globally complete IDBs, in which all extensional answers can be generated by evaluating intensional answers against an EDB. We claim that intensional query processing provides a new methodology for query processing in DDBs and, by extending the categories of queries, will greatly increase our insight into the nature of DDBs.
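The flavor of intensional answers can be sketched by unfolding a non-recursive Horn rule base so that a query over IDB predicates is rewritten into EDB-only conditions (an illustration, not SLD-RC resolution; the predicates are invented):

```python
idb = {  # IDB predicate -> alternative bodies (non-recursive rules)
    "eligible": [["student", "enrolled"]],
    "student": [["undergrad"], ["grad"]],
}
edb_preds = {"undergrad", "grad", "enrolled"}

def unfold(goal):
    """Return the EDB-only conjunctions that are sufficient for the goal."""
    if goal in edb_preds:
        return [[goal]]
    answers = []
    for body in idb.get(goal, []):
        conjs = [[]]
        for sub in body:
            conjs = [c + s for c in conjs for s in unfold(sub)]
        answers.extend(conjs)
    return answers

# Intensional answers: 'eligible' holds for anyone who is (undergrad and
# enrolled) or (grad and enrolled) -- a description, not an enumeration.
intensional = unfold("eligible")
```

Because the rewriting never touches the fact base, it illustrates why intensional answers can sidestep the large-EDB problem the abstract mentions.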

    Effective information integration and reutilization : solutions to technological deficiency and legal uncertainty

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology, Management, and Policy Program, February 2006. "September 2005." Includes bibliographical references (p. 141-148). By Hongwei Zhu.
    The amount of electronically accessible information has been growing exponentially. How to effectively use this information has become a significant challenge. A post-9/11 study indicated that deficiencies in semantic interoperability technology hindered the ability to integrate information from disparate sources in a meaningful and timely fashion to allow for preventive precautions. Meanwhile, organizations that provided useful services by combining and reusing information from publicly accessible sources have been legally challenged. The Database Directive has been introduced in the European Union, and six legislative proposals have been made in the U.S., to provide legal protection for non-copyrightable database contents; but the Directive and the proposals have differing and sometimes conflicting scope and strength, which creates legal uncertainty for value-added data reuse practices. The need for a clearer data reuse policy will become more acute as information integration technology improves to make integration much easier. This thesis takes an interdisciplinary approach to addressing both the technology and the policy challenges identified above in the effective use and reuse of information from disparate sources. The technology component builds upon the existing Context Interchange (COIN) framework for large-scale semantic interoperability. We focus on the problem of temporal semantic heterogeneity, where data sources and receivers make time-varying assumptions about data semantics; a collection of time-varying assumptions is called a temporal context.
We extend the existing COIN representation formalism to explicitly represent temporal contexts, and the COIN reasoning mechanism to reconcile temporal semantic heterogeneity in the presence of semantic heterogeneity of time. We also perform a systematic and analytic evaluation of the flexibility and scalability of the COIN approach; compared with several traditional approaches, the COIN approach has much greater flexibility and scalability. For the policy component, we develop an economic model that formalizes the policy instruments in one of the latest legislative proposals in the U.S. The model allows us to identify the circumstances under which legal protection for non-copyrightable content is needed, the different conditions, and the corresponding policy choices. Our analysis indicates that depending on the cost level of database creation, the degree of differentiation of the reuser's database, and the efficiency of policy administration, the optimal policy choice can be protecting a legal monopoly, encouraging competition via compulsory licensing, discouraging voluntary licensing, or even allowing free riding. The results provide useful insights for the formulation of a socially beneficial database protection policy.
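The notion of a temporal context can be sketched as follows, assuming a source whose reporting currency changed over time (an illustration only, not the COIN formalism; the periods and predicates are invented):

```python
from datetime import date

# Assumed source behavior: amounts reported in DEM before 1999-01-01, EUR after.
context_periods = [(date(1999, 1, 1), "EUR"), (date.min, "DEM")]  # newest first
rate_to_eur = {"EUR": 1.0, "DEM": 1.0 / 1.95583}  # official fixed DEM/EUR rate

def currency_in_force(d):
    """Most recent context period whose start is on or before d."""
    for start, cur in context_periods:
        if d >= start:
            return cur

def to_eur(amount, d):
    """Interpret 'amount' under the source's temporal context for date d."""
    return amount * rate_to_eur[currency_in_force(d)]

v1 = to_eur(100.0, date(1998, 6, 1))   # interpreted as DEM
v2 = to_eur(100.0, date(2000, 6, 1))   # interpreted as EUR
```

Reconciliation must select the conversion in force on the data's date; the same nominal value "100" means different things in the two periods, which is exactly the temporal semantic heterogeneity the thesis addresses.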