Research on knowledge representation, machine learning, and knowledge acquisition
Research in knowledge representation, machine learning, and knowledge acquisition performed at the Knowledge Systems Lab. is summarized. The major goal of the research was to develop flexible, effective methods for representing the qualitative knowledge necessary for solving large problems that require symbolic reasoning as well as numerical computation. The research focused on integrating different representation methods to describe different kinds of knowledge more effectively than any one method can alone. In particular, emphasis was placed on representing and using spatial information about three-dimensional objects and constraints on the arrangement of these objects in space. Another major theme is the development of robust machine learning programs that can be integrated with a variety of intelligent systems. To achieve this goal, learning methods were designed, implemented, and experimentally evaluated in several different problem-solving environments.
Knowledge-Based Support for Management of Concurrent, Multidisciplinary Design
Artificial intelligence (AI) applications to design have tended to focus on modeling and automating aspects of single-discipline design tasks. Relatively little attention has thus far been devoted to representing the kinds of design 'metaknowledge' needed to manage the important interface issues that arise in concurrent design, that is, multidisciplinary design decision-making. This paper provides a view of the process and management of concurrent design and evaluates the potential of two AI approaches, blackboard architectures and co-operative distributed problem-solving (CDPS), to model and support the concurrent design of complex artifacts. A discussion of the process of multidisciplinary design highlights elements of both sequential and concurrent design decision-making. We identify several kinds of design metaknowledge used by expert managers to: partition the design task for efficient execution by specialists; set appropriate levels of design conservatism for key subsystem specifications; evaluate, limit and selectively communicate design changes across discipline boundaries; and control the sequence and timing of the key (highly constrained and constraining) design decisions for a given type of artifact. We explore the extent to which blackboard and CDPS architectures can provide valid models of and potential decision support for concurrent design by (1) representing design management metaknowledge, and (2) using it to enhance both horizontal (interdisciplinary) and vertical (project life cycle) integration among product design, manufacturing and operations specialists.
Opportunistic Reasoning for the Semantic Web: Adapting Reasoning to the Environment
Despite the efforts devoted so far, the Semantic Web vision appears to be an elusive target. We propose a paradigm shift for the Semantic Web centred around the pragmatics of developing Semantic Web applications, in order to overcome the bootstrapping problem it suffers from. This paradigm is based on the vision of the Semantic Web as the result emerging from the integration and collaboration of a plethora of Semantic Web applications, rather than as a global entity. On the basis of this assumption we describe and propose Opportunistic Reasoning as a general-purpose reasoning model suitable for the development of reasonably scalable Semantic Web applications.
The Architecture of a Cooperative Respondent
If natural language question-answering (NLQA) systems are to be truly effective and useful, they must respond to queries cooperatively, recognizing and accommodating in their replies a questioner's goals, plans, and needs. Transcripts of natural dialogue demonstrate that cooperative responses typically combine several communicative acts: a question may be answered, a misconception identified, an alternative course of action described and justified. This project concerns the design of cooperative response generation systems: NLQA systems that are able to provide integrated cooperative responses.
Two questions must be answered before a cooperative NLQA system can be built. First, what are the reasoning mechanisms that underlie cooperative response generation? In partial reply, I argue that plan evaluation is an important step in the process of selecting a cooperative response, and describe several tests that may usefully be applied to inferred plans. The second question is this: what is an appropriate architecture for cooperative NLQA (CNLQA) systems? I propose a four-level decomposition of the cooperative response generation process and then present a suitable CNLQA system architecture based on the blackboard model of problem solving.
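The blackboard model the abstract refers to can be made concrete with a small sketch. The following is illustrative only, not the dissertation's actual four-level design: independent knowledge sources each inspect a shared blackboard and may contribute one communicative act (a direct answer, a misconception correction) to the growing cooperative response; a simple control cycle polls them until no source has anything to add. All names and the sample query are invented for the example.

```python
class Blackboard:
    """Shared workspace: the query plus the communicative acts chosen so far."""
    def __init__(self, query):
        self.query = query
        self.acts = []  # list of (act_kind, text) pairs

def answer_ks(bb):
    """Knowledge source: produce a direct answer if none exists yet."""
    if not any(kind == "answer" for kind, _ in bb.acts):
        return ("answer", f"Direct answer to: {bb.query}")

def misconception_ks(bb):
    """Knowledge source: flag a (hypothetical) misconception in the query."""
    if "fastest train" in bb.query and \
            not any(kind == "correction" for kind, _ in bb.acts):
        return ("correction", "Note: there is no train service on that route.")

def run(bb, sources):
    """Simple control cycle: poll each knowledge source until none fires."""
    progress = True
    while progress:
        progress = False
        for ks in sources:
            act = ks(bb)
            if act:
                bb.acts.append(act)
                progress = True
    return bb.acts

bb = Blackboard("When does the fastest train to Boston leave?")
acts = run(bb, [answer_ks, misconception_ks])
```

The point of the sketch is the integration step the abstract describes: the final response combines several acts rather than a bare answer, because each knowledge source had a chance to react to the same shared state.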
Improving performance of blackboard systems
In this thesis, we deal with blackboard system performance issues. We show that blackboard system performance can be improved using parallel processing strategies and a novel blackboard architecture.
We study traditional blackboard architectures using a novel performance framework. This is a useful tool for directing system optimisation efforts. We present the analysis of four blackboard systems from the literature.
Besides localised optimisation efforts, one of the most promising approaches for improving blackboard system performance is the use of parallel processing techniques. However, traditional blackboard architectures present both data and control contention when implemented in parallel.
In this thesis we present a novel blackboard architecture, the Active Blackboard Architecture (ABB). ABB is based on a novel variation of the traditional "Blackboard and Experts" metaphor, called "Blackboard, Experts and Desks". This new metaphor introduces a new element, the desks, used by the experts to perform their work. The ABB architecture is built around an active blackboard, capable of processing on its own, and a decentralised control model, which avoids control contention and bottlenecks. We describe this architecture using the Z specification language, and implement and evaluate it on the EPCC Meiko Computing Surface, a multi-transputer distributed-memory parallel machine.
The ABB parallel prototype is an object-oriented implementation of the ABB model that overcomes both data and control bottlenecks by using a distributed blackboard together with the ABB control model. Based on a series of experiments, we show that the new architecture achieves much greater effective parallelism in a blackboard system. We also present some ways in which the system can be tailored to specific application needs, further improving its overall performance.
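The "active blackboard" idea can be illustrated with a minimal sketch: instead of a central scheduler deciding which expert runs next, the blackboard itself reacts to writes by notifying the experts subscribed to a region, which is how control contention is avoided. Everything here (the class names, the regions, the two experts) is invented for illustration and is not ABB's actual interface.

```python
from collections import defaultdict

class ActiveBlackboard:
    """A blackboard that dispatches work itself: no central control loop."""
    def __init__(self):
        self.data = defaultdict(list)         # region name -> posted entries
        self.subscribers = defaultdict(list)  # region name -> expert callbacks

    def subscribe(self, region, callback):
        self.subscribers[region].append(callback)

    def post(self, region, entry):
        """Writing to a region triggers the interested experts directly."""
        self.data[region].append(entry)
        for callback in self.subscribers[region]:
            callback(self, entry)

# Two experts, each doing its own local work (its "desk") when notified.
def segmenter(bb, entry):
    # Reacts to raw input and posts word segments, triggering downstream experts.
    bb.post("segments", entry.split())

def counter(bb, entry):
    # Records the segment count; a plain write, since nobody listens to "counts".
    bb.data["counts"].append(len(entry))

bb = ActiveBlackboard()
bb.subscribe("signal", segmenter)
bb.subscribe("segments", counter)
bb.post("signal", "one two three")
```

In a parallel setting each notification could be dispatched to a separate worker, which is the intuition behind the thesis's claim that decentralised control removes the scheduler bottleneck of traditional blackboards.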
Blackboard System Generator (BSG): An Alternative Distributed Problem-Solving Paradigm
The classical blackboard model employs a number of relaxations of team decision theory that are commonly organized into three panels of AI heuristics, including: 1) a shared information panel that offers a capability for ensuring agent knowledge sharing, 2) a contract formalism for the agent and event scheduling, coordinating, and control panel, and 3) a blackboard panel for metalevel planning and guidance that offers whole situation recognition, top down reasoning, and adaptive learning. The nature and implications of these relaxations are explained in terms of the blackboard system generator (BSG) and via comparisons to what is done in other blackboard shells. Particular attention is paid to theoretical relaxations inherent in the classical blackboard model and to research opportunities arising as a result. Progress made to date to counteract adverse effects of some of these relaxations is described in terms of a project management/work breakdown paradigm adopted in BSG that: 1) alleviates the knowledge engineering bottlenecks of traditional blackboards and that provides BSG with a semantic rather than just syntactic understanding of blackboard control and scheduling; 2) allows a distributed problem-solving capability for connecting agents at virtual addresses on a logical network and that permits concurrent processing on any machine available on the network; 3) establishes an open architecture that includes techniques for integrating preexisting agent methods (e.g., expert systems, procedures, or data bases) while laying the foundation for assessing the impact of “black boxes” on the global and local objective functions; and 4) utilizes project management techniques for team agents planning as well as an analogical reasoner subsystem for BSG metaplanning and generic controlled learning. This latter item is supported by a connectionist scheme for its associative memory. 
The techniques of each of the three panels and of the four sets of paradigm-related advances are described, along with selected results from classroom teaching experiments and from three applications using BSG to date.
The Use of Artificial Intelligence Techniques for Protein Structure Prediction
The conventional technique for computerized protein structure prediction uses programming languages such as Fortran, C, and Pascal. With recent advances in programming languages and the development of rule-based systems, the computerized part of the problem is undergoing major change. This thesis sets out the idea of extending the properties of an intelligent rule-based system and recognising the incomplete nature of knowledge for this problem. It reviews the existing architectures and characteristics that embody an intelligent system. As the outcome of this idea, a new system called PREDMOLL, written in Prolog, is developed. PREDMOLL is based on the blackboard architecture with several additional features. This thesis also reviews some current uncertainty techniques and develops a formula, based on a modification of Bayes' theorem, to deal with multiple hypotheses. The problem of the conditional-independence assumption is reduced to a minimum. The formula is used as a decision-making criterion to determine secondary structure boundaries. For tertiary structure prediction, this thesis suggests a similarity value for primary sequence homology to overcome the problem of arbitrary uncertainty values in rules. PREDMOLL and the uncertainty techniques incorporated with it are used to test the hypothesis that the performance of protein structure prediction is improved by combining several methods. The test is carried out by a series of experimental predictions with user-defined rules and predefined constraints. The behaviour of PREDMOLL during the problem-solving process of the experiments is shown. The results obtained yield improvements in precision for secondary structure prediction, and further improvements are expected. For tertiary structure prediction, some preliminary progress is shown and, due to a lack of genuine rules, ad-hoc rules are generated from the protein database. The status of PREDMOLL and its advantages over other systems are discussed.
Several suggestions are made to improve PREDMOLL's current facilities and to address problems in a wider domain. Suggestions are also made for further improvements in tertiary structure prediction.
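The thesis's modified formula is not reproduced in the abstract; as a baseline point of reference, this is the standard Bayesian update over several mutually exclusive hypotheses, which such a modification would generalise to relax the conditional-independence assumption. The hypothesis names and the numeric priors/likelihoods below are invented for illustration.

```python
def bayes_update(priors, likelihoods):
    """P(H_i | E) = P(E | H_i) P(H_i) / sum_j P(E | H_j) P(H_j)."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Three competing secondary-structure hypotheses for a residue window.
priors = {"helix": 0.35, "sheet": 0.25, "coil": 0.40}
likelihoods = {"helix": 0.8, "sheet": 0.3, "coil": 0.2}  # P(evidence | H)
posterior = bayes_update(priors, likelihoods)
```

The posterior probabilities are what a decision-making criterion of this kind compares when placing a secondary-structure boundary: the hypothesis with the largest posterior wins, and the normalisation over all hypotheses is what makes the multiple-hypothesis case coherent.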
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 323)
This bibliography lists 125 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during April 1989. Subject coverage includes: aerospace medicine and psychology; life support systems and controlled environments; safety equipment; exobiology and extraterrestrial life; and flight crew behavior and performance.
The Integration of Multiple and Diverse Knowledge Representation Paradigms using a Blackboard Architecture
There is increasing evidence that designers of future real-time embedded systems are turning to knowledge-based techniques in order to solve complex problems where algorithmic techniques have failed to produce a solution. In addition, many applications have been mandated to use the Ada programming language for all implementation software, including the knowledge-based components.
This thesis identifies three essential requirements needed to support the construction of these systems: first, the need to provide a library of Ada knowledge-based components that supports a variety of knowledge representation paradigms to model the diverse expert domains being encountered in complex applications; second, the need to provide the user with the means of creating and controlling multiple independent instances of the knowledge-based components to cope with the complexity and scale of the implementations; and third, the need to provide an integrating architecture in which the knowledge-based components may be embedded directly into an application environment.
These requirements have been satisfied by using ideas derived from the concept of abstract data types to construct a library of knowledge-based components; the components have been called abstract knowledge types. Subsequently, multiple instances of the abstract knowledge types have been integrated in modules called knowledge sources, which model specific problem knowledge domains. The knowledge sources have been used to construct a blackboard architecture.
The abstract knowledge types have been used to build a prototype university timetabling system in order to demonstrate their use. The research has shown that the abstract knowledge type integration approach results in a uniform implementation strategy for both conventional and knowledge-based components.
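The abstract-knowledge-type idea can be sketched briefly, here in Python rather than the thesis's Ada: each representation paradigm is packaged behind one common interface, so multiple independent instances can be created and bundled into a knowledge source. The class and method names, the rule/frame paradigms chosen, and the timetabling data are all illustrative assumptions, not the thesis's actual library.

```python
class AbstractKnowledgeType:
    """Common interface for every paradigm: assert facts, then query them."""
    def tell(self, fact):
        raise NotImplementedError
    def ask(self, query):
        raise NotImplementedError

class RuleBase(AbstractKnowledgeType):
    """Forward-chaining rules of the form (condition_fact, concluded_fact)."""
    def __init__(self, rules):
        self.rules, self.facts = rules, set()
    def tell(self, fact):
        self.facts.add(fact)
        for condition, conclusion in self.rules:
            if condition in self.facts:
                self.facts.add(conclusion)
    def ask(self, query):
        return query in self.facts

class FrameBase(AbstractKnowledgeType):
    """Simple slot-and-filler frames."""
    def __init__(self):
        self.frames = {}
    def tell(self, fact):
        frame, slot, value = fact
        self.frames.setdefault(frame, {})[slot] = value
    def ask(self, query):
        frame, slot = query
        return self.frames.get(frame, {}).get(slot)

class KnowledgeSource:
    """Bundles several independent AKT instances into one problem domain."""
    def __init__(self, **akts):
        self.akts = akts

# A fragment of the timetabling domain using two paradigms side by side.
ks = KnowledgeSource(rules=RuleBase([("room_free", "slot_available")]),
                     frames=FrameBase())
ks.akts["rules"].tell("room_free")
ks.akts["frames"].tell(("CS101", "lecturer", "Dr. Smith"))
```

The uniform `tell`/`ask` interface is what makes the integration strategy uniform: the knowledge source, and the blackboard built from such sources, never needs to know which paradigm sits behind a given instance.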