
    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research through engineering development to fielded NASA applications, particularly those enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes pursued with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and with AI applications groups at all NASA centers.

    Fourth Conference on Artificial Intelligence for Space Applications

    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications identify common goals and address issues of general interest to the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.

    Rationality in discovery : a study of logic, cognition, computation and neuropharmacology

    Part I: Introduction. The specific problem addressed in this thesis is: what is the rational use of theory and experiment in the process of scientific discovery, both in theory and in the practice of drug research for Parkinson's disease? The thesis aims to answer the following specific questions: what is 1) the structure of a theory; 2) the process of scientific reasoning; 3) the route between theory and experiment? In the first part I further discuss issues about rationality in science as an introduction to Part II, and I present an overview of my case study of neuropharmacology, for which I interviewed researchers from the Groningen Pharmacy Department, as an introduction to Part III.

    Part II: Discovery. In this part I discuss three theoretical models of scientific discovery drawn from studies in the fields of logic, cognition, and computation. In those fields the structure of a theory is explicated, respectively, as a set of sentences; as a set of associated memory chunks; and as a computer program that can generate the observed data. Rationality in discovery is correspondingly characterized as: finding axioms that imply observation sentences; heuristic search for a hypothesis, as part of problem solving, by applying memory chunks and production rules that represent skill; and finding the shortest program that generates the data. I further argue that reasoning in discovery includes logical fallacies, which are necessary to introduce new hypotheses. I also argue that, while human subjects often make errors in hypothesis evaluation tasks from a logical perspective, these evaluations are rational given a probabilistic interpretation.

    Part III: Neuropharmacology. In this last part I discuss my case study and a model of discovery in a practice of drug research for Parkinson's disease. I discuss the dopamine theory of Parkinson's disease and model its structure as a qualitative differential equation. I then discuss the use of, and reasons for, particular experiments both to test a drug and to explore the function of the brain. I describe different kinds of problems in drug research leading to a discovery. Based on that description I distinguish three kinds of reasoning tasks in discovery: inference to the best explanation, to the best prediction, and to the best intervention. I further demonstrate how part of the reasoning in neuropharmacology can be computationally modeled as qualitative reasoning and aided by a computer-supported discovery system.
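    The claim that part of neuropharmacological reasoning can be modeled as qualitative reasoning can be made concrete with a small sketch. The Python fragment below propagates qualitative signs ('+', '0', '-') through a toy model loosely inspired by the dopamine theory; the variables, relations, and combination rules are illustrative assumptions, not the model developed in the thesis.

        # A minimal sketch of qualitative reasoning over a toy model inspired
        # by the dopamine theory of Parkinson's disease. Variables, relations,
        # and combination rules are illustrative assumptions only.

        def q_negate(v):
            """Flip a qualitative sign (models an inverse influence)."""
            return {'+': '-', '0': '0', '-': '+'}[v]

        def predict(drug_dose, dopamine_loss):
            """Qualitatively predict dopamine and symptom change from a
            dose change and ongoing neuron loss."""
            # The drug raises dopamine; neuron degeneration lowers it.
            influences = [drug_dose, q_negate(dopamine_loss)]
            if all(v == '+' for v in influences):
                dopamine = '+'
            elif all(v == '-' for v in influences):
                dopamine = '-'
            elif {'+', '-'} <= set(influences):
                dopamine = '?'  # opposing influences: prediction is ambiguous
            else:
                dopamine = next((v for v in influences if v != '0'), '0')
            # Symptoms move opposite to dopamine (inverse influence).
            symptoms = '?' if dopamine == '?' else q_negate(dopamine)
            return {'dopamine': dopamine, 'symptoms': symptoms}

        print(predict(drug_dose='+', dopamine_loss='+'))  # ambiguous: '?'
        print(predict(drug_dose='+', dopamine_loss='0'))  # dopamine '+', symptoms '-'

    The ambiguous case, where a rising dose and ongoing degeneration pull dopamine in opposite directions, illustrates why qualitative models alone underdetermine predictions, and hence why experiments are needed to decide between them.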

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided, e.g. for more intelligent retrieval, put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
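    The semantically informed retrieval alluded to above can be sketched in a few lines of Python with the rdflib library; the vocabulary, data, and broker-style query below are invented for illustration and are not AKT deliverables.

        # A minimal sketch of ontology-mediated retrieval using rdflib.
        # The ex: vocabulary and the triples are illustrative assumptions.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/akt-demo#")

        g = Graph()
        g.bind("ex", EX)

        # Describe two knowledge services and their capabilities.
        g.add((EX.citeSeeker, RDF.type, EX.KnowledgeService))
        g.add((EX.citeSeeker, EX.provides, Literal("citation retrieval")))
        g.add((EX.ontoMapper, RDF.type, EX.KnowledgeService))
        g.add((EX.ontoMapper, EX.provides, Literal("ontology mapping")))

        # A broker-style SPARQL query: find every service and its capability.
        results = g.query("""
            PREFIX ex: <http://example.org/akt-demo#>
            SELECT ?service ?capability WHERE {
                ?service a ex:KnowledgeService ;
                         ex:provides ?capability .
            }
        """)
        for service, capability in results:
            print(service, "->", capability)

    Brokering meta-services of the kind envisaged in the paper would layer service selection and composition on top of queries like this one.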

    Second CLIPS Conference Proceedings, volume 1

    Topics covered at the Second CLIPS Conference, held at the Johnson Space Center on September 23-25, 1991, are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer-aided design, and the debugging of expert systems.

    The Effectiveness of Case-Based Reasoning: An Application in Sales Promotions

    This paper deals with case-based reasoning (CBR) as a support technology for sales promotion (SP) decisions. CBR systems try to mimic analogical reasoning, a form of human reasoning that is likely to occur in weakly structured problem solving, such as the design of sales promotions. In an empirical study, we find evidence that use of the CBR system improves the quality of SP-campaign proposals. In terms of the creativity of the proposals, decision-makers who think highly divergently (i.e., who tend to generate many and diverse ideas in response to a problem) benefit most from prolonged system usage. Creativity, in turn, is positively related to the (practical) usability of a proposal. These results suggest that the CBR system is most effective when it is used as an idea-generation tool that reinforces the strength of divergent (creative) thinkers. A convergent thinking style, for which the CBR system plays a compensating role, even has a negative impact on CBR-system usage. Increasing the decision-maker's personal belief in the usefulness of the system, e.g. through training or education, may help to alleviate this reluctance to use the CBR system.

    Keywords: marketing management support systems; sales promotions; case-based reasoning; weakly structured decision making
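    The retrieval step at the heart of a CBR system can be illustrated with a minimal nearest-neighbour sketch; the features, weights, and cases below are illustrative assumptions, not the SP system evaluated in the paper.

        # A minimal sketch of CBR retrieval: find past sales-promotion cases
        # most similar to a new problem so their designs can be reused and
        # adapted. Features, weights, and cases are illustrative assumptions.
        from math import sqrt

        # Each case: a feature dict plus the campaign design that was used.
        CASE_BASE = [
            ({"discount": 20, "weeks": 2, "budget": 0.5}, "in-store sampling"),
            ({"discount": 10, "weeks": 4, "budget": 0.8}, "coupon mailing"),
            ({"discount": 30, "weeks": 1, "budget": 0.3}, "flash sale"),
        ]

        WEIGHTS = {"discount": 1.0, "weeks": 0.5, "budget": 2.0}

        def distance(a, b):
            """Weighted Euclidean distance between two feature dicts."""
            return sqrt(sum(WEIGHTS[k] * (a[k] - b[k]) ** 2 for k in WEIGHTS))

        def retrieve(problem, k=1):
            """Return the k past cases most similar to the new problem."""
            return sorted(CASE_BASE, key=lambda case: distance(problem, case[0]))[:k]

        # The closest past case suggests a starting design that the
        # decision-maker then adapts (the 'reuse' and 'revise' steps).
        new_problem = {"discount": 25, "weeks": 2, "budget": 0.4}
        for features, design in retrieve(new_problem, k=2):
            print(design, features)

    Used this way, the system supplies raw material for ideas rather than finished answers, which is consistent with the paper's finding that it works best as an idea-generation tool.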

    Learning by Seeing by Doing: Arithmetic Word Problems

    Learning by doing in pursuit of real-world goals has received much attention from education researchers but has been unevenly supported by mathematics education software at the elementary level, particularly as it involves arithmetic word problems. In this article, we give examples of doing-oriented tools that might promote children's ability to see significant abstract structures in mathematical situations. The reflection necessary for such seeing is motivated by activities and contexts that emphasize affective and social aspects. Natural language, as a representation already familiar to children, is key in these activities, both as a means of mathematical expression and as a link between situations and various abstract representations. These tools support children's ownership of a mathematical problem and its expression; remote sharing of problems and data; software interpretation of children's own word problems; play with dynamically linked representations with attention to children's prior connections; and systematic problem variation based on empirically determined level of difficulty.
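    A deliberately naive Python sketch suggests what "software interpretation of children's own word problems" could involve at its simplest: extracting the numbers and choosing an operation from keyword cues. The cue lists are illustrative assumptions; the tools described in the article are far richer.

        # A toy interpreter for simple change-type word problems: pull out the
        # numbers, pick an operation from keyword cues, and show the linked
        # symbolic expression. Cue lists are illustrative assumptions.
        import re

        def interpret(problem):
            """Map a one-step word problem to an expression and its value."""
            numbers = [int(n) for n in re.findall(r"\d+", problem)]
            text = problem.lower()
            if any(cue in text for cue in ("gives away", "loses", "left")):
                expr = " - ".join(map(str, numbers))
                result = numbers[0] - sum(numbers[1:])
            elif any(cue in text for cue in ("altogether", "in all", "gets")):
                expr = " + ".join(map(str, numbers))
                result = sum(numbers)
            else:
                return None  # no cue recognized; invite the child to rephrase
            return expr, result

        print(interpret("Maya has 7 marbles and gives away 3. How many are left?"))
        # ('7 - 3', 4)

    Linking the child's own sentence to the symbolic expression it denotes is one small instance of the natural-language bridge between situations and abstract representations that the article emphasizes.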

    Direct and constructivist approaches for the design of instruction in well-structured domains: a comparison of efficiency via mental workload and performance.

    This doctoral research investigates the efficiency of two instructional designs: a design based on the direct-instruction approach to learning, and its extension with a collaborative activity based on the community-of-inquiry approach to learning. The work is motivated by the educational challenge of improving the learning phase. The goal is to investigate the extent to which highly guided communities of inquiry, when added to direct-instruction teaching methods, can actually improve the efficiency of learners. A total of 577 students participated in the experiments across 24 third-level classes that were divided into two groups. A control group of learners attended a delivery based on direct-instructional guidelines only, while an experimental group received the same delivery (in equal conditions) extended with a collaborative and inquiring design. Subsequently, learners in each group individually answered a multiple-choice questionnaire (MCQ), from which a performance measure was extracted to evaluate the acquired factual, conceptual and procedural knowledge. Two measures of cognitive load (CL) were acquired through self-report questionnaires: one unidimensional and one multidimensional. These, in conjunction with the performance measure, contributed to the definition of three measures of efficiency. Statistical evidence shows a positive impact of the experimental layout on the efficiency scores of students, as a consequence of its improvement across three phases: tuning, experimental and refined. The minor contribution to the body of knowledge is a replicable piece of primary research that requalifies an inquiry-activity technique, usually employed at primary and secondary levels and in other ill-structured domains, for better-structured domains within third-level education. This is connected to a major contribution that lies in demonstrating the complementarity of cognitivist direct-instructional techniques and social-constructivist approaches to teaching and learning, rather than their individual, distinct and competitive uses.
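    The abstract does not state how performance and cognitive load combine into the three efficiency measures. A common formulation in this literature is Paas and van Merriënboer's efficiency E = (z_P - z_CL) / sqrt(2), which standardizes both measures and takes their signed distance from the line z_P = z_CL; whether the thesis uses exactly this form is an assumption, sketched below.

        # A sketch of a standard instructional-efficiency computation:
        # E = (z_performance - z_mental_effort) / sqrt(2). Whether the thesis
        # uses exactly this form is an assumption; the toy data are invented.
        from math import sqrt
        from statistics import mean, stdev

        def z_scores(values):
            """Standardize a list of raw scores."""
            m, s = mean(values), stdev(values)
            return [(v - m) / s for v in values]

        def efficiency(performance, mental_effort):
            """Per-learner efficiency: positive means high performance
            achieved at comparatively low cognitive load."""
            zp, ze = z_scores(performance), z_scores(mental_effort)
            return [(p - e) / sqrt(2) for p, e in zip(zp, ze)]

        # Toy data: MCQ scores and self-reported cognitive load, five learners.
        perf = [62, 75, 80, 55, 90]
        load = [7.0, 5.5, 4.0, 8.0, 3.5]
        for i, e in enumerate(efficiency(perf, load), 1):
            print(f"learner {i}: efficiency = {e:+.2f}")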