
    KOMBASE - a knowledge representation system with frames for an object-oriented knowledge base

    Knowledge representation is an important area of research in the field of Artificial Intelligence (AI). In order to manipulate the wealth of information available in a typical AI application, mechanisms must be provided to represent and to reason with knowledge at a high level of abstraction. Knowledge representation with frames is a structured, object-oriented approach to this problem. KOMBASE is a prototype of a frame-based system containing organizational information about companies and other corporate bodies. This paper describes the approach adopted in the development of KOMBASE and discusses its implementation, particularly from a knowledge representation perspective.
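The core idea of frame-based representation described in this abstract can be sketched in a few lines: frames are named collections of slots, with unknown slots resolved through a parent chain. This is a minimal illustrative sketch, not KOMBASE's actual design; the organizational frames and slot names below are hypothetical.

```python
class Frame:
    """A frame: a named set of slots, with inheritance via a parent frame."""

    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = dict(slots)

    def get(self, slot):
        # Look up a slot locally, then fall back to the parent chain.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)


# Hypothetical organizational frames, loosely mirroring the kind of
# corporate-body knowledge the abstract mentions:
org = Frame("Organization", legal_form="unknown", employees=0)
company = Frame("Company", parent=org, legal_form="corporation")
acme = Frame("ACME", parent=company, employees=500)

print(acme.get("legal_form"))  # inherited from Company: corporation
print(acme.get("employees"))   # local slot: 500
```

Slot lookup walking the parent chain is what makes frames "structured and object-oriented": specific frames only store what differs from their more general parents.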

    Capturing design knowledge

    A scheme is proposed to capture the design knowledge of a complex object, including functional, structural, performance, and other constraints. The scheme is also capable of capturing the rationale behind the design of an object as part of its overall design. With this information, the design of an object can be treated as a case and stored with other designs in a case base. A person can then perform case-based reasoning by examining these designs. Methods of modifying object designs are also discussed. Finally, an overview of an approach to fault diagnosis using case-based reasoning is given.
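The retrieval step of the case-based reasoning described above can be sketched simply: compare a query design against each stored case and return the most similar one. This is an illustrative sketch under the assumption that designs are flat attribute dictionaries; the attribute names and the attribute-matching similarity measure are hypothetical, not the paper's scheme.

```python
def similarity(case_a, case_b):
    """Fraction of attributes on which the two cases agree."""
    keys = set(case_a) | set(case_b)
    matches = sum(1 for k in keys if case_a.get(k) == case_b.get(k))
    return matches / len(keys)


def retrieve(case_base, query):
    """Return the stored design most similar to the query design."""
    return max(case_base, key=lambda c: similarity(c, query))


# Hypothetical case base of stored designs:
case_base = [
    {"function": "pump", "material": "steel", "rpm": 3000},
    {"function": "valve", "material": "brass", "rpm": 0},
]
query = {"function": "pump", "material": "steel", "rpm": 1500}
best = retrieve(case_base, query)
print(best["function"])  # -> pump
```

A retrieved case would then be adapted to the new requirements, which corresponds to the design-modification methods the abstract mentions.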

    How Is a Knowledge Representation System Like a Piano?

    The research reported here was supported by National Institutes of Health Grant No. 1 P41 RR 01096-02 from the Division of Research Resources, and was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. In the summer of 1978 a decision was made to devote a special issue of the SIGART newsletter to the subject of knowledge representation research. To assist in ascertaining the current state of people's thinking on this topic, the editors (Ron Brachman and myself) decided to circulate an informal questionnaire among the representation community. What was originally planned as a simple list of questions eventually developed into the current document, and we have decided to issue it as a report on its own merits. The questionnaire is offered here as a potential aid both for understanding knowledge representation research and for analysing the philosophical foundations on which that research is based. The questionnaire consists of two parts. Part I focuses first on specific details, but moves gradually towards more abstract and theoretical questions regarding assumptions about what knowledge representation is; about the role played by the computational metaphor; about the relationships among model, theory, and program; etc. In Part II, in a more speculative vein, we set forth for consideration nine hypotheses about various open issues in representation research.

    Australia Country Profile

    [From Introduction] This country study for Australia is part of the ILO project 'Employment of People with Disabilities – the Impact of Legislation', which aims to enhance the capacity of national governments in selected countries of Asia and East Africa to implement effective legislation concerning the employment of people with disabilities. Starting with a systematic examination of laws in place to promote employment and training opportunities for people with disabilities in selected countries of Asia and the Pacific (Australia, Cambodia, China, Fiji, Japan, India, Mongolia, Sri Lanka and Thailand), the project sets out to examine the operation of such legislation, identify the implementation mechanisms in place and suggest improvements. Technical assistance is provided to selected national governments in implementing necessary improvements.

    Interpretable Convolutional Neural Networks

    This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify the knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. No annotations of object parts or textures are needed to supervise the learning process; instead, the interpretable CNN automatically assigns each filter in a high conv-layer to an object part during learning. The method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e., which patterns the CNN bases its decisions on. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs.
    Comment: In this version, we release the website of the code. Compared to the previous version, we have corrected all values of location instability in Tables 3–6 by dividing the values by sqrt(2), i.e., a = a/sqrt(2). Such revisions do NOT decrease the significance of the superior performance of our method, because we make the same correction to the location-instability values of all baselines.
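The "location instability" mentioned in the comment measures how consistently a filter's activation peak tracks a ground-truth part landmark across images. The sketch below is a hedged, illustrative reconstruction of that idea (peak-to-landmark distances and their spread), not the paper's exact metric; the feature maps and landmarks are synthetic.

```python
import numpy as np


def peak_location(feature_map):
    """(row, col) of the maximum activation in a 2-D feature map."""
    return np.unravel_index(np.argmax(feature_map), feature_map.shape)


def location_instability(feature_maps, landmarks):
    """Spread (std. dev.) of peak-to-landmark distances across images.

    A filter whose peak always sits at the same offset from the part
    landmark has instability 0; a wandering peak scores high.
    """
    dists = []
    for fmap, lm in zip(feature_maps, landmarks):
        peak = np.array(peak_location(fmap))
        dists.append(np.linalg.norm(peak - np.array(lm)))
    return float(np.std(dists))


# Synthetic example: random 7x7 feature maps and a fixed landmark.
rng = np.random.default_rng(0)
maps = [rng.random((7, 7)) for _ in range(5)]
landmarks = [(3, 3)] * 5
print(location_instability(maps, landmarks))
```

Under this reading, a semantically meaningful filter (one locked onto a single object part) would show low instability, which is how the paper's tables compare interpretable filters against baselines.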