
    The Knowledge Level in Cognitive Architectures: Current Limitations and Possible Developments

    In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely the limited size and the homogeneous typology of the encoded and processed knowledge. We argue that such aspects may constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of comparing the CAs' knowledge representation and processing mechanisms with those executed by humans in their everyday activities. In the final part of the paper, further directions of research are explored in an attempt to address current limitations and future challenges.

    Terminologia Anatomica: Considered from the Perspective of Next-Generation Knowledge Sources

    This report examines the semantic structure of Terminologia Anatomica, taking one randomly selected page as an example. The focus of analysis is the meaning imparted to an anatomical term by virtue of its location within the structured list. Terminologia’s structure, expressed through hierarchies of headings, varied typographical styles, indentations and an alphanumeric code, implies specific relationships between the terms embedded in the list. Together, terms and relationships can potentially capture essential elements of anatomical knowledge. The analysis focuses on these knowledge elements and evaluates the consistency and logic of their representation. The most critical of these elements are class inclusion and part-whole relationships, which are implied, rather than explicitly modeled, by Terminologia. This limits the use of the term list to those who have some knowledge of anatomy and excludes computer programs from navigating through the terminology. Assuring consistency in the explicit representation of anatomical relationships would facilitate adoption of Terminologia as the anatomical standard by the various controlled medical terminology (CMT) projects. These projects are motivated by the need for computerizing the patient record, and their aim is to generate machine-understandable representations of biomedical concepts, including anatomy. Because of the lack of a consistent and explicit representation of anatomy, each of these CMTs has generated its own anatomy model. None of these models is compatible with the others, yet each is consistent with textbook descriptions of anatomy. The analysis of the semantic structure of Terminologia Anatomica leads to some suggestions for enhancing the term list in ways that would facilitate its adoption as the standard for anatomical knowledge representation in biomedical informatics.
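    As a rough illustration of what the abstract means by making class inclusion and part-whole relations explicit rather than implied by layout, the sketch below stores the two relation types as named edges that a program can traverse. The term names and the tiny hierarchy are invented for the example and are not taken from Terminologia Anatomica.

```python
# Minimal sketch of an explicitly modeled anatomical relation store; the terms
# and the two relation types (is_a, part_of) are illustrative assumptions only.
from collections import defaultdict

class AnatomyGraph:
    def __init__(self):
        # relation name -> child term -> set of parent terms
        self.relations = defaultdict(lambda: defaultdict(set))

    def add(self, relation, child, parent):
        self.relations[relation][child].add(parent)

    def ancestors(self, relation, term):
        """All terms reachable from `term` by following `relation` upwards."""
        seen, stack = set(), [term]
        while stack:
            for parent in self.relations[relation][stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

g = AnatomyGraph()
g.add("is_a", "femur", "long bone")        # class inclusion, stated explicitly
g.add("is_a", "long bone", "bone")
g.add("part_of", "femur", "lower limb")    # part-whole, stated explicitly
g.add("part_of", "lower limb", "body")

print(g.ancestors("is_a", "femur"))        # {'long bone', 'bone'}
print(g.ancestors("part_of", "femur"))     # {'lower limb', 'body'}
```

    Because the relations are explicit edges rather than typographical conventions, a program (not only a reader who already knows anatomy) can answer questions such as "what is the femur part of?" by simple graph traversal.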

    Towards ontology interoperability through conceptual groundings

    The widespread use of ontologies raises the need to resolve heterogeneities between distinct conceptualisations in order to support interoperability. The aim of ontology mapping is to establish formal relations between knowledge entities which represent the same or a similar meaning in distinct ontologies. Whereas the symbolic approach of established SW representation standards – based on first-order logic and syllogistic reasoning – does not implicitly represent similarity relationships, the ontology mapping task strongly relies on identifying semantic similarities. However, concept representations across distinct ontologies hardly ever equal one another, and manually or even semi-automatically identifying similarity relationships is costly. Conceptual Spaces (CS) enable the representation of concepts as vector spaces which implicitly carry similarity information, but CS provide neither an implicit representational mechanism nor a means to represent arbitrary relations between concepts or instances. In order to overcome these issues, we propose a hybrid knowledge representation approach which extends first-order logic ontologies with a conceptual grounding through a set of CS-based representations. Consequently, semantic similarity between instances – represented as members in CS – is indicated by means of distance metrics. Hence, automatic similarity detection between instances across distinct ontologies is supported in order to facilitate ontology mapping.
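    The core idea, that similarity falls out of a distance metric once instances from different ontologies are grounded as points in a shared conceptual space, can be shown in a few lines. This is a hedged sketch of the idea only, not the paper's system: the quality dimensions, instance names, and the inverse-distance similarity function are all assumptions made for the example.

```python
# Instances from two ontologies grounded as points in one conceptual space;
# similarity is read off a distance metric. All names and values are made up.
import numpy as np

dimensions = ["hue", "size", "temperature"]          # illustrative quality dimensions

ontology_a = {"A:ripe_tomato": np.array([0.95, 0.30, 0.50])}
ontology_b = {
    "B:red_fruit":  np.array([0.90, 0.25, 0.50]),
    "B:green_leaf": np.array([0.30, 0.10, 0.45]),
}

def similarity(x, y):
    # Similarity decays with Euclidean distance in the conceptual space.
    return 1.0 / (1.0 + np.linalg.norm(x - y))

for a_name, a_vec in ontology_a.items():
    best = max(ontology_b, key=lambda b: similarity(a_vec, ontology_b[b]))
    print(a_name, "maps to", best)                   # A:ripe_tomato maps to B:red_fruit
```

    The symbolic ontologies would still carry the logical relations; the conceptual grounding only supplies the graded similarity judgement that first-order representations lack.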

    Mathematical Foundations for a Compositional Distributional Model of Meaning

    We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, for which we rely on the algebra of Pregroups, introduced by Lambek. This mathematical framework enables us to compute the meaning of a well-typed sentence from the meanings of its constituents. Concretely, the type reductions of Pregroups are `lifted' to morphisms in a category, a procedure that transforms meanings of constituents into a meaning of the (well-typed) whole. Importantly, meanings of whole sentences live in a single space, independent of the grammatical structure of the sentence. Hence the inner-product can be used to compare meanings of arbitrary sentences, as it is for comparing the meanings of words in the distributional model. The mathematical structure we employ admits a purely diagrammatic calculus which exposes how the information flows between the words in a sentence in order to make up the meaning of the whole sentence. A variation of our `categorical model' which involves constraining the scalars of the vector spaces to the semiring of Booleans results in a Montague-style Boolean-valued semantics.
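    A concrete, much-simplified way to see the scheme for a transitive sentence: treat nouns as vectors in a noun space N, a transitive verb as a tensor in N (x) S (x) N, and let the pregroup type reduction become tensor contraction, so every sentence lands in the same sentence space S and can be compared by inner product. The tiny dimensions and random word vectors below are assumptions made purely for illustration.

```python
# Toy numpy sketch of composing a transitive sentence in the tensor-based model.
import numpy as np

rng = np.random.default_rng(0)
dim_n, dim_s = 4, 3                          # noun space N, sentence space S

alice, bob = rng.random(dim_n), rng.random(dim_n)
likes = rng.random((dim_n, dim_s, dim_n))    # verb meaning: tensor in N (x) S (x) N

def transitive(subj, verb, obj):
    # Contract the verb with its subject and object; the result lives in S.
    return np.einsum("i,isj,j->s", subj, verb, obj)

s1 = transitive(alice, likes, bob)
s2 = transitive(bob, likes, alice)

# Both sentence meanings live in one space, so an inner product compares them.
cosine = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
print(cosine)
```

    In a real instantiation the word vectors and verb tensors would be estimated from corpus data rather than drawn at random; the point here is only that sentences of different grammatical structure end up comparable in the single space S.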

    Decision making with decision event graphs

    We introduce a new modelling representation, the Decision Event Graph (DEG), for asymmetric multistage decision problems. The DEG explicitly encodes conditional independences and has additional significant advantages over other representations of asymmetric decision problems. The colouring of edges makes it possible to identify conditional independences on decision trees, and these coloured trees serve as a basis for the construction of the DEG. We provide an efficient backward-induction algorithm for finding optimal decision rules on DEGs, and work through an example showing the efficacy of these graphs. Simplifications of the topology of a DEG admit analogues to the sufficiency principle and barren node deletion steps used with influence diagrams.
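    For readers unfamiliar with backward induction on decision structures, the sketch below shows the flavour of the computation on a small asymmetric decision tree: expectations at chance nodes, maximisation at decision nodes, and the maximising choices recorded as the decision rule. It is not the paper's DEG algorithm; the node encoding and payoffs are invented for the example.

```python
# Plain backward induction over a tree of dicts (not the DEG algorithm itself).
# Leaf:     {"utility": u}
# Decision: {"name": label, "decision": {action: subtree, ...}}
# Chance:   {"chance": [(prob, subtree), ...]}
def backward_induction(node, policy):
    if "utility" in node:
        return node["utility"]
    if "decision" in node:
        values = {action: backward_induction(child, policy)
                  for action, child in node["decision"].items()}
        best = max(values, key=values.get)        # maximise expected utility
        policy[node["name"]] = best               # record the optimal decision rule
        return values[best]
    # Chance node: take the expectation over outcomes.
    return sum(p * backward_induction(child, policy)
               for p, child in node["chance"])

tree = {"name": "explore?", "decision": {
    "drill":      {"chance": [(0.3, {"utility": 100}), (0.7, {"utility": -40})]},
    "dont_drill": {"utility": 0},
}}
policy = {}
print(backward_induction(tree, policy), policy)   # 2.0 {'explore?': 'drill'}
```

    On a DEG the same sweep is performed over a graph whose coloured edges encode conditional independences, which is what allows asymmetric problems to be handled without the padding that decision trees or influence diagrams would require.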

    Technological Spaces: An Initial Appraisal

    In this paper, we propose a high-level view of technological spaces (TS) and relations among these spaces. A technological space is a working context with a set of associated concepts, body of knowledge, tools, required skills, and possibilities. It is often associated with a given user community with shared know-how, educational support, a common literature and even regular workshop and conference meetings. Although it is difficult to give a precise definition, some TSs can be easily identified, e.g. the XML TS, the DBMS TS, the abstract syntax TS, the meta-model (OMG/MDA) TS, etc. The purpose of our work is not to define an abstract theory of technological spaces, but to figure out how to work more efficiently by using the best possibilities of each technology. To do so, we need a basic understanding of the similarities and differences between various TSs, and also of the possible operational bridges that will allow transferring the results obtained in one TS to other TSs. We hope that the presented industrial vision may help put forward the idea that there could be more cooperation than competition among alternative technologies. Furthermore, as the spectrum of available technologies is rapidly broadening, offering clear guidelines for choosing practical solutions to engineering problems is becoming a must, not only for teachers but for project leaders as well.

    Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature

    Over the past 50 years, many have debated what representation should be used to capture the meaning of natural language utterances. Recently, new requirements for such representations have been raised in research. Here I survey some of the interesting representations suggested to meet these new needs.

    Generic task problem solvers in Soar

    Two trends can be discerned in research on problem-solving architectures in the last few years. On one hand, interest in task-specific architectures has grown: types of problems of general utility are identified, and special architectures that support the development of problem-solving systems for those types of problems are proposed. These architectures help in the acquisition and specification of knowledge by providing inference methods that are appropriate for the type of problem. However, knowledge-based systems which use only one type of problem-solving method are very brittle, and adding more types of methods requires a principled approach to integrating them in a flexible way. Contrasting with this trend is the proposal for a flexible, general architecture contained in the work on Soar. Soar has features which make it attractive for flexible use of all potentially relevant knowledge or methods, but as a theory it does not make commitments to specific types of problem solvers or provide guidance for their construction. We investigated how task-specific architectures can be constructed in Soar so as to retain as many of the advantages of both approaches as possible. Examples were drawn from the Generic Task approach for building knowledge-based systems. Though this approach was developed and applied to a number of problems, the ideas are applicable to other task-specific approaches as well.

    The Partial Evaluation Approach to Information Personalization

    Information personalization refers to the automatic adjustment of information content, structure, and presentation tailored to an individual user. By reducing information overload and customizing information access, personalization systems have emerged as an important segment of the Internet economy. This paper presents a systematic modeling methodology - PIPE (`Personalization is Partial Evaluation') - for personalization. Personalization systems are designed and implemented in PIPE by modeling an information-seeking interaction in a programmatic representation. The representation supports the description of information-seeking activities as partial information and their subsequent realization by partial evaluation, a technique for specializing programs. We describe the modeling methodology at a conceptual level and outline representational choices. We present two application case studies that use PIPE for personalizing web sites and describe how PIPE suggests a novel evaluation criterion for information system designs. Finally, we mention several fundamental implications of adopting the PIPE model for personalization and when it is (and is not) applicable.
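    To make the "personalization is partial evaluation" idea concrete, the sketch below models an information-seeking interaction as a tiny nested program of choices and specializes it with respect to what is already known about the user; the choices that are fixed disappear, and a residual program covering only the remaining interaction is left. This is a hedged illustration in the spirit of PIPE, not the paper's implementation, and the site structure and attribute names are invented.

```python
# Toy partial evaluator: a "site" is either a leaf page (string) or a dict
# {attribute: {value: subtree}}. Specializing on the user's known attributes
# resolves those levels away and returns the residual (personalized) site.
def specialize(site, known):
    if isinstance(site, str):                    # leaf page: nothing to specialize
        return site
    (attribute, branches), = site.items()
    if attribute in known:                       # static input: evaluate the choice now
        return specialize(branches[known[attribute]], known)
    return {attribute: {value: specialize(sub, known)   # dynamic input: keep for later
                        for value, sub in branches.items()}}

site = {"language": {
    "python": {"level": {"beginner": "tutorial.html", "expert": "internals.html"}},
    "c":      {"level": {"beginner": "k_and_r.html",  "expert": "ub_guide.html"}},
}}

# Residual site for a user already known to prefer Python:
print(specialize(site, {"language": "python"}))
# {'level': {'beginner': 'tutorial.html', 'expert': 'internals.html'}}
```

    The personalized artefact is itself a program of the same shape, so further interaction (or further personalization) can specialize it again, which is the property the paper exploits.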