
    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. This position paper gives examples of neurocognitive inspirations and promising directions in this area.

    On the incorporation of interval-valued fuzzy sets into the Bousi-Prolog system: declarative semantics, implementation and applications

    In this paper we analyse the benefits of incorporating interval-valued fuzzy sets into the Bousi-Prolog system. A syntax, declarative semantics and implementation for this extension are presented and formalised. We show, through potential applications, that fuzzy logic programming frameworks enhanced with interval-valued fuzzy sets can work correctly together with lexical resources and ontologies to improve their capabilities for knowledge representation and reasoning.
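    The core idea of the abstract above can be illustrated with a small sketch. This is not the Bousi-Prolog implementation; it is a minimal, assumed encoding of interval-valued fuzzy degrees as `(lower, upper)` pairs, combined bound-wise with the standard min t-norm and max t-conorm.

```python
# Hypothetical sketch: an interval-valued fuzzy degree is a pair (lo, hi)
# bounding an uncertain membership or similarity value.

def iv_and(a, b):
    """Conjunction: min t-norm applied to each bound of the intervals."""
    return (min(a[0], b[0]), min(a[1], b[1]))

def iv_or(a, b):
    """Disjunction: max t-conorm applied to each bound of the intervals."""
    return (max(a[0], b[0]), max(a[1], b[1]))

# An uncertain lexical similarity: "car" and "vehicle" match with a degree
# known only to lie between 0.7 and 0.9.
sim_car_vehicle = (0.7, 0.9)
sim_car_truck = (0.4, 0.6)

print(iv_and(sim_car_vehicle, sim_car_truck))  # (0.4, 0.6)
print(iv_or(sim_car_vehicle, sim_car_truck))   # (0.7, 0.9)
```

    Representing the degree as an interval rather than a single number lets a fuzzy logic program propagate how imprecise a similarity taken from a lexical resource is, which is the motivation the abstract gives for the extension.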

    Using concept similarity in cross ontology for adaptive e-Learning systems

    e-Learning is one of the learning media most preferred by learners. Learners search the web to gather knowledge about a particular topic from the information in repositories. Retrieval of relevant materials from a domain is easier when the information is organized and interrelated. Ontologies are a key concept for relating information so that more relevant lessons can be provided to the learner. This paper proposes an adaptive e-Learning system that generates user-specific e-Learning content by comparing concepts across more than one system using similarity measures. A cross ontology measure is defined for the comparison process, with a fuzzy domain ontology as the primary ontology and the domain expert’s ontology as the secondary ontology. A personalized document is provided to the user along with a user profile, which includes the data obtained from the proposed method together with a user score obtained through user evaluation. The results of the proposed e-Learning system under the designed cross ontology similarity measure show a significant increase in performance and accuracy under different conditions. A comparative analysis shows the difference in performance between the proposed method and other methods, and the assessment results indicate that the proposed approach is more effective than the alternatives.
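    To make the cross ontology idea concrete, here is a toy sketch, not the paper's actual measure: two ontologies are encoded as child-to-parent maps, concept similarity within each is taken from path length through a common ancestor, and the cross measure blends the two views with an assumed weight `w`.

```python
# Illustrative sketch only; the paper's cross ontology similarity measure
# is more elaborate. Each ontology is a dict mapping a concept to its parent.

def ancestors(onto, concept):
    """Return the path from a concept up to its root, inclusive."""
    path = [concept]
    while concept in onto:
        concept = onto[concept]
        path.append(concept)
    return path

def path_similarity(onto, a, b):
    """Similarity from the shortest path joining a and b via a common ancestor."""
    pa, pb = ancestors(onto, a), ancestors(onto, b)
    common = set(pa) & set(pb)
    if not common:
        return 0.0
    dist = min(pa.index(c) + pb.index(c) for c in common)
    return 1.0 / (1.0 + dist)

def cross_similarity(primary, secondary, a, b, w=0.6):
    """Blend the primary (domain) and secondary (expert) ontology views;
    the weight w is an assumption for illustration."""
    return w * path_similarity(primary, a, b) + (1 - w) * path_similarity(secondary, a, b)

# Hypothetical fuzzy domain ontology and domain expert's ontology.
domain = {"recursion": "programming", "loops": "programming", "programming": "cs"}
expert = {"recursion": "functions", "loops": "control_flow",
          "functions": "programming", "control_flow": "programming"}

print(cross_similarity(domain, expert, "recursion", "loops"))  # 0.28
```

    The two ontologies disagree on how closely related the concepts are, and the blended score reflects both, which is the role the abstract assigns to the primary and secondary ontologies.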

    Towards new information resources for public health: From WordNet to MedicalWordNet

    In the last two decades, WORDNET has evolved into the most comprehensive computational lexicon of general English. In this article, we discuss its potential for supporting the creation of an entirely new kind of information resource for public health, viz. MEDICAL WORDNET. This resource is not to be conceived merely as a lexical extension of the original WORDNET to medical terminology; indeed, there is already a considerable degree of overlap between WORDNET and the vocabulary of medicine. Instead, we propose a new type of repository, consisting of three large collections of (1) medically relevant word forms, structured along the lines of the existing Princeton WORDNET; (2) medically validated propositions, referred to here as medical facts, which will constitute what we shall call MEDICAL FACTNET; and (3) propositions reflecting laypersons’ medical beliefs, which will constitute what we shall call the MEDICAL BELIEFNET. We introduce a methodology for setting up the MEDICAL WORDNET. We then turn to the discussion of research challenges that have to be met in order to build this new type of information resource.

    Computational Approaches to Measuring the Similarity of Short Contexts: A Review of Applications and Methods

    Measuring the similarity of short written contexts is a fundamental problem in Natural Language Processing. This article provides a unifying framework by which short context problems can be categorized both by their intended application and their proposed solution. The goal is to show that various problems and methodologies that appear quite different on the surface are in fact very closely related. The axes along which these categorizations are made include the format of the contexts (headed versus headless), the way in which the contexts are to be measured (first-order versus second-order similarity), and the information used to represent the features in the contexts (micro versus macro views). The unifying thread that binds together many short context applications and methods is the fact that similarity decisions must be made between contexts that share few (if any) words in common.
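    The first-order versus second-order distinction named in the abstract can be sketched briefly. The terminology is the article's; this toy implementation, including the co-occurrence table, is an assumption for illustration: first-order similarity compares the words of the two contexts directly, while second-order similarity replaces each word with its co-occurrence vector, so contexts sharing few words can still score high.

```python
# Toy sketch of first-order vs. second-order context similarity.
from collections import Counter
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors stored as dicts."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def first_order(ctx_a, ctx_b):
    """Direct word overlap between the two contexts."""
    return cosine(Counter(ctx_a.split()), Counter(ctx_b.split()))

def second_order(ctx_a, ctx_b, cooc):
    """Sum the co-occurrence vectors of each context's words, then compare."""
    def vec(ctx):
        total = Counter()
        for w in ctx.split():
            total.update(cooc.get(w, {}))
        return total
    return cosine(vec(ctx_a), vec(ctx_b))

# Hypothetical co-occurrence counts gathered from some corpus.
cooc = {"physician": {"hospital": 2, "patient": 3},
        "doctor": {"hospital": 2, "patient": 3}}

a, b = "the physician arrived", "the doctor arrived"
print(first_order(a, b) < second_order(a, b, cooc))  # prints True
```

    Here "physician" and "doctor" never co-occur, yet their co-occurrence vectors coincide, so the second-order score exceeds the first-order one, which is exactly the situation the review identifies as its unifying thread.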