2 research outputs found

    Achieving High Quality Knowledge Acquisition using Controlled Natural Language

    Controlled Natural Languages (CNLs) are efficient languages for knowledge acquisition and reasoning. They are designed as subsets of natural languages with restricted grammars while remaining highly expressive. CNLs can be automatically translated into logical representations, which can then be fed into rule engines for querying and reasoning. In this work, we build a knowledge acquisition machine, called KAM, that extends Attempto Controlled English (ACE) and achieves three goals. First, KAM can identify CNL sentences that correspond to the same logical representation but are expressed in different syntactic forms. Second, KAM provides a graphical user interface (GUI) that allows users to disambiguate the knowledge acquired from text, and it incorporates user feedback to improve the quality of knowledge acquisition. Third, KAM uses a paraconsistent logical framework to encode CNL sentences so that reasoning remains possible in the presence of inconsistent knowledge.
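    As a rough illustration of the CNL-to-logic pipeline the abstract describes (not the actual KAM or ACE implementation), the sketch below parses two toy controlled-English sentence patterns into facts and rules and answers a query by naive forward chaining; the sentence patterns, predicate encoding, and reasoning step are all assumptions made for illustration.

        # Minimal sketch of a CNL-to-logic pipeline (illustrative only; not KAM or ACE).
        # Assumed controlled sentence patterns: "X is a Y." and "Every Y is a Z."
        import re

        def parse(sentence):
            """Translate one controlled sentence into a ('fact', pred, arg) or ('rule', p, q) tuple."""
            m = re.fullmatch(r"(\w+) is a (\w+)\.", sentence)
            if m:
                return ("fact", m.group(2).lower(), m.group(1).lower())
            m = re.fullmatch(r"Every (\w+) is a (\w+)\.", sentence)
            if m:
                return ("rule", m.group(1).lower(), m.group(2).lower())
            raise ValueError(f"Sentence is outside the controlled fragment: {sentence}")

        def reason(statements):
            """Naive forward chaining over unary predicates until no new facts appear."""
            facts = {(p, x) for kind, p, x in statements if kind == "fact"}
            rules = [(p, q) for kind, p, q in statements if kind == "rule"]
            changed = True
            while changed:
                changed = False
                for p, q in rules:
                    for pred, x in list(facts):
                        if pred == p and (q, x) not in facts:
                            facts.add((q, x))
                            changed = True
            return facts

        statements = [parse(s) for s in ["John is a student.", "Every student is a person."]]
        print(reason(statements))  # {('student', 'john'), ('person', 'john')}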

    Controlled Natural Languages for Knowledge Representation and Reasoning

    Controlled natural languages (CNLs) are effective languages for knowledge representation and reasoning. They are based on particular natural languages but have a restricted lexicon and grammar, which makes them unambiguous and simple compared with their base languages while preserving the expressiveness and coherence of natural language. This paper focuses mainly on a class of CNLs, called machine-oriented CNLs, which have well-defined semantics and can be deterministically translated into formal languages for logical reasoning. Although a number of machine-oriented CNLs have emerged and been used in many application domains for problem solving and question answering, they still have significant limitations. First, CNLs cannot handle inconsistencies in the knowledge base. Second, CNLs are not powerful enough to recognize different variations of a sentence and therefore might not return the expected inference results. Third, CNLs lack a good mechanism for defeasible reasoning. This paper addresses these three problems and proposes a research plan for solving them. It also presents the current state of the research: a paraconsistent logical framework from which six principles were created that guide users in encoding CNL sentences. Experimental results show that this paraconsistent logical framework and the six principles can consistently and effectively solve word puzzles with injected inconsistencies.
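    To make the paraconsistent idea concrete (this is a generic Belnap-style sketch, not the paper's framework or its six principles), the snippet below stores signed literals so that a contradiction about one atom is reported as "both" for that atom instead of making every query answerable; the knowledge-base format and atom names are assumptions for illustration.

        # Illustrative paraconsistent-style querying: a contradiction is localized to
        # one atom rather than propagated to the whole knowledge base.
        def query(kb, atom):
            """Return 'true', 'false', 'both', or 'unknown' for an atom,
            given a knowledge base of signed literals like ('rich', True)."""
            asserted = (atom, True) in kb
            denied = (atom, False) in kb
            if asserted and denied:
                return "both"      # the inconsistency stays confined to this atom
            if asserted:
                return "true"
            if denied:
                return "false"
            return "unknown"

        kb = {("rich", True), ("rich", False), ("happy", True)}
        print(query(kb, "rich"))   # both
        print(query(kb, "happy"))  # true
        print(query(kb, "poor"))   # unknown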