
    An endorsement-based approach to student modeling for planner-controlled intelligent tutoring systems

    An approach to student modeling for intelligent tutoring systems is described, based on an explicit representation of the tutor's beliefs about the student and the arguments for and against those beliefs (called endorsements). A lexicographic comparison of arguments, sorted by evidence reliability, provides a principled means of determining which beliefs are considered true, false, or uncertain. Each of these beliefs is ultimately justified by underlying assessment data. The endorsement-based approach to student modeling is particularly appropriate for tutors controlled by instructional planners, which place greater demands on a student model than opportunistic tutors do. Approaches based on numerical calculi are less well suited because it is difficult to correctly assign numbers for evidence reliability and rule plausibility, to interpret final results, and to provide suitable combining functions. When numeric measures of uncertainty are used, arbitrary numeric thresholds are often required for planning decisions; such an approach is inappropriate when robust, context-sensitive planning decisions must be made. A TMS-based implementation of the endorsement-based approach is presented, compared to alternatives, and accompanied by a project history describing the evolution of the approach.
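
    The classification step lends itself to a compact sketch. Below is a minimal, hypothetical illustration of lexicographic endorsement comparison in Python; the reliability ranks and endorsement names are invented for the example and are not taken from the paper.

        # Hypothetical reliability ranks for kinds of evidence (not from the paper).
        RELIABILITY = {"direct_test": 3, "exercise_trace": 2, "self_report": 1}

        def strength(endorsements):
            """Sort an argument set by evidence reliability, strongest first."""
            return sorted((RELIABILITY[e] for e in endorsements), reverse=True)

        def classify(pro, con):
            """Compare pro and con endorsements lexicographically to decide
            whether a belief is held true, false, or uncertain."""
            s_pro, s_con = strength(pro), strength(con)
            if s_pro > s_con:          # Python compares lists lexicographically
                return "true"
            if s_con > s_pro:
                return "false"
            return "uncertain"

        # One strong direct test outweighs two weaker self-reports:
        print(classify(["direct_test"], ["self_report", "self_report"]))  # -> true

    Because the comparison is ordinal rather than numeric, no arbitrary thresholds are needed: the strongest distinguishing piece of evidence decides.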

    About Norms and Causes

    Knowing the norms of a domain is crucial, but no repository of norms exists. We propose a method to extract them from texts: texts generally do not describe a norm itself, but rather how a state of affairs differs from it. Answers concerning the cause of the state of affairs described often reveal the implicit norm. We apply this idea to the domain of driving, and validate it by designing algorithms that identify, in a text, the "basic" norms to which it implicitly refers.
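
    As a toy illustration of how a cause-answer can surface an implicit norm, the following sketch inverts the deviant behaviour named in a "because ..." answer; the pattern and the norm template are invented for this example and are far cruder than the algorithms in the paper.

        import re

        # Toy pattern: "... because the <agent> [failed to|did not] <action>".
        CAUSE = re.compile(r"because (?:the \w+ )?(did not |failed to )?(.+)",
                           re.IGNORECASE)

        def implicit_norm(cause_answer):
            m = CAUSE.search(cause_answer)
            if not m:
                return None
            negated, action = m.group(1), m.group(2).rstrip(".")
            # The violated norm is the opposite of the deviant behaviour.
            return f"One should {action}" if negated else f"One should not {action}"

        print(implicit_norm("The accident happened because the driver failed to yield."))
        # -> "One should yield"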

    ARC-TEC: acquisition, representation and compilation of technical knowledge

    A global description of an expert system shell for the domain of mechanical engineering is presented. The ARC-TEC project constitutes an AI approach to realizing the CIM (computer-integrated manufacturing) idea. Along with conceptual solutions, it provides a continuous sequence of software tools for the acquisition, representation and compilation of technical knowledge. The shell combines the KADS knowledge-acquisition methodology, the KL-ONE representation theory and the WAM compilation technology. For its evaluation, a prototypical expert system for production planning is developed. A central part of the system is a knowledge base formalizing the relevant aspects of common sense in mechanical engineering. Thus, ARC-TEC is less general than the CYC project but broader than specific expert systems for planning or diagnosis.
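
    To make the KL-ONE side concrete, here is a minimal sketch of structural subsumption over a tiny FL0-style concept language; the mechanical-engineering concepts are invented examples, not entries from the ARC-TEC knowledge base.

        # A concept is a set of primitives plus value restrictions on roles.
        def subsumes(c, d):
            """c subsumes d iff every primitive and role restriction of c
            is matched by d (toy structural subsumption)."""
            if not c["prims"] <= d["prims"]:
                return False
            return all(r in d["roles"] and subsumes(f, d["roles"][r])
                       for r, f in c["roles"].items())

        METAL      = {"prims": {"material", "metal"}, "roles": {}}
        PART       = {"prims": {"artifact"}, "roles": {}}
        METAL_PART = {"prims": {"artifact"}, "roles": {"made_of": METAL}}
        SHAFT      = {"prims": {"artifact", "rotational"},
                      "roles": {"made_of": METAL}}

        print(subsumes(PART, SHAFT))        # True: every shaft is an artifact
        print(subsumes(METAL_PART, SHAFT))  # True: its made_of filler is metal
        print(subsumes(SHAFT, METAL_PART))  # False: a metal part need not rotate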

    How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles

    Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.

    Entwicklung von Expertensystemen: Prototypen, Tiefenmodellierung und kooperative Wissensevolution (Development of expert systems: prototypes, deep modeling, and cooperative knowledge evolution)


    The VERBMOBIL domain model version 1.0

    This report describes the domain model used in the German machine translation project VERBMOBIL. In order to make the design principles underlying the modeling explicit, we begin with a brief sketch of the VERBMOBIL demonstrator architecture from the perspective of the domain model. We then present some rather general considerations on the nature of domain modeling and its relationship to semantics. We claim that the semantic information contained in the model mainly serves two tasks: on the one hand, it provides the basis for a conceptual transfer from German to English; on the other hand, it provides information needed for disambiguation. We argue that these tasks pose different requirements, and that domain modeling in general is highly task-dependent. A brief overview of domain models and ontologies used in existing NLP systems confirms this position. We finally describe the different parts of the domain model, explain our design decisions, and present examples of how the information contained in the model is actually used in the VERBMOBIL demonstrator. In doing so, we also point out the main functionality of FLEX, the description logic system used for the modeling.
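
    The disambiguation task can be pictured with a small sketch: each reading of an ambiguous word is checked against the sortal constraint a verb imposes on its argument. The toy hierarchy and readings below are invented and stand in for the real FLEX model.

        # Toy is-a hierarchy rooted at "top" (not the VERBMOBIL ontology).
        ISA = {"calendar_date": "temporal", "fruit": "food",
               "temporal": "top", "food": "top"}

        def is_a(concept, ancestor):
            while concept != ancestor:
                if concept == "top":
                    return False
                concept = ISA[concept]
            return True

        READINGS = {"date": ["calendar_date", "fruit"]}

        def disambiguate(word, required_sort):
            """Keep only readings whose concept satisfies the verb's constraint."""
            return [c for c in READINGS[word] if is_a(c, required_sort)]

        # "postpone" demands a temporal object, ruling out the fruit reading:
        print(disambiguate("date", "temporal"))  # -> ['calendar_date']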

    Knowledge-driven Natural Language Understanding of English Text and its Applications

    Understanding the meaning of a text is a fundamental challenge of natural language understanding (NLU) research. An ideal NLU system should process language in a way that is not exclusive to a single task or dataset. With this in mind, we introduce a novel knowledge-driven semantic representation approach for English text. By leveraging the VerbNet lexicon, we are able to map the syntax tree of a text to its commonsense meaning, represented using basic knowledge primitives. The general-purpose knowledge produced by our approach can be used to build any reasoning-based NLU system that can also provide justifications. We applied this approach to construct two NLU applications that we present here: SQuARE (Semantic-based Question Answering and Reasoning Engine) and StaCACK (Stateful Conversational Agent using Commonsense Knowledge). Both systems work by "truly understanding" the natural language text they process, and both provide natural language explanations for their responses while maintaining high accuracy. (Preprint; accepted to the main track of the 35th AAAI Conference, AAAI-21.)
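
    A toy fragment shows the shape of the VerbNet mapping: a verb plus its syntactic frame selects semantic predicates, which are then instantiated with the parsed arguments. The lexicon entry below is hand-written for illustration and is not an excerpt of VerbNet itself.

        # Hand-written fragment in the style of a VerbNet "give" entry.
        LEXICON = {
            ("give", ("NP", "V", "NP", "NP")): (        # Agent V Recipient Theme
                ["Agent", "Recipient", "Theme"],
                ["transfer(Agent, Theme, Recipient)",
                 "has_possession(end, Recipient, Theme)"],
            ),
        }

        def to_primitives(verb, frame, args):
            """Instantiate a frame's semantic predicates with parsed arguments."""
            roles, preds = LEXICON[(verb, frame)]
            binding = dict(zip(roles, args))
            out = []
            for p in preds:
                for var, val in binding.items():
                    p = p.replace(var, val)
                out.append(p)
            return out

        print(to_primitives("give", ("NP", "V", "NP", "NP"),
                            ["mary", "john", "book"]))
        # -> ['transfer(mary, book, john)', 'has_possession(end, john, book)']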

    Typed feature structures, definite equivalences, greatest model semantics, and nonmonotonicity

    Typed feature logics have been employed as description languages in modern type-oriented grammar theories like HPSG and have laid the theoretical foundations for many implemented systems. However, recursivity poses severe problems and has been addressed through specialized powerdomain constructions that depend on the particular view of the logician. In this paper, we argue that the definite equivalences introduced by Smolka can serve as the formal basis for arbitrarily formalized typed feature structures and typed feature-based grammars/lexicons, as employed in, e.g., TFS or TDL. The idea is that type definitions in such systems can be transformed into an equivalent definite program, and the meaning of the definite program is then identified with the denotation of the type system. Models of a definite program P can be characterized by sets of ground atoms that are logical consequences of the program. These models are ordered by subset inclusion and, for reasons that will become clear, we propose the greatest model as the intended interpretation of P or, equivalently, as the denotation of the associated type system. Our transformational approach also has a great impact on nonmonotonically defined types, since under this interpretation we can view the type hierarchy as a pure transport medium, allowing us to get rid of the transitivity of type information (inheritance) and yielding a perfectly monotonic definite program.
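
    The proposal can be made concrete for the ground, finite case. Reading the definitions as definite equivalences, the models are exactly the fixpoints of the immediate-consequence operator, and the greatest model is its greatest fixpoint, reachable by iterating downward from the full base. The toy program below is invented to show how the greatest model differs from the least one.

        def t_p(program, interp):
            """Immediate-consequence operator: heads whose bodies hold in interp."""
            return {head for head, body in program if set(body) <= interp}

        def greatest_model(program):
            interp = {h for h, _ in program} | {a for _, b in program for a in b}
            while True:                       # iterate down from the full base
                nxt = t_p(program, interp)
                if nxt == interp:
                    return interp
                interp = nxt

        # "p :- p" keeps p in the greatest model, although the least model is empty.
        program = [("p", ["p"]), ("q", ["p"])]
        print(greatest_model(program))        # -> {'p', 'q'}

    Under the least-model reading, the recursively defined p would denote the empty set; the greatest model instead licenses the coinductive solution, which is the behaviour recursive type definitions call for.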