10 research outputs found

    Issues in Intelligent Computer-Assisted Instruction: Evaluation and Measurement

    In this chapter we plan to explore two issues in the field of intelligent computer-assisted instruction (ICAI) that we feel offer opportunities to advance the state of the art: the evaluation of ICAI systems, and the use of the underlying technology in ICAI systems to develop tests. For each issue we will provide a theoretical context, discuss key constructs, provide a brief window onto the appropriate literature, suggest methodological solutions, and conclude with a concrete example of the feasibility of the solution from our own research.

INTELLIGENT COMPUTER-ASSISTED INSTRUCTION (ICAI)

ICAI is the application of artificial intelligence to computer-assisted instruction. Artificial intelligence, a branch of computer science, is concerned with making computers smart in order to (a) make them more useful and (b) understand intelligence (Winston, 1977). Topic areas in artificial intelligence have included natural language processing (Schank, 1980), vision (Winston, 1975), knowledge representation (Woods, 1983), spoken language (Lea, 1980), planning (Hayes-Roth, 1980), and expert systems (Buchanan, 1981). The field of artificial intelligence (AI) has matured in both hardware and software. The most commonly used language in the field is LISP (List Processing). A major development in the hardware area is that personal LISP machines are now available at a relatively low cost ($20-50K) with the power of prior mainframes. In the software area two advances stand out: (a) programming support environments such as LOOPS (Bobrow & Stefik, 1983) and (b) expert system tools, which now run on powerful micros. The application of expert systems technology to a host of real-world problems has demonstrated the utility of artificial intelligence techniques in very dramatic style. Expert system technology is the branch of artificial intelligence at this point most relevant to ICAI.
Expert Systems

Knowledge-based systems, or expert systems, are collections of problem-solving computer programs containing both factual and experiential knowledge and data in a particular domain. When the knowledge embodied in the program is the result of elicitation from a human expert, these systems are called expert systems. A typical expert system consists of a knowledge base, a reasoning mechanism popularly called an inference engine, and a friendly user interface. The knowledge base consists of facts, concepts, and numerical data (declarative knowledge), procedures based on experience or rules of thumb (heuristics), and causal or conditional relationships (procedural knowledge). The inference engine searches or reasons with or about the knowledge base to arrive at intermediate conclusions or final results during the course of problem solving. It effectively decides when and what knowledge should be applied, applies that knowledge, and determines when an acceptable solution has been found. The inference engine employs several problem-solving strategies in arriving at conclusions. Two of the popular schemes involve starting with a goal description or desired solution and working backward to the known facts or current situation (backward chaining), and starting with the current situation or known facts and working toward a goal or desired solution (forward chaining). The user interface may give the user choices (typically menu-driven) or allow the user to participate in the control of the process (mixed initiative). The interface allows the user to describe a problem, input knowledge or data, browse through the knowledge base, pose questions, review the reasoning process of the system, intervene as necessary, and control overall system operation. Successful expert systems have been developed in fields as diverse as mineral exploration (Duda & Gaschnig, 1981) and medical diagnosis (Clancey, 1981).
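The forward-chaining scheme described above can be sketched in a few lines. This is a minimal illustration, not any system cited in the chapter; the rules and facts are invented for the example.

```python
# Minimal forward-chaining sketch: each rule is (antecedents, conclusion).
# The engine fires any rule whose antecedents are all known facts, and
# repeats until no new conclusions can be added (a fixed point).

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if conclusion not in facts and all(a in facts for a in antecedents):
                facts.add(conclusion)  # fire the rule
                changed = True
    return facts

# Hypothetical diagnostic rules, purely for illustration.
rules = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles", "unvaccinated"}, "recommend_test"),
]
print(forward_chain({"fever", "rash", "unvaccinated"}, rules))
```

Backward chaining would run the same rule base in the other direction, starting from `recommend_test` and asking which facts would support it.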
ICAI Systems

ICAI systems use approaches from artificial intelligence and cognitive science to teach a range of subject matters. Representative types of subjects include: (a) collections of facts, for example, South American geography in SCHOLAR (Carbonell & Collins, 1973); (b) complete system models, for example, a ship propulsion system in STEAMER (Stevens & Steinberg, 1981) and a power supply in SOPHIE (Brown, Burton, & de Kleer, 1982); (c) completely described procedural rules, for example, strategy learning in WEST (Brown, Burton, & de Kleer, 1982) and arithmetic in BUGGY (Brown & Burton, 1978); (d) partly described procedural rules, for example, computer programming in PROUST (Johnson & Soloway, 1983) and the LISP Tutor (Anderson, Boyle, & Reiser, 1985), rules in ALGEBRA (McArthur, Stasz, & Hotta, 1987), and diagnosis of infectious diseases in GUIDON (Clancey, 1979); and (e) an imperfectly understood complex domain, for example, the causes of rainfall in WHY (Stevens, Collins, & Goldin, 1978). Excellent reviews by Barr and Feigenbaum (1982) and Wenger (1987) document many of these ICAI systems. Representative research in ICAI is described by O'Neil, Anderson, and Freeman (1986) and Wenger (1987). Although suggestive evidence has been provided by Anderson et al. (1985), few of these ICAI projects have been evaluated in any rigorous fashion. In a sense they have all been toy systems for research and demonstration. Yet they have raised a good deal of excitement and enthusiasm about their likelihood of being effective instructional environments.
With respect to cognitive science, progress has been made in the following areas: identification and analysis of misconceptions or bugs (Clement, Lockhead, & Soloway, 1980), the use of learning strategies (O'Neil & Spielberger, 1979; Weinstein & Mayer, 1986), the expert-versus-novice distinction (Chi, Glaser, & Rees, 1982), the role of mental models in learning (Kieras & Bovair, 1983), and the role of self-explanations in problem solving (Chi, Bassok, Lewis, Reimann, & Glaser, 1987). The key components of an ICAI system are (a) a knowledge base, that is, what the student is to learn; (b) a student model, that is, either where the student is now with respect to the subject matter or how student characteristics interact with the subject matter; and (c) a tutor, that is, instructional techniques for teaching the declarative or procedural knowledge. These components are described in more detail by Fletcher (1985).
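The three-component structure named above (knowledge base, student model, tutor) can be sketched schematically. This is an illustrative toy, not the architecture of any system cited here; the class, field, and topic names are all assumptions.

```python
# Toy sketch of the three ICAI components: a knowledge base (what is to be
# learned), a student model (what this student has mastered so far), and a
# naive "tutor" policy that picks any item not yet mastered.
from dataclasses import dataclass, field

@dataclass
class ICAISystem:
    knowledge_base: set                               # what the student is to learn
    student_model: set = field(default_factory=set)   # what the student knows

    def next_topic(self):
        """Tutor policy: teach an item the student has not yet mastered."""
        gaps = self.knowledge_base - self.student_model
        return min(gaps) if gaps else None  # deterministic pick for the sketch

    def record_mastery(self, item):
        self.student_model.add(item)

tutor = ICAISystem(knowledge_base={"fractions", "decimals", "percent"})
tutor.record_mastery("fractions")
print(tutor.next_topic())
```

A real tutor component would replace `next_topic` with instructional strategy, and the student model would be inferred from behavior rather than recorded directly.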

    User-centric Query Refinement and Processing Using Granularity Based Strategies

    In the context of large-scale scientific literatures, this paper provides a user-centric approach for refining and processing incomplete or vague queries based on cognitive- and granularity-based strategies. From the viewpoints of user-interest retention and granular information processing, we examine various strategies for user-centric unification of search and reasoning. Inspired by the basic level of human problem-solving in cognitive science, we refine a query based on retained user interests. We bring multi-level, multi-perspective strategies from human problem-solving to large-scale search and reasoning. Power/exponential-law-based interest-retention modeling, network-statistics-based data selection, and ontology-supervised hierarchical reasoning are developed to implement these strategies. As an illustration, we investigate case studies based on a large-scale scientific literature dataset, DBLP. The experimental results show that the proposed strategies are potentially effective. © 2010 Springer-Verlag London Limited
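The exponential-law retention idea mentioned in the abstract can be sketched as follows. This is a hedged illustration under assumed parameters, not the paper's actual model: the decay constant, the interest log format, and the `refine_query` scoring are all invented for the example.

```python
# Sketch of exponential interest decay: a topic's weight fades with time
# since last access, and the strongest surviving interests are appended to
# a vague query. All numbers and topic names are illustrative assumptions.
import math

def retained_interest(weight, elapsed, decay=0.1):
    """Interest decays exponentially with elapsed time since last access."""
    return weight * math.exp(-decay * elapsed)

def refine_query(query_terms, interest_log, now, top_k=2):
    """Expand a vague query with the user's currently strongest interests."""
    scored = {t: retained_interest(w, now - last)
              for t, (w, last) in interest_log.items() if t not in query_terms}
    extra = sorted(scored, key=scored.get, reverse=True)[:top_k]
    return list(query_terms) + extra

# topic -> (initial weight, time of last access)
log = {"ontology": (1.0, 2), "databases": (0.8, 9), "reasoning": (0.9, 3)}
print(refine_query(["search"], log, now=10))
```

A power-law variant would replace `math.exp(-decay * elapsed)` with something like `elapsed ** -decay`; the abstract mentions both families.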

    Use of an Intelligent Tutoring System and Academic Performance in an Online College Course for Pre-Nursing Students

    The efficacy of intelligent tutoring systems (ITS) for undergraduate college-level courses was not well established, and the Pearson Dynamic Study Modules (PDSM) program in particular had not been investigated locally. The purpose of this quantitative study was to determine whether the use of an ITS designed with a cognitive learning approach, the PDSM, would enhance pre-nursing students' academic performance. The multiple-attribute decision-making and human plausible reasoning theories grounded the study. A non-experimental quantitative research design was used to determine whether there was a difference in the assessment scores of pre-nursing students in a college-level anatomy and physiology course based on their use of the PDSM while controlling for prior GPA. A multivariate analysis of covariance was used to compare the archival scores of pre-nursing students (N = 99) from twelve online sections of an anatomy and physiology course at a small Midwestern college where the PDSM program was an available study aid. This study examined cumulative use of the ITS over a full 16-week semester and, confirming conclusions from similar studies, found no significant relationship between use of the PDSM and the students' assessment scores when controlling for prior GPA. Recommendations for future studies include a focus on individual chapters and the amount of intelligent-tutor use within each chapter, to determine whether there is any effect on individual chapter assessment scores. Positive social change is facilitated for undergraduate nursing students when research-derived study strategies are identified for inclusion or exclusion to enhance students' academic performance.

    A Computer Model of Conversation

    This paper is addressed to the problem of how it is possible to conduct coherent, purposeful conversations. It describes a computer model of a conversation between two robots, each robot being represented by a section of program. The conversation is conducted in a small subset of English, and is a mixed-initiative dialogue which can involve interruptions and the nesting of one segment of dialogue in another. The conversation is meant to arise naturally from a well defined setting, so that it is clear whether or not the robots are saying appropriate things. They are placed in a simple world of a few objects, and co-operate in order to achieve a practical goal in this world. Their conversation arises out of this common aim; they have to agree on a plan, exchange information, discuss the consequences of their actions, and so on. In previous language-using programs, the conversation has been conducted by a robot and a human operator, rather than by two robots. In these systems, it is almost always the human operator who takes the initiative and determines the overall structure of the dialogue, and the processes by which he does so are hidden away in his mind. The aim of our program is to make these processes totally explicit, and it is for this reason that we have used two robots and avoided human participation. Thus the main focus of interest is not the structuring of individual utterances, but the higher-level organisation of the dialogue, and how the dialogue is related to the private thoughts which underlie it. The program has two kinds of procedure, which we call ROUTINES and GAMES, the Games being used to conduct sections of conversation and the Routines to conduct the underlying thoughts. These procedures can call each other in the normal way. Thus the occurrence of a section of dialogue will be caused by the call of a Game by a Routine; and when the section of dialogue ends, the Game will exit, returning control to the Routine which called it. 
There are several Games, each corresponding to a common conversational pattern, such as a question and its answer, or a plan suggestion and the response to it. The Games determine what can be said, who will say it, how each remark will be analysed, and how it will be responded to. They are thus joint procedures, in which the instructions are divided up between the robots. When a section of dialogue occurs, the relevant Game will be loaded in the minds of both robots, but they will have adopted different roles in the Game, and will consequently perform different instructions and make different utterances.
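The Routine/Game call structure can be illustrated with ordinary procedure calls. This is a loose sketch of the idea, not the paper's program: the function names, the transcript mechanism, and the robot utterances are all invented for the example.

```python
# Sketch of the Routine/Game structure: a Routine (private planning) calls
# a Game (a joint dialogue pattern); the Game conducts one section of
# conversation and returns control to the Routine that called it.

transcript = []

def question_game(asker, answerer, question, answer):
    """A 'Game': a joint procedure covering one question-answer exchange.
    In the real model both robots hold the Game but play different roles;
    here one function plays both roles for simplicity."""
    transcript.append((asker, question))
    transcript.append((answerer, answer))
    return answer

def plan_routine():
    """A 'Routine': private reasoning that calls Games when dialogue is needed."""
    location = question_game("Robot-A", "Robot-B",
                             "Where is the box?", "On the table.")
    # A second exchange within the same plan, mirroring how one segment of
    # dialogue can follow or nest inside another.
    question_game("Robot-A", "Robot-B", "Can you reach it?", "Yes.")
    return location

plan_routine()
for speaker, utterance in transcript:
    print(f"{speaker}: {utterance}")
```

The point of the sketch is the control flow: dialogue sections begin when a Routine calls a Game and end when the Game returns, exactly like nested procedure calls.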

    Design criteria for a knowledge-based English language system for management : an experimental analysis

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Alfred P. Sloan School of Management, 1975. "February 1975." Vita. Bibliography: leaves 240-246. By Ashok Malhotra.

    Tools for developing recycling strategies based on knowledge management: application to the field of process engineering

    In this work, a study is carried out on the development of a methodology for generating and evaluating new recovery trajectories for waste. Three sub-problems were identified. The first concerns a modeling framework allowing a structured and homogeneous representation of each trajectory, together with the indicators chosen for evaluating them, enabling a later selection. The second focuses on developing a methodology and then building a tool for generating new trajectories from known ones. The last sub-problem concerns a second tool, developed to model and assess the generated trajectories. The modeling-framework part seeks to design global structures that allow unit operations to be categorized at several levels. Three levels of decomposition were identified. The highest, the Generic Configuration, describes the trajectory in terms of broad modeling steps. The second level, Generic Treatment, offers sets of generic treatment structures that appear regularly in recovery trajectories. The lowest level focuses on modeling the unit operations. A second, more conceptual framework was created, comprising two elements: blocks and systems. These frameworks are then accompanied by a set of indicators chosen for this purpose. In keeping with a sustainable-development approach, one indicator is selected for each component: economic, environmental, and social. In our study, the social impact is limited to estimating the number of jobs created.
To compute this indicator, a new approach based on a company's economic results was proposed and validated. The tool for generating new trajectories relies on knowledge reuse through a case-based reasoning (CBR) system. Adapting CBR to our problem required resolving several delicate points. First, the structuring of data, and more broadly the generation of source cases, is carried out by a system based on semantic networks and inference mechanisms. A new similarity measure was developed by introducing the notion of a common definition, which links states (descriptions of situations) to states representing general definitions of a set of states. These common definitions allow sets of states to be built at different levels of abstraction and conceptualization. Finally, trajectories are decomposed so that a problem can be solved by solving its associated sub-problems. This decomposition eases the adaptation of trajectories and the estimation of the results of transformations. Based on this method, a tool was developed in logic programming, under Prolog. The modeling and evaluation of recovery routes is done through a dedicated tool. This tool uses meta-programming to build structural models dynamically. The behavior of these structures is governed by constraints defined on the various flows circulating through the whole trajectory. When a trajectory is modeled, these constraints are converted by a parser into a coherent constraint-programming model, which can then be solved by solvers through an interface developed and integrated into the system.
Likewise, several plug-ins were built to analyze and evaluate the trajectories against the chosen criteria.
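The retrieval step of a case-based reasoning system like the one described can be sketched briefly. The thesis's tool is written in Prolog with a custom similarity measure; this Python sketch uses a deliberately crude attribute-overlap similarity, and every case, attribute, and trajectory shown is an invented illustration.

```python
# Toy case-based retrieval: each source case describes a known waste stream
# and its recovery trajectory; a query is matched to the most similar case.

def similarity(query, case_attrs):
    """Fraction of the query's attribute-value pairs matched by the case
    (a crude stand-in for the thesis's common-definition-based measure)."""
    shared = sum(1 for k, v in query.items() if case_attrs.get(k) == v)
    return shared / len(query)

def retrieve(query, case_base):
    """Return the best-matching source case for the query."""
    return max(case_base, key=lambda c: similarity(query, c["attributes"]))

case_base = [
    {"name": "PET bottles",
     "attributes": {"material": "plastic", "form": "rigid"},
     "trajectory": ["sort", "shred", "wash", "extrude"]},
    {"name": "Film scrap",
     "attributes": {"material": "plastic", "form": "film"},
     "trajectory": ["sort", "agglomerate", "extrude"]},
]
query = {"material": "plastic", "form": "rigid"}
best = retrieve(query, case_base)
print(best["name"], best["trajectory"])
```

The adaptation and decomposition steps described in the abstract would then modify the retrieved trajectory to fit the new waste stream, which this sketch does not attempt.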