
    Lessons in Machine Ethics from the Perspective of Two Computational Models of Ethical Reasoning

    In this paper, two computational models of ethical reasoning are described and discussed: one that compares pairs of truth-telling cases and one that retrieves relevant past cases and principles when presented with an ethical dilemma. Lessons learned from developing and experimenting with the two systems, as well as challenges of building programs that reason about ethics, are discussed. Finally, plans for developing an intelligent tutor for ethics, using one of the computational models as a basis, are presented.

    Extensionally Defining Principles and Cases in Ethics: an AI model

    Principles are abstract rules intended to guide decision-makers in making normative judgments in domains like the law, politics, and ethics. It is difficult, however, if not impossible, to define principles in an intensional manner so that they may be applied deductively. The problem is the gap between the abstract, open-textured principles and concrete facts. On the other hand, when expert decision-makers rationalize their conclusions in specific cases, they often link principles to the specific facts of the cases. In effect, these expert-defined associations between principles and facts provide extensional definitions of the principles. The experts operationalize the abstract principles by linking them to the facts. This paper discusses research in which the following hypothesis was empirically tested: extensionally defined principles, as well as cited past cases, can help in predicting the principles and cases that might be relevant in the analysis of new cases. To investigate this phenomenon computationally, a large set of professional ethics cases was analyzed and a computational model called SIROCCO, a system for retrieving principles and past cases, was constructed. Empirical evidence is presented that the operationalization information contained in extensionally defined principles can be leveraged to predict the principles and past cases that are relevant to new problem situations. This is shown through an ablation experiment comparing SIROCCO to a version of itself that does not employ operationalization information. Further, it is shown that SIROCCO’s extensionally defined principles and case citations help it to outperform a full-text retrieval program that does not employ such information.
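    As a concrete illustration of how extensionally defined principles can drive retrieval, the sketch below ranks candidate principles for a new case by overlap with the fact patterns that past opinions have linked to each principle. The principle names, fact features, and similarity measure are hypothetical assumptions for exposition, not SIROCCO's actual representation or matching algorithm; dropping the linked fact sets in favor of plain text similarity would correspond, roughly, to the ablated comparison systems.

```python
# Illustrative sketch only: score candidate principles for a new case by
# overlap with the fact patterns that past opinions have linked to them
# (their extensional definitions). Names and features are hypothetical.
from collections import defaultdict

# principle -> fact sets cited alongside it in past case opinions
extensional_defs = {
    "hold-paramount-public-safety": [
        {"structural-defect-found", "employer-ignores-report", "risk-to-public"},
        {"design-flaw-known", "risk-to-public"},
    ],
    "maintain-client-confidentiality": [
        {"client-proprietary-data", "third-party-requests-disclosure"},
    ],
}

def rank_principles(new_case_facts):
    """Rank principles by the best Jaccard overlap between the new case's
    facts and any fact set in the principle's extensional definition."""
    scores = defaultdict(float)
    for principle, fact_sets in extensional_defs.items():
        for facts in fact_sets:
            overlap = len(new_case_facts & facts) / len(new_case_facts | facts)
            scores[principle] = max(scores[principle], overlap)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

new_case = {"structural-defect-found", "risk-to-public", "deadline-pressure"}
print(rank_principles(new_case))
```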

    What's in a Cluster? Automatically Detecting Interesting Interactions in Student E-Discussions

    Students in classrooms are starting to use visual argumentation tools for e-discussions – a form of debate in which contributions are written into graphical shapes and linked to one another according to whether they, for instance, support or oppose one another. In order to moderate several simultaneous e-discussions effectively, teachers must be alerted regarding events of interest. We focused on the identification of clusters of contributions representing interaction patterns that are of pedagogical interest (e.g., a student clarifies his or her opinion and then gets feedback from other students). We designed an algorithm that takes an example cluster as input and uses inexact graph matching, text analysis, and machine learning classifiers to search for similar patterns in a given corpus. The method was evaluated on an annotated dataset of real e-discussions and detected almost 80% of the annotated clusters while maintaining acceptable precision.
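    The core matching step can be pictured with a small sketch: a query cluster is a tiny labelled graph, and candidate subgraphs of the e-discussion are scored by how many node and edge labels they preserve. The labels, scoring, and threshold below are illustrative assumptions; the algorithm described in the paper additionally uses text analysis and machine-learned classifiers rather than this brute-force enumeration.

```python
# Minimal sketch of inexact pattern matching over an e-discussion graph.
# Node/edge labels and the threshold are illustrative assumptions, not the
# paper's implementation (which also uses text analysis and ML classifiers).
from itertools import permutations

def match_score(pattern_nodes, pattern_edges, graph_nodes, graph_edges, mapping):
    """Fraction of pattern nodes and edges whose labels are preserved under mapping."""
    hits, total = 0, len(pattern_nodes) + len(pattern_edges)
    for p, g in mapping.items():
        hits += pattern_nodes[p] == graph_nodes[g]
    for (src, dst, label) in pattern_edges:
        hits += (mapping[src], mapping[dst], label) in graph_edges
    return hits / total

def find_clusters(pattern_nodes, pattern_edges, graph_nodes, graph_edges, threshold=0.75):
    """Return candidate node mappings whose subgraphs approximately match the pattern."""
    matches = []
    for combo in permutations(graph_nodes, len(pattern_nodes)):
        mapping = dict(zip(pattern_nodes, combo))
        score = match_score(pattern_nodes, pattern_edges, graph_nodes, graph_edges, mapping)
        if score >= threshold:
            matches.append((score, mapping))
    return sorted(matches, key=lambda m: m[0], reverse=True)

# Example pattern: a student clarifies an opinion and receives feedback on it.
pattern_nodes = {"p1": "clarification", "p2": "feedback"}
pattern_edges = {("p2", "p1", "responds-to")}
graph_nodes = {"c1": "claim", "c2": "clarification", "c3": "feedback", "c4": "oppose"}
graph_edges = {("c2", "c1", "supports"), ("c3", "c2", "responds-to"), ("c4", "c1", "opposes")}
print(find_clusters(pattern_nodes, pattern_edges, graph_nodes, graph_edges))
```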

    Assessing Relevance With Extensionally Defined Principles and Cases

    Expert decision-makers often explain decisions by citing general principles. In some domains, however, it is nearly impossible to define principles intensionally so that they may be applied deductively. After investigating hundreds of professional ethics case opinions, we hypothesized that the decision-makers’ explanations extensionally defined principles over time, in effect operationalizing them. To model this phenomenon computationally, we constructed SIROCCO, a system for retrieving principles and past cases. This paper presents empirical evidence that operationalization information can be leveraged to predict relevant principles and past cases more accurately than competing approaches that do not use such information.

    Helping a CBR Program Know What it Knows

    Case-based reasoning systems need to know the limitations of their expertise. Having found the known source cases most relevant to a target problem, they must assess whether those cases are similar enough to the problem to warrant venturing advice. In experimenting with SIROCCO, a two-stage case-based retrieval program that uses structural mapping to analyze and provide advice on engineering ethics cases, we concluded that it would sometimes be better for the program to admit that it lacks the knowledge to suggest relevant codes and past source cases. We identified and encoded three strategic metarules to help it decide. The metarules leverage incrementally deeper knowledge about SIROCCO's matching algorithm to help the program "know what it knows." Experiments demonstrate that the metarules can improve the program's overall advice-giving performance.
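    The flavor of such metarules can be sketched as simple abstention checks over the retrieval results. The thresholds and features below are invented for illustration and are not the three metarules encoded in SIROCCO.

```python
# Illustrative sketch of "know what you know" metarules for a case-based
# retriever: decide whether the best-matching source cases are close enough
# to the target problem to justify giving advice. Thresholds and features
# are hypothetical, not the metarules described in the paper.

def should_give_advice(best_matches):
    """best_matches: list of dicts with a structural 'similarity' in [0, 1]
    and a count of 'critical_facts_unmatched' in the target problem."""
    if not best_matches:
        return False
    top = max(best_matches, key=lambda m: m["similarity"])
    # Metarule 1: abstain if even the best match is weak overall.
    if top["similarity"] < 0.5:
        return False
    # Metarule 2: abstain if critical target facts found no counterpart.
    if top["critical_facts_unmatched"] > 2:
        return False
    # Metarule 3: abstain if no second case corroborates the top match.
    runners_up = [m for m in best_matches if m is not top and m["similarity"] >= 0.4]
    if not runners_up:
        return False
    return True

print(should_give_advice([
    {"similarity": 0.8, "critical_facts_unmatched": 1},
    {"similarity": 0.6, "critical_facts_unmatched": 3},
]))  # True under these illustrative thresholds
```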

    An AI Investigation of Citation's Epistemological Role

    This paper describes how we used an AI model for retrieving ethics cases to investigate empirically the epistemological contribution of decision-makers' citing of cases and code provisions in justifying decisions. In practical ethics, as in law, it is impossible to define abstract principles intensionally so that they may be applied deductively. After investigating hundreds of professional ethics case opinions, we hypothesized that the decision-makers’ explanations extensionally defined principles over time, in effect operationalizing them. We constructed SIROCCO, a system for retrieving principles and past ethics cases. We used this computational model to conduct an ablation experiment concerning a core set of operationalization techniques. This paper presents empirical evidence that the operationalization information supports predictions of the relevant principles and past cases more accurately than competing approaches that do not use such information.

    Toward Tutoring Help Seeking: Applying Cognitive Modeling to Meta-Cognitive Skills

    The goal of our research is to investigate whether a Cognitive Tutor can be made more effective by extending it to help students acquire help-seeking skills. We present a preliminary model of help-seeking behavior that will provide the basis for a Help-Seeking Tutor Agent. The model, implemented by 57 production rules, captures both productive and unproductive help-seeking behavior. As a first test of the model’s efficacy, we used it off-line to evaluate students’ help-seeking behavior in an existing data set of student-tutor interactions. We found that 72% of all student actions represented unproductive help-seeking behavior. Consistent with some of our earlier work (Aleven & Koedinger, 2000), we found a proliferation of hint abuse (e.g., using hints to find answers rather than trying to understand). We also found that students frequently avoided using help when it was likely to be of benefit and often acted in a quick, possibly undeliberate manner. Students’ help-seeking behavior accounted for as much variance in their learning gains as their performance at the cognitive level (i.e., the errors that they made with the tutor). These findings indicate that the help-seeking model needs to be adjusted, but they also underscore the importance of the educational need that the Help-Seeking Tutor Agent aims to address.
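    The kind of labelling such a model performs can be sketched as a small rule set over logged tutor actions. The conditions and thresholds below are simplified assumptions; the actual model is expressed as 57 production rules within the Cognitive Tutor architecture.

```python
# Illustrative classification of student actions as productive or unproductive
# help seeking. Conditions and thresholds are simplified assumptions, not the
# paper's 57 production rules.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str             # "attempt" or "hint-request"
    seconds_spent: float   # time spent before acting on the step
    prior_errors: int      # errors already made on this step
    hint_level: int        # how deep into the hint sequence the student is

def classify(action: Action) -> str:
    if action.kind == "hint-request":
        if action.seconds_spent < 2:
            return "unproductive: clicking through hints without reading"
        if action.hint_level >= 3 and action.seconds_spent < 5:
            return "unproductive: hint abuse (drilling down to the answer)"
        return "productive: deliberate help request"
    if action.kind == "attempt":
        if action.prior_errors >= 2 and action.seconds_spent < 3:
            return "unproductive: help avoidance, quick guessing after errors"
        return "productive: reasoned attempt"
    return "unknown action type"

print(classify(Action("hint-request", seconds_spent=1.2, prior_errors=0, hint_level=1)))
```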

    The Cognitive Tutor Authoring Tools (CTAT): Preliminary Evaluation of Efficiency Gains

    Intelligent Tutoring Systems have been shown to be effective in a number of domains, but they remain hard to build, with estimates of 200-300 hours of development per hour of instruction. Two goals of the Cognitive Tutor Authoring Tools (CTAT) project are to (a) make tutor development more efficient for both programmers and non-programmers and (b) produce scientific evidence indicating which tool features lead to improved efficiency. CTAT supports development of two types of tutors, Cognitive Tutors and Example-Tracing Tutors, which represent different trade-offs in terms of ease of authoring and generality. In preliminary small-scale controlled experiments involving basic Cognitive Tutor development tasks, we found that CTAT made development 1.4 to 2 times faster. We expect that continued development of CTAT, informed by repeated evaluations involving increasingly complex authoring tasks, will lead to further efficiency gains.
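    For a rough sense of scale, the two reported figures can be combined directly, assuming (which the preliminary experiments do not yet establish) that the observed speedup generalizes to full tutor development:

```python
# Back-of-the-envelope only: combining the commonly cited 200-300 development
# hours per hour of instruction with the observed 1.4x-2x speedups. Assumes
# the speedup generalizes beyond the small-scale tasks studied.
baseline_hours = (200, 300)
for speedup in (1.4, 2.0):
    low, high = (h / speedup for h in baseline_hours)
    print(f"{speedup}x faster: ~{low:.0f}-{high:.0f} hours per instruction hour")
```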

    Opening the Door to Non-Programmers: Authoring Intelligent Tutor Behavior by Demonstration

    Intelligent tutoring systems are quite difficult and time-intensive to develop. In this paper, we describe a method and set of software tools that ease the process of cognitive task analysis and tutor development by allowing the author to demonstrate, instead of programming, the behavior of an intelligent tutor. We focus on the subset of our tools that allow authors to create “Pseudo Tutors” that exhibit the behavior of intelligent tutors without requiring AI programming. Authors build user interfaces by direct manipulation and then use a Behavior Recorder tool to demonstrate alternative correct and incorrect actions. The resulting behavior graph is annotated with instructional messages and knowledge labels. We present some preliminary evidence of the effectiveness of this approach, both in terms of reduced development time and learning outcomes. Pseudo Tutors have now been built for economics, analytic logic, mathematics, and language learning. Our data support an estimated ratio of about 25:1 of development time to instruction time for Pseudo Tutors, which compares favorably to the 200:1 estimate for Intelligent Tutors, though we acknowledge and discuss limitations of such estimates.
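    The behavior-graph idea can be sketched as follows: demonstrated correct and incorrect student actions become labelled edges, each annotated with a feedback message and a knowledge (skill) label, and the tutor traces a student's input along the graph. The structure, labels, and messages below are illustrative and are not CTAT's actual file format or API.

```python
# Minimal sketch of an example-tracing behavior graph. The structure, labels,
# and messages are illustrative; this is not CTAT's actual representation.

behavior_graph = {
    "start": [
        # (demonstrated input, correct?, feedback message, skill label, next state)
        ("x = 4",  True,  "Correct: divide both sides by 2.",                    "divide-both-sides", "solved"),
        ("x = 16", False, "It looks like you multiplied instead of dividing.",   "divide-both-sides", "start"),
    ],
}

def example_trace(state, student_input):
    """Match the student's input against the actions demonstrated at this state."""
    for demo_input, correct, message, skill, next_state in behavior_graph.get(state, []):
        if student_input == demo_input:
            return correct, message, skill, next_state
    return False, "That step was not demonstrated; try again or ask for a hint.", None, state

print(example_trace("start", "x = 16"))
```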

    Creating Cognitive Tutors for Collaborative Learning: Steps Toward Realization

    Our long-term research goal is to provide cognitive tutoring of collaboration within a collaborative software environment. This is a challenging goal, as intelligent tutors have traditionally focused on cognitive skills, rather than on the skills necessary to collaborate successfully. In this paper, we describe progress we have made toward this goal. Our first step was to devise a process known as bootstrapping novice data (BND), in which student problem-solving actions are collected and used to begin the development of a tutor. Next, we implemented BND by integrating a collaborative software tool, Cool Modes, with software designed to develop cognitive tutors (i.e., the Cognitive Tutor Authoring Tools, or CTAT). Our initial implementation of BND provides a means to directly capture data as a foundation for a collaboration tutor but does not yet fully support tutoring. Our next step was to perform two exploratory studies in which dyads of students used our integrated BND software to collaborate in solving modelling tasks. The data collected from these studies led us to identify five dimensions of collaborative and problem-solving behavior that point to the need for abstraction of student actions to better recognize, analyze, and provide feedback on collaboration. We also interviewed a domain expert who provided evidence for the advantage of bootstrapping over manual creation of a collaboration tutor. We discuss plans to use these analyses to inform and extend our tools so that we can eventually reach our goal of tutoring collaboration.