
    Personalised correction, feedback, and guidance in an automated tutoring system for skills training

    In many domains, skills are as important as knowledge, and active learning and training are effective forms of education. We present an automated skills training system for a database programming environment that promotes procedural knowledge acquisition and skills training. The system provides support features such as correction of solutions, feedback, and personalised guidance, similar to interactions with a human tutor. Specifically, we address synchronous feedback and guidance based on personalised assessment. Each of these features is automated and includes a level of personalisation and adaptation. At the core of the system is a pattern-based error classification and correction component that analyses student input
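    The abstract does not describe the implementation, but a pattern-based error classifier of the kind it mentions could be sketched as follows. This is a minimal illustration for a database (SQL) setting, not the authors' system; the patterns and feedback messages are invented for the example.

    ```python
    import re

    # Hypothetical error patterns for student SQL: each pattern maps to an
    # error class and a tutor-style feedback message. Illustrative only.
    ERROR_PATTERNS = [
        (r"\bWHERE\b.*\s=\s*NULL\b", "null_comparison",
         "Comparisons with NULL always fail; use IS NULL instead of = NULL."),
        (r"\bWHERE\b[^;]*\bCOUNT\s*\(", "aggregate_in_where",
         "Aggregates such as COUNT cannot appear in WHERE; use HAVING after GROUP BY."),
        (r"\bSELECT\s+\*?\s*FROM\s*$", "incomplete_query",
         "Your FROM clause names no table. Which table holds the data you need?"),
    ]

    def classify(student_sql: str):
        """Return (error_class, feedback) for the first matching pattern."""
        for pattern, error_class, feedback in ERROR_PATTERNS:
            if re.search(pattern, student_sql, flags=re.IGNORECASE | re.DOTALL):
                return error_class, feedback
        return "no_known_error", "No known error pattern matched."

    print(classify("SELECT name FROM students WHERE grade = NULL"))
    ```

    A real system would pair each error class with a correction step and track which classes a given student triggers repeatedly, which is where the personalisation described above would enter.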

    Automatic Generation of Intelligent Tutoring Capabilities via Educational Data Mining

    Intelligent Tutoring Systems (ITSs) that adapt to an individual student’s needs have shown significant improvement in achievement over non-adaptive instruction (Murray 1999). This improvement occurs due to the individualized instruction and feedback that an ITS provides. In order to achieve the benefits that ITSs provide, we must find a way to simplify their creation. Therefore, we have created methods that can use data to automatically generate hints to adapt computer-aided instruction to help individual students. Our MDP method uses data from past student attempts on a given problem to generate a graph of the likely paths students take to solve it. These graphs can be used by educators to understand clearly how students are solving the problem, or to provide hints for new students working the problem by pointing them down a successful solution path. We introduce the Hint Factory, an implementation of the MDP method in an actual tutor used to solve logic proofs. We show that the Hint Factory can successfully help students solve more problems, and that students with access to hints are more likely to attempt harder problems than those without hints. In addition, we have enhanced the MDP method by creating a “utility” function that allows MDPs to be created when the problem solution may not be labeled. We show that this utility function performs as well as the traditional MDP method for our logic problems. We also created a Bayesian Knowledge Base to combine the information from multiple MDPs into a single corpus that allows the Hint Factory to provide hints on new problems for which no student data exists. Finally, we applied the MDP method to create models for other domains, including stoichiometry and algebra. This work shows that it is possible to use data to create ITS capabilities, primarily hint generation, automatically, in ways that help students solve more problems, and harder ones, and it builds a foundation for effective visualization and exploration of student work for both teachers and researchers.
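    The core idea, computing a value for each solution state observed in past student work and hinting toward the highest-value successor, can be sketched with standard value iteration. The states, rewards, and discount below are invented for illustration; the actual tutor's state representation and parameters are described in the dissertation.

    ```python
    # Sketch of MDP-based hint selection in the spirit of the Hint Factory:
    # states are partial solutions seen in past student work, edges are
    # student actions, and goal states carry a large reward. All numeric
    # values here are assumptions, not the dissertation's parameters.
    GAMMA = 0.9          # discount factor (assumed)
    GOAL_REWARD = 100.0  # reward for reaching a correct solution (assumed)
    STEP_REWARD = -1.0   # small penalty per action (assumed)

    # Toy solution graph: state -> successor states reached by students.
    graph = {
        "start": ["s1", "s2"],
        "s1": ["goal"],
        "s2": ["s1", "dead_end"],
        "dead_end": [],
        "goal": [],
    }
    goals = {"goal"}

    def value_iterate(graph, goals, sweeps=50):
        """Compute state values by repeated Bellman backups."""
        V = {s: (GOAL_REWARD if s in goals else 0.0) for s in graph}
        for _ in range(sweeps):
            for s, succs in graph.items():
                if s in goals or not succs:
                    continue
                V[s] = max(STEP_REWARD + GAMMA * V[t] for t in succs)
        return V

    def hint(state, graph, V):
        """Point the student toward the highest-value next state."""
        succs = graph[state]
        return max(succs, key=lambda t: V[t]) if succs else None

    V = value_iterate(graph, goals)
    print(hint("s2", graph, V))  # -> "s1", the step on the successful path
    ```

    Because values flow backward from goal states, a hint is available from any state on a path that some past student completed successfully, which matches the graph-based behaviour described above.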

    The Hanabi Challenge: A New Frontier for AI Research

    From the early days of computing, games have been important testbeds for studying how well machines can do sophisticated decision making. In recent years, machine learning has made dramatic advances with artificial agents reaching superhuman performance in challenge domains like Go, Atari, and some variants of poker. As with their predecessors of chess, checkers, and backgammon, these game domains have driven research by providing sophisticated yet well-defined challenges for artificial intelligence practitioners. We continue this tradition by proposing the game of Hanabi as a new challenge domain with novel problems that arise from its combination of purely cooperative gameplay with two to five players and imperfect information. In particular, we argue that Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground. We believe developing novel techniques for such theory of mind reasoning will not only be crucial for success in Hanabi, but also in broader collaborative efforts, especially those with human partners. To facilitate future research, we introduce the open-source Hanabi Learning Environment, propose an experimental framework for the research community to evaluate algorithmic advances, and assess the performance of current state-of-the-art techniques.
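    A minimal interaction loop with the authors' open-source Hanabi Learning Environment might look like the sketch below. It assumes the package's Python rl_env wrapper with a gym-like make/reset/step interface; the observation keys shown are from my reading of that wrapper and should be checked against the repository.

    ```python
    import random

    # Assumes the hanabi_learning_environment package and its rl_env
    # wrapper; verify the current API against the project's repository.
    from hanabi_learning_environment import rl_env

    env = rl_env.make("Hanabi-Full", num_players=2)
    observations = env.reset()

    done = False
    score = 0.0
    while not done:
        # Each player sees only its own (imperfect-information) observation,
        # which includes the legal moves available this turn.
        current = observations["current_player"]
        legal_moves = observations["player_observations"][current]["legal_moves"]
        action = random.choice(legal_moves)  # random baseline agent
        observations, reward, done, _ = env.step(action)
        score += reward

    print("episode score:", score)
    ```

    A random agent scores near zero in Hanabi; the gap between this baseline and expert human play is part of what makes the game a useful benchmark for the cooperative, theory-of-mind reasoning the paper emphasizes.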

    Structure learning of graphical models for task-oriented robot grasping

    In the collective imagination, a robot is a human-like machine, like the androids of science fiction. However, the robots encountered most frequently are machines that do work that is too dangerous, boring, or onerous for humans. Most of the robots in the world are of this type; they can be found in the automotive, medical, manufacturing, and space industries. A robot, then, is a system that contains sensors, control systems, manipulators, power supplies, and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans select the best grasp from a vast repertoire not only by considering the physical attributes of the object but also by considering the effect they wish to obtain. For this reason, our study in the area of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This representation has several advantages: it takes into account the uncertainty of the real world, allowing the system to deal with sensor noise; it encodes notions of causality; and it provides a unified framework for learning. Since the network is currently implemented by hand from human expert knowledge, it is of great interest to implement an automated method to learn its structure: as more tasks and object features are introduced, a complex network design based only on expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyse the real data used in the expert-modelled network, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
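    A common approach to the structure learning step is score-based search over candidate networks. The sketch below uses the pgmpy library's hill-climbing search with a BIC score; the data columns are invented stand-ins for the thesis's sensor and task variables, and the pgmpy calls should be checked against its documentation.

    ```python
    import pandas as pd
    from pgmpy.estimators import HillClimbSearch, BicScore

    # Invented stand-in data: each row is one grasp trial, with discretised
    # sensor-derived object features and the symbolic task label.
    data = pd.DataFrame({
        "size":     ["small", "large", "small", "large", "small", "large"],
        "shape":    ["box", "cylinder", "box", "box", "cylinder", "cylinder"],
        "task":     ["pour", "place", "place", "pour", "pour", "place"],
        "grasp_ok": ["yes", "no", "yes", "yes", "no", "yes"],
    })

    # Score-based structure learning: hill-climb over candidate DAGs,
    # scoring each with BIC; the learned edges can then be compared with
    # the network designed by the human expert.
    search = HillClimbSearch(data)
    learned_model = search.estimate(scoring_method=BicScore(data))
    print(sorted(learned_model.edges()))
    ```

    With realistic amounts of trial data, the learned edge set gives a concrete object of comparison against the expert's hand-built structure, which is exactly the evaluation the thesis proposes.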

    Information selection and belief updating in hypothesis evaluation

    This thesis is concerned with the factors underlying both the selection and the use of evidence in the testing of hypotheses. The work it describes examines the role played in hypothesis evaluation by background knowledge about the probability of events in the environment, as well as the influence of more general constraints. Experiments on information choice showed that subjects were sensitive both to explicitly presented probabilistic information and to the likelihood of evidence with regard to background beliefs. It is argued, in contrast with other views in the literature, that subjects' choice of evidence to test hypotheses is rational, allowing for certain constraints on subjects' cognitive representations. The majority of experiments in this thesis, however, focus on how the information subjects receive when testing hypotheses affects their beliefs. A major finding is that receipt of early information creates expectations which influence the response to later information. This typically produces a recency effect, in which presenting strong evidence after weak evidence affects beliefs more than presenting the same evidence in the opposite order. These findings run contrary to the view of the belief revision process prevalent in the literature, in which it is generally assumed that the effects of successive pieces of information are independent. The experiments reported here also provide evidence that processes of selective attention influence evidence interpretation: subjects tend to focus on the most informative part of the evidence and may switch focus from one part of the evidence to another as the task progresses. In some cases, such changes of attention can eliminate the recency effect. In summary, the present research provides new evidence about the role of background beliefs, expectations, and cognitive constraints in the selection and use of information to test hypotheses. Several new findings emerge which require revision to current accounts of information integration in the belief revision literature.
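    The independence assumption the thesis argues against can be made precise with sequential Bayesian updating, under which evidence order cannot matter. A short statement of the standard result (textbook Bayes, not taken from the thesis):

    ```latex
    % For hypothesis H and conditionally independent evidence E_1, E_2,
    % updating in odds form multiplies likelihood ratios:
    \[
      \frac{P(H \mid E_1, E_2)}{P(\neg H \mid E_1, E_2)}
      \;=\;
      \frac{P(H)}{P(\neg H)}
      \cdot
      \frac{P(E_1 \mid H)}{P(E_1 \mid \neg H)}
      \cdot
      \frac{P(E_2 \mid H)}{P(E_2 \mid \neg H)}
    \]
    % Multiplication commutes, so weak-then-strong and strong-then-weak
    % presentations must yield the same final belief. The recency effects
    % reported above are therefore departures from this normative model.
    ```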

    Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials

    Personalized learning considers that the causal effects of a studied learning intervention may differ for the individual student (e.g., maybe girls do better with video hints while boys do better with text hints). To evaluate a learning intervention inside ASSISTments, we run a randomized controlled trial (RCT) by randomly assigning students to either a control condition or a treatment condition. Making inferences about the causal effects of studied interventions is a central problem. Counterfactual inference answers “What if?” questions, such as “Would this particular student benefit more from the video hint instead of the text hint when the student cannot solve a problem?” Counterfactual prediction provides a way to estimate individual treatment effects and helps us assign students to the learning intervention that leads to better learning. A variant of Michael Jordan's Residual Transfer Networks was proposed for the counterfactual inference. The model first uses feed-forward neural networks to learn a balancing representation of students by minimizing the distance between the distributions of the control and treated populations, and then adopts a residual block to estimate the individual treatment effect. Students in an RCT have usually done a number of problems prior to participating in it, so each student has a sequence of actions (a performance sequence). We proposed a pipeline that uses the performance sequence to improve the performance of counterfactual inference. Since deep learning has achieved great success in learning representations from raw logged data, student representations were learned by applying a sequence autoencoder to the performance sequences, and these representations were then incorporated into the model for counterfactual inference. Empirical results showed that the representations learned from the sequence autoencoder improved the performance of counterfactual inference.
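    The described architecture, a shared balancing representation with a residual block whose output plays the role of the individual treatment effect, can be sketched roughly as below. This is a reconstruction under stated assumptions (PyTorch, a simple mean-feature discrepancy standing in for the paper's distribution distance, invented layer sizes), not the authors' implementation.

    ```python
    import torch
    import torch.nn as nn

    class CounterfactualNet(nn.Module):
        """Sketch: shared encoder + control head + residual treatment head."""

        def __init__(self, n_features, n_hidden=64):
            super().__init__()
            # Feed-forward encoder that learns the balancing representation.
            self.encoder = nn.Sequential(
                nn.Linear(n_features, n_hidden), nn.ReLU(),
                nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            )
            self.control_head = nn.Linear(n_hidden, 1)  # outcome under control
            # Residual block: treated outcome = control outcome + residual,
            # so the residual estimates the individual treatment effect.
            self.residual_head = nn.Sequential(
                nn.Linear(n_hidden, n_hidden), nn.ReLU(),
                nn.Linear(n_hidden, 1),
            )

        def forward(self, x, treated):
            phi = self.encoder(x)
            y0 = self.control_head(phi)
            ite = self.residual_head(phi)
            return y0 + treated * ite, phi, ite  # treated is a 0/1 column

    def balance_penalty(phi, treated):
        """Crude distribution-distance proxy: squared difference of group
        means (the paper would use a proper discrepancy; assumption here)."""
        mask = treated.squeeze(-1).bool()
        return ((phi[mask].mean(0) - phi[~mask].mean(0)) ** 2).sum()

    # Toy training step on random data (all shapes and weights assumed).
    model = CounterfactualNet(n_features=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 10)
    treated = torch.randint(0, 2, (32, 1)).float()
    y_obs = torch.randn(32, 1)

    y_hat, phi, _ = model(x, treated)
    loss = nn.functional.mse_loss(y_hat, y_obs) + 0.1 * balance_penalty(phi, treated)
    opt.zero_grad()
    loss.backward()
    opt.step()
    ```

    The sequence-autoencoder extension described above would replace or augment the raw features x with representations learned from each student's performance sequence before they enter the encoder.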