3 research outputs found

    How do people learn how to plan?

    How does the brain learn how to plan? We reverse-engineer people's underlying learning mechanisms by combining rational process models of cognitive plasticity with recently developed empirical methods that allow us to trace the temporal evolution of people's planning strategies. We find that our Learned Value of Computation (LVOC) model accurately captures people's average learning curve. However, there were also substantial individual differences in metacognitive learning that are best understood in terms of multiple different learning mechanisms, including strategy selection learning. Furthermore, we observed that LVOC could not fully capture people's ability to adaptively decide when to stop planning. We successfully extended the LVOC model to address these discrepancies. Our models broadly capture people's ability to improve their decision mechanisms and represent a significant step towards reverse-engineering how the brain learns increasingly effective cognitive strategies through its interaction with the environment.
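
    The abstract describes the LVOC model only at a high level. As a rough illustration of the general idea, the following is a minimal Python sketch of an agent that learns a linear estimate of the value of computation and uses it to decide whether to keep planning; the feature representation, learning rate, and stopping rule here are assumptions for illustration, not the paper's implementation.

    import numpy as np

    class VOCLearner:
        """Hypothetical sketch: learn a linear estimate of the value of computation (VOC)."""

        def __init__(self, n_features, lr=0.1):
            self.w = np.zeros(n_features)   # weights of the linear VOC approximation
            self.lr = lr                    # learning rate (assumed, not from the paper)

        def voc(self, features):
            """Estimated net benefit of performing one more planning operation."""
            return float(self.w @ np.asarray(features, dtype=float))

        def update(self, features, observed_gain, cost):
            """Move the estimate toward the observed improvement minus the cost of computing."""
            features = np.asarray(features, dtype=float)
            error = (observed_gain - cost) - self.voc(features)
            self.w += self.lr * error * features

        def should_keep_planning(self, features):
            """Stop planning once the estimated value of further computation drops to zero or below."""
            return self.voc(features) > 0.0

    In use, such a learner would call should_keep_planning before each simulated planning step and call update afterwards with how much that step actually improved the plan and what it cost.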

    The Profile of Students’ Questions Based on Revised Bloom’s Taxonomy on Life Organizations System Topic

    The aim of this study was to determine the profile of student questions based on Revised Bloom's Taxonomy. The study used a quantitative approach with a descriptive research method. Research data were obtained through analysis of student question sheets and interviews; an analysis sheet was used to record student questions during classroom learning. Based on the results of the analysis, 20% of students' questions were at cognitive level C1 (Remembering), 37% at C2 (Understanding), 20% at C3 (Applying), 11.5% at C4 (Analyzing), 11.5% at C5 (Evaluating), and 0% at C6 (Creating). Based on the knowledge dimension, factual-level questions were asked by 26% of students and conceptual-level questions by 74%, while there were no questions at the procedural or metacognitive levels.
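
    The profile above is reported only as final percentages. As an illustrative sketch, such a profile could be tallied from coded question sheets roughly as follows; the category labels and sample data are hypothetical, not the study's dataset.

    from collections import Counter

    # Hypothetical coded questions: (cognitive level, knowledge dimension) per question.
    coded_questions = [
        ("C2", "conceptual"), ("C1", "factual"), ("C2", "conceptual"),
        ("C3", "conceptual"), ("C4", "conceptual"), ("C1", "factual"),
    ]

    def profile(pairs, index):
        """Percentage of questions per category along one dimension (0=cognitive, 1=knowledge)."""
        counts = Counter(pair[index] for pair in pairs)
        total = sum(counts.values())
        return {level: 100 * n / total for level, n in counts.items()}

    print(profile(coded_questions, 0))  # shares per cognitive level, e.g. C1, C2, ...
    print(profile(coded_questions, 1))  # shares per knowledge dimension, e.g. factual, conceptual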

    Data-driven Metareasoning for Collaborative Autonomous Systems

    When coordinating their actions to accomplish a mission, the agents in a multi-agent system may use a collaboration algorithm to determine which agent performs which task. This paper describes a novel data-driven metareasoning approach that generates a metareasoning policy that the agents can use whenever they must collaborate to assign tasks. This metareasoning approach collects data about the performance of the algorithms at many decision points and uses this data to train a set of surrogate models that can estimate the expected performance of different algorithms. This yields a metareasoning policy that, based on the current state of the system, estimates the algorithms' expected performance and chooses the best one. For a ship protection scenario, computational results show that one version of the metareasoning policy performed as well as the best component algorithm but required less computational effort. The proposed data-driven metareasoning approach could be a promising tool for developing policies to control multi-agent autonomous systems. This work was supported in part by the U.S. Naval Air Warfare Center-Aircraft Division.
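
    The approach is described above only in outline. As a hedged sketch of the general shape of such a policy, the following trains one surrogate model per candidate collaboration algorithm on logged performance data and, at run time, picks the algorithm whose surrogate predicts the best performance for the current system state; the regressor choice, feature encoding, and logging format are assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    class MetareasoningPolicy:
        """Sketch: choose the collaboration algorithm with the best predicted performance."""

        def __init__(self, algorithm_names):
            self.algorithm_names = algorithm_names
            # One surrogate model per candidate algorithm (regressor choice is an assumption).
            self.surrogates = {name: RandomForestRegressor() for name in algorithm_names}

        def fit(self, logs):
            """logs: {algorithm_name: (state_features, observed_performance)} from past decision points."""
            for name, (X, y) in logs.items():
                self.surrogates[name].fit(np.asarray(X), np.asarray(y))

        def choose(self, state_features):
            """Return the algorithm whose surrogate predicts the best performance for this state."""
            x = np.asarray(state_features, dtype=float).reshape(1, -1)
            predictions = {name: float(model.predict(x)[0]) for name, model in self.surrogates.items()}
            return max(predictions, key=predictions.get)

    At each task-assignment decision point, the agents would call choose with the current system state and run only the selected algorithm, rather than evaluating every candidate.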