34,524 research outputs found

    Learning with a probabilistic teacher

    Get PDF
    A learning scheme for solving unsupervised learning problems with convergence to correct estimates, and for state estimation of Gauss-Markov sequences with additive and multiplicative observation noise
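
    As a rough illustration of the state-estimation setting mentioned above, the sketch below runs a scalar Kalman filter on a Gauss-Markov sequence with additive observation noise only; the multiplicative-noise and unsupervised-learning aspects of the paper are not modeled, and all parameter values are invented for the example.

```python
# Minimal sketch: scalar Kalman filter for a Gauss-Markov sequence
# x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k  (additive observation noise only).
# Parameters below are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.1, 0.5          # state transition, process var., observation var.
n = 100

# Simulate the Gauss-Markov state sequence and its noisy observations.
x = np.zeros(n)
y = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(scale=np.sqrt(q))
    y[k] = x[k] + rng.normal(scale=np.sqrt(r))

# Standard predict/update recursion for the state estimate.
x_hat, p = 0.0, 1.0
estimates = []
for k in range(n):
    x_pred = a * x_hat
    p_pred = a * a * p + q
    gain = p_pred / (p_pred + r)
    x_hat = x_pred + gain * (y[k] - x_pred)
    p = (1.0 - gain) * p_pred
    estimates.append(x_hat)

print("final estimate %.3f vs. true state %.3f" % (estimates[-1], x[-1]))
```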

    Time-Dependent Performance Prediction System for Early Insight in Learning Trends

    Get PDF
    Performance prediction systems reveal the learning status of students during a term and produce estimates of their future status, which is invaluable information for teachers. Most current systems classify students statically at a single point in time and show results in simple visual modes. This paper presents an innovative system with progressive, time-dependent and probabilistic performance predictions. The system produces biweekly probabilistic classifications of students into three groups: high, medium or low performance. The system is empirically tested, and data is gathered, analysed and presented. Predictions are shown as point graphs over time, along with calculated learning trends. Summary blocks with the latest predictions and trends are also provided for teacher efficiency. Moreover, methods for selecting the best moments for teacher intervention are derived from the predictions. The evidence gathered shows the system's potential to give teachers insight into students' learning trends, diagnose learning status early, and select the best moment for intervention
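
    To make the idea of a biweekly probabilistic classification concrete, here is a minimal sketch of a three-class (high/medium/low) probabilistic classifier over per-student features; the feature names, the synthetic data, and the use of multinomial logistic regression are illustrative assumptions, not the system described in the paper.

```python
# Illustrative sketch: probabilistic high/medium/low classification of students
# from features available up to a given week. Features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_students = 300

# Hypothetical features observed up to week k: mean assignment score, login count.
X = np.column_stack([rng.uniform(0, 10, n_students),
                     rng.poisson(20, n_students)])
# Hypothetical labels: 0 = low, 1 = medium, 2 = high performance.
y = np.digitize(X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 1, n_students), [4, 8])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Probabilistic classification for one student at this point in the term.
probs = clf.predict_proba([[6.5, 25]])[0]
print(dict(zip(["low", "medium", "high"], probs.round(3))))
```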

    Does Feedback-Related Brain Response during Reinforcement Learning Predict Socio-motivational (In-)dependence in Adolescence?

    Get PDF
    This multi-methodological study applied functional magnetic resonance imaging to investigate neural activation in a group of adolescent students (N = 88) during a probabilistic reinforcement learning task. We related patterns of emerging brain activity and individual learning rates to socio-motivational (in-)dependence manifested in four different motivation types (MTs): (1) peer-dependent MT, (2) teacher-dependent MT, (3) peer-and-teacher-dependent MT, (4) peer-and-teacher-independent MT. A multinomial regression analysis revealed that the individual learning rate predicts students' membership in the independent MT or in the peer-and-teacher-dependent MT. Additionally, the striatum, a brain region associated with behavioral adaptation and flexibility, showed increased learning-related activation in students with motivational independence. Moreover, the prefrontal cortex, which is involved in behavioral control, was more active in students of the peer-and-teacher-dependent MT. Overall, this study offers new insights into the interplay of motivation and learning, with (1) a focus on inter-individual differences in the role of peers and teachers as sources of students' individual motivation and (2) its potential neurobiological basis
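
    For readers unfamiliar with how an individual learning rate is typically defined in a probabilistic reinforcement learning task, the sketch below simulates a standard delta-rule (Rescorla-Wagner) learner with a softmax choice rule; the task structure, parameter values, and model form are generic illustrations, not the exact paradigm or model-fitting procedure used in this study.

```python
# Generic sketch: delta-rule learner with softmax choice in a two-option
# probabilistic reinforcement learning task (not the study's exact paradigm).
import numpy as np

rng = np.random.default_rng(2)
reward_prob = [0.8, 0.2]   # option 0 is rewarded more often
alpha, beta = 0.3, 3.0     # learning rate, inverse temperature (illustrative)

q = np.zeros(2)            # learned value estimates
choices = []
for trial in range(200):
    p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))  # softmax, 2 options
    choice = 0 if rng.random() < p_choose_0 else 1
    reward = float(rng.random() < reward_prob[choice])
    q[choice] += alpha * (reward - q[choice])                  # delta-rule update
    choices.append(choice)

print("final value estimates:", q.round(2),
      "| proportion better option chosen:",
      round(float(np.mean(np.array(choices) == 0)), 2))
```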

    Learning Probabilistic Systems from Tree Samples

    Full text link
    We consider the problem of learning a non-deterministic probabilistic system consistent with a given finite set of positive and negative tree samples. Consistency is defined with respect to strong simulation conformance. We propose learning algorithms that use traditional and a new "stochastic" state-space partitioning, the latter resulting in the minimum number of states. We then use them to solve the problem of "active learning", which uses a knowledgeable teacher to generate samples as counterexamples to simulation equivalence queries. We show that the problem is undecidable in general, but that it becomes decidable under a suitable condition on the teacher, which arises naturally from the way samples are generated from failed simulation checks. The latter problem is shown to be undecidable if we additionally require the learner to always conjecture a "minimum state" hypothesis. We therefore propose a semi-algorithm using stochastic partitions. Finally, we apply the proposed (semi-)algorithms to infer intermediate assumptions in an automated assume-guarantee verification framework for probabilistic systems. Comment: 14 pages, conference paper with full proofs
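
    The active-learning setting described above can be pictured as a learner that conjectures a hypothesis from the samples seen so far and a teacher that answers equivalence queries with counterexample samples. The toy sketch below shows only that query-loop structure over flat sets of labeled samples; it does not implement strong simulation conformance, stochastic partitioning, or probabilistic systems, all of which are specific to the paper.

```python
# Toy sketch of the teacher/learner query loop only: the "system" is just a set
# of accepted samples, and the teacher answers equivalence queries with a
# counterexample. Simulation conformance and partitioning are not modeled.
TARGET = {"aa", "ab", "ba"}          # behaviour known only to the teacher

def teacher_equivalence_query(hypothesis):
    """Return None if the hypothesis matches the target, else a counterexample."""
    diff = hypothesis.symmetric_difference(TARGET)
    return None if not diff else next(iter(diff))

def learner_conjecture(positive, negative):
    """Trivial learner: accept exactly the positive samples seen so far."""
    return set(positive)

positive, negative = set(), set()
while True:
    hypothesis = learner_conjecture(positive, negative)
    counterexample = teacher_equivalence_query(hypothesis)
    if counterexample is None:
        break
    # Classify the counterexample as a positive or negative sample and retry.
    (positive if counterexample in TARGET else negative).add(counterexample)

print("learned behaviour:", sorted(hypothesis))
```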

    Gaussian-Process-based Robot Learning from Demonstration

    Full text link
    Endowed with higher levels of autonomy, robots are required to perform increasingly complex manipulation tasks. Learning from demonstration is emerging as a promising paradigm for transferring skills to robots. It allows task constraints to be learned implicitly from observing the motion executed by a human teacher, which can enable adaptive behavior. We present a novel Gaussian-Process-based learning from demonstration approach. This probabilistic representation allows generalization over multiple demonstrations and encodes variability along the different phases of the task. In this paper, we address how Gaussian Processes can be used to effectively learn a policy from trajectories in task space. We also present a method to efficiently adapt the policy to fulfill new requirements, and to modulate the robot behavior as a function of task variability. This approach is illustrated through a real-world application using the TIAGo robot. Comment: 8 pages, 10 figures
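
    As a rough illustration of the kind of representation involved, the sketch below fits a Gaussian process to a few noisy demonstration trajectories of a single task-space coordinate and queries the predictive mean and standard deviation along the task phase; the kernel choice, synthetic data, and one-dimensional setup are assumptions for the example, not the authors' method.

```python
# Illustrative sketch: GP regression over several noisy 1-D demonstration
# trajectories; the predictive std captures variability across demonstrations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
phase = np.linspace(0.0, 1.0, 50)

# Three synthetic demonstrations of one task-space coordinate over the task phase.
demos = [np.sin(2 * np.pi * phase) + rng.normal(0, 0.05, phase.size)
         for _ in range(3)]

X = np.tile(phase, len(demos)).reshape(-1, 1)   # stacked phase inputs
y = np.concatenate(demos)                       # stacked positions

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(0.01),
                              normalize_y=True).fit(X, y)

# Predictive mean and uncertainty along the task phase.
mean, std = gp.predict(phase.reshape(-1, 1), return_std=True)
print("mid-task position: %.2f +/- %.2f" % (mean[25], std[25]))
```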

    Active Learning of Probabilistic Movement Primitives

    Full text link
    A Probabilistic Movement Primitive (ProMP) defines a distribution over trajectories with an associated feedback policy. ProMPs are typically initialized from human demonstrations and achieve task generalization through probabilistic operations. However, there is currently no principled guidance in the literature to determine how many demonstrations a teacher should provide and what constitutes a "good" demonstration for promoting generalization. In this paper, we present an active learning approach to learning a library of ProMPs capable of task generalization over a given space. We utilize uncertainty sampling techniques to generate a task instance for which a teacher should provide a demonstration. The provided demonstration is incorporated into an existing ProMP if possible, or a new ProMP is created from the demonstration if it is determined to be too dissimilar from existing demonstrations. We provide a qualitative comparison between common active learning metrics; motivated by this comparison, we present a novel uncertainty sampling approach named "Greatest Mahalanobis Distance". We perform grasping experiments on a real KUKA robot and show that our novel active learning measure achieves better task generalization with fewer demonstrations than random sampling over the space. Comment: Under review
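
    To illustrate what a greatest-Mahalanobis-distance style of uncertainty sampling might look like in its simplest form, the sketch below scores candidate task parameters by their Mahalanobis distance from the tasks already demonstrated and picks the farthest one as the next demonstration request; the data, dimensionality, and selection rule are simplified assumptions, not the paper's exact formulation.

```python
# Simplified sketch: pick the candidate task farthest (in Mahalanobis distance)
# from the tasks already covered by demonstrations. Data here is synthetic.
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 2-D task parameters (e.g. object position) already demonstrated.
demonstrated = rng.normal(loc=[0.5, 0.2], scale=0.1, size=(15, 2))
mu = demonstrated.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(demonstrated, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Candidate task instances sampled over the space of interest.
candidates = rng.uniform(low=[0.0, 0.0], high=[1.0, 1.0], size=(100, 2))
scores = np.array([mahalanobis(c) for c in candidates])

next_task = candidates[scores.argmax()]
print("request a demonstration for task parameters:", next_task.round(3))
```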