CASSL: Curriculum Accelerated Self-Supervised Learning
Recent self-supervised learning approaches focus on using a few thousand data
points to learn policies for high-level, low-dimensional action spaces.
However, scaling this framework to high-dimensional control requires either
scaling up the data-collection effort or using a clever sampling strategy for
training. We present a novel approach, Curriculum Accelerated Self-Supervised
Learning (CASSL), to train policies that map visual information to high-level,
higher-dimensional action spaces. CASSL orders the sampling of training data
based on control dimensions: learning and sampling are focused on a few
control parameters before the others. The right curriculum for learning is
suggested by variance-based global sensitivity analysis of the control space.
We apply our CASSL framework to learning how to grasp using an adaptive,
underactuated multi-fingered gripper, a challenging system to control. Our
experimental results indicate that CASSL provides significant improvement and
generalization compared to baseline methods such as staged curriculum learning
(8% increase) and complete end-to-end learning with random exploration (14%
improvement), tested on a set of novel objects.
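The variance-based global sensitivity analysis mentioned above can be sketched with a standard pick-freeze Sobol estimator. The grasp-success surrogate below is purely hypothetical, a stand-in for the real robot trials the paper uses; only the ranking mechanism is the point:

```python
import numpy as np

def first_order_sobol(f, dim, n=20000, seed=0):
    """First-order Sobol indices via a pick-freeze estimator:
    S_i = E[f(B) * (f(A_B^i) - f(A))] / Var(f),
    where A_B^i equals sample matrix A with column i taken from B."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # "freeze" dimension i from the B sample
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Hypothetical grasp-success surrogate over three control parameters; the
# paper instead measures real grasp outcomes on the underactuated gripper.
def toy_grasp(X):
    return 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]

S = first_order_sobol(toy_grasp, dim=3)
curriculum = np.argsort(-S)  # focus sampling on the most sensitive dimension first
```

The curriculum then schedules data collection over the highest-variance control dimensions before the rest.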
Vector Associative Maps: Unsupervised Real-time Error-based Learning and Control of Movement Trajectories
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned.
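The VITE dynamics summarized above (DV = TPC − PPC, with the PPC integrating the gated difference until the DV vanishes) can be simulated in a few lines; the GO gain and step size below are illustrative choices, not values from Bullock and Grossberg:

```python
import numpy as np

def vite_trajectory(tpc, ppc0, go=2.0, dt=0.01, steps=500):
    """Integrate the VITE dynamics with forward Euler:
    DV = TPC - PPC;  dPPC/dt = GO * DV.
    The GO gain and step size are illustrative assumptions."""
    tpc = np.asarray(tpc, dtype=float)
    ppc = np.asarray(ppc0, dtype=float).copy()
    traj = [ppc.copy()]
    for _ in range(steps):
        dv = tpc - ppc             # Difference Vector
        ppc = ppc + dt * go * dv   # PPC integrates the (DV)*(GO) product
        traj.append(ppc.copy())
    return np.array(traj)

traj = vite_trajectory(tpc=[1.0, -0.5], ppc0=[0.0, 0.0])
# The PPC approaches the TPC as the DV decays toward zero.
```

A larger GO signal speeds up the whole trajectory without changing its endpoint, which is the model's account of speed control.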
Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
National Science Foundation (IRI-87-16960, IRI-87-6960); Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083).
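The ERG-driven learning cycle admits a schematic delta-rule reading: babble a random posture, gate it into target coordinates, and use the DV as the error that adjusts the TPC-to-PPC weights until it is zeroed. The linear sensory mapping and learning rate below are assumptions made for illustration, not the circuit's actual form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed sensory mapping from arm posture (PPC coordinates) to
# the target representation (TPC coordinates); the circuit never sees it.
true_map = np.array([[0.8, 0.2],
                     [-0.3, 1.1]])

W = np.zeros((2, 2))   # adaptive TPC -> PPC weights, initially naive
eta = 0.1              # assumed learning rate
for _ in range(2000):
    ppc = rng.uniform(-1.0, 1.0, size=2)  # ERG babbles a random posture
    tpc = true_map @ ppc                  # gate opens: posture registered as a target
    dv = ppc - W @ tpc                    # Difference Vector as error signal
    W += eta * np.outer(dv, tpc)          # delta rule drives the DV to zero

# After babbling, W inverts the sensory mapping, so a seen target can be
# turned into the posture that reaches it: W @ true_map ≈ I.
```

This captures the VAM idea that the same DV stage that drives movement during performance serves as the teaching signal during the postural learning phase.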
A Self-Regulated Learning Approach to Educational Recommender Design
Recommender systems, or recommenders, are information filtering systems prevalent today in many fields. One type of recommender found in the field of education, the educational recommender, is a key component of adaptive learning solutions as these systems avoid “one-size-fits-all” approaches by tailoring the learning process to the needs of individual learners. To function, these systems utilize learning analytics in a student-facing manner.
While existing research has shown promise and explores a variety of types of educational recommenders, there is currently a lack of research that ties educational theory to the design and implementation of these systems. The theory considered here, self-regulated learning, is underexplored in educational recommender research. Self-regulated learning advocates a cyclical feedback loop that focuses on putting students in control of their learning with consideration for activities such as goal setting, selection of learning strategies, and monitoring of one’s performance.
The goal of this research is to explore how best to build a self-regulated learning guided educational recommender and discover its influence on academic success. This research applies a design science methodology in the creation of a novel educational recommender framework with a theoretical base in self-regulated learning. Guided by existing research, it advocates for a hybrid recommender approach consisting of knowledge-based and collaborative filtering, made possible by supporting ontologies that represent the learner, learning objects, and learner actions. This research also incorporates existing Information Systems (IS) theory in the evaluation, drawing further connections between these systems and the field of IS. The self-regulated learning-based recommender framework is evaluated in a higher education environment via a web-based demonstration in several case study instances using mixed-method analysis to determine this approach’s fit and perceived impact on academic success. Results indicate that the self-regulated learning-based approach demonstrated a technology fit that was positively related to student academic performance while student comments illuminated many advantages to this approach, such as its ability to focus and support various studying efforts. In addition to contributing to the field of IS research by delivering an innovative framework and demonstration, this research also results in self-regulated learning-based educational recommender design principles that serve to guide both future researchers and practitioners in IS and education.
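The hybrid scoring idea (knowledge-based matching blended with collaborative filtering) can be sketched minimally. The Jaccard matcher, the 1-5 rating scale, and the equal weighting below are assumptions for illustration, not the framework's actual ontology-backed design:

```python
def knowledge_score(learner_goals, item_topics):
    """Jaccard overlap between learner goals and item topics -- a toy
    stand-in for the framework's ontology-based matching."""
    g, t = set(learner_goals), set(item_topics)
    return len(g & t) / len(g | t) if (g | t) else 0.0

def collaborative_score(learner, item, ratings, max_rating=5.0):
    """Mean rating other learners gave the item, normalized to [0, 1]
    (a placeholder for a real neighbourhood-based collaborative filter)."""
    others = [r[item] for user, r in ratings.items()
              if user != learner and item in r]
    return (sum(others) / len(others) / max_rating) if others else 0.0

def hybrid_score(learner, goals, item, topics, ratings, w=0.5):
    """Weighted blend of the two signals; w = 0.5 is an arbitrary choice."""
    return (w * knowledge_score(goals, topics)
            + (1 - w) * collaborative_score(learner, item, ratings))
```

In a self-regulated learning setting, the goal set would come from the learner's own goal-setting activity, so the knowledge-based term keeps the student in control of what is recommended.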
ADD: Analytically Differentiable Dynamics for Multi-Body Systems with Frictional Contact
We present a differentiable dynamics solver that is able to handle frictional
contact for rigid and deformable objects within a unified framework. Through a
principled mollification of normal and tangential contact forces, our method
circumvents the main difficulties inherent to the non-smooth nature of
frictional contact. We combine this new contact model with fully-implicit time
integration to obtain a robust and efficient dynamics solver that is
analytically differentiable. In conjunction with adjoint sensitivity analysis,
our formulation enables gradient-based optimization with adaptive trade-offs
between simulation accuracy and smoothness of objective function landscapes. We
thoroughly analyse our approach on a set of simulation examples involving rigid
bodies, visco-elastic materials, and coupled multi-body systems. We furthermore
showcase applications of our differentiable simulator to parameter estimation
for deformable objects, motion planning for robotic manipulation, trajectory
optimization for compliant walking robots, as well as efficient self-supervised
learning of control policies.
Comment: Moritz Geilinger and David Hahn contributed equally to this work.
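The abstract does not spell out the mollification; a common way to smooth frictional contact is a softplus normal force combined with a tanh-regularized Coulomb friction term. The sketch below is a hedged stand-in with assumed constants and functional forms, not the paper's model; it only illustrates why mollified forces make the dynamics differentiable:

```python
import math

def softplus(x):
    """Numerically stable log(1 + e^x)."""
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def smooth_contact_force(gap, v_t, k=1e4, mu=0.8, eps=1e-3):
    """Schematic mollified contact (assumed forms, not the paper's exact model):
    - normal force: stiffness k times a softplus of penetration, smooth in gap;
    - friction: tanh-smoothed Coulomb force opposing tangential velocity v_t,
      bounded in magnitude by mu * f_n.
    Both forces are smooth everywhere, so gradients exist at and around the
    contact transition -- the property that non-smooth contact models lack."""
    f_n = k * eps * softplus(-gap / eps)    # > 0, grows as the gap goes negative
    f_t = -mu * f_n * math.tanh(v_t / eps)  # opposes sliding, |f_t| <= mu * f_n
    return f_n, f_t
```

Shrinking `eps` trades smoothness of the objective landscape against fidelity to hard contact, which mirrors the accuracy-versus-smoothness trade-off the abstract describes.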
Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings
We present an optimised multi-modal dialogue agent for interactive learning
of visually grounded word meanings from a human tutor, trained on real
human-human tutoring data. Within a life-long interactive learning period, the
agent, trained using Reinforcement Learning (RL), must be able to handle
natural conversations with human users and achieve good learning performance
(accuracy) while minimising human effort in the learning process. We train and
evaluate this system in interaction with a simulated human tutor, which is
built on the BURCHAK corpus -- a Human-Human Dialogue dataset for the visual
learning task. The results show that: 1) The learned policy can coherently
interact with the simulated user to achieve the goal of the task (i.e. learning
visual attributes of objects, e.g. colour and shape); and 2) it finds a better
trade-off between classifier accuracy and tutoring costs than hand-crafted
rule-based policies, including ones with dynamic policies.
Comment: 10 pages, RoboNLP Workshop at the ACL Conference.
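The accuracy-versus-tutoring-cost trade-off can be illustrated with a toy bandit. The cost weight, the two actions, and the guess-success probabilities below are invented for illustration and are not the BURCHAK setup or the paper's RL formulation:

```python
import random

COST_PER_TURN = 0.2   # assumed tutoring-cost weight, not from the paper

def reward(correct, asked_tutor):
    """+1 for a correct attribute prediction, minus a cost per tutor turn."""
    return (1.0 if correct else 0.0) - (COST_PER_TURN if asked_tutor else 0.0)

def train_policy(p_guess_correct, episodes=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit over two dialogue actions: "ask" (the tutor
    labels the object, guaranteeing success at a cost) vs. "guess" (free,
    succeeds with probability p_guess_correct)."""
    rng = random.Random(seed)
    q = {"ask": 0.0, "guess": 0.0}
    n = {"ask": 0, "guess": 0}
    for _ in range(episodes):
        a = rng.choice(("ask", "guess")) if rng.random() < eps else max(q, key=q.get)
        correct = a == "ask" or rng.random() < p_guess_correct
        n[a] += 1
        q[a] += (reward(correct, a == "ask") - q[a]) / n[a]  # incremental mean
    return q

q = train_policy(p_guess_correct=0.5)
# With a weak classifier (50% correct guesses), asking the tutor pays off.
```

As the classifier improves, the learned values flip and the agent stops bothering the tutor, which is the trade-off the learned dialogue policy exploits.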
Designing for change: mash-up personal learning environments
Institutions for formal education and most workplaces are today equipped with at least some tools that bring together people and content artefacts in learning activities, supporting them in constructing and processing information and knowledge. For almost half a century, science and practice have discussed models for bringing personalisation to these environments through digital means. The construction and maintenance of the learning environment are a crucial part of the learning process and of its desired outcomes, and learning theories should take this into account. Instruction itself, as the predominant paradigm, has to step down.
The learning environment is an (if not 'the') important outcome of a learning process, not just a stage on which to perform a 'learning play'. For these reasons, we consider contemporary instructional design theories to be flawed.
In this article we first clarify key concepts and assumptions for personalised learning environments. We then summarise our critique of contemporary models for personalised adaptive learning. Subsequently, we propose our alternative: the concept of a mash-up personal learning environment that provides adaptation mechanisms for learning environment construction and maintenance. The web-application mash-up approach allows learners to reuse existing (web-based) tools and services.
Our alternative, LISL, is a design language model for creating, managing, maintaining, and learning about learning environment designs; it is complemented by a proof of concept, the MUPPLE platform. We demonstrate this approach with a prototypical implementation and a – we think – comprehensible example. Finally, we round off the article with a discussion of possible extensions of this new model and open problems.