Neural Learning of Vector Fields for Encoding Stable Dynamical Systems
Lemme A, Reinhart F, Neumann K, Steil JJ. Neural Learning of Vector Fields for Encoding Stable Dynamical Systems. Neurocomputing. 2014;141:3-14.
Incremental bootstrapping of parameterized motor skills
Many motor skills have an intrinsic, low-dimensional parameterization, e.g., reaching through a grid to different targets. Repeated policy search for new parameterizations of such a skill is inefficient because the structure of the skill variability is not exploited. This issue has previously been addressed by learning mappings from task parameters to policy parameters. In this work, we introduce a bootstrapping technique that establishes such parameterized skills incrementally. The approach combines iterative learning with state-of-the-art black-box policy optimization. We investigate the benefits of incrementally learning parameterized skills for efficient policy retrieval and show that the number of required rollouts can be significantly reduced when optimizing policies for novel tasks. The approach is demonstrated for several parameterized motor tasks, including upper-body reaching motion generation for the humanoid robot COMAN.
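To make the bootstrapping idea concrete, the following is a minimal sketch of incrementally building a parameterized skill: solved tasks are stored, each new task is initialized from the closest stored solution, and a black-box optimizer refines it. The toy cost function, the nearest-neighbour skill memory, and the plain random-search optimizer are illustrative assumptions, not the method or platforms used in the paper.

```python
# Illustrative sketch only: toy task, nearest-neighbour skill memory, and random-search
# optimizer are assumptions, not the paper's method or robot experiments.
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(policy, task):
    """Toy surrogate cost: the optimal policy depends nonlinearly on the task parameter."""
    target = np.array([np.sin(task[0]), np.cos(task[0])])
    return float(np.sum((policy - target) ** 2))

def optimize_policy(task, init_policy, iters=200, sigma=0.1):
    """Plain random-search policy optimization, warm-started from init_policy."""
    best, best_cost = init_policy.copy(), rollout_cost(init_policy, task)
    rollouts = 0
    for _ in range(iters):
        candidate = best + sigma * rng.standard_normal(best.shape)
        rollouts += 1
        cost = rollout_cost(candidate, task)
        if cost < best_cost:
            best, best_cost = candidate, cost
        if best_cost < 1e-4:                      # stop once the task is solved
            break
    return best, rollouts

class ParameterizedSkill:
    """Memory of solved tasks plus a nearest-neighbour map from task to policy parameters."""
    def __init__(self):
        self.tasks, self.policies = [], []

    def predict(self, task, policy_dim=2):
        if not self.tasks:
            return np.zeros(policy_dim)
        distances = [np.linalg.norm(task - t) for t in self.tasks]
        return self.policies[int(np.argmin(distances))].copy()

    def add(self, task, policy):
        self.tasks.append(task)
        self.policies.append(policy)

skill = ParameterizedSkill()
for task in [np.array([x]) for x in np.linspace(0.0, 3.0, 10)]:
    init = skill.predict(task)                        # retrieve a policy guess from the skill
    policy, n_rollouts = optimize_policy(task, init)  # refine it with black-box search
    skill.add(task, policy)                           # feed the result back into the skill
    print(f"task {task[0]:.2f}: {n_rollouts} rollouts")
```

Because later tasks start from increasingly good initializations retrieved from the skill memory, the printed rollout counts tend to shrink over the task sequence, mirroring the reduction in required rollouts described above.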
Time series classification in reservoir- and model-space
We evaluate two approaches for time series classification based on reservoir computing. In the first, classical approach, time series are represented by reservoir activations. In the second approach, a predictive model in the form of a readout for one-step-ahead prediction is trained on top of the reservoir activations for each time series. This learning step lifts the reservoir features to a more sophisticated model space. Classification is then based on the predictive model parameters describing each time series. We provide an in-depth analysis of time series classification in reservoir- and model-space. The approaches are evaluated on 43 univariate and 18 multivariate time series datasets. The results show that representing multivariate time series in the model space leads to lower classification errors compared to using the reservoir activations directly as features. The classification accuracy on the univariate datasets can be improved by combining reservoir- and model-space representations.
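As a rough illustration of the model-space idea, the sketch below drives a fixed random reservoir with each series, fits a ridge-regression readout for one-step-ahead prediction, and uses the readout weights as the feature vector for a nearest-neighbour classifier. The reservoir size, the toy two-class data, and the 1-NN classifier are assumptions for illustration, not the benchmark setup of the paper.

```python
# Illustrative sketch only: reservoir size, toy two-class data, and 1-NN classifier
# are assumptions, not the experimental setup of the paper.
import numpy as np

rng = np.random.default_rng(1)
N_RES = 30                                             # reservoir size

W_in = rng.uniform(-0.5, 0.5, N_RES)
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # scale to spectral radius 0.9

def reservoir_states(series):
    """Drive the fixed random reservoir with a univariate time series."""
    x = np.zeros(N_RES)
    states = []
    for u in series:
        x = np.tanh(W_in * u + W @ x)
        states.append(x.copy())
    return np.array(states)

def model_space_features(series, ridge=1e-2):
    """Fit a linear readout for one-step-ahead prediction; its weights are the features."""
    X = reservoir_states(series[:-1])
    y = series[1:]
    return np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)

def make_series(cls, length=150):
    """Two toy classes: noisy sine waves with different frequencies."""
    t = np.arange(length)
    freq = 0.05 if cls == 0 else 0.12
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.standard_normal(length)

# build a small labelled set, then classify fresh series with 1-NN in the model space
train = [(make_series(c), c) for c in (0, 1) for _ in range(10)]
features = np.array([model_space_features(s) for s, _ in train])
labels = np.array([c for _, c in train])

correct = 0
for true_class in (0, 1):
    for _ in range(5):
        f = model_space_features(make_series(true_class))
        predicted = labels[np.argmin(np.linalg.norm(features - f, axis=1))]
        correct += int(predicted == true_class)
print("1-NN accuracy in model space:", correct / 10)
```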
Modelling of parametrized processes via regression in the model space of neural networks
We consider the modelling of parametrized processes, where the goal is to model the process for new combinations of parameter values. We compare the classical regression approach to a modular approach based on regression in the model space: first, a model is learned for each process parametrization; second, a mapping from process parameters to model parameters is learned. We evaluate both approaches on two synthetic and two real-world data sets and show the advantages of regression in the model space.
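A minimal sketch of the two-step modular approach follows, under the assumption of a toy sinusoidal process and simple polynomial models standing in for the neural networks used in the paper: step one fits a model per observed parametrization, step two regresses from the process parameter to the model coefficients, and a model for an unseen parameter value is then predicted rather than refit from data.

```python
# Illustrative sketch only: the toy process and the polynomial models (in place of
# neural networks) are assumptions, not the paper's data sets or model class.
import numpy as np

rng = np.random.default_rng(2)

def process(x, p):
    """Toy parametrized process: output depends on input x and process parameter p."""
    return np.sin(p * x) + 0.5 * p + 0.02 * rng.standard_normal(x.shape)

def fit_model(x, y, deg=5):
    """Step 1: per-parametrization model (a polynomial fit); its coefficients are the model parameters."""
    return np.polyfit(x, y, deg)

x = np.linspace(0.0, 2.0, 100)
train_p = np.linspace(0.5, 2.5, 9)

# Step 1: learn one model per observed process parametrization
model_params = np.array([fit_model(x, process(x, p)) for p in train_p])

# Step 2: learn a mapping from the process parameter p to the model parameters
# (here one polynomial regression per model coefficient)
coeff_maps = [np.polyfit(train_p, model_params[:, j], 3) for j in range(model_params.shape[1])]

# Query: predict the model (and thus the process) for an unseen parameter value
p_new = 1.7
pred_coeffs = np.array([np.polyval(m, p_new) for m in coeff_maps])
y_pred = np.polyval(pred_coeffs, x)
y_true = np.sin(p_new * x) + 0.5 * p_new
print("RMSE for unseen parametrization:", np.sqrt(np.mean((y_pred - y_true) ** 2)))
```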
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control
Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties, and there remain unmodeled dynamics due to varying parameters, unmodeled friction, or soft materials. In this context, machine learning is a suitable alternative technique for extracting non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they must also learn well-known analytical relations. This paper therefore argues that feed-forward control based on hybrid models, comprising an analytical model and a learned error model, can significantly improve modeling accuracy. Hybrid modeling here serves to combine the best of both modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom; a complete analytical model is available for neither platform.
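The hybrid idea can be sketched on a toy one-degree-of-freedom plant: an analytical model captures the known inertia term, a learned error model is fitted only to the residual between measured and analytically predicted torque, and the feed-forward command is the sum of the two. The plant, the friction term, and the polynomial ridge-regression error model below are illustrative assumptions, not the soft or industrial robots used in the paper.

```python
# Illustrative sketch only: toy 1-DOF plant, friction term, and polynomial error model
# are assumptions, not the paper's robots or learning method.
import numpy as np

rng = np.random.default_rng(3)

def true_torque(q_acc, q_vel):
    """'Real' plant: inertia term plus an unmodelled nonlinear friction component."""
    return 1.2 * q_acc + 0.3 * np.tanh(5.0 * q_vel)

def analytical_torque(q_acc, q_vel):
    """Analytical model: knows the inertia, ignores friction."""
    return 1.2 * q_acc

# collect data from the plant and fit an error model on the residual only
q_acc = rng.uniform(-1, 1, 500)
q_vel = rng.uniform(-1, 1, 500)
residual = true_torque(q_acc, q_vel) - analytical_torque(q_acc, q_vel)

def features(a, v):
    """Polynomial features of (acceleration, velocity) for the learned error model."""
    return np.column_stack([a, v, a * v, v**2, v**3, np.ones_like(a)])

X = features(q_acc, q_vel)
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ residual)   # ridge regression

def hybrid_torque(a, v):
    """Feed-forward command = analytical model + learned error correction."""
    return analytical_torque(a, v) + features(np.atleast_1d(a), np.atleast_1d(v)) @ w

# compare feed-forward errors on unseen states
a_t, v_t = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
err_analytical = np.abs(true_torque(a_t, v_t) - analytical_torque(a_t, v_t)).mean()
err_hybrid = np.abs(true_torque(a_t, v_t) - hybrid_torque(a_t, v_t)).mean()
print(f"analytical-only error: {err_analytical:.4f}, hybrid error: {err_hybrid:.4f}")
```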
Hyperarticulation aids learning of new vowels in a developmental speech acquisition model
Many studies emphasize the importance of infant-directed speech: more strongly articulated, higher-quality speech helps infants to better distinguish different speech sounds. This effect has been widely investigated in terms of the infant's perceptual capabilities, but few studies have examined whether infant-directed speech affects articulatory learning. In earlier studies, we developed a model that learns articulatory control for a 3D vocal tract model via goal babbling. Exploration is organized in the space of outcomes. This so-called goal space is generated from a set of ambient speech sounds. Just as speech from the environment shapes infants' speech perception, the data from which the goal space is learned shapes the later learning process: it determines which sounds the model is able to discriminate and, thus, which sounds it can eventually learn to produce. We investigate how speech sound quality in early learning affects the model's capability to learn new vowel sounds. The model is trained either on hyperarticulated (tense) or on hypoarticulated (lax) vowels. Then we retrain the model with vowels from the other set. Results show that new vowels can be acquired even though they were not included in early learning. There is, however, an effect of learning order: models first trained on the more strongly articulated tense vowels accommodate new vowel sounds more easily later on.
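For readers unfamiliar with goal babbling, the sketch below illustrates the core loop in a toy setting: goals are drawn from a goal space of outcomes, the current inverse model proposes an action for each goal, the observed outcome-action pair is stored, and the inverse estimate improves as exploration proceeds. The two-dimensional forward map and the nearest-neighbour inverse model are assumptions for illustration; they stand in for the 3D vocal tract model and the learner used in the paper.

```python
# Illustrative sketch only: toy forward map and nearest-neighbour inverse model are
# assumptions, not the 3D vocal tract model or learning system of the paper.
import numpy as np

rng = np.random.default_rng(4)

def forward(action):
    """Toy forward model: articulatory action (2D) -> acoustic outcome (2D)."""
    return np.array([np.tanh(action[0] + 0.3 * action[1]), np.tanh(action[1])])

goals = rng.uniform(-0.8, 0.8, size=(200, 2))   # "goal space" of outcomes to explore

# inverse model: running memory of (outcome, action) pairs, queried by nearest outcome
mem_outcomes, mem_actions = [np.zeros(2)], [np.zeros(2)]

def inverse(goal, noise=0.1):
    """Propose an action for a goal from the closest stored outcome, plus exploration noise."""
    distances = np.linalg.norm(np.array(mem_outcomes) - goal, axis=1)
    return np.array(mem_actions)[np.argmin(distances)] + noise * rng.standard_normal(2)

for goal in goals:                      # goal babbling loop: pick a goal, try to reach it
    action = inverse(goal)              # query the current inverse model (with noise)
    outcome = forward(action)           # observe what the action actually produces
    mem_outcomes.append(outcome)        # store the pair, improving the inverse model
    mem_actions.append(action)

# evaluate: how close does the learned inverse model get to new goals?
test_goals = rng.uniform(-0.8, 0.8, size=(50, 2))
errors = [np.linalg.norm(forward(inverse(g, noise=0.0)) - g) for g in test_goals]
print("mean reach error on new goals:", float(np.mean(errors)))
```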