    Local online learning of coherent information

    One of the goals of perception is to learn to respond to coherence across space, time and modality. Here we present an abstract framework for the local online unsupervised learning of this coherent information using multi-stream neural networks. The processing units distinguish between feedforward inputs projected from the environment and the lateral, contextual inputs projected from the processing units of other streams. The contextual inputs are used to guide learning towards coherent cross-stream structure. The goal of all the learning algorithms described is to maximize the predictability between each unit's output and its context. Many local cost functions may be applied, e.g. mutual information, relative entropy, squared error and covariance. Theoretical and simulation results indicate that, of these, the covariance rule (1) is the only rule that specifically links and learns only those streams with coherent information, (2) can be robustly approximated by a Hebbian rule, and (3) is stable under input noise, in the absence of pairwise input correlations, and in the discovery of locally less informative components that are coherent globally. In accordance with the parallel nature of the biological substrate, we also show that all the rules scale up with the number of streams.
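    The context-modulated covariance rule described in this abstract can be illustrated with a toy sketch in plain NumPy. This is not the paper's actual formulation: the two-dimensional streams, the running-mean centring, and the per-step weight normalisation are all illustrative simplifications. Two streams each receive a noisy copy of a shared latent signal plus an independent distractor, and each unit's Hebbian-style update is gated by the centred output of the other stream, so only the coherent component is reinforced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two streams see noisy views of the same latent signal (coherent
# information) plus an independent distractor dimension.
T = 2000
latent = rng.standard_normal(T)
x1 = np.stack([latent + 0.1 * rng.standard_normal(T),
               rng.standard_normal(T)], axis=1)   # stream 1 inputs
x2 = np.stack([latent + 0.1 * rng.standard_normal(T),
               rng.standard_normal(T)], axis=1)   # stream 2 inputs

w1 = np.array([0.1, 0.1])   # deterministic small init (illustrative choice)
w2 = np.array([0.1, 0.1])
eta = 0.01
y1_bar = y2_bar = 0.0

for t in range(T):
    y1, y2 = w1 @ x1[t], w2 @ x2[t]
    # Running means, so the update uses centred (covariance-like) terms.
    y1_bar += 0.01 * (y1 - y1_bar)
    y2_bar += 0.01 * (y2 - y2_bar)
    # Covariance rule: each unit's Hebbian update is gated by the
    # centred output of the *other* stream (its context).
    w1 = w1 + eta * (y2 - y2_bar) * x1[t]
    w2 = w2 + eta * (y1 - y1_bar) * x2[t]
    w1 /= np.linalg.norm(w1)   # normalisation keeps the weights bounded
    w2 /= np.linalg.norm(w2)

# Both streams should end up weighting the shared (coherent) component
# more heavily than their private distractor.
print(abs(w1[0]) > abs(w1[1]), abs(w2[0]) > abs(w2[1]))
```

    Only the coherent component has a non-zero expected cross-stream correlation, so it is the only direction that receives systematic drift; the distractor weights perform a mean-zero random walk that the normalisation keeps small.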

    Secondary curriculum review statutory consultation: draft summary of findings


    Lifelong guidance policy and practice in the EU

    A study on lifelong guidance (LLG) policy and practice in the EU focusing on trends, challenges and opportunities. Lifelong guidance aims to provide career development support for individuals of all ages, at all career stages. It includes careers information, advice, counselling, assessment of skills and mentoring.

    Using technology to support the 14-19 agenda


    On the convergence of mirror descent beyond stochastic convex programming

    In this paper, we examine the convergence of mirror descent in a class of stochastic optimization problems that are not necessarily convex (or even quasi-convex), and which we call variationally coherent. Since the standard technique of "ergodic averaging" offers no tangible benefits beyond convex programming, we focus directly on the algorithm's last generated sample (its "last iterate"), and we show that it converges with probability 1 if the underlying problem is coherent. We further consider a localized version of variational coherence which ensures local convergence of stochastic mirror descent (SMD) with high probability. These results contribute to the landscape of non-convex stochastic optimization by showing that (quasi-)convexity is not essential for convergence to a global minimum: rather, variational coherence, a much weaker requirement, suffices. Finally, building on the above, we reveal an interesting insight regarding the convergence speed of SMD: in problems with sharp minima (such as generic linear programs or concave minimization problems), SMD reaches a minimum point in a finite number of steps (a.s.), even in the presence of persistent gradient noise. This result is to be contrasted with existing black-box convergence rate estimates that are only asymptotic. Comment: 30 pages, 5 figures
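    The last-iterate behaviour near a sharp minimum can be illustrated with a small sketch, not taken from the paper's experiments: the problem size, step size and noise level below are arbitrary choices. It runs SMD with the entropic mirror map, which on the simplex reduces to a multiplicative-weights update, on a linear objective under persistent gradient noise, and inspects the last iterate rather than an ergodic average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sharp-minimum test problem: minimise a linear objective over the simplex.
# The minimiser is the vertex e_0, since c has a unique smallest entry.
c = np.array([0.2, 1.0, 1.5])
x = np.ones(3) / 3          # start at the simplex barycentre

eta = 0.05                  # step size
for t in range(2000):
    grad = c + 0.5 * rng.standard_normal(3)   # persistent gradient noise
    # Entropic mirror map => multiplicative-weights update.
    x = x * np.exp(-eta * grad)
    x /= x.sum()

print(np.round(x, 3))       # the last iterate, not an ergodic average
```

    Despite the noise never vanishing, the log-ratios between coordinates accumulate a linear drift toward the sharp minimiser, so the last iterate concentrates on the vertex e_0.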

    Unsupervised state representation learning with robotic priors: a robustness benchmark

    Our understanding of the world depends highly on our capacity to produce intuitive and simplified representations which can be easily used to solve problems. We reproduce this simplification process using a neural network to build a low dimensional state representation of the world from images acquired by a robot. As in Jonschkowski et al. 2015, we learn in an unsupervised way using prior knowledge about the world as loss functions called robotic priors and extend this approach to higher dimensional, richer images to learn a 3D representation of the hand position of a robot from RGB images. We propose a quantitative evaluation of the learned representation using nearest neighbors in the state space that allows us to assess its quality and show both the potential and limitations of robotic priors in realistic environments. We augment image size, add distractors and domain randomization, all crucial components to achieve transfer learning to real robots. Finally, we also contribute a new prior to improve the robustness of the representation. The applications of such low dimensional state representations range from easing reinforcement learning (RL) and knowledge transfer across tasks, to facilitating learning from raw data with more efficient and compact high level representations. The results show that the robotic prior approach is able to extract a high level representation such as the 3D position of an arm and organize it into a compact and coherent space of states in a challenging dataset. Comment: ICRA 2018 submission
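    The nearest-neighbour evaluation mentioned in the abstract can be sketched as follows. This is a hypothetical KNN-MSE-style score with synthetic stand-ins for the learned states and the ground-truth 3D positions; the function name, data and noise model are assumptions, not the paper's protocol. The idea: for each sample, find its nearest neighbours in the learned state space and measure how far apart their ground-truth states are; a low score means neighbours in the representation are also neighbours in the real world.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: ground-truth 3D hand positions, and "learned"
# states that are noisy copies of them (playing the role of a network's
# output for each image).
true_states = rng.uniform(-1, 1, size=(500, 3))
learned_states = true_states + 0.05 * rng.standard_normal((500, 3))

def knn_mse(learned, ground_truth, k=5):
    """Average ground-truth squared distance from each point to its k
    nearest neighbours *in the learned space*."""
    errs = []
    for i in range(len(learned)):
        d = np.linalg.norm(learned - learned[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the point itself
        errs.append(np.mean(np.sum((ground_truth[nn] - ground_truth[i]) ** 2,
                                   axis=1)))
    return float(np.mean(errs))

good = knn_mse(learned_states, true_states)
bad = knn_mse(rng.standard_normal((500, 3)), true_states)  # random states
print(good, bad)   # the coherent representation scores much lower
```

    A representation that preserves the topology of the true state space scores far lower than a random embedding, which is what makes the metric usable as a quality check without training a downstream policy.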