83,768 research outputs found

    Reinforced Continual Learning

    Most artificial intelligence models have limited ability to solve new tasks quickly without forgetting previously acquired knowledge. The recently emerging paradigm of continual learning aims to solve this issue, in which the model learns various tasks in a sequential fashion. In this work, a novel approach for continual learning is proposed that searches for the best neural architecture for each incoming task via carefully designed reinforcement learning strategies. We name it Reinforced Continual Learning. Our method not only performs well at preventing catastrophic forgetting but also fits new tasks well. Experiments on sequential classification tasks over variants of the MNIST and CIFAR-100 datasets demonstrate that the proposed approach outperforms existing continual learning alternatives for deep networks.
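    The abstract gives no implementation details, so the following is only a rough sketch of the general idea it describes: a reinforcement-learning controller that, for each new task, samples how much to expand the network, trains the expanded child while keeping previously learned weights fixed, and uses validation performance as its reward. Every name here (`train_expanded_network`, `controller_search`), the epsilon-greedy search, and the fake reward are hypothetical stand-ins, not the paper's actual method.

```python
import random

# Hypothetical sketch of RL-driven per-task architecture expansion.
# Names, the epsilon-greedy search, and the fake reward are all assumptions;
# the paper's actual controller and training procedure are not reproduced here.

LAYERS = 2          # layers whose width the controller may expand
MAX_NEW_UNITS = 10  # action space: add 0..MAX_NEW_UNITS units per layer


def train_expanded_network(new_units_per_layer):
    """Placeholder for: freeze old weights, add the requested units,
    train only the new parameters on the current task, return validation accuracy."""
    # Fake reward that mildly prefers moderate expansion, for illustration only.
    total = sum(new_units_per_layer)
    return 0.5 + 0.02 * total - 0.001 * total * total


def controller_search(episodes=50, epsilon=0.2):
    """Epsilon-greedy stand-in for the paper's RL strategy: sample expansion
    actions, evaluate each one, and keep the best architecture found."""
    best_action, best_reward = None, float("-inf")
    for _ in range(episodes):
        if best_action is None or random.random() < epsilon:
            # Explore: sample a fresh expansion for every layer.
            action = [random.randint(0, MAX_NEW_UNITS) for _ in range(LAYERS)]
        else:
            # Exploit: locally perturb the best expansion found so far.
            action = [max(0, min(MAX_NEW_UNITS, u + random.choice([-1, 0, 1])))
                      for u in best_action]
        reward = train_expanded_network(action)
        if reward > best_reward:
            best_action, best_reward = action, reward
    return best_action, best_reward


if __name__ == "__main__":
    units, reward = controller_search()
    print("units added per layer:", units, "| proxy reward:", round(reward, 3))
```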

    Relational Neurogenesis for Lifelong Learning Agents

    Reinforcement learning systems have shown tremendous potential for modeling meritorious behavior in virtual agents and robots. The ability to learn through continuous reinforcement and interaction with an environment removes the need for painstakingly curated datasets and hand-crafted features. However, the ability to learn multiple tasks in a sequential manner, referred to as lifelong or continual learning, remains unresolved, and the search for lifelong learning algorithms forms the foundation for this work. While much research has been conducted on lifelong learning in supervised domains, the reinforced lifelong learning domain remains open for exploration. Furthermore, current implementations either concentrate on preserving information in fixed-capacity networks or propose incrementally growing networks that randomly search through an unconstrained solution space. To develop a comprehensive lifelong learning algorithm, it seems essential to amalgamate these approaches into a single algorithm that can both perform neuroevolution and constrain network growth automatically. This thesis proposes a novel algorithm for continual learning using neurogenesis in reinforcement learning agents. It builds upon existing neuroevolutionary techniques and incorporates several new mechanisms for limiting memory resources while expanding neural network learning capacity. The algorithm is tested on a custom set of sequential virtual environments that emulate several meaningful scenarios for intellectually down-scaled species and autonomous robots. Additionally, a library for connecting an unconstrained range of machine learning tools, in a variety of programming languages, to the Unity3D simulation engine for the development of future learning algorithms and environments is also proposed.
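    The thesis itself is not quoted here; the snippet below is only a guess at the shape of "limiting memory resources while expanding learning capacity": a network that adds neurons when progress on the current task stalls, but never past a fixed budget. The class `GrowableNetwork`, the thresholds, and the toy evaluation are all invented for illustration.

```python
# Hypothetical neurogenesis-under-a-budget sketch; names and thresholds are
# invented and do not come from the thesis.

MAX_NEURONS = 128       # hard memory budget
GROWTH_STEP = 8         # neurons added per neurogenesis event
STALL_THRESHOLD = 1e-3  # minimum improvement that still counts as progress


class GrowableNetwork:
    def __init__(self, hidden=16):
        self.hidden = hidden

    def grow(self):
        """Add neurons only while the memory budget allows it."""
        self.hidden = min(MAX_NEURONS, self.hidden + GROWTH_STEP)


def train_on_task(net, evaluate, steps=100):
    """Train on one task; trigger neurogenesis whenever improvement stalls."""
    last_error = float("inf")
    for _ in range(steps):
        error = evaluate(net)  # placeholder for one train/evaluate step
        if last_error - error < STALL_THRESHOLD and net.hidden < MAX_NEURONS:
            net.grow()         # expand capacity rather than overwrite old skills
        last_error = error
    return net


if __name__ == "__main__":
    # Toy evaluation where error only drops when capacity grows, so the loop
    # repeatedly hits the stall condition and grows the net up to the budget.
    net = train_on_task(GrowableNetwork(), evaluate=lambda n: 1.0 / n.hidden)
    print("final hidden size:", net.hidden)
```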

    Preparing Aspiring Superintendents to Lead School Improvement: Perceptions of Graduates for Program Development

    Changes in the design and delivery of educational leadership preparation programs are advocated in order to meet the needs of leadership for 21st century schools (Byrd, 2001; Cox, 2002; McKerrow, 1998; Smylie & Bennett, 2005). The changing needs of the 21st century, coupled with accountability standards and more diverse populations of students within school districts, create challenges for leaders who are attempting to increase student achievement (Firestone & Shipps, 2005; Schlechty, 2008). Further, student performance demands have increased at the state and national level because of the No Child Left Behind Act (Wong & Nicotera, 2007). These standards have thus increased the emphasis on the administrator's responsibility to positively impact student achievement (Taylor, 2001). With the graying of the profession and the need for exemplary school superintendents, the preparation of school superintendents who can successfully lead school improvement is vitally important (Lashway, 2006). According to the National Council for the Accreditation of Teacher Education (NCATE, 2002), university preparation programs should seek current leaders' perspectives on critical content components and the processes to be used in the preparation of educational leaders who can lead school improvement practices and processes.

    A Cognitive Science Based Machine Learning Architecture

    In an attempt to illustrate the application of cognitive science principles to hard AI problems in machine learning, we propose the LIDA technology, a cognitive science based architecture capable of more human-like learning. A LIDA based software agent or cognitive robot will be capable of three fundamental, continuously active, human-like learning mechanisms:
    1) perceptual learning, the learning of new objects, categories, relations, etc.,
    2) episodic learning of events, the what, where, and when,
    3) procedural learning, the learning of new actions and action sequences with which to accomplish new tasks.
    The paper argues for the use of modular components, each specializing in implementing individual facets of human and animal cognition, as a viable approach towards achieving general intelligence.
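    The abstract describes an architecture rather than an algorithm, so the skeleton below only illustrates the modular idea: one component per learning mechanism (perceptual, episodic, procedural), composed into a single agent. The class names and their minimal interfaces are assumptions made for illustration and are not drawn from the LIDA framework itself.

```python
# Illustrative modular skeleton; class names and interfaces are hypothetical
# and are not taken from the LIDA framework.

class PerceptualMemory:
    """Learns new objects, categories, and relations from percepts."""
    def __init__(self):
        self.concepts = set()

    def learn(self, percept):
        self.concepts.add(percept)


class EpisodicMemory:
    """Learns events as (what, where, when) triples."""
    def __init__(self):
        self.events = []

    def learn(self, what, where, when):
        self.events.append((what, where, when))


class ProceduralMemory:
    """Learns new actions and action sequences for accomplishing tasks."""
    def __init__(self):
        self.skills = {}

    def learn(self, task, actions):
        self.skills[task] = list(actions)


class CognitiveAgent:
    """Agent built from modular components, each handling one facet of cognition."""
    def __init__(self):
        self.perceptual = PerceptualMemory()
        self.episodic = EpisodicMemory()
        self.procedural = ProceduralMemory()


if __name__ == "__main__":
    agent = CognitiveAgent()
    agent.perceptual.learn("cup")
    agent.episodic.learn("saw cup", "kitchen", "09:00")
    agent.procedural.learn("make tea", ["boil water", "steep tea", "pour"])
    print(len(agent.perceptual.concepts),
          len(agent.episodic.events),
          len(agent.procedural.skills))
```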