
    The role of frontal cortical-basal ganglia circuits in simple and sequential visuomotor learning

    Imaging, recording and lesioning studies implicate the basal ganglia and anatomically related regions of frontal cortex in visuomotor learning. Two experiments were conducted to elucidate the role of frontal cortex and striatum in visuomotor learning. Several tasks were used to characterize motor function, including: a visuomotor reaction time (VSRT) task, measuring response speed and accuracy to luminance cues; simple stimulus-response (S-R) learning, measuring VSRT improvements when cues occurred in consistent locations over several trials; and a serial reaction time (SRT) task, measuring motor sequence learning. SRT learning was characterized by incremental changes in reaction time (RT) when rats were trained with the same sequence across daily sessions and by abrupt RT changes when they were switched to random-sequence sessions. In experiment 1, rats with excitotoxic lesions in primary (M1) or secondary (M2) motor cortex, primary and secondary (M1M2) motor cortices, medial prefrontal cortex (mPF), or sham surgery were tested on these tasks. Cortical lesions slowed RT in the VSRT task but did not impair short- or long-term simple S-R learning. Cortical lesions increased RTs for the initial response of a 5-response sequence in the SRT task, an impairment that was exacerbated when performing repeated (learned) sequences. All groups demonstrated visuomotor sequence learning, including incremental changes in RTs for later responses in learned sequences that reversed abruptly when switched to random sequences. In experiment 2, rats were given lesions in dorsolateral striatum, dorsomedial striatum, complete dorsal striatum, or ventral striatum, or sham surgery. Rats with ventral striatal lesions were unimpaired on all visuomotor tasks, demonstrating shorter RTs than controls on most measures. Dorsomedial striatal lesions significantly impaired all VSRT performance measures. Striatal lesions had no effect on short- or long-term simple S-R learning. Lesions involving dorsomedial striatum disrupted initiation of motor sequences in the SRT task, and this impairment was exaggerated when performing well-learned sequences. Striatal lesions did not disrupt the incremental RT changes of later responses in the sequence that are indicative of motor learning. Results suggest that cortico-striatal circuits are involved in initiating learned motor sequences, consistent with a role in motor planning. These circuits do not appear essential for acquisition or execution of learned visuomotor sequences.
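    The measures described above can be made concrete with a small analysis sketch: sequence learning in the SRT task is indexed by faster RTs for the later responses of a learned sequence relative to random sequences, while sequence initiation is indexed by the RT of the first response. The function below is a hypothetical illustration; the names and the per-trial data layout are assumptions, not the authors' analysis code.

    ```python
    import numpy as np

    def srt_learning_scores(rt_learned, rt_random):
        """Toy SRT-task scores from arrays of shape (trials, 5): per-response
        RTs for learned-sequence and random-sequence blocks."""
        rt_learned = np.asarray(rt_learned, dtype=float)
        rt_random = np.asarray(rt_random, dtype=float)
        # Initiation: RT of the first response of the 5-response sequence.
        initiation_cost = rt_learned[:, 0].mean() - rt_random[:, 0].mean()
        # Execution/learning: later responses should be faster on learned blocks.
        sequence_benefit = rt_random[:, 1:].mean() - rt_learned[:, 1:].mean()
        return {"initiation_cost": initiation_cost,
                "sequence_benefit": sequence_benefit}
    ```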

    Semi-supervised Tuning from Temporal Coherence

    Recent work has demonstrated the usefulness of temporal coherence to regularize supervised training or to learn invariant features with deep architectures. In particular, enforcing smooth output changes while presenting temporally-close frames from video sequences proved to be an effective strategy. In this paper we prove the efficacy of temporal coherence for semi-supervised incremental tuning. We show that a deep architecture, just mildly trained in a supervised manner, can progressively improve its classification accuracy if exposed to video sequences of unlabeled data. The extent to which semi-supervised tuning can, in some cases, improve classification accuracy (approaching supervised performance) is somewhat surprising. A number of control experiments pointed out the fundamental role of temporal coherence. Comment: Under review as a conference paper at ICLR 201
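    As a minimal sketch of the idea (not the authors' exact training procedure), temporal coherence can be imposed as an extra loss term that penalizes changes in the network's predictions across consecutive video frames, added to a standard supervised loss on the few labeled examples. The PyTorch snippet below assumes a generic classifier `model`; all names and the loss weighting are illustrative.

    ```python
    import torch
    import torch.nn.functional as F

    def temporal_coherence_loss(model, frames):
        """Penalize prediction changes between consecutive (temporally close)
        frames; `frames` has shape (T, C, H, W) for an unlabeled video clip."""
        probs = F.softmax(model(frames), dim=1)          # (T, num_classes)
        return ((probs[1:] - probs[:-1]) ** 2).sum(dim=1).mean()

    def semi_supervised_step(model, optimizer, labeled_batch, unlabeled_clip, lam=1.0):
        """One tuning step: supervised loss plus temporal-coherence regularizer."""
        x, y = labeled_batch
        loss = F.cross_entropy(model(x), y) + lam * temporal_coherence_loss(model, unlabeled_clip)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```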

    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
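    The growth mechanism described above (networks that expand in response to novel sensory experience) can be illustrated with a toy sketch in the spirit of growing self-organizing networks. The class below is a deliberate simplification under assumed names and thresholds, not the paper's recurrent dual-memory model: a new node is inserted when the best-matching node is too far from the input, otherwise the winner is adapted toward it.

    ```python
    import numpy as np

    class GrowingMemory:
        """Toy growing self-organizing memory: insert a node for novel inputs,
        otherwise adapt the best-matching node toward the input."""

        def __init__(self, dim, novelty_threshold=0.5, lr=0.1):
            self.nodes = np.empty((0, dim))
            self.novelty_threshold = novelty_threshold
            self.lr = lr

        def update(self, x):
            x = np.asarray(x, dtype=float)
            if len(self.nodes) == 0:
                self.nodes = x[None, :].copy()
                return 0
            dists = np.linalg.norm(self.nodes - x, axis=1)
            best = int(np.argmin(dists))
            if dists[best] > self.novelty_threshold:
                # Novel experience: grow the network with a new node.
                self.nodes = np.vstack([self.nodes, x])
                return len(self.nodes) - 1
            # Familiar experience: move the winner toward the input.
            self.nodes[best] += self.lr * (x - self.nodes[best])
            return best
    ```

    In a dual-memory setting, one such episodic memory could store fine-grained instance nodes and periodically re-present stored trajectories to a second, more slowly growing semantic memory, loosely mirroring the replay-based consolidation the abstract describes.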

    SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-Oriented Navigation System

    Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact in real time, and are released at the appropriate times during autonomous navigation, remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems, governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior. The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation. Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0225, F49620-01-1-0397); National Science Foundation (IRI 90-24877, SBE-0354378); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309, N00014-95-1-0657, N00014-01-1-0624); Pacific Sierra Research (PSR 91-6075-2)
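    The reactive-to-planned transition the abstract describes can be caricatured in a few lines: short movement histories that were followed by reward become stored "chunks", and when the recent working-memory history matches a stored chunk the planned next move is read out, otherwise an exploratory move is chosen. This is a loose illustration under assumed names and a made-up update rule, not the SOVEREIGN model's differential-equation dynamics.

    ```python
    import random

    class ChunkPlanner:
        """Toy reactive-to-planned controller driven by rewarded sequence chunks."""

        def __init__(self, actions=("approach", "orient")):
            self.actions = list(actions)
            self.chunks = {}    # history tuple -> (planned next move, strength)
            self.history = []   # recent moves (bounded below)

        def act(self):
            key = tuple(self.history[-3:])
            planned = self.chunks.get(key)
            if planned is not None and random.random() < planned[1]:
                move = planned[0]                    # planned read-out
            else:
                move = random.choice(self.actions)   # reactive exploration
            self.history = (self.history + [move])[-10:]
            return move

        def reinforce(self, reward):
            """After a rewarded move, strengthen the chunk that preceded it."""
            if len(self.history) < 4 or reward <= 0:
                return
            key, move = tuple(self.history[-4:-1]), self.history[-1]
            strength = self.chunks.get(key, (move, 0.0))[1]
            self.chunks[key] = (move, min(1.0, strength + 0.2 * reward))
    ```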

    Models of probabilistic category learning in Parkinson's disease: Strategy use and the effects of L-dopa

    Probabilistic category learning (PCL) has become an increasingly popular paradigm to study the brain bases of learning and memory. It has been argued that PCL relies on procedural habit learning, which is impaired in Parkinson's disease (PD). However, as PD patients were typically tested under medication, it is possible that levodopa (L-dopa) caused impaired performance in PCL. We present formal models of rule-based strategy switching in PCL, to re-analyse the data from [Jahanshahi, M., Wilkinson, L., Gahir, H., Dharminda, A., & Lagnado, D. A. (2009). Medication impairs probabilistic classification learning in Parkinson's disease. Manuscript submitted for publication] comparing PD patients on and off medication (within subjects) to matched controls. Our analysis shows that PD patients followed a similar strategy-switch process as controls when off medication, but not when on medication. On medication, PD patients mainly followed a random guessing strategy, with only a few switching to the better Single Cue strategies. PD patients off medication and controls made more use of the optimal Multi-Cue strategy. In addition, while controls and PD patients off medication only switched to strategies which did not decrease performance, strategy switches of PD patients on medication were not always directed as such. Finally, results indicated that PD patients on medication responded according to a probability matching strategy indicative of associative learning, while the behaviour of PD patients off medication and controls was consistent with a rule-based hypothesis-testing procedure. (C) 2009 Elsevier Inc. All rights reserved.
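    A small sketch can make the distinction between the strategies named above concrete. Assume a hypothetical four-cue probabilistic task in which each cue pattern determines an outcome probability; the strategies then differ in how a response is produced from that pattern. The cue weights, the logistic task structure, and all names below are illustrative assumptions, not the authors' fitted models.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 4-cue task: P(outcome = 1 | pattern) set by fixed cue weights.
    cue_weights = np.array([0.4, 0.3, -0.3, -0.4])

    def p_outcome(pattern):
        return 1.0 / (1.0 + np.exp(-4.0 * pattern @ cue_weights))

    def respond(pattern, strategy):
        p = p_outcome(np.asarray(pattern, dtype=float))
        if strategy == "random_guess":
            return int(rng.integers(2))
        if strategy == "single_cue":            # rule based on one cue only
            return int(pattern[0] == 1)
        if strategy == "multi_cue":             # deterministic rule over all cues
            return int(p > 0.5)
        if strategy == "probability_matching":  # match choice rate to outcome rate
            return int(rng.random() < p)
        raise ValueError(strategy)
    ```

    For example, respond(np.array([1, 0, 1, 0]), "multi_cue") always gives the maximizing response for that pattern, whereas repeated calls with "probability_matching" reproduce the outcome rate on average, which is the associative-learning signature mentioned in the abstract.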

    Online Metric-Weighted Linear Representations for Robust Visual Tracking

    In this paper, we propose a visual tracker based on a metric-weighted linear representation of appearance. In order to capture the interdependence of different feature dimensions, we develop two online distance metric learning methods using proximity comparison information and structured output learning. The learned metric is then incorporated into a linear representation of appearance. We show that online distance metric learning significantly improves the robustness of the tracker, especially on those sequences exhibiting drastic appearance changes. In order to bound growth in the number of training samples, we design a time-weighted reservoir sampling method. Moreover, we enable our tracker to automatically perform object identification during the process of object tracking, by introducing a collection of static template samples belonging to several object classes of interest. Object identification results for an entire video sequence are achieved by systematically combining the tracking information and visual recognition at each frame. Experimental results on challenging video sequences demonstrate the effectiveness of the method for both inter-frame tracking and object identification. Comment: 51 pages. Appearing in IEEE Transactions on Pattern Analysis and Machine Intelligence
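    The time-weighted reservoir sampling idea (bounding the training set while favouring recent frames) can be sketched with the standard weighted reservoir scheme of Efraimidis and Spirakis, using a weight that grows with time. The decay rate and function names below are assumptions, and the paper's exact weighting may differ.

    ```python
    import heapq
    import math
    import random

    def time_weighted_reservoir(stream, capacity, decay=0.05):
        """Keep at most `capacity` samples from a stream, favouring recent ones.

        Each item gets key u**(1/w) with u ~ U(0,1) and weight w growing with
        arrival time; the items with the largest keys are retained.
        """
        reservoir = []  # min-heap of (key, time, sample)
        for t, sample in enumerate(stream):
            w = math.exp(decay * t)            # newer samples get larger weights
            key = random.random() ** (1.0 / w)
            if len(reservoir) < capacity:
                heapq.heappush(reservoir, (key, t, sample))
            elif key > reservoir[0][0]:
                heapq.heapreplace(reservoir, (key, t, sample))
        return [s for _, _, s in reservoir]
    ```

    With decay set to zero every item receives the same weight and the procedure reduces to plain uniform reservoir sampling.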