
    Towards Teaching a Robot to Count Objects

    We present an example of incremental learning between two computational models dealing with different modalities: a model that shifts spatial visual attention and a model that learns the ordinal sequence of spoken numbers. Merging them via a common reward signal nevertheless produces a cardinal counting behaviour that can be implemented on a robot.

    Minimax Forward and Backward Learning of Evolving Tasks with Performance Guarantees

    For a sequence of classification tasks that arrive over time, it is common that tasks evolve in the sense that consecutive tasks are often more similar. The incremental learning of a growing sequence of tasks holds promise to enable accurate classification even with few samples per task by leveraging information from all the tasks in the sequence (forward and backward learning). However, existing techniques developed for continual learning and concept drift adaptation are either designed for tasks with time-independent similarities or aim only to learn the last task in the sequence. This paper presents incremental minimax risk classifiers (IMRCs) that effectively exploit forward and backward learning and account for evolving tasks. In addition, we analytically characterize the performance improvement provided by forward and backward learning in terms of the tasks' expected quadratic change and the number of tasks. The experimental evaluation shows that IMRCs can yield significant performance improvements, especially for reduced sample sizes.
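    The forward/backward idea can be illustrated with a deliberately simplified sketch. This is not the IMRC algorithm: the nearest-class-mean classifier and all function names below are assumptions for illustration only. Each task contributes a parameter estimate, and estimates are smoothed over the task sequence in both directions, so that a task with few samples borrows strength from similar neighbours before and after it.

```python
import numpy as np

def class_means(X, y):
    # Per-task classifier parameters: one mean vector per class
    # (a nearest-class-mean stand-in for the paper's minimax classifiers).
    return [X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)]

def forward_backward_estimates(task_params, w=0.5):
    """Smooth per-task estimates over the sequence in both directions,
    assuming consecutive tasks are similar (w = weight on the neighbour)."""
    fwd = [task_params[0]]
    for p in task_params[1:]:
        fwd.append(w * fwd[-1] + (1 - w) * p)   # forward pass
    bwd = [task_params[-1]]
    for p in reversed(task_params[:-1]):
        bwd.append(w * bwd[-1] + (1 - w) * p)   # backward pass
    bwd.reverse()
    # Combine both directions so every task uses past and future information.
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

def predict(params, X):
    # Assign each sample to the nearest class mean.
    d0 = np.linalg.norm(X - params[0], axis=1)
    d1 = np.linalg.norm(X - params[1], axis=1)
    return (d1 < d0).astype(int)
```

    The weight `w` trades off reliance on neighbouring tasks against the task's own (possibly noisy) estimate, loosely mirroring the dependence on the tasks' expected change that the paper characterizes analytically.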

    Logistic regression models to predict solvent accessible residues using sequence- and homology-based qualitative and quantitative descriptors applied to a domain-complete X-ray structure learning set

    A working example of relative solvent accessibility (RSA) prediction for proteins is presented. Novel logistic regression models with various qualitative descriptors, including amino acid type, and quantitative descriptors, including 20- and six-term sequence entropy, have been built and validated. A domain-complete learning set of over 1300 proteins is used to fit initial models with various sequence homology descriptors as well as query residue qualitative descriptors. Homology descriptors are derived from BLASTp sequence alignments, whereas the RSA values are determined directly from the crystal structure. The logistic regression models are fitted using dichotomous responses indicating buried or solvent-accessible residues, with the binary classifications obtained from the RSA values. The fitted models produce binary predictions of residue solvent accessibility with accuracies comparable to other, less computationally intensive methods, using the standard RSA threshold criteria of 20% and 25% for solvent accessibility. When an additional non-homology descriptor describing Lobanov–Galzitskaya residue disorder propensity is included, incremental improvements in accuracy are achieved, with 25%-threshold accuracies of 76.12% and 74.45% for the Manesh-215 and CASP(8+9) test sets, respectively. Moreover, the described software and the accompanying learning and validation sets allow students and researchers to explore the utility of RSA prediction with simple, physically intuitive models in any number of related applications.
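    The core classification step can be sketched from the abstract alone: binarize RSA values at a threshold, then fit a plain logistic regression on residue descriptor vectors. The gradient-descent fit, the single toy feature, and all function names here are illustrative assumptions, not the paper's software or descriptor set.

```python
import numpy as np

def rsa_labels(rsa, threshold=0.25):
    # 1 = solvent accessible, 0 = buried, per the 25% RSA threshold criterion.
    return (rsa >= threshold).astype(float)

def fit_logistic(X, y, lr=0.1, steps=5000):
    # Plain gradient-descent logistic regression; rows of X are residue
    # descriptor vectors (stand-ins for the sequence/homology descriptors).
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted P(accessible)
        w -= lr * Xb.T @ (p - y) / len(y)      # mean cross-entropy gradient
    return w

def predict_accessible(w, X):
    # Binary prediction: accessible if the fitted probability is >= 0.5.
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
```

    Swapping the threshold between 0.20 and 0.25 reproduces the two standard criteria the abstract evaluates.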

    The role of frontal cortical-basal ganglia circuits in simple and sequential visuomotor learning

    Imaging, recording and lesioning studies implicate the basal ganglia and anatomically related regions of frontal cortex in visuomotor learning. Two experiments were conducted to elucidate the role of frontal cortex and striatum in visuomotor learning. Several tasks were used to characterize motor function, including: a visuomotor reaction time (VSRT) task, measuring response speed and accuracy to luminance cues; simple stimulus-response (S-R) learning, measuring VSRT improvements when cues occurred in consistent locations over several trials; and a serial reaction time (SRT) task, measuring motor sequence learning. SRT learning was characterized by incremental changes in reaction time (RT) when rats were trained with the same sequence across daily sessions and by abrupt RT changes when they were switched to random-sequence sessions. In experiment 1, rats with excitotoxic lesions in primary (M1) or secondary (M2) motor cortex, combined primary and secondary (M1M2) motor cortices, or medial prefrontal cortex (mPF), or with sham surgery, were tested on these tasks. Cortical lesions slowed RT in the VSRT task but did not impair short- or long-term simple S-R learning. Cortical lesions increased RTs for the initial response of a five-response sequence in the SRT task, an effect that was exacerbated when performing repeated (learned) sequences. All groups demonstrated visuomotor sequence learning, including incremental changes in RTs for later responses in learned sequences that reversed abruptly when switched to random sequences. Rats in experiment 2 were given lesions in dorsolateral striatum, dorsomedial striatum, complete dorsal striatum or ventral striatum, or sham surgery. Rats with ventral striatal lesions were unimpaired on all visuomotor tasks, demonstrating shorter RTs than controls on most measures. Dorsomedial striatal lesions significantly impaired all VSRT performance measures. Striatal lesions had no effect on short- or long-term simple S-R learning. 
Lesions involving dorsomedial striatum disrupted initiation of motor sequences in the SRT task; this impairment was exaggerated when performing well-learned sequences. Striatal lesions did not disrupt the incremental RT changes of later responses in the sequence indicative of motor learning. The results suggest that cortico-striatal circuits are involved in initiating learned motor sequences, consistent with a role in motor planning. These circuits do not appear essential for the acquisition or execution of learned visuomotor sequences.

    Analysis of the impact of class ordering in class incremental image classification

    The benefits of incremental learning make it desirable for many real-world applications. It enables efficient utilization of resources by eliminating the need to start training from scratch when the considered set of tasks is updated. It also reduces memory usage, which is particularly important where privacy limitations exist, such as in the healthcare sector, where storing patient data for a long time is prohibited. However, the main challenge of incremental learning is catastrophic forgetting, which causes a decline in the performance of previously learned tasks after a new one is learned. To overcome this challenge, various incremental learning methods have been proposed. In this work, we explore the influence of class ordering on class-incremental learning and the resilience of the method to different class orderings. Additionally, we examine how the complexity of incremental learning scenarios, or task-split strategies, affects the model's performance. We start with a pre-existing approach and then introduce extensions to improve its performance. Experimental results show that the model's performance is not significantly impacted by the sequence in which classes are presented, but the complexity of the incremental tasks plays a crucial role in determining it. Additionally, starting with a higher number of classes typically results in better performance.
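    The experimental variables the abstract explores (class ordering, task-split strategy, size of the initial task) can be sketched as a small scenario generator. The helper below and its name are hypothetical, not the paper's code.

```python
def make_task_splits(classes, base_size, increment, order=None):
    """Build a class-incremental scenario: an initial task of base_size
    classes ('starting with a higher number of classes'), followed by
    fixed-size increments; `order` permutes the classes first to test
    resilience to different class orderings."""
    cls = list(classes) if order is None else [classes[i] for i in order]
    tasks = [cls[:base_size]]
    for i in range(base_size, len(cls), increment):
        tasks.append(cls[i:i + increment])
    return tasks
```

    Varying `order` while holding the splits fixed probes ordering sensitivity; varying `base_size` and `increment` probes the task-split complexity the abstract finds decisive.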