
    Using Growing Neural Gas Networks to Represent Visual Object Knowledge


    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
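The novelty-driven expansion of a growing network, together with replay of stored neural activations, can be sketched minimally. The class name, thresholds, learning rate, and replay scheme below are illustrative assumptions (closer to a bare Growing When Required-style memory than to the paper's full recurrent dual-memory architecture):

```python
import numpy as np

class GrowingEpisodicMemory:
    """Minimal sketch of a growing self-organizing memory: a new node is
    inserted whenever the best-matching unit lies too far from the input
    (novelty-driven structural plasticity). All hyperparameters here are
    illustrative, not the paper's values."""

    def __init__(self, dim, insert_threshold=0.5, lr=0.1):
        self.weights = np.empty((0, dim))   # one prototype vector per node
        self.insert_threshold = insert_threshold
        self.lr = lr

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.weights) == 0:
            self.weights = x[None, :].copy()
            return 0
        dists = np.linalg.norm(self.weights - x, axis=1)
        bmu = int(np.argmin(dists))
        if dists[bmu] > self.insert_threshold:
            # Novel input: expand the network with a new node.
            self.weights = np.vstack([self.weights, x])
            return len(self.weights) - 1
        # Familiar input: move the best-matching unit toward the sample.
        self.weights[bmu] += self.lr * (x - self.weights[bmu])
        return bmu

    def replay(self, n_steps=5, rng=None):
        """Reactivate stored prototypes as pseudo-samples, standing in for
        the periodic replay used to consolidate knowledge."""
        if rng is None:
            rng = np.random.default_rng()
        idx = rng.integers(0, len(self.weights), size=n_steps)
        return self.weights[idx]

mem = GrowingEpisodicMemory(dim=2, insert_threshold=0.5)
for x in [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]]:
    mem.learn(x)
# Two well-separated input clusters yield two prototype nodes.
```

In the paper's architecture the same expansion mechanism is regulated differently in each memory: unsupervised in the episodic network, task-modulated in the semantic one.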

    Evolutionary robotics and neuroscience


    Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

    Wearable cameras stand out as one of the most promising devices for the upcoming years, and as a consequence, the demand for computer algorithms that automatically understand the videos recorded with them is increasing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature poses important challenges, such as changing light conditions and unrestricted recording locations. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information regarding the light conditions and the location captured. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. The proposed strategy is used, as an application case, as a switching mechanism to improve hand detection in egocentric videos.

    Memory Organization for Invariant Object Recognition and Categorization

    Using distributed representations of objects enables artificial systems to be more versatile regarding inter- and intra-category variability, improving the appearance-based modeling of visual object understanding. They are built on the hypothesis that object models are structured dynamically using relatively invariant patches of information arranged in visual dictionaries, which can be shared across objects from the same category. However, implementing distributed representations efficiently to support the complexity of invariant object recognition and categorization remains a research problem of outstanding significance for the biological, the psychological, and the computational approach to understanding visual perception. The present work focuses on solutions driven by top-down object knowledge. It is motivated by the idea that, equipped with sensors and processing mechanisms from the neural pathways serving visual perception, biological systems are able to define efficient measures of similarity between properties observed in objects and use these relationships to form natural clusters of object parts that share equivalent properties. Based on the comparison of stimulus-response signatures from these object-to-memory mappings, biological systems are able to identify objects and their kinds. The present work combines biologically inspired mathematical models to develop memory frameworks for artificial systems, where these invariant patches are represented with regular-shaped graphs, whose nodes are labeled with elementary features that capture texture information from object images. It also applies unsupervised clustering techniques to these graph image features to corroborate the existence of natural clusters within their data distribution and determine their composition.
The properties of such a computational theory include self-organization and intelligent matching of these graph image features based on the similarity and co-occurrence of their captured texture information. The performance of feature-based artificial systems equipped with each of the developed memory frameworks in modeling invariant object recognition and categorization is validated by applying standard methodologies to well-known image libraries from the literature. Additionally, these artificial systems are cross-compared with state-of-the-art alternative solutions. In conclusion, the findings of the present work convey implications for strategies and experimental paradigms to analyze human object memory as well as technical applications for robotics and computer vision.
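Clustering graph node features to corroborate natural groupings can be sketched as follows. The synthetic jet-like texture descriptors, the cluster count, and the use of the silhouette score as the corroboration criterion are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical stand-in for texture-jet node labels: each row is a
# descriptor attached to one node of a regular-shaped image graph.
rng = np.random.default_rng(1)
parts = [rng.normal(c, 0.1, size=(30, 16)) for c in (0.0, 1.0, 2.0)]
jets = np.vstack(parts)

# Unsupervised clustering of the node features: invariant patches shared
# within a category should land in the same cluster (a visual dictionary).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(jets)

# A high silhouette score corroborates that natural clusters really exist
# in the feature distribution, rather than being imposed by the algorithm.
score = silhouette_score(jets, km.labels_)
```

Inspecting cluster composition (which image patches fall together) then determines what each dictionary entry represents.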