464 research outputs found

    How do we approach intrinsic motivation computationally? A commentary on: What is intrinsic motivation? A typology of computational approaches, by Pierre-Yves Oudeyer and Frederic Kaplan

    What is the energy function guiding behavior and learning? Representation-based approaches such as maximum entropy, generative models, sparse coding, or slowness principles can account for unsupervised learning of biologically observed structure in sensory systems from raw sensory data. However, they do not relate to behavior. Behavior-based approaches such as reinforcement learning explain animal behavior in well-described situations, but they rely on high-level representations that they cannot extract from raw sensory data. Combining multiple goal functions therefore seems the methodology of choice for understanding the complexity of the brain. But what is the set of possible goals? ...
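    Since the commentary turns on combining goal functions, a toy example may help: a minimal sketch, assuming a latent code z_t and a hypothetical linear decoder, of how a generative (reconstruction) term, a sparse-coding term, and a slowness term could be summed into one energy function. The terms and weights are illustrative assumptions, not a model from the commentary.

    import numpy as np

    def energy(x_t, z_t, z_prev, decode, lam_sparse=0.1, lam_slow=0.1):
        recon = np.sum((x_t - decode(z_t)) ** 2)   # generative/reconstruction goal
        sparse = np.sum(np.abs(z_t))               # sparse-coding goal
        slow = np.sum((z_t - z_prev) ** 2)         # slowness goal
        return recon + lam_sparse * sparse + lam_slow * slow

    # Example with a hypothetical linear decoder mapping 4 latents to 16 inputs.
    rng = np.random.default_rng(0)
    D = rng.random((16, 4))
    E = energy(rng.random(16), rng.random(4), rng.random(4), lambda z: D @ z)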

    Conversational Analysis using Utterance-level Attention-based Bidirectional Recurrent Neural Networks

    Recent approaches to dialogue act recognition have shown that context from preceding utterances is important for classifying the current one, and that performance improves markedly when this context is taken into account. We propose an utterance-level attention-based bidirectional recurrent neural network (Utt-Att-BiRNN) model to analyze the importance of preceding utterances for classifying the current one. In our setup, the BiRNN receives the current utterance together with its preceding utterances as input. Our model outperforms previous models that use only preceding utterances as context on the corpus used. A further contribution of the article is to quantify how much information each utterance carries for classifying the subsequent one, and to show that context-based learning not only improves performance but also yields higher confidence in the classification. We represent the utterances with character- and word-level features, and report results for each representation as well as for an ensemble of both. We found that when classifying short utterances, the closest preceding utterances contribute to a higher degree.
    Comment: Proceedings of INTERSPEECH 201
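    As a rough illustration of the architecture described above, here is a minimal sketch, assuming PyTorch, of utterance-level attention over a bidirectional GRU; the class name, layer sizes, and the attention form are assumptions, not the authors' implementation, and the character/word feature extraction is omitted.

    import torch
    import torch.nn as nn

    class UttAttBiRNN(nn.Module):
        def __init__(self, utt_dim=128, hidden=64, n_classes=10):
            super().__init__()
            # Bidirectional GRU over the sequence of utterance embeddings
            # (current utterance plus its preceding context utterances).
            self.birnn = nn.GRU(utt_dim, hidden, batch_first=True,
                                bidirectional=True)
            # Learned scoring layer: one attention weight per utterance.
            self.att = nn.Linear(2 * hidden, 1, bias=False)
            self.out = nn.Linear(2 * hidden, n_classes)

        def forward(self, utts):                   # utts: (batch, n_utts, utt_dim)
            h, _ = self.birnn(utts)                # (batch, n_utts, 2*hidden)
            w = torch.softmax(self.att(h), dim=1)  # attention weight per utterance
            ctx = (w * h).sum(dim=1)               # weighted context summary
            return self.out(ctx), w.squeeze(-1)    # logits + inspectable weights

    # Example: classify the current utterance given 3 preceding ones.
    model = UttAttBiRNN()
    logits, weights = model(torch.randn(2, 4, 128))  # batch of 2, 4 utterances each

    The returned attention weights are what allow the analysis the abstract describes: inspecting how much each preceding utterance contributed to the classification.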

    A Multimodal Hierarchical Approach to Robot Learning by Imitation

    In this paper we propose an approach to robot learning by imitation that uses the multimodal inputs of language, vision, and motor action. In our approach a student robot learns from a teacher robot how to perform three separate behaviours based on these inputs. We considered two neural architectures for this robot learning task. First, a one-step hierarchical architecture trained with two different learning approaches, one based on Kohonen's self-organising map and one based on the Helmholtz machine, turned out to be either inefficient or incapable of producing differentiated behaviour. In response we developed a hierarchical architecture that combines both learning approaches to overcome these problems. In doing so, the proposed robot system models specific aspects of learning using concepts of the mirror neuron system (Rizzolatti and Arbib, 1998) with regard to demonstration learning.
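    To make the first of the two learning approaches concrete, here is a minimal sketch of one Kohonen self-organising map update step in NumPy; the map size, learning rate, and Gaussian neighbourhood width are illustrative assumptions, and the Helmholtz-machine branch and the combined hierarchy are not shown.

    import numpy as np

    def som_step(weights, x, lr=0.1, sigma=1.0):
        """weights: (rows, cols, dim) map; x: (dim,) input vector."""
        rows, cols, _ = weights.shape
        # Best-matching unit: the node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(dists), (rows, cols))
        # Gaussian neighbourhood pulls the BMU and its neighbours toward x.
        ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
        grid_d2 = (ii - bi) ** 2 + (jj - bj) ** 2
        h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
        return weights + lr * h * (x - weights)

    # Example: train a 5x5 map on random 16-dim "multimodal" feature vectors.
    rng = np.random.default_rng(0)
    W = rng.random((5, 5, 16))
    for _ in range(100):
        W = som_step(W, rng.random(16))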

    A Study of Photosynthesis in Clear Lake, Iowa

    The oxygen and carbon-14 methods were used to measure photosynthesis in Clear Lake, Iowa, during 1958 and 1959. Differences in the rates of photosynthesis at widely separated stations were generally small. Daily variations in the rate of photosynthesis were not greater than two-fold. The correlation between the rate of photosynthesis and incident illumination was 0.81, and the efficiency of utilization of incident light energy was 0.72 per cent. The net gain of organic matter at the phytoplankton level during the period May 1 to November 1 was equivalent to 3480 pounds of glucose per acre.
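    The two summary statistics above follow from standard formulas; as a worked illustration only, the sketch below computes a correlation coefficient and a percentage efficiency from hypothetical measurements. The numbers are invented, not the study's data.

    import numpy as np

    illum = np.array([120., 300., 450., 600., 520.])  # incident light (hypothetical units)
    rate  = np.array([0.8, 1.9, 3.1, 4.0, 3.5])       # photosynthesis rate (hypothetical)

    r = np.corrcoef(illum, rate)[0, 1]                # Pearson r; the study reported 0.81

    energy_fixed = 7.2        # energy stored in organic matter (hypothetical, same units)
    energy_incident = 1000.0  # incident light energy (hypothetical)
    efficiency_pct = 100.0 * energy_fixed / energy_incident  # study reported 0.72 per cent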

    The measurement of carbon fixation in Clear Lake, Iowa, using carbon-14


    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
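    As a rough illustration of the "grow on novelty" idea behind the two growing networks, here is a minimal Growing-When-Required-style sketch in NumPy; the activity threshold, insertion rule, and class name are illustrative assumptions, and the recurrent, dual-memory, and replay machinery of the actual model are not shown.

    import numpy as np

    class GrowingNetwork:
        def __init__(self, dim, act_threshold=0.9, lr=0.1):
            self.nodes = [np.random.rand(dim)]  # prototype weight vectors
            self.a_t = act_threshold            # activity (novelty) threshold
            self.lr = lr

        def step(self, x):
            d = [np.linalg.norm(x - w) for w in self.nodes]
            b = int(np.argmin(d))               # best-matching node
            activity = np.exp(-d[b])            # high when x is familiar
            if activity < self.a_t:
                # Input is novel: insert a new node between x and the BMU.
                self.nodes.append((x + self.nodes[b]) / 2.0)
            else:
                # Input is familiar: adapt the BMU toward x.
                self.nodes[b] += self.lr * (x - self.nodes[b])

    # Example: the network expands as novel samples arrive.
    net = GrowingNetwork(dim=8)
    for x in np.random.rand(200, 8):
        net.step(x)
    print(len(net.nodes), "nodes after 200 samples")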