
    Goal-Directed Behavior under Variational Predictive Coding: Dynamic Organization of Visual Attention and Working Memory

    Mental simulation is a critical cognitive function for goal-directed behavior because it is essential for assessing actions and their consequences. When a self-generated or externally specified goal is given, the sequence of actions most likely to attain that goal is selected from among candidates via mental simulation; better mental simulation therefore leads to better goal-directed action planning. However, developing a mental simulation model is challenging because it requires knowledge of both the self and the environment. The current paper studies how adequate goal-directed action plans for robots can be generated mentally by dynamically organizing top-down visual attention and visual working memory. For this purpose, we propose a neural network model based on variational Bayes predictive coding, in which goal-directed action planning is formulated as Bayesian inference over a latent intentional space. Our experimental results showed that cognitively meaningful competencies emerged, such as autonomous top-down attention to the robot's end effector (its hand) and dynamic organization of occlusion-free visual working memory. Furthermore, our comparative experiments indicated that introducing visual working memory and the inference mechanism based on variational Bayes predictive coding significantly improves performance in planning adequate goal-directed actions.
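
    A minimal sketch of the general idea, with goal-directed planning treated as inference over a latent "intention" vector, is given below. The network sizes, the fixed random generative model, and the free-energy-style loss weighting are illustrative assumptions, not the authors' architecture.

    # Hypothetical sketch: planning as inference of a latent "intention" vector z.
    # A fixed (stand-in for a pretrained) recurrent generative model predicts a
    # state rollout from z; we minimise a free-energy-like loss (prediction error
    # against the goal plus a KL-style regulariser on z) by gradient descent on z.
    import torch

    torch.manual_seed(0)
    STATE_DIM, LATENT_DIM, HORIZON = 8, 4, 10

    rnn = torch.nn.GRUCell(LATENT_DIM, STATE_DIM)   # pretend-pretrained dynamics
    readout = torch.nn.Linear(STATE_DIM, STATE_DIM)

    def rollout(z):
        """Unroll predicted states for HORIZON steps, conditioned on intention z."""
        h = torch.zeros(1, STATE_DIM)
        states = []
        for _ in range(HORIZON):
            h = rnn(z, h)
            states.append(readout(h))
        return torch.stack(states)

    goal = torch.randn(1, STATE_DIM)                 # desired final observation
    mu = torch.zeros(1, LATENT_DIM, requires_grad=True)
    log_var = torch.zeros(1, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([mu, log_var], lr=0.05)

    for step in range(200):
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterise
        pred = rollout(z)
        goal_error = torch.mean((pred[-1] - goal) ** 2)           # reach the goal
        kl = -0.5 * torch.mean(1 + log_var - mu ** 2 - log_var.exp())
        loss = goal_error + 0.1 * kl                              # free-energy-like
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("final goal error:", float(goal_error))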

    Environmental, developmental, and genetic factors controlling root system architecture

    A better understanding of the development and architecture of roots is essential for developing strategies to increase crop yield and optimize agricultural land use. Roots control nutrient and water uptake, provide anchoring and mechanical support, and can serve as important storage organs. Root growth and development are under tight genetic control and are modulated by developmental cues, including plant hormones, and by the environment. This review focuses on root architecture and its diversity, and on the roles of the environment, nutrients, and water, as well as plant hormones and their interactions, in shaping root architecture.

    Neural coding strategies and mechanisms of competition

    A long-running debate concerns whether neural representations are encoded using a distributed or a local coding scheme. In both schemes, individual neurons respond to certain specific patterns of presynaptic activity; hence, rather than being dichotomous, both coding schemes rest on the same representational mechanism. We argue that a population of neurons needs to be capable of learning both local and distributed representations, as appropriate to the task, and should be capable of generating both local and distributed codes in response to different stimuli. Many neural network algorithms, which are often employed as models of cognitive processes, fail to meet all of these requirements. In contrast, we present a neural network architecture that enables a single algorithm to efficiently learn, and respond using, both types of coding scheme.
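
    As one hypothetical way of making the local/distributed distinction concrete (not the architecture proposed in the paper), a single competitive layer can be moved between the two regimes by varying how many units are allowed to win:

    # Illustrative only: a layer whose sparsity knob k moves it between a local
    # code (k = 1, one winning unit per stimulus) and a distributed code
    # (k > 1, a pattern spread over several units).
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 20))          # 10 units, 20-dimensional inputs

    def respond(x, k):
        """Return the population response with only the k most active units kept."""
        drive = W @ x
        code = np.zeros_like(drive)
        winners = np.argsort(drive)[-k:]   # indices of the k strongest units
        code[winners] = drive[winners]
        return code

    x = rng.normal(size=20)
    local_code = respond(x, k=1)           # one active unit: local code
    distributed_code = respond(x, k=5)     # several active units: distributed code
    print(np.count_nonzero(local_code), np.count_nonzero(distributed_code))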

    Improving the predictability of take-off times with Machine Learning: a case study for the Maastricht upper area control centre area of responsibility

    The uncertainty of the take-off time is a major contribution to the loss of trajectory predictability. At present, the Estimated Take-Off Time (ETOT) for each individual flight is extracted from the Enhanced Traffic Flow Management System (ETFMS) messages, which are sent each time there is an event triggering a recalculation of the flight data by the Network Manager Operations Centre. However, aircraft do not always take off at the ETOTs reported by the ETFMS due to several factors, including congestion and bad weather conditions at the departure airport, reactionary delays and air traffic flow management slot improvements. This paper presents two machine learning models that take into account several of these factors to improve the take-off time prediction of individual flights one hour before their estimated off-block time. Predictions performed by the model trained on three years of historical flight and weather data show a reduction in the take-off time prediction error of about 30% as compared to the ETOTs reported by the ETFMS.
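
    A hedged sketch of the general setup, learning a correction to the ETFMS estimate from flight and weather features, is shown below. The feature names and the synthetic data are illustrative assumptions; the paper's actual features and model choices are not reproduced here.

    # Hypothetical sketch: regress the deviation between the actual take-off time
    # and the ETOT from features available one hour before off-block.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n = 5000
    X = np.column_stack([
        rng.uniform(0, 60, n),      # ETOT minus off-block time (minutes), assumed feature
        rng.integers(0, 24, n),     # hour of day, assumed feature
        rng.uniform(0, 1, n),       # departure-airport congestion index, assumed feature
        rng.uniform(0, 1, n),       # weather severity index, assumed feature
    ])
    # Synthetic "actual minus ETOT" delay standing in for historical labels.
    y = 5 * X[:, 2] + 8 * X[:, 3] + rng.normal(0, 2, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

    baseline_mae = mean_absolute_error(y_te, np.zeros_like(y_te))  # trust the ETOT as-is
    model_mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"baseline MAE {baseline_mae:.2f} min, model MAE {model_mae:.2f} min")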

    New Ideas for Brain Modelling

    This paper describes some biologically inspired processes that could be used to build the sort of networks that we associate with the human brain. New to this paper, a 'refined' neuron is proposed: a group of neurons that, by joining together, can produce a more analogue system, but with the same level of control and reliability as a binary neuron. With this new structure, it becomes possible to think of an essentially binary system in terms of a more variable set of values. The paper also shows how recent research associated with the new model can be combined with established theories to produce a more complete picture. The propositions are largely in line with conventional thinking, but possibly with one or two more radical suggestions. An earlier cognitive model can be filled in with more specific details, based on the new research results, where the components appear to fit together almost seamlessly. The intention of the research has been to describe plausible 'mechanical' processes that can produce the appropriate brain structures and mechanisms, without relying on the magical 'intelligence' part that is still not fully understood. There are also some important updates from an earlier version of this paper.
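
    One hypothetical reading of the 'refined' neuron, a pool of binary threshold units whose pooled output is graded, can be sketched as follows; the pool size and the per-unit thresholds are illustrative assumptions, not the paper's construction.

    # Illustrative sketch: each member of the pool is a simple all-or-nothing unit,
    # but the fraction of members that fire gives an approximately analogue value.
    import numpy as np

    rng = np.random.default_rng(0)

    def refined_neuron(drive, pool_size=32):
        """Return the fraction of binary units in the pool that fire for a drive in [0, 1]."""
        thresholds = rng.uniform(0, 1, pool_size)    # each unit has its own threshold
        spikes = (drive > thresholds).astype(float)  # binary, all-or-nothing responses
        return spikes.mean()                         # pooled output is graded

    for drive in (0.1, 0.5, 0.9):
        print(drive, refined_neuron(drive))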

    Learning long-range spatial dependencies with horizontal gated-recurrent units

    Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching -- and sometimes even surpassing -- human accuracy on a variety of visual recognition tasks. Here, however, we show that these neural networks and their recent extensions struggle in recognition tasks where co-dependent visual features must be detected over long spatial ranges. We introduce the horizontal gated-recurrent unit (hGRU) to learn intrinsic horizontal connections -- both within and across feature columns. We demonstrate that a single hGRU layer matches or outperforms all tested feedforward hierarchical baselines, including state-of-the-art architectures with orders of magnitude more free parameters. We further discuss the biological plausibility of the hGRU in comparison to anatomical data from the visual cortex as well as human behavioral data on a classic contour detection task. (Comment: published at NeurIPS 2018, https://papers.nips.cc/paper/7300-learning-long-range-spatial-dependencies-with-horizontal-gated-recurrent-unit)
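
    A simplified stand-in for the idea of learned horizontal connections, a convolutional gated recurrent update applied repeatedly over a spatial feature map so that activity spreads laterally across positions, is sketched below; it is not the published hGRU formulation, and the kernel size, channel count and timestep count are illustrative.

    # Simplified lateral-recurrence sketch (not the hGRU equations).
    import torch
    import torch.nn as nn

    class HorizontalRecurrence(nn.Module):
        def __init__(self, channels, kernel_size=5, timesteps=8):
            super().__init__()
            pad = kernel_size // 2
            self.gate = nn.Conv2d(2 * channels, channels, kernel_size, padding=pad)
            self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=pad)
            self.timesteps = timesteps

        def forward(self, x):
            h = torch.zeros_like(x)
            for _ in range(self.timesteps):
                g = torch.sigmoid(self.gate(torch.cat([x, h], dim=1)))  # update gate
                c = torch.tanh(self.cand(torch.cat([x, h], dim=1)))     # candidate state
                h = g * c + (1 - g) * h   # lateral information spreads a little each step
            return h

    feat = torch.randn(1, 16, 64, 64)     # e.g. the output of a first convolutional layer
    out = HorizontalRecurrence(16)(feat)
    print(out.shape)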

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner’s (1957) seminal discoveries in amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural and animal lesion studies, investigating both the developing and the adult brain.

    Evolutionary computation for bottom-up hypothesis generation on emotion and communication

    Through evolutionary computation, affective models may emerge autonomously in unanticipated ways. We explored whether core affect would be leveraged through communication with conspecifics (e.g. signalling danger or foraging opportunities). Genetic algorithms served to evolve recurrent neural networks controlling virtual agents in an environment with fitness-increasing food and fitness-reducing predators. Previously, neural oscillations had emerged serendipitously, with higher frequencies for positive than for negative stimuli, and we replicated this here in the fittest agent. The setup was extended so that oscillations could be exapted for communication between two agents. An adaptive communicative function evolved, as shown by fitness benefits relative to (1) a non-communicative reference simulation and (2) lesioning of the connections used for communication. An exaptation of neural oscillations for communication was not observed; instead, a simpler type of communication developed than was initially expected. The agents approached each other in a periodic fashion and slightly modified these movements to approach food or avoid predators. The coupled agents, though controlled by separate networks, appeared to self-assemble into a single vibrating organism. The simulations (a) strengthen an account of core affect as an oscillatory modulation of neural-network competition, and (b) encourage further work on the exaptation of core affect for communicative purposes.
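
    A minimal sketch of the evolutionary setup, a genetic algorithm over the flattened weights of a small recurrent controller scored by a fitness function, is given below; the food/predator environment, the two-agent communication channel, and the oscillation analysis are all omitted, and the fitness function is a placeholder.

    # Minimal genetic-algorithm sketch: truncation selection plus Gaussian mutation
    # over the weights of a small recurrent controller.
    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, POP, GENS = 4, 8, 40, 30
    GENOME = N_HID * (N_IN + N_HID + 1)        # input, recurrent and output weights

    def fitness(genome):
        """Placeholder fitness: reward output that tracks the input's sign
        (standing in for food approach / predator avoidance)."""
        w_in = genome[:N_HID * N_IN].reshape(N_HID, N_IN)
        w_rec = genome[N_HID * N_IN:N_HID * (N_IN + N_HID)].reshape(N_HID, N_HID)
        w_out = genome[-N_HID:]
        h, score = np.zeros(N_HID), 0.0
        for _ in range(50):
            x = rng.normal(size=N_IN)
            h = np.tanh(w_in @ x + w_rec @ h)
            score += np.sign(w_out @ h) == np.sign(x.sum())
        return score

    pop = rng.normal(size=(POP, GENOME))
    for _ in range(GENS):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-POP // 2:]]            # truncation selection
        children = parents + rng.normal(0, 0.1, parents.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])

    scores = np.array([fitness(g) for g in pop])
    print("best fitness after evolution:", scores.max())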