
    Flexible couplings: diffusing neuromodulators and adaptive robotics

    Recent years have seen the discovery of freely diffusing gaseous neurotransmitters, such as nitric oxide (NO), in biological nervous systems. A type of artificial neural network (ANN) inspired by such gaseous signaling, the GasNet, has previously been shown to be more evolvable than traditional ANNs when used as an artificial nervous system in an evolutionary robotics setting, where evolvability means consistent speed to very good solutions: here, appropriate sensorimotor behavior-generating systems. We present two new versions of the GasNet, which take further inspiration from the properties of neuronal gaseous signaling. The plexus model is inspired by the extraordinary NO-producing cortical plexus structure of neural fibers and the properties of the diffusing NO signal it generates. The receptor model is inspired by the mediating action of neurotransmitter receptors. Both models are shown to significantly further improve evolvability. We describe a series of analyses suggesting that the reasons for the increase in evolvability are related to the flexible loose coupling of distinct signaling mechanisms, one "chemical" and one "electrical".
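    The loose coupling of two signaling mechanisms described in this abstract can be sketched in a few lines of code. This is a minimal illustration only: the linear falloff function, the gain-modulation rule, and all parameter values are assumptions for the sketch, not the paper's published GasNet equations.

    ```python
    import math

    def gas_concentration(distance, radius, emitting):
        """'Chemical' pathway: gas level falls off linearly with distance
        from an emitting node, reaching zero at the diffusion radius."""
        if not emitting or distance > radius:
            return 0.0
        return 1.0 - distance / radius

    def node_output(electrical_input, gain):
        """'Electrical' pathway: a tanh unit whose gain is set chemically."""
        return math.tanh(gain * electrical_input)

    # Two loosely coupled mechanisms: node B's transfer-function gain is
    # modulated by gas diffusing from node A, independently of the wiring.
    dist_ab, radius = 0.5, 1.0
    gas = gas_concentration(dist_ab, radius, emitting=True)
    gain = 1.0 + 2.0 * gas          # chemical modulation of the electrical unit
    out_b = node_output(0.8, gain)
    ```

    The point of the sketch is that the gas variable changes how a node computes (its gain) without changing what it is connected to, which is one way to read the "flexible loose coupling" the analyses point to.
    
    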

    Developmental disorders

    Introduction: Connectionist models have recently provided a concrete computational platform from which to explore how different initial constraints in the cognitive system can interact with an environment to generate the behaviors we find in normal development (Elman et al., 1996; Mareschal & Thomas, 2000). In this sense, networks embody several principles inherent to Piagetian theory, the major developmental theory of the twentieth century. By extension, these models provide the opportunity to explore how shifts in these initial constraints (or boundary conditions) can result in the emergence of the abnormal behaviors we find in atypical development. Although this field is very new, connectionist models have already been put forward to explain disordered language development in Specific Language Impairment (Hoeffner & McClelland, 1993), Williams Syndrome (Thomas & Karmiloff-Smith, 1999), and developmental dyslexia (Seidenberg and colleagues, see e.g. Harm & Seidenberg, in press); to explain unusual characteristics of perceptual discrimination in autism (Cohen, 1994; Gustafsson, 1997); and to explore the emergence of disordered cortical feature maps using a neurobiologically constrained model (Oliver, Johnson, Karmiloff-Smith, & Pennington, in press). In this entry, we will examine the types of initial constraints that connectionist modelers typically build into their models, and how variations in these constraints have been proposed as possible accounts of the causes of particular developmental disorders. In particular, we will examine the claim that these constraints are candidates for what will constitute innate knowledge. First, however, we need to consider a current debate concerning whether developmental disorders are a useful tool for exploring the (possibly innate) structure of the normal cognitive system. We will find that connectionist approaches are much more consistent with one side of this debate than the other.

    Massively Parallel Video Networks

    We introduce a class of causal video understanding models that aims to improve the efficiency of video processing by maximising throughput, minimising latency, and reducing the number of clock cycles. Leveraging operation pipelining and multi-rate clocks, these models perform a minimal amount of computation (e.g. as few as four convolutional layers) for each frame per timestep to produce an output. The models are still very deep, with dozens of such operations being performed, but in a pipelined fashion that enables depth-parallel computation. We illustrate the proposed principles by applying them to existing image architectures and analyse their behaviour on two video tasks: action recognition and human keypoint localisation. The results show that a significant degree of parallelism, and implicitly speedup, can be achieved with little loss in performance.
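    The depth-parallel pipelining this abstract describes can be sketched as follows. This is an illustrative toy, not the paper's architecture: the layers are placeholder arithmetic stand-ins for convolutional blocks, and the single-rate pipeline below omits the multi-rate clocks. The key idea it shows is that at timestep t, layer i consumes the activation that layer i-1 produced at t-1, so every layer can run concurrently on a different frame, at the cost of a pipeline-fill delay of depth-1 timesteps.

    ```python
    def make_layer(i):
        # Stand-in for a convolutional block; adds a constant so the
        # pipelined result is easy to check by hand.
        return lambda x: x + i

    layers = [make_layer(i) for i in range(1, 5)]  # depth 4
    regs = [None] * len(layers)  # pipeline registers: each layer's last output

    outputs = []
    for frame in [10, 20, 30, 40, 50, 60, 70]:
        # Each layer reads its predecessor's output from the PREVIOUS step,
        # so all layers could execute in parallel on different frames.
        new_regs = [layers[0](frame)]
        for i in range(1, len(layers)):
            prev = regs[i - 1]
            new_regs.append(layers[i](prev) if prev is not None else None)
        regs = new_regs
        outputs.append(regs[-1])  # None until the pipeline fills
    ```

    After the three-step fill, each output is the fully processed version of the frame from three timesteps earlier; per-timestep work is one layer's compute per stage rather than the whole depth.
    
    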