    Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail

    Replicability and reproducibility of computational models have been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using the original code and data, and model reproducibility, or independent researchers' ability to recreate a model without the original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors omitting crucial information from scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only the relevant bits of code.

    Shape recognition and classification in electro-sensing

    This paper aims at advancing the field of electro-sensing. It exhibits the physical mechanism underlying shape perception in weakly electric fish. These fish orient themselves at night in complete darkness by employing their active electrolocation system. They generate a stable, high-frequency, weak electric field and perceive the transdermal potential modulations caused by a nearby target whose admittivity differs from that of the surrounding water. In this paper, we explain how weakly electric fish might identify and classify a target, knowing in advance that it belongs to a certain collection of shapes. Our model of the weakly electric fish relies on differential imaging, i.e., forming an image from the perturbations of the field due to targets, and on physics-based classification. The electric fish would first locate the target using a specific location-search algorithm. Then it could extract, from the perturbations of the electric field, generalized (or high-order) polarization tensors of the target. Computing, from the extracted features, invariants under rigid motions and scaling yields shape descriptors. The weakly electric fish might classify a target by comparing its invariants with those of a set of learned shapes. On the other hand, when measurements are taken at multiple frequencies, the fish might exploit the shifts and use the spectral content of the generalized polarization tensors to dramatically improve the robustness of the classification procedure to measurement noise. Surprisingly, it turns out that the first-order polarization tensor at multiple frequencies could be enough for the purpose of classification. A procedure to eliminate the background field in the case where the permittivity of the surrounding medium can be neglected, and hence further improve the stability of the classification process, is also discussed.
    Comment: 10 pages, 15 figures
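The pipeline the abstract describes — extract a polarization tensor, reduce it to invariants under rigid motions and scaling, then match against a dictionary of learned shapes — can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not code or values from the paper: the descriptor (trace-normalized eigenvalues of a symmetric 2x2 first-order tensor) and the hypothetical shape dictionary are chosen only to show why such invariants survive rotation and rescaling:

```python
import numpy as np

def shape_descriptor(pt):
    """Rotation- and scale-invariant descriptor of a symmetric
    first-order polarization tensor (a 2x2 matrix here).
    Eigenvalues are invariant under rigid rotations; dividing by
    the trace removes overall scale."""
    eig = np.sort(np.linalg.eigvalsh(pt))
    return eig / np.trace(pt)

def classify(pt, dictionary):
    """Return the learned shape whose descriptor is closest
    (in Euclidean distance) to that of the measured tensor."""
    d = shape_descriptor(pt)
    return min(dictionary,
               key=lambda name: np.linalg.norm(d - dictionary[name]))

# Hypothetical dictionary of learned shapes (tensors made up
# for illustration).
learned = {
    "disk":    shape_descriptor(np.array([[1.0, 0.0], [0.0, 1.0]])),
    "ellipse": shape_descriptor(np.array([[2.0, 0.0], [0.0, 0.5]])),
}

# A rotated, rescaled "ellipse" still matches the ellipse entry.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
measured = 3.0 * R @ np.array([[2.0, 0.0], [0.0, 0.5]]) @ R.T
print(classify(measured, learned))  # -> ellipse
```

The same idea extends to the higher-order tensors and multi-frequency spectral content the paper discusses; only the descriptor changes, not the nearest-match structure of the classifier.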

    Nonhuman gamblers: lessons from rodents, primates, and robots

    The search for the neuronal and psychological underpinnings of pathological gambling in humans would benefit from investigating related phenomena outside our species as well. In this paper, we present a survey of studies in three widely different populations of agents, namely rodents, non-human primates, and robots. Each of these populations offers valuable and complementary insights into the topic, as the literature demonstrates. In addition, we highlight the deep and complex connections between relevant results across these different areas of research (i.e., cognitive and computational neuroscience, neuroethology, cognitive primatology, neuropsychiatry, evolutionary robotics) to make the case for a greater degree of methodological integration in future studies on pathological gambling.

    The Structure of Sensorimotor Explanation

    The sensorimotor theory of vision and visual consciousness is often described as a radical alternative to the computational and connectionist orthodoxy in the study of visual perception. However, it is far from clear whether the theory represents a significant departure from orthodox approaches or whether it is an enrichment of them. In this study, I tackle this issue by focusing on the explanatory structure of the sensorimotor theory. I argue that the standard formulation of the theory subscribes to the same theses as the dynamical hypothesis and that it affords covering-law explanations. This, however, exposes the theory to the mere-description worry and generates a puzzle about the role of representations. I then argue that the sensorimotor theory is compatible with a mechanistic framework, and show how this can overcome the mere-description worry and solve the problem of the explanatory role of representations. In doing so, I show that the theory should be understood as an enrichment of the orthodoxy rather than an alternative.

    Deciphering the brain's codes

    The two sensory systems discussed use similar algorithms for the synthesis of the neuronal selectivity for the stimulus that releases a particular behavior, although the neural circuits, the brain sites involved, and even the species are different. This stimulus selectivity emerges gradually in a neural network organized according to parallel and hierarchical design principles. The parallel channels contain lower-order stations with special circuits for the creation of neuronal selectivities for different features of the stimulus. Convergence of the parallel pathways brings these selectivities together at a higher-order station for the eventual synthesis of the selectivity for the whole stimulus pattern. The neurons that are selective for the stimulus are at the top of the hierarchy, and they form the interface between the sensory and motor systems or between sensory systems of different modalities. The similarities of these two systems at the level of algorithms suggest the existence of rules of signal processing that transcend different sensory systems and species of animals.
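The parallel-and-hierarchical design the abstract describes can be sketched in a few lines: two lower-order stations each detect one stimulus feature, and a higher-order station responds only when the parallel channels converge in agreement. The filters, weights, and stimuli below are hypothetical, chosen only to make the convergence principle concrete; they are not taken from the paper:

```python
import numpy as np

def channel(x, w, threshold):
    """A lower-order station: a linear filter followed by a
    threshold, selective for a single stimulus feature."""
    return float(np.dot(w, x) > threshold)

def higher_order(x):
    """Convergence of two parallel channels: the top neuron fires
    only when both feature detectors are active, synthesizing
    selectivity for the whole stimulus pattern."""
    # Hypothetical feature filters (illustrative weights only)
    f1 = channel(x, w=np.array([1.0, -1.0, 0.0]), threshold=0.5)
    f2 = channel(x, w=np.array([0.0,  1.0, 1.0]), threshold=0.5)
    return f1 * f2  # multiplicative, AND-like convergence

stim_full = np.array([1.0, 0.0, 1.0])  # carries both features
stim_part = np.array([1.0, 0.0, 0.0])  # carries only the first
print(higher_order(stim_full), higher_order(stim_part))  # -> 1.0 0.0
```

The point of the sketch is that neither channel alone is selective for the whole pattern; selectivity emerges only where the parallel pathways converge, mirroring the hierarchical synthesis the abstract attributes to both sensory systems.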

    Learning, Social Intelligence and the Turing Test - why an "out-of-the-box" Turing Machine will not pass the Turing Test

    The Turing Test (TT) checks for human intelligence, rather than any putative general intelligence. It involves repeated interaction requiring learning in the form of adaptation to the human conversation partner. It is a macro-level post-hoc test, in contrast to the definition of a Turing Machine (TM), which is a prior micro-level definition. This raises the question of whether learning is just another computational process, i.e. whether it can be implemented as a TM. Here we argue that learning or adaptation is fundamentally different from computation, though it does involve processes that can be seen as computations. To illustrate this difference we compare (a) designing a TM and (b) learning a TM, defining both for the purpose of the argument. We show that there is a well-defined sequence of problems, in the form of the bounded halting problem, which are not effectively designable but are learnable. Some characteristics of human intelligence are reviewed, including its interactive nature, learning abilities, imitative tendencies, linguistic ability and context-dependency. A story that explains some of these is the Social Intelligence Hypothesis. If this is broadly correct, it points to the necessity of a considerable period of acculturation (social learning in context) if an artificial intelligence is to pass the TT. Whilst it is always possible to 'compile' the results of learning into a TM, this would not be a designed TM and would not be able to continually adapt (and so pass future TTs). We conclude three things, namely that: a purely "designed" TM will never pass the TT; that there is no such thing as a general intelligence, since it necessarily involves learning; and that learning/adaptation and computation should be clearly distinguished.
    Comment: 10 pages, invited talk at Turing Centenary Conference CiE 2012, special session on "The Turing Test and Thinking Machines"
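The bounded halting problem the abstract invokes is, unlike the unrestricted halting problem, trivially decidable: one simply simulates the machine for the given number of steps. A minimal sketch, with a toy countdown "machine" standing in for a Turing Machine (the representation of programs as Python step functions is an illustrative assumption, not the paper's formalization):

```python
def bounded_halts(program, state, n):
    """Decide the *bounded* halting problem: does `program`
    (a step function state -> state, where None means 'halt')
    stop within n steps?  Decidable by direct simulation."""
    for _ in range(n):
        state = program(state)
        if state is None:
            return True
    return False

# Toy "machine": counts its state down and halts at zero.
countdown = lambda s: None if s == 0 else s - 1

print(bounded_halts(countdown, 3, 10))  # -> True  (halts within the bound)
print(bounded_halts(countdown, 3, 2))   # -> False (bound too small)
```

The paper's claim is about the sequence of such problems as the bound grows: each instance is decidable, yet the authors argue the sequence is learnable but not effectively designable, which is what drives the distinction between learning and computation.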