44 research outputs found

    The critical difference between Holism and Vitalism in Cassirer’s Philosophy of Science

    Haptic realism for neuroscience

    The Embedded Neuron, the Enactive Field?

    The concept of the receptive field, first articulated by Hartline, is central to visual neuroscience. The receptive field of a neuron encompasses the spatial and temporal properties of stimuli that activate the neuron, and, as Hubel and Wiesel conceived of it, a neuron’s receptive field is static. This stability makes it possible to build models of neural circuits and to compose more complex receptive fields out of simpler ones. Recent work in visual neurophysiology, however, provides evidence that the classical receptive field is an inaccurate picture: the receptive field appears to be a dynamic feature of the neuron. In particular, the receptive fields of neurons in V1 seem to depend on the properties of the stimulus. In this paper, we review the history of the concept of the receptive field and the problematic data. We then consider a number of possible theoretical responses to these data.
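    As a concrete illustration of the classical, static picture the paper interrogates, the sketch below models a V1 simple cell’s receptive field as a fixed linear filter (a Gabor patch) whose response is a rectified dot product with the stimulus. This is a textbook toy, not the paper’s own model; the function names and parameter values are hypothetical.

```python
# Minimal sketch of the classical *static* receptive field: a fixed Gabor
# filter that does not change with the stimulus. All names are illustrative.
import numpy as np

def gabor_rf(size=21, wavelength=6.0, sigma=3.0, theta=0.0):
    """A Gabor patch: the textbook model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x * x + y * y) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def response(rf, stimulus):
    """Linear-nonlinear response: dot product with the stimulus, then
    rectification. Crucially, `rf` is stimulus-independent -- exactly the
    static assumption the recent data call into question."""
    drive = np.sum(rf * stimulus)
    return max(drive, 0.0)

rf = gabor_rf()
matched = gabor_rf()                     # a matched grating drives the cell strongly
orthogonal = gabor_rf(theta=np.pi / 2)   # an orthogonal grating drives it weakly
print(response(rf, matched), response(rf, orthogonal))
```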

    Reorienting Realism

    I seek to redirect the realism debate away from the question of the reality of the unobservable posits of scientific theories and models, and towards the question of whether those theories and models should be interpreted realistically. This makes it easier to include within the realism debate sciences of relatively large and observable items, as many branches of biology are. But it is not a simple trade of the ontological question of realism for a semantic one. My contribution focuses on computational neuroscience. In this discipline, models are normally interpreted as representing computations actually performed by parts of the brain. Semantically, this interpretation is literal and realistic. Ontologically, it supposes that the structure represented mathematically as a computation (i.e. a series of state transitions) is actually there in the brain’s processes. I call this supposition of a structural similarity (homomorphism) between model and target formal realism. It stands in contrast to an alternative way of interpreting the model, which I call formal idealism. On this view, whatever processes exist in the brain are vastly more complicated than the structures represented in the computational models, and the aim of modelling is to achieve an acceptable simplification of those processes. Thus, the success of the research is more a matter of structuring than of discovering pre-existing structures. Ultimately, the realism debate is motivated by curiosity about what the best scientific representations have to tell us about the world: is this thing really as presented in the model? I argue that the contrast between formal realism and formal idealism is a good template for framing the realism debate when discussing the implications of sciences of extremely complex macroscopic and mesoscopic systems, such as the nervous system, and when generalising elsewhere in biology, including ecology, as well as to the physical sciences of large complex systems such as climate and geological formations. Formal idealism does not suppose that the structures given in scientific models are fully constructed or mind-dependent, but that there is an ineliminable human component in all scientific representations: because they can never depict the full complexity of their target systems, they are the result of human decisions about how to simplify. The acceptability of certain simplifications (abstractions and idealisations) over others turns on a number of factors, including predictive accuracy, mathematical and computational tractability, and the envisaged technological applications of the model. Formal realism supposes that scientific representations are, at their best, a clear-view window onto mind-independent nature, whereas formal idealism maintains that this is an unrealistic way to describe the practices and achievements of science.
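    To make the notion of a homomorphism between model and target concrete, here is a minimal toy example (my own construction, not from the paper): a fine-grained “target” dynamics, a coarse-graining map, and a coarse “model” dynamics. Formal realism, on this reading, bets that some such mapping genuinely holds between a computational model and the brain processes it represents.

```python
# Toy homomorphism check: stepping the target and then coarse-graining must
# agree with coarse-graining first and then stepping the model.
target_step = {0: 2, 1: 3, 2: 0, 3: 1}   # fine-grained target dynamics
h = {0: 'A', 1: 'A', 2: 'B', 3: 'B'}     # coarse-graining map, target -> model states
model_step = {'A': 'B', 'B': 'A'}        # the computational model's state transitions

is_homomorphic = all(h[target_step[s]] == model_step[h[s]] for s in target_step)
print(is_homomorphic)  # True: the model's transitions mirror the target's
```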

    Crash Testing an Engineering Framework in Neuroscience: Does the Idea of Robustness Break Down?

    In this paper I discuss the concept of robustness in neuroscience. Various mechanisms for making systems robust, such as redundancy and fail-safes, have been discussed across biology and neuroscience, and many of these notions originate in engineering. I argue that concepts borrowed from engineering aid neuroscientists in (1) operationalizing robustness; (2) formulating hypotheses about mechanisms for robustness; and (3) quantifying robustness. At the same time, the significant disanalogies between brains and engineered artefacts raise important questions about the applicability of the engineering framework, and I argue that the use of such concepts should be understood as a kind of simplifying idealization.
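    One way to see how an engineering notion helps quantify robustness is the sketch below (my own toy, not from the paper): a signal is estimated by averaging redundant noisy units, and robustness shows up as graceful degradation of the estimation error as units are knocked out. The names and parameter values are hypothetical.

```python
# Quantifying robustness via redundancy: estimation error grows only slowly
# as redundant units fail.
import random
import statistics

def estimate(signal, n_units, n_failed, noise=0.5):
    """Average the surviving redundant units' noisy readings of `signal`."""
    survivors = n_units - n_failed
    readings = [signal + random.gauss(0.0, noise) for _ in range(survivors)]
    return statistics.mean(readings)

random.seed(0)
signal, n_units = 1.0, 16
for n_failed in (0, 4, 8, 12):
    errors = [abs(estimate(signal, n_units, n_failed) - signal) for _ in range(2000)]
    print(f"{n_failed:2d} units lost -> mean error {statistics.mean(errors):.3f}")
```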

    A New Perceptual Bias Reveals Suboptimal Population Decoding of Sensory Responses

    Several studies have reported optimal population decoding of sensory responses in two-alternative visual discrimination tasks. Such decoding involves integrating noisy neural responses into a more reliable representation of the likelihood that the stimuli under consideration evoked the observed responses. Importantly, an ideal observer must evaluate likelihoods with high precision and must consider only the two relevant stimuli involved in the discrimination task. We report a new perceptual bias suggesting that observers read out the likelihood representation with remarkably low precision when discriminating grating spatial frequencies. Using spectrally filtered noise, we induced an asymmetry in the likelihood function of spatial frequency. This manipulation mainly affects the likelihood of spatial frequencies that are irrelevant to the task at hand. Nevertheless, we find a significant shift in perceived grating frequency, indicating that observers evaluate the likelihoods of a broad range of irrelevant frequencies and discard prior knowledge of the stimulus alternatives when performing two-alternative discrimination.
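    The sketch below illustrates the two readout rules the abstract contrasts, using a hypothetical Poisson population tuned to spatial frequency (this is a generic decoding toy, not the paper’s experiment; all tuning parameters are assumed). An ideal two-alternative observer compares the log-likelihood at the two task frequencies only; a “broad” observer takes the peak over all frequencies, so likelihood mass at task-irrelevant frequencies can bias its estimate.

```python
# Maximum-likelihood readout of a Poisson population: two-alternative vs. broad.
import numpy as np

rng = np.random.default_rng(1)
freqs = np.linspace(0.5, 8.0, 200)   # candidate spatial frequencies
prefs = np.linspace(0.5, 8.0, 40)    # neurons' preferred frequencies

def rates(s, gain=20.0, width=1.0):
    """Gaussian tuning curves plus a small baseline firing rate."""
    return gain * np.exp(-0.5 * ((prefs - s) / width) ** 2) + 0.5

def log_likelihood(r):
    """Poisson log-likelihood of response vector r over all candidate frequencies."""
    lam = np.array([rates(s) for s in freqs])          # (n_freqs, n_neurons)
    return (r * np.log(lam)).sum(axis=1) - lam.sum(axis=1)

s_true, s_alt = 3.0, 4.0
r = rng.poisson(rates(s_true))                         # one noisy population response
ll = log_likelihood(r)

# Ideal 2AFC readout: compare likelihoods at the two relevant stimuli only.
two_alt = s_true if ll[np.argmin(np.abs(freqs - s_true))] > ll[np.argmin(np.abs(freqs - s_alt))] else s_alt
# Broad readout: peak over all frequencies, including task-irrelevant ones.
broad = freqs[np.argmax(ll)]
print("two-alternative choice:", two_alt, "  broad ML estimate:", round(broad, 2))
```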

    First principles in the life sciences: the free-energy principle, organicism, and mechanism

    The free-energy principle states that all systems that minimize their free energy resist a tendency to physical disintegration. Originally proposed to account for perception, learning, and action, the free-energy principle has since been applied to the evolution, development, morphology, anatomy, and function of the brain, and has been called a postulate, an unfalsifiable principle, a natural law, and an imperative. While it might afford a theoretical foundation for understanding the relationship between environment, life, and mind, its epistemic status remains unclear. Also unclear is how the free-energy principle relates to prominent theoretical approaches to life-science phenomena, such as organicism and mechanism. This paper clarifies both issues and identifies the limits and prospects of the free-energy principle as a first principle in the life sciences.
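    For orientation, the toy below computes variational free energy F = E_q[log q(s) − log p(s, o)] for a two-state generative model (a textbook illustration, not Friston’s full formalism; the prior and likelihood values are made up). Minimising F over the belief q drives it toward the true posterior p(s|o), and the minimum equals the negative log evidence −log p(o).

```python
# Variational free energy in a two-state generative model.
import numpy as np

prior = np.array([0.7, 0.3])          # p(s)
likelihood = np.array([0.2, 0.9])     # p(o=1 | s)
joint = prior * likelihood            # p(s, o=1)
evidence = joint.sum()                # p(o=1)
posterior = joint / evidence          # p(s | o=1)

def free_energy(q):
    """F = E_q[log q(s) - log p(s, o)], an upper bound on -log p(o)."""
    return float(np.sum(q * (np.log(q) - np.log(joint))))

for q0 in (0.9, 0.5, posterior[0]):   # candidate beliefs q(s=0)
    q = np.array([q0, 1.0 - q0])
    print(f"q(s=0)={q0:.3f}  F={free_energy(q):.4f}")
print("-log p(o) =", round(-np.log(evidence), 4))  # matched exactly at the posterior
```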

    From Computer Metaphor to Computational Modeling: The Evolution of Computationalism

    In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational and that information processing is necessary for cognition to occur. First, I review the primary reasons why information processing should explain cognition. I then argue that early formulations of these reasons are outdated; however, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of the new modeling work are best understood within the mechanistic framework, as evidenced by the way in which the models are empirically validated. Moreover, the methodological and theoretical progress of computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive (neuro)science, and its successes reflect deep conceptual connections between cognition and computation. Computationalism is not only here to stay; it grows stronger every year.
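    As a flavour of what a computational model of working memory looks like, the sketch below implements one classic model family (not necessarily either of the models the paper compares): memory as self-sustaining activity in a recurrent unit. A brief cue pushes the unit into a high-activity attractor that persists after the stimulus is gone, a mechanistic hypothesis that can be validated against neural data. All parameter values are illustrative.

```python
# Persistent-activity (attractor) toy model of working memory.
import math

def step(r, inp, w=8.0, theta=4.0, dt=0.1):
    """One Euler step of a rate unit with recurrent excitation w and threshold theta."""
    drive = w * r + inp
    return r + dt * (-r + 1.0 / (1.0 + math.exp(-(drive - theta))))

r = 0.0
for t in range(300):
    inp = 5.0 if 50 <= t < 70 else 0.0   # transient cue to be remembered
    r = step(r, inp)
    if t in (40, 80, 299):               # before, just after, and long after the cue
        print(f"t={t:3d}  input={inp:.1f}  rate={r:.3f}")
```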