    On the Precarious Path of Reverse Neuro-Engineering

    In this perspective we provide an example of the limits of reverse engineering in neuroscience. We demonstrate that applying reverse engineering to the study of the design principle of a functional neuro-system with a known mechanism may result in a perfectly valid but wrong induction of the system's design principle. If, in the very simple setup we present here (a static environment, a primitive task, and practically unlimited access to every piece of relevant information), it is difficult to induce a design principle, what are our chances of exposing biological design principles when more realistic conditions are examined? Implications for the way we do biology are discussed.

    Closing the Loop Between Neurons and Neurotechnology

    Can biological complexity be reverse engineered?

    Concerns with the use of engineering approaches in biology have recently been raised. I examine two related challenges to biological research that I call the synchronic and diachronic underdetermination problems. The former refers to challenges associated with inferring the design principles underlying system capacities when the synchronic relations between lower-level processes and higher-level system capacities are degenerate (many-to-many). The diachronic underdetermination problem concerns reverse engineering a system in which the non-linear relations between system capacities and lower-level mechanisms change over time. Braun and Marom argue that recent insights into biological complexity leave the aim of reverse engineering hopeless, in principle as well as in practice. While I support their call for systemic approaches to capture the dynamic nature of living systems, I take issue with the conflation of reverse engineering with naïve reductionism. I clarify how the notion of design principles can be more broadly conceived and argue that reverse engineering is compatible with a dynamic view of organisms. It may even help to facilitate an integrated account that bridges the gap between mechanistic and systems approaches.

    New Perspectives on the Dialogue between Brains and Machines

    Brain-machine interfaces (BMIs) are mostly investigated as a means of providing paralyzed people with new communication channels to the external world. However, the communication between brain and artificial devices also offers a unique opportunity to study the dynamical properties of neural systems. This review focuses on bidirectional interfaces, which operate in two ways, translating neural signals into input commands for the device and the output of the device into neural stimuli. We discuss how bidirectional BMIs help to investigate neural information processing and how neural dynamics may participate in the control of external devices. In this respect, a bidirectional BMI can be regarded as a fancy combination of neural recording and stimulation apparatus, connected via an artificial body. The artificial body can be designed in virtually infinite ways in order to observe different aspects of neural dynamics and to approximate desired control policies.
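
    The closed loop described in this abstract can be illustrated with a toy simulation. The sketch below is not a system from the review; it assumes a linear rate model, a least-squares decoder that turns population activity into a one-dimensional device command, and an encoder that turns the device's tracking error back into a stimulus. All names and parameters (decoder_w, gains, k_enc) are illustrative.

```python
# Minimal closed-loop sketch of a bidirectional interface (illustrative only):
# decode neural activity -> device command; encode device output -> stimulus.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 20
gains = rng.normal(size=n_neurons)          # each neuron's response gain to the stimulus
decoder_w = gains / (gains @ gains)         # least-squares linear readout of the stimulus
rates = np.zeros(n_neurons)
device_pos, target = 0.0, 1.0               # "artificial body" state and its goal
dt, tau, k_enc = 0.01, 0.1, 5.0

for _ in range(500):
    command = decoder_w @ rates               # neural signals -> input command for the device
    device_pos += dt * command                # simple integrator stands in for the device
    stimulus = k_enc * (target - device_pos)  # device output (tracking error) -> neural stimulus
    noise = 0.05 * rng.normal(size=n_neurons)
    rates += dt / tau * (-rates + gains * stimulus + noise)

print(f"final device position: {device_pos:.3f} (target {target:.1f})")
```

    Because the encoded stimulus is proportional to the tracking error, the loop behaves like a proportional controller and the device settles near the target; different encoders or body dynamics would expose different aspects of the neural dynamics.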

    Beyond Statistical Significance: Implications of Network Structure on Neuronal Activity

    It is common and good practice in the experimental sciences to assess the statistical significance of measured outcomes. For this, the probability of obtaining the actual results is estimated under the assumption of an appropriately chosen null hypothesis. If this probability is smaller than some threshold, the results are deemed statistically significant and the researchers are content in having revealed, within their own experimental domain, a “surprising” anomaly, possibly indicative of a hitherto hidden fragment of the underlying “ground truth”. What is often neglected, though, is the actual importance of these experimental outcomes for understanding the system under investigation. We illustrate this point with practical and intuitive examples from the field of systems neuroscience. Specifically, we use the notion of embeddedness to quantify the impact of a neuron's activity on its downstream neurons in the network. We show that the network response strongly depends on the embeddedness of stimulated neurons and that embeddedness is a key determinant of the importance of neuronal activity for local and downstream processing. We extrapolate these results to other fields in which networks are used as a theoretical framework.
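
    The abstract does not spell out how embeddedness is computed; as a rough, hedged stand-in, the sketch below scores each neuron of a random directed network by a Katz-like sum over its weighted downstream paths, capturing the intuition that a neuron's impact depends on how strongly it feeds into the rest of the network.

```python
# Illustrative proxy for "embeddedness": total discounted downstream influence
# of each neuron in a random directed, weighted connectivity matrix.
import numpy as np

rng = np.random.default_rng(1)
n = 100
# W[i, j] = synaptic weight from neuron i to neuron j (5% connection density).
W = (rng.random((n, n)) < 0.05) * rng.uniform(0.1, 1.0, size=(n, n))
np.fill_diagonal(W, 0.0)

# Discount multi-synaptic paths so the geometric series converges.
alpha = 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
# Influence over all path lengths: sum_k (alpha*W)^k = (I - alpha*W)^-1 - I.
influence = np.linalg.inv(np.eye(n) - alpha * W) - np.eye(n)
embeddedness = influence.sum(axis=1)        # total downstream impact per neuron

print("most embedded neuron:", int(np.argmax(embeddedness)))
print("least embedded neuron:", int(np.argmin(embeddedness)))
```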

    Can neuroscientists ask the wrong questions? On why etiological considerations are essential when modeling cognition

    It is common in machine-learning research today for scientists to design and train models to perform cognitive capacities, such as object classification, reinforcement learning, navigation, and more. Neuroscientists compare the processes of these models with neuronal activity, with the purpose of learning about computations in the brain. These machine-learning models are constrained only by the task they must perform. Therefore, it is a worthwhile scientific finding that the workings of these models are similar to neuronal activity, as several prominent papers have reported. This is a promising method for understanding cognition. However, I argue that, to the extent that this method's aim is to explain how cognitive capacities are performed, it is likely to succeed only when the capacities modelled with machine-learning algorithms are the result of a distinct evolutionary or developmental process.

    Error-Based Analysis of Optimal Tuning Functions Explains Phenomena Observed in Sensory Neurons

    Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near-optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems and the utility of neuronal adaptation occurring at multiple time scales.
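
    The trade-off behind an "optimal tuning width" can be made concrete with a small simulation. The sketch below is an illustrative assumption, not the paper's model: Gaussian tuning curves, Poisson spike counts over a decoding window T, and maximum-likelihood decoding of a static stimulus, with the squared decoding error reported for several widths.

```python
# Illustrative sweep of tuning-curve width vs. maximum-likelihood decoding error.
import numpy as np

rng = np.random.default_rng(2)
centers = np.linspace(-5.0, 5.0, 50)        # preferred stimuli of the population
grid = np.linspace(-3.0, 3.0, 301)          # candidate stimuli for ML decoding
r_max, T, n_trials = 20.0, 0.1, 400         # peak rate (Hz), decoding window (s), trials

def decode_error(width):
    """Mean squared ML-decoding error for a given tuning-curve width."""
    tuning = lambda s: r_max * np.exp(-(s - centers[:, None]) ** 2 / (2 * width ** 2))
    errs = []
    for _ in range(n_trials):
        s_true = rng.uniform(-2.0, 2.0)
        counts = rng.poisson(tuning(np.array([s_true])).ravel() * T)
        # Poisson log-likelihood of the observed counts for every candidate stimulus.
        lam = tuning(grid) * T
        loglik = (counts[:, None] * np.log(lam + 1e-12) - lam).sum(axis=0)
        errs.append((grid[np.argmax(loglik)] - s_true) ** 2)
    return np.mean(errs)

for width in [0.2, 0.5, 1.0, 2.0, 4.0]:
    print(f"width {width:.1f}: MSE {decode_error(width):.4f}")
```

    Very narrow curves leave most neurons silent within a short window, while very broad curves barely distinguish nearby stimuli, so an intermediate width minimizes the error; changing the window T or the noise level shifts where that optimum lies, consistent with the kind of dependence summarized above.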

    Decomposition of 3D joint kinematics of walking in Drosophila melanogaster

    Animals exhibit a rich repertoire of locomotive behaviors. In the context of legged locomotion, i.e. walking, animals can change their heading direction, traverse diverse substrates at different speeds, or even compensate for the loss of a leg. This versatility emerges from the fact that biological limbs have more joints and/or more degrees of freedom (DOF), i.e. independent directions of motion, than are required for any single movement task. However, this also entails that multiple, or even infinitely many, joint configurations can result in the same leg stepping pattern during walking. How the nervous system deals with such kinematic redundancy remains unknown. One proposed hypothesis is that the nervous system does not control individual DOF, but uses flexible combinations of groups of anatomical or functional DOF, referred to as motor synergies. Drosophila melanogaster represents an excellent model organism for studying the motor control of walking, not least because of the extensive genetic toolbox available, which, among other things, allows the identification and targeted manipulation of individual neurons or muscles. However, the flies' tiny size and their relatively rapid leg movements hampered research on the kinematics at the level of leg joints due to technical limitations until recently. Hence, the main objective of this dissertation was to investigate the three-dimensional (3D) leg joint kinematics of Drosophila during straight walking. For this, I first established a motion capture setup for Drosophila which allowed the accurate reconstruction of the leg joint positions in 3D with high temporal resolution (400 Hz). Afterwards, I created a kinematic leg model based on anatomical landmarks, i.e. joint condyles, extracted from micro-computed-tomography scan data. This step was essential insofar as the actual DOF of the leg joints in Drosophila were previously unknown. Using this kinematic model, I found that a mobile trochanter-femur joint best explains the leg movements of the front legs, but is not mandatory in the other leg pairs. Additionally, I demonstrate that rotations of the femur-tibia plane in the middle legs arise from interactions between two joints, suggesting that the natural orientation of joint rotational axes can extend the leg movement repertoire without increasing the number of elements to be controlled. Furthermore, each leg pair exhibited distinct joint kinematics in terms of the joint DOF employed and their angle time courses during swing and stance phases. Since it has been proposed that the nervous system could use motor synergies to solve the redundancy problem, I finally aimed to identify kinematic synergies based on the joint angles obtained from the kinematic model. By applying principal component analysis to the mean joint angle sets of leg steps, I found that three kinematic synergies are sufficient to reconstruct the movements of the tarsus tip during stepping for all leg pairs. This suggests that the problem of controlling seven to eight joint DOF can, in principle, be reduced to three control parameters. In conclusion, this dissertation provides detailed insights into the leg joint kinematics of Drosophila during forward walking, which are relevant for deciphering the motor control of walking in insects. When combined with the extensive genetic toolbox offered by Drosophila as a model organism, the experimental platform presented here, i.e. the 3D motion capture setup and the kinematic leg model, can facilitate investigations of Drosophila walking behavior in the future.
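
    The synergy-extraction step described at the end of this abstract can be sketched as follows. The code below uses synthetic joint-angle trajectories (random mixtures of three latent waveforms) purely as a stand-in for the measured data, and applies PCA via a singular value decomposition to ask how many components are needed to explain the step-averaged joint angles; all dimensions and variable names are assumptions.

```python
# Illustrative kinematic-synergy analysis: PCA (via SVD) on step-averaged
# joint-angle trajectories built from synthetic, low-dimensional data.
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_timepoints, n_dof = 60, 100, 8        # steps, samples per step, joint DOF

# Synthetic joint angles generated from 3 latent waveforms plus noise, mimicking
# a situation where a few synergies explain most of the variance.
t = np.linspace(0.0, 1.0, n_timepoints)
latents = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t - 0.5])
mixing = rng.normal(size=(n_dof, 3))
angles = np.einsum('kt,dk->dt', latents, mixing)                 # (n_dof, n_timepoints)
data = angles[None] + 0.05 * rng.normal(size=(n_steps, n_dof, n_timepoints))

# PCA on the step-averaged joint-angle set: rows = time points, columns = DOF.
mean_angles = data.mean(axis=0).T                                # (n_timepoints, n_dof)
centered = mean_angles - mean_angles.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)

for k, frac in enumerate(explained[:5], start=1):
    print(f"{k} component(s): {100 * frac:.1f}% of variance explained")
```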