Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Current learning machines have successfully solved hard application problems,
reaching high accuracy and displaying seemingly "intelligent" behavior. Here we
apply recent techniques for explaining decisions of state-of-the-art learning
machines and analyze various tasks from computer vision and arcade games. This
showcases a spectrum of problem-solving behaviors ranging from naive and
short-sighted, to well-informed and strategic. We observe that standard
performance evaluation metrics can fail to distinguish between these diverse
problem-solving behaviors. We therefore propose a semi-automated Spectral
Relevance Analysis that provides a practically effective way of characterizing
and validating the behavior of nonlinear learning machines. This helps to
assess whether a learned model indeed delivers reliably for the problem that it
was conceived for. Finally, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pleads for a more nuanced evaluation of some of these recent successes. Comment: Accepted for publication in Nature Communications.
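As an illustration of the kind of analysis described above, the sketch below clusters per-sample relevance heatmaps (such as those produced by layer-wise relevance propagation) with spectral clustering, so that groups of predictions relying on suspicious input regions can be inspected by hand. It is a minimal sketch assuming scikit-learn and a precomputed stack of heatmaps; the function and parameter names are illustrative, not the authors' implementation.

```python
# Minimal sketch of the clustering step behind a Spectral-Relevance-Analysis-style
# workflow: group per-sample relevance heatmaps and inspect clusters for anomalous
# decision strategies. Names and parameters are illustrative.
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_relevance_maps(heatmaps, n_clusters=5):
    """heatmaps: array of shape (n_samples, height, width) with per-pixel relevance."""
    X = heatmaps.reshape(len(heatmaps), -1)                 # flatten each heatmap
    X = X / (np.abs(X).sum(axis=1, keepdims=True) + 1e-12)  # normalise total relevance
    model = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors",
                               n_neighbors=10, random_state=0)
    labels = model.fit_predict(X)
    return labels  # small or unusual clusters are candidates for "Clever Hans" strategies
```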
A roadmap to integrate astrocytes into Systems Neuroscience.
Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations on a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to studying neuronal and astrocytic populations simultaneously, the inclusion of astrocytes in advanced modeling of neural circuits, and theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
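The dimensionality-reduction and decoding analyses suggested above could look roughly like the sketch below: reduce trial-wise astrocytic Ca2+ activity with PCA and decode a behavioral or brain-state label with cross-validation. This is a hedged sketch assuming scikit-learn; the data shapes, function name, and parameters are hypothetical.

```python
# Illustrative decoding analysis: reduce the dimensionality of astrocytic Ca2+
# traces and decode a behavioral variable. Data layout and names are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def decode_state_from_calcium(ca_traces, state_labels, n_components=10):
    """ca_traces: (n_trials, n_astrocytes * n_timepoints) Ca2+ activity per trial.
    state_labels: (n_trials,) behavioral or brain-state label per trial."""
    pipeline = make_pipeline(PCA(n_components=n_components),
                             LogisticRegression(max_iter=1000))
    # Cross-validated decoding accuracy; chance level depends on label balance.
    return cross_val_score(pipeline, ca_traces, state_labels, cv=5).mean()
```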
Presynaptic modulation as fast synaptic switching: state-dependent modulation of task performance
Neuromodulatory receptors at presynaptic sites can suppress synaptic
transmission for seconds to minutes when fully engaged. This
effectively alters the synaptic strength of a connection. Much work on
neuromodulation has rested on the assumption that these effects are uniform at
every neuron. However, there is considerable evidence to suggest that
presynaptic regulation may in effect be synapse-specific. This would define a
second "weight modulation" matrix, which reflects presynaptic receptor efficacy
at a given site. Here we explore functional consequences of this hypothesis. By
analyzing and comparing the weight matrices of networks trained on different
aspects of a task, we identify the potential for a low-complexity "modulation
matrix", which allows switching between differently trained subtasks while
retaining general performance characteristics for the task. This means that a
given network can adapt itself to different task demands by regulating its
release of neuromodulators. Specifically, we suggest that (a) a network can
provide optimized responses for related classification tasks without the need
to train entirely separate networks and (b) a network can blend a "memory mode"
which aims at reproducing memorized patterns and a "novelty mode" which aims to
facilitate classification of new patterns. We relate this work to the known
effects of neuromodulators on brain-state-dependent processing. Comment: 6 pages, 13 figures
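A minimal sketch of the second "weight modulation matrix" idea follows, under the assumption that presynaptic modulation acts as a per-synapse multiplicative gain on a fixed base weight matrix. The names, the tanh nonlinearity, and the blending scheme are illustrative rather than the paper's actual model.

```python
# Presynaptic modulation modelled as an element-wise gain on a trained base weight
# matrix, so one network can be switched or blended between subtask modes.
import numpy as np

def effective_weights(W_base, M_mode, level=1.0):
    """W_base: trained synaptic weights; M_mode: per-synapse modulation gains (0..1);
    level: global neuromodulator level blending baseline and fully modulated states."""
    return W_base * ((1.0 - level) + level * M_mode)

def forward(x, W_base, M_mode, level):
    return np.tanh(effective_weights(W_base, M_mode, level) @ x)

# Blending a "memory mode" and a "novelty mode" then amounts to interpolating between
# two modulation matrices: M = alpha * M_memory + (1 - alpha) * M_novelty.
```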
A feedback model of perceptual learning and categorisation
Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
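Purely as an illustration of how top-down feedback could bias categorisation during learning (the paper's model of interacting cortical regions is more elaborate), the toy sketch below adds a feedback expectation to the feedforward evidence before a winner-take-all decision and then updates the winner's prototype. Every name and parameter here is hypothetical.

```python
# Toy sketch: feedback biases the competition between categories, and the winning
# category's prototype is updated Hebbian-style toward the input. Illustrative only.
import numpy as np

def categorise_with_feedback(features, W, feedback, gain=0.5, lr=0.05):
    """features: input feature vector; W: (n_categories, n_features) prototypes;
    feedback: top-down expectation over categories (e.g., from context)."""
    bottom_up = W @ features
    activation = bottom_up + gain * feedback   # feedback biases the decision
    winner = int(np.argmax(activation))
    W[winner] += lr * (features - W[winner])   # move winner's prototype toward input
    return winner, W
```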
Bits from Biology for Computational Intelligence
Computational intelligence is broadly defined as biologically inspired
computing. Usually, inspiration is drawn from neural systems. This article
shows how to analyze neural systems using information theory to obtain
constraints that help identify the algorithms run by such systems and the
information they represent. Algorithms and representations identified
information-theoretically may then guide the design of biologically inspired
computing systems (BICS). The material covered includes the necessary
introduction to information theory and the estimation of information theoretic
quantities from neural data. We then show how to analyze the information
encoded in a system about its environment, and also discuss recent
methodological developments on the question of how much information each agent
carries about the environment uniquely, redundantly, or synergistically
together with others. Finally, we introduce the framework of local
information dynamics, where information processing is decomposed into component
processes of information storage, transfer, and modification -- locally in
space and time. We close by discussing example applications of these measures
to neural data and other complex systems.
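As a concrete, hedged illustration of the estimation step mentioned above, the sketch below computes a plug-in estimate of the mutual information between a discretised neural response and a stimulus variable, together with its local (pointwise) values in the spirit of local information dynamics. Real analyses would use bias-corrected estimators or dedicated toolkits; the function name and data layout are assumptions.

```python
# Plug-in mutual information I(X;Y) from discretised observations, plus the local
# (pointwise) information per sample. Illustrative only, with no bias correction.
import numpy as np

def mutual_information(x, y):
    """x, y: 1-D integer arrays of discretised observations of equal length."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (x_idx, y_idx), 1)          # joint histogram
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        local = np.where(joint > 0, np.log2(joint / (px * py)), 0.0)
    mi = float((joint * local).sum())            # average information in bits
    local_per_sample = local[x_idx, y_idx]       # pointwise values, "local in time"
    return mi, local_per_sample
```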
Image informatics strategies for deciphering neuronal network connectivity
Brain function relies on an intricate network of highly dynamic neuronal connections that rewires dramatically under the impulse of various external cues and pathological conditions. Among the neuronal structures that show morphological plasticity are neurites, synapses, dendritic spines and even nuclei. This structural remodelling is directly connected with functional changes such as intercellular communication and the associated calcium-bursting behaviour. In vitro cultured neuronal networks are valuable models for studying these morpho-functional changes. Owing to the automation and standardisation of both image acquisition and image analysis, it has become possible to extract statistically relevant readout from such networks. Here, we focus on the current state-of-the-art in image informatics that enables quantitative microscopic interrogation of neuronal networks. We describe the major correlates of neuronal connectivity and present workflows for analysing them. Finally, we provide an outlook on the challenges that remain to be addressed, and discuss how imaging algorithms can be extended beyond in vitro imaging studies.
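One building block of such an image-informatics workflow might look like the sketch below: segment cell bodies (or nuclei) in a fluorescence image and extract per-object measurements that can seed downstream connectivity analyses. The library calls are standard scikit-image, but the thresholds, sizes, and function name are illustrative assumptions, not a validated pipeline.

```python
# Segment somata/nuclei in a fluorescence image and return labelled objects plus
# centroids, as a starting point for neurite tracing or connectivity graphs.
from skimage.filters import threshold_otsu, gaussian
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects

def segment_somata(image, min_size=50):
    """image: 2-D grayscale fluorescence image (e.g., a nuclear or somatic stain)."""
    smoothed = gaussian(image, sigma=2)              # suppress noise before thresholding
    mask = smoothed > threshold_otsu(smoothed)       # global Otsu threshold
    mask = remove_small_objects(mask, min_size=min_size)
    labelled = label(mask)                           # connected-component labelling
    centroids = [p.centroid for p in regionprops(labelled)]
    return labelled, centroids
```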