Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks
Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown
distinct advantages, e.g., solving memory-dependent tasks and meta-learning.
However, little effort has been devoted to improving RNN architectures or to
understanding the neural mechanisms underlying their performance gains. In this
paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical
results show that the network can autonomously learn to abstract sub-goals and
can self-develop an action hierarchy using internal dynamics in a challenging
continuous control task. Furthermore, we show that the network's self-developed
compositionality enables faster re-learning when adapting to a new task that
recomposes previously learned sub-goals than learning the task from scratch. We
also found that improved performance is achieved when neural activities are
subject to stochastic rather than deterministic dynamics.
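The abstract above describes units evolving on multiple timescales with stochastic dynamics. The sketch below is a minimal, hypothetical illustration of that idea (not the paper's exact architecture): a leaky recurrent update where fast and slow unit groups have different time constants and Gaussian noise perturbs the state.

```python
import numpy as np

rng = np.random.default_rng(0)

def mtsrnn_step(h, x, W_rec, W_in, tau, noise_std=0.1):
    """One step of a hypothetical multiple-timescale stochastic RNN.

    Each unit leaks toward its recurrent drive at a rate set by its own
    time constant tau; Gaussian noise makes the dynamics stochastic.
    Illustrative sketch only -- weights and sizes are arbitrary.
    """
    drive = np.tanh(W_rec @ h + W_in @ x)
    noise = noise_std * rng.standard_normal(h.shape)
    # Larger tau => slower integration, suited to longer-horizon sub-goals.
    return h + (drive - h + noise) / tau

n_fast, n_slow, n_in = 8, 4, 3
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 20.0)])
W_rec = 0.1 * rng.standard_normal((n, n))
W_in = 0.1 * rng.standard_normal((n, n_in))

h = np.zeros(n)
for _ in range(50):
    h = mtsrnn_step(h, rng.standard_normal(n_in), W_rec, W_in, tau)
```

In an RL setting such a state would feed a policy head; here the rollout only demonstrates that the two unit groups integrate the same input at different rates.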
Perspective: network-guided pattern formation of neural dynamics
The understanding of neural activity patterns is fundamentally linked to an
understanding of how the brain's network architecture shapes dynamical
processes. Established approaches rely mostly on deviations of a given network
from certain classes of random graphs. Hypotheses about the supposed role of
prominent topological features (for instance, the roles of modularity, network
motifs, or hierarchical network organization) are derived from these
deviations. An alternative strategy could be to study deviations of network
architectures from regular graphs (rings, lattices) and consider the
implications of such deviations for self-organized dynamic patterns on the
network. Following this strategy, we draw on the theory of spatiotemporal
pattern formation and propose a novel perspective for analyzing dynamics on
networks, by evaluating how the self-organized dynamics are confined by network
architecture to a small set of permissible collective states. In particular, we
discuss the role of prominent topological features of brain connectivity, such
as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the
notion of network-guided pattern formation with numerical simulations and
outline how it can facilitate the understanding of neural dynamics.
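The strategy of comparing against regular graphs can be illustrated with a toy simulation (my sketch, not the paper's code): diffusion-like dynamics on a ring lattice relax onto the slow eigenmodes of the graph Laplacian, which are the "permissible collective states" that the network architecture confines the dynamics to.

```python
import numpy as np

def ring_laplacian(n):
    """Graph Laplacian of an n-node ring -- a regular-graph baseline."""
    L = 2 * np.eye(n)
    for i in range(n):
        L[i, (i - 1) % n] -= 1
        L[i, (i + 1) % n] -= 1
    return L

n = 16
L = ring_laplacian(n)
# Eigenmodes of L are the collective patterns; the slow (small-eigenvalue)
# modes of a ring are long-wavelength sinusoidal waves around the lattice.
eigvals, eigvecs = np.linalg.eigh(L)

# Diffusive dynamics x' = -L x: fast modes decay first, so a random initial
# state is progressively confined to the slow modes.
x = np.random.default_rng(1).standard_normal(n)
x0 = x.copy()
dt = 0.05
for _ in range(200):
    x = x - dt * (L @ x)
```

Deviating from the ring (adding hubs or modules) reshapes the Laplacian spectrum and hence which collective states survive, which is the perspective the abstract proposes.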
Investigation of sequence processing: A cognitive and computational neuroscience perspective
Serial order processing, or sequence processing, underlies many human activities
such as speech, language, skill learning, planning, and problem-solving.
Investigating the neural bases of sequence processing enables us to understand
serial order in cognition and also helps in building intelligent devices. In
this article, we review various cognitive issues related to sequence processing
with examples. Experimental results that provide evidence for the involvement
of various brain areas are described. Finally, a theoretical approach based on
statistical models and the reinforcement learning paradigm is presented. These
theoretical ideas are useful for studying sequence learning in a principled
way. The article also suggests a two-way process diagram integrating
experimentation (cognitive neuroscience) and theory/computational modelling
(computational neuroscience). This integrated framework is useful not only in
the present study of serial order, but also for understanding many cognitive
processes.
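The simplest statistical model of serial order mentioned in reviews like this one is a first-order (bigram) Markov model: transition probabilities estimated by counting which symbol follows which. The snippet below is a generic illustration, not the article's specific model.

```python
from collections import Counter, defaultdict

def fit_bigram(seq):
    """Estimate first-order transition probabilities P(next | current)
    from a symbol sequence -- a minimal statistical model of serial order."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1
    # Normalize each row of counts into a probability distribution.
    return {s: {t: c / sum(cs.values()) for t, c in cs.items()}
            for s, cs in counts.items()}

model = fit_bigram("abcabcabd")
```

Here `'a'` is always followed by `'b'`, while `'b'` leads to `'c'` twice and `'d'` once, so the fitted probabilities reflect those frequencies.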
Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists in their daily work routinely find and annotate significant
abnormalities on a large number of radiology images. Such abnormalities, or
lesions, have accumulated over the years and are stored in hospitals' picture
archiving and communication systems. However, they are largely unsorted and
lack semantic annotations such as type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationships, we leverage multiple sources of
supervision, including types, self-supervised location coordinates, and sizes.
These require little manual annotation effort yet describe useful attributes of
the lesions. Then, a triplet network is utilized to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.
Comment: Accepted by CVPR 2018. DeepLesion URL added.
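The triplet network mentioned in the abstract is trained with a standard triplet loss: embeddings of similar lesions are pulled together while dissimilar ones are pushed at least a margin apart. A minimal, generic formulation (details of the paper's network and sampling strategy are omitted):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors: encourage
    d(anchor, positive) + margin <= d(anchor, negative),
    using squared Euclidean distance. Generic sketch, not the paper's code."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings standing in for learned lesion features.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # similar lesion (e.g., same type)
n = np.array([2.0, 0.0])   # dissimilar lesion
loss = triplet_loss(a, p, n)
```

When the negative is already far enough away, as here, the hinge makes the loss zero; hard negatives close to the anchor produce a positive loss and thus a gradient.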