437 research outputs found
Odor segmentation and identification in an abstract large-scale model of the mammalian olfactory system
Hebbian fast plasticity and working memory
Theories and models of working memory (WM) have, since at least the mid-1990s,
been dominated by the persistent activity hypothesis. The past decade has seen
rising concerns about the shortcomings of sustained activity as the mechanism
for short-term maintenance of WM information, in light of accumulating
experimental evidence for so-called activity-silent WM and the fundamental
difficulty of explaining robust multi-item WM. In consequence, alternative
theories are now being explored, mostly in the direction of fast synaptic
plasticity as the underlying mechanism. The question of non-Hebbian vs. Hebbian
synaptic plasticity emerges naturally in this context. In this review we focus
on fast Hebbian plasticity and trace the origins of WM theories and models
building on this form of associative learning.
Attractor Hypothesis of Associative Cortex: Insights from a Biophysically Detailed Network Model
Characterizing Deep-Learning I/O Workloads in TensorFlow
The performance of Deep-Learning (DL) computing frameworks relies on the
performance of data ingestion and checkpointing. In fact, during training,
a considerably large number of relatively small files are first loaded and
pre-processed on CPUs and then moved to the accelerator for computation. In
addition, checkpoint and restart operations are carried out to allow DL
computing frameworks to restart quickly from a checkpoint. Because of this, I/O
affects the performance of DL applications. In this work, we characterize the
I/O performance and scaling of TensorFlow, an open-source programming framework
developed by Google and specifically designed for solving DL problems. To
measure TensorFlow I/O performance, we first design a micro-benchmark to
measure TensorFlow reads, and then use a TensorFlow mini-application based on
AlexNet to measure the performance cost of I/O and checkpointing in TensorFlow.
To improve checkpointing performance, we design and implement a burst
buffer. We find that increasing the number of threads increases TensorFlow
bandwidth by a maximum of 2.3x and 7.8x on our benchmark environments. The use
of the TensorFlow prefetcher results in a complete overlap of computation on
the accelerator and the input pipeline on the CPU, eliminating the effective
cost of I/O on overall performance. Using a burst buffer to checkpoint to fast,
small-capacity storage and copy the checkpoints asynchronously to slower,
large-capacity storage resulted in a performance improvement of 2.6x with
respect to checkpointing directly to the slower storage on our benchmark
environment. Comment: Accepted for publication at pdsw-DISCS 201
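The two-tier scheme described above, a fast synchronous checkpoint write to small burst-buffer storage followed by an asynchronous copy to large, slower storage, can be sketched in a few lines. The class name, file layout, and thread-based drain below are illustrative assumptions, not the paper's implementation:

```python
import shutil
import threading
from pathlib import Path

class BurstBufferCheckpointer:
    """Sketch of a burst buffer: checkpoints go to a fast tier first,
    then drain to a slow capacity tier in the background, so training
    is only blocked for the duration of the fast write."""

    def __init__(self, fast_dir, slow_dir):
        self.fast = Path(fast_dir)   # small, fast storage (burst buffer)
        self.slow = Path(slow_dir)   # large, slow capacity storage
        self.fast.mkdir(parents=True, exist_ok=True)
        self.slow.mkdir(parents=True, exist_ok=True)
        self._drains = []

    def checkpoint(self, step, payload: bytes):
        # Synchronous, fast: training resumes as soon as this returns.
        path = self.fast / f"ckpt-{step}.bin"
        path.write_bytes(payload)
        # Asynchronous, slow: copy to capacity storage in the background.
        t = threading.Thread(target=shutil.copy2,
                             args=(path, self.slow / path.name))
        t.start()
        self._drains.append(t)

    def wait(self):
        # Block until all background copies have finished.
        for t in self._drains:
            t.join()
```

A real implementation would additionally evict drained checkpoints from the small fast tier and read the latest checkpoint back from whichever tier still holds it on restart.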
Stimulus detection rate and latency, firing rates and 1–40Hz oscillatory power are modulated by infra-slow fluctuations in a bistable attractor network model
Recordings of membrane and field potentials, firing rates, and oscillation amplitude dynamics show that neuronal activity levels in cortical and subcortical structures exhibit infra-slow fluctuations (ISFs) on time scales from seconds to hundreds of seconds. Similar ISFs are salient also in blood-oxygenation-level-dependent (BOLD) signals as well as in psychophysical time series. The functional consequences of ISFs are not fully understood. Here, they were investigated, along with their dynamical implications, in large-scale simulations of cortical network activity. For this purpose, a biophysically detailed hierarchical attractor network model displaying bistability and operating in an oscillatory regime was used. ISFs were imposed as slow fluctuations in either the amplitude or the frequency of fast synaptic noise. We found that both mechanisms produced an ISF component in the synthetic local field potentials (LFPs) and modulated the power of 1–40 Hz oscillations. Crucially, in a simulated threshold-stimulus detection task (TSDT), these ISFs were strongly correlated with stimulus detection probabilities and latencies. The results thus show that several phenomena observed in many empirical studies emerge concurrently in the model dynamics, which yields mechanistic insight into how infra-slow excitability fluctuations in large-scale neuronal networks may modulate fast oscillations and perceptual processing. The model also makes several novel predictions that can be experimentally tested in future studies.
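The first of the two mechanisms used in the model, a slow modulation of the amplitude of fast synaptic noise, can be sketched in a few lines. The sinusoidal modulation profile, time step, and parameter values below are illustrative choices, not the model's actual parameters:

```python
import math
import random

def isf_modulated_noise(n_samples, dt=0.001, sigma0=1.0,
                        depth=0.8, isf_period=20.0, seed=0):
    """Fast Gaussian noise whose standard deviation is modulated by an
    infra-slow sinusoid (period on the order of tens of seconds)."""
    rng = random.Random(seed)
    out = []
    for i in range(n_samples):
        t = i * dt
        # Infra-slow envelope on the noise amplitude.
        sigma = sigma0 * (1.0 + depth * math.sin(2 * math.pi * t / isf_period))
        out.append(rng.gauss(0.0, sigma))
    return out
```

The alternative mechanism mentioned in the abstract, modulating the frequency content of the noise rather than its amplitude, would instead keep sigma fixed and vary a filter parameter on the same infra-slow time scale.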
Gamma and Beta Bursts Underlie Working Memory
Working memory is thought to result from sustained neuron spiking. However, computational models suggest complex dynamics with discrete oscillatory bursts. We analyzed local field potential (LFP) and spiking from the prefrontal cortex (PFC) of monkeys performing a working memory task. There were brief bursts of narrow-band gamma oscillations (45–100 Hz), varied in time and frequency, accompanying encoding and re-activation of sensory information. They appeared at a minority of recording sites associated with spiking reflecting the to-be-remembered items. Beta oscillations (20–35 Hz) also occurred in brief, variable bursts but reflected a default state interrupted by encoding and decoding. Only activity of neurons reflecting encoding/decoding correlated with changes in gamma burst rate. Thus, gamma bursts could gate access to, and prevent sensory interference with, working memory. This supports the hypothesis that working memory is manifested by discrete oscillatory dynamics and spiking, not sustained activity.
Funding: National Institute of Mental Health (U.S.), grants 5R01MH091174-05 and 5R37MH087027-07.
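Detecting brief narrow-band bursts of the kind analyzed here is commonly done by thresholding band-limited power. A minimal single-frequency version, using a sliding-window discrete Fourier coefficient, is sketched below; this is a simplification for illustration, not the authors' spectral pipeline, and the frequency, window length, and threshold are arbitrary assumed values:

```python
import cmath
import math

def band_power(x, fs, freq, win):
    """Sliding-window power at one frequency: magnitude of the DFT
    coefficient at `freq` over each window of `win` samples."""
    powers = []
    for start in range(0, len(x) - win + 1):
        acc = 0j
        for n in range(win):
            acc += x[start + n] * cmath.exp(-2j * math.pi * freq * n / fs)
        powers.append(abs(acc) / win)
    return powers

def detect_bursts(x, fs, freq=60.0, win=100, thresh=0.2):
    """Return window start indices where power in the narrow band
    around `freq` exceeds `thresh` (a gamma-band burst candidate)."""
    p = band_power(x, fs, freq, win)
    return [i for i, v in enumerate(p) if v > thresh]
```

For a unit-amplitude sinusoid at exactly `freq`, each fully overlapping window yields a power of 0.5, so a threshold well below that flags the burst while silent stretches stay below it.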
Metaheuristic conditional neural network for harvesting skyrmionic metastable states
We present a metaheuristic conditional neural-network-based method aimed at
identifying physically interesting metastable states in a potential energy
surface of high rugosity. To demonstrate how this method works, we identify and
analyze spin textures with topological charge ranging from 1 to
(where antiskyrmions have ) in the Pd/Fe/Ir(111) system, which we model
using a classical atomistic spin Hamiltonian based on parameters computed from
density functional theory. To facilitate the harvest of relevant spin textures,
we make use of the newly developed Segment Anything Model (SAM). Spin textures
with ranging from to are further analyzed using
finite-temperature spin-dynamics simulations. We observe that for temperatures
up to around 20\,K, lifetimes longer than 200\,ps are predicted, and that when
these textures decay, new topological spin textures are formed. We also find
that the relative stability of the spin textures depend linearly on the
topological charge, but only when comparing the most stable antiskyrmions for
each topological charge. In general, the number of holes (i.e.,
non-self-intersecting curves that define closed domain walls in the structure)
in the spin texture is an important predictor of stability -- the more holes,
the less stable is the texture. Methods for systematic identification and
characterization of complex metastable skyrmionic textures -- such as the one
demonstrated here -- are highly relevant for advancements in the field of
topological spintronics
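The topological charge that organizes this analysis is, on a lattice, usually evaluated with the Berg–Lüscher solid-angle construction. The abstract does not give the authors' implementation; the sketch below is a minimal version of that standard construction, with a synthetic Néel skyrmion profile (an assumed test texture, not data from the paper) to exercise it:

```python
import math

def _solid_angle(s1, s2, s3):
    # Signed solid angle of the spherical triangle (s1, s2, s3),
    # via the two-argument arctangent (Oosterom-Strackee) formula.
    cx = s2[1]*s3[2] - s2[2]*s3[1]
    cy = s2[2]*s3[0] - s2[0]*s3[2]
    cz = s2[0]*s3[1] - s2[1]*s3[0]
    num = s1[0]*cx + s1[1]*cy + s1[2]*cz
    den = (1.0 + s1[0]*s2[0] + s1[1]*s2[1] + s1[2]*s2[2]
               + s2[0]*s3[0] + s2[1]*s3[1] + s2[2]*s3[2]
               + s3[0]*s1[0] + s3[1]*s1[1] + s3[2]*s1[2])
    return 2.0 * math.atan2(num, den)

def topological_charge(spins):
    """Q = (1/4pi) * sum of solid angles over the two triangles of
    every lattice plaquette; `spins` is a 2D grid of unit 3-vectors."""
    ny, nx = len(spins), len(spins[0])
    total = 0.0
    for j in range(ny - 1):
        for i in range(nx - 1):
            s1, s2 = spins[j][i], spins[j][i + 1]
            s3, s4 = spins[j + 1][i + 1], spins[j + 1][i]
            total += _solid_angle(s1, s2, s3) + _solid_angle(s1, s3, s4)
    return total / (4.0 * math.pi)

def neel_skyrmion(n=40, radius=12.0):
    # Synthetic Neel texture: spin down at the core, up at the edge.
    grid = []
    for j in range(n):
        row = []
        for i in range(n):
            x, y = i - (n - 1) / 2, j - (n - 1) / 2
            r = math.hypot(x, y)
            theta = math.pi * max(0.0, 1.0 - r / radius)
            phi = math.atan2(y, x)
            row.append((math.sin(theta) * math.cos(phi),
                        math.sin(theta) * math.sin(phi),
                        math.cos(theta)))
        grid.append(row)
    return grid
```

Because the summed solid angles measure the degree of the map onto the sphere, the result is an integer up to floating-point error, so |Q| = 1 for this single-skyrmion texture.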