Optimal control of transitions between nonequilibrium steady states
Biological systems fundamentally exist out of equilibrium in order to
preserve organized structures and processes. Many changing cellular conditions
can be represented as transitions between nonequilibrium steady states, and
organisms have an interest in optimizing such transitions. Using the
Hatano-Sasa Y-value, we extend a recently developed geometrical framework for
determining optimal protocols so that it can be applied to systems driven from
nonequilibrium steady states. We calculate and numerically verify optimal
protocols for a colloidal particle dragged through solution by a translating
optical trap with two controllable parameters. We offer experimental
predictions, specifically that optimal protocols are significantly less costly
than naive ones. Optimal protocols similar to these may ultimately point to
design principles for biological energy transduction systems and guide the
design of artificial molecular machines.
Comment: Accepted for publication at PLoS ONE
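The prediction that optimal protocols are significantly less costly than naive ones can be illustrated outside the paper's geometric framework. The sketch below is an illustration under simplifying assumptions, not the authors' method: it uses a single control parameter (the trap center) rather than two, and compares the average work of a naive linear drag against the jump-plus-linear protocol known to minimize dissipation for a dragged harmonic trap; all function names and parameter values are arbitrary choices.

```python
import numpy as np

def simulate_work(protocol, lam_jump0, lam_end, t_f=1.0, k=1.0, gamma=1.0,
                  kT=1.0, n_traj=4000, n_steps=400, seed=0):
    """Average work to drag an overdamped particle in a harmonic trap.

    protocol(t) is the trap center for 0 < t <= t_f. The trap may jump
    discontinuously from 0 to lam_jump0 at t = 0 and from protocol(t_f)
    to lam_end at t = t_f; jump costs are the instantaneous energy changes.
    """
    rng = np.random.default_rng(seed)
    dt = t_f / n_steps
    # start equilibrated in the trap centered at 0
    x = rng.normal(0.0, np.sqrt(kT / k), n_traj)
    work = 0.5 * k * ((x - lam_jump0) ** 2 - x ** 2)   # initial jump cost
    lam = lam_jump0
    for i in range(n_steps):
        lam_new = protocol((i + 1) * dt)
        # work of moving the trap with the particle position held fixed
        work += 0.5 * k * ((x - lam_new) ** 2 - (x - lam) ** 2)
        lam = lam_new
        # Euler-Maruyama step of the overdamped Langevin dynamics
        x += (-(k / gamma) * (x - lam) * dt
              + rng.normal(0.0, np.sqrt(2 * kT * dt / gamma), n_traj))
    work += 0.5 * k * ((x - lam_end) ** 2 - (x - lam) ** 2)  # final jump cost
    return work.mean()

d, t_f, tau = 2.0, 1.0, 1.0      # drag distance, duration, relaxation time
# naive protocol: drag the trap linearly from 0 to d, no jumps
w_naive = simulate_work(lambda t: d * t / t_f, 0.0, d)
# optimal protocol for this model: symmetric jumps plus constant velocity
w_opt = simulate_work(lambda t: d * (t + tau) / (t_f + 2 * tau),
                      d * tau / (t_f + 2 * tau), d)
```

In this toy setting the optimal mean work approaches gamma*d**2/(t_f + 2*tau), measurably below the naive linear drag, mirroring the abstract's prediction.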
Hamiltonian Monte Carlo Without Detailed Balance
We present a method for performing Hamiltonian Monte Carlo that largely
eliminates sample rejection for typical hyperparameters. In situations that
would normally lead to rejection, instead a longer trajectory is computed until
a new state is reached that can be accepted. This is achieved using Markov
chain transitions that satisfy the fixed point equation, but do not satisfy
detailed balance. The resulting algorithm significantly suppresses the random
walk behavior and wasted function evaluations that are typically the
consequence of update rejection. We demonstrate a greater than factor of two
improvement in mixing time on three test problems. We release the source code
as Python and MATLAB packages.
Comment: Accepted conference submission to ICML 2014; also featured in a special edition of JMLR. Since updated to include additional literature citations.
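For context, here is a minimal sketch of the conventional detailed-balance HMC update that the paper improves upon (an illustration on a one-dimensional Gaussian target; the names, step sizes, and target are my own choices, and this is not the authors' modified transition). The Metropolis test marked below is exactly the step where their method, instead of rejecting and resampling the momentum, keeps integrating the trajectory until a state that can be accepted is reached.

```python
import numpy as np

def leapfrog(x, p, grad_logp, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics (volume preserving)."""
    p = p + 0.5 * eps * grad_logp(x)
    for _ in range(n_steps - 1):
        x = x + eps * p
        p = p + eps * grad_logp(x)
    x = x + eps * p
    p = p + 0.5 * eps * grad_logp(x)
    return x, p

def hmc(logp, grad_logp, x0, n_samples=5000, eps=0.2, n_steps=10, seed=0):
    """Standard HMC with a Metropolis accept/reject step (detailed balance)."""
    rng = np.random.default_rng(seed)
    x = x0
    samples, n_reject = [], 0
    for _ in range(n_samples):
        p = rng.normal()
        # total energy H = -log p(x) + p^2/2; leapfrog nearly conserves it
        h0 = -logp(x) + 0.5 * p * p
        x_new, p_new = leapfrog(x, p, grad_logp, eps, n_steps)
        h1 = -logp(x_new) + 0.5 * p_new * p_new
        # Metropolis test: this is the rejection the paper's variant avoids
        # by extending the trajectory rather than discarding it
        if rng.random() < np.exp(min(0.0, h0 - h1)):
            x = x_new
        else:
            n_reject += 1
        samples.append(x)
    return np.array(samples), n_reject

# standard normal target: log p(x) = -x^2/2 up to a constant
samples, n_reject = hmc(lambda x: -0.5 * x * x, lambda x: -x, x0=0.0)
```

With a well-conserved Hamiltonian the rejection count here is already small; the paper's point is that for less forgiving hyperparameters those rejections dominate the wasted computation.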
Sparse Codes for Speech Predict Spectrotemporal Receptive Fields in the Inferior Colliculus
We have developed a sparse mathematical representation of speech that
minimizes the number of active model neurons needed to represent typical speech
sounds. The model learns several well-known acoustic features of speech such as
harmonic stacks, formants, onsets and terminations, but we also find more
exotic structures in the spectrogram representation of sound such as localized
checkerboard patterns and frequency-modulated excitatory subregions flanked by
suppressive sidebands. Moreover, several of these novel features resemble
neuronal receptive fields reported in the Inferior Colliculus (IC), as well as
auditory thalamus and cortex, and our model neurons exhibit the same tradeoff
in spectrotemporal resolution as has been observed in IC. To our knowledge,
this is the first demonstration that receptive fields of neurons in the
ascending mammalian auditory pathway beyond the auditory nerve can be predicted
based on coding principles and the statistical properties of recorded sounds.
Comment: For Supporting Information, see the PLoS website:
http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.100259
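The core idea of a sparse code, representing a signal with as few active units as possible, can be sketched with a simple greedy inference scheme. The example below is a loose illustration, not the authors' learned model: it runs matching pursuit over a fixed random dictionary (both the dictionary and the synthetic 3-atom "signal" are my own constructions) to show that a few active coefficients suffice when the signal really is sparse.

```python
import numpy as np

def matching_pursuit(D, x, n_active=5):
    """Greedy sparse inference: represent x with few columns (atoms) of D.

    Returns a coefficient vector with at most n_active nonzero entries,
    chosen to successively reduce the residual ||x - D a||.
    """
    resid = x.copy()
    a = np.zeros(D.shape[1])
    for _ in range(n_active):
        corr = D.T @ resid            # correlation of each atom with residual
        j = np.argmax(np.abs(corr))   # best-matching atom
        a[j] += corr[j]               # atoms are unit norm, so this is optimal
        resid -= corr[j] * D[:, j]
    return a, resid

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
# a signal that truly is a sparse mix of 3 atoms, plus small noise
x = 2.0 * D[:, 10] - 1.5 * D[:, 99] + D[:, 200] + 0.01 * rng.normal(size=64)
a, resid = matching_pursuit(D, x, n_active=8)
```

The paper's model additionally learns the dictionary from speech, which is what produces the harmonic stacks, formants, and checkerboard-like features described above.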
Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality
The mutual information between stimulus and spike-train response is commonly
used to monitor neural coding efficiency, but neuronal computation broadly
conceived requires more refined and targeted information measures of
input-output joint processes. A first step towards that larger goal is to
develop information measures for individual output processes, including
information generation (entropy rate), stored information (statistical
complexity), predictable information (excess entropy), and active information
accumulation (bound information rate). We calculate these for spike trains
generated by a variety of noise-driven integrate-and-fire neurons as a function
of time resolution and for alternating renewal processes. We show that their
time-resolution dependence reveals coarse-grained structural properties of
interspike interval statistics; e.g., τ-entropy rates that diverge less
quickly than the firing rate indicate interspike interval correlations. We also
find evidence that the excess entropy and regularized statistical complexity of
different types of integrate-and-fire neurons are universal in the
continuous-time limit in the sense that they do not depend on mechanism
details. This suggests a surprising simplicity in the spike trains generated by
these model neurons. Interestingly, neurons with gamma-distributed ISIs and
neurons whose spike trains are alternating renewal processes do not fall into
the same universality class. These results lead to two conclusions. First, the
dependence of information measures on time resolution reveals mechanistic
details about spike train generation. Second, information measures can be used
as model selection tools for analyzing spike train processes.
Comment: 20 pages, 6 figures;
http://csc.ucdavis.edu/~cmg/compmech/pubs/trdctim.ht
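The time-resolution dependence at the heart of this abstract can be demonstrated with a toy plug-in estimator (this is not the paper's refined machinery, just an illustration with arbitrary parameters): binarize a Poisson spike train at several bin widths and estimate the finite-length entropy rate h(L) = H(L) - H(L-1) from block entropies. The per-bin entropy rate changes systematically with the bin width, which is the kind of scaling behavior the paper analyzes.

```python
import numpy as np
from collections import Counter

def block_entropy(bits, L):
    """Plug-in Shannon entropy (bits) of length-L words in a binary sequence."""
    words = [tuple(bits[i:i + L]) for i in range(len(bits) - L + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_rate(bits, L=3):
    """Finite-L entropy-rate estimate h(L) = H(L) - H(L-1), bits per bin."""
    return block_entropy(bits, L) - block_entropy(bits, L - 1)

rng = np.random.default_rng(0)
# Poisson spike train: exponential interspike intervals at 10 spikes/s
spike_times = np.cumsum(rng.exponential(1.0 / 10.0, size=5000))

rates = {}
for dt in (0.001, 0.005, 0.02):      # three candidate time resolutions (s)
    n_bins = int(spike_times[-1] / dt)
    bits = np.zeros(n_bins, dtype=int)
    # binarize: does each dt-wide bin contain at least one spike?
    idx = np.minimum((spike_times / dt).astype(int), n_bins - 1)
    bits[idx] = 1
    rates[dt] = entropy_rate(bits.tolist(), L=3)
```

For a Poisson train the binarized bins are nearly independent, so the per-bin entropy rate simply tracks the bin occupancy probability; interspike-interval correlations would show up as departures from this baseline scaling.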