Memory effects in biochemical networks as the natural counterpart of extrinsic noise
We show that in the generic situation where a biological network, e.g. a
protein interaction network, is in fact a subnetwork embedded in a larger
"bulk" network, the presence of the bulk causes not just extrinsic noise but
also memory effects. This means that the dynamics of the subnetwork will depend
not only on its present state but also on its past. We use projection techniques
to get explicit expressions for the memory functions that encode such memory
effects, for generic protein interaction networks involving binary and unary
reactions such as complex formation and phosphorylation, respectively.
Remarkably, in the limit of low intrinsic copy-number noise such expressions
can be obtained even for nonlinear dependences on the past. We illustrate the
method with examples from a protein interaction network around epidermal growth
factor receptor (EGFR), which is relevant to cancer signalling. These examples
demonstrate that inclusion of memory terms is not only important conceptually
but also leads to substantially higher quantitative accuracy in the predicted
subnetwork dynamics.
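The structure of the memory term is easiest to see in the linear case, where bulk elimination is exact. The sketch below is illustrative only (arbitrary toy matrices, not the paper's EGFR model): for a partitioned linear system dx/dt = A x, integrating out the bulk yields a memory kernel K(tau) = A_sb exp(A_bb tau) A_bs, and dropping the memory integral gives the memoryless approximation whose error the memory term corrects.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Toy linear network: observed subnetwork s coupled to an unobserved bulk b.
# Full dynamics dx/dt = A x with block structure
#   A = [[A_ss, A_sb],
#        [A_bs, A_bb]].
# Eliminating the bulk exactly gives (for zero bulk initial conditions)
#   dx_s/dt = A_ss x_s(t) + \int_0^t K(t - t') x_s(t') dt'
# with memory kernel K(tau) = A_sb exp(A_bb tau) A_bs.

A_ss = np.array([[-1.0, 0.2], [0.1, -0.8]])
A_sb = np.array([[0.5, 0.0], [0.0, 0.3]])
A_bs = np.array([[0.4, 0.0], [0.0, 0.2]])
A_bb = np.array([[-2.0, 0.1], [0.0, -3.0]])

def memory_kernel(tau):
    """K(tau) = A_sb e^{A_bb tau} A_bs, the memory function of the subnetwork."""
    return A_sb @ expm(A_bb * tau) @ A_bs

def full_rhs(t, x):
    # Full system; the exact subnetwork trajectory is its first two components.
    A = np.block([[A_ss, A_sb], [A_bs, A_bb]])
    return A @ x

def markovian_rhs(t, xs):
    # Memoryless approximation: drop the memory integral entirely.
    return A_ss @ xs

x0 = np.array([1.0, 0.5, 0.0, 0.0])   # bulk starts at zero: no initial-condition term
t_eval = np.linspace(0, 10, 200)
full = solve_ivp(full_rhs, (0, 10), x0, t_eval=t_eval)
markov = solve_ivp(markovian_rhs, (0, 10), x0[:2], t_eval=t_eval)

print("K(0) =\n", memory_kernel(0.0))
# The gap between the two trajectories is exactly what the memory term accounts for.
print("max |exact - memoryless| =", np.abs(full.y[:2] - markov.y).max())
```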
Nonlinear dynamical tides in white dwarf binaries
Compact white dwarf (WD) binaries are important sources for space-based
gravitational-wave (GW) observatories, and an increasing number of them are
being identified by surveys like ZTF. We study the effects of nonlinear
dynamical tides in such binaries. We focus on the global three-mode parametric
instability and show that it has a much lower threshold energy than the local
wave-breaking condition studied previously. By integrating networks of coupled
modes, we calculate the tidal dissipation rate as a function of orbital period.
We construct phenomenological models that match these numerical results and use
them to evaluate the spin and luminosity evolution of a WD binary. While in
linear theory the WD's spin frequency can lock to the orbital frequency, we
find that such a lock cannot be maintained when nonlinear effects are taken
into account. Instead, as the orbit decays, the spin and orbit go in and out of
synchronization. Each time they go out of synchronization, there is a brief but
significant dip in the tidal heating rate. While most WDs in compact binaries
should have luminosities that are similar to previous traveling-wave estimates,
a few percent should be about ten times dimmer because they reside in heating
rate dips. This offers a potential explanation for the low luminosity of the CO
WD in J0651. Lastly, we consider the impact of tides on the GW signal and show
that LISA and TianGO can constrain the WD's moment of inertia to better than 1%
for deci-Hz systems.
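As a rough illustration of the mechanism (not the paper's full mode-network calculation), the sketch below integrates the standard three-mode amplitude equations for a tidally forced parent coupled to two daughter modes; all frequencies, damping rates, and the coupling coefficient are arbitrary placeholder values chosen so that the parametric instability triggers.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard three-mode amplitude equations with signed mode frequencies, so the
# resonance condition reads w_a + w_b + w_c ~ 0:
#   dq_j/dt = -(i*w_j + g_j) q_j + i*w_j * [U_j(t) + 2*kappa*conj(q_k)*conj(q_l)]
# The parent mode a is tidally forced at frequency W; the daughters b, c grow
# via the parametric instability once |q_a| exceeds threshold.

w = np.array([1.0, -0.55, -0.47])     # signed frequencies, nearly resonant triple
g = np.array([1e-3, 5e-3, 5e-3])      # damping rates
kappa, U, W = 0.1, 2e-3, 1.01         # coupling, tidal amplitude, forcing frequency

def rhs(t, y):
    q = y[:3] + 1j * y[3:]
    drive = np.array([U * np.exp(-1j * W * t), 0.0, 0.0])
    nonlin = 2 * kappa * np.array([np.conj(q[1] * q[2]),
                                   np.conj(q[0] * q[2]),
                                   np.conj(q[0] * q[1])])
    dq = -(1j * w + g) * q + 1j * w * (drive + nonlin)
    return np.concatenate([dq.real, dq.imag])

y0 = np.array([0, 1e-6, 1e-6, 0, 0, 0], dtype=float)  # tiny daughter seeds
sol = solve_ivp(rhs, (0, 5000), y0, rtol=1e-8)

# The steady-state daughter damping sets the tidal dissipation rate; here we
# just report the final mode energies |q_j|^2.
E = sol.y[:3, -1]**2 + sol.y[3:, -1]**2
print("final |q|^2 per mode:", E)
```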
How single neuron properties shape chaotic dynamics and signal transmission in random neural networks
While most models of randomly connected networks assume nodes with simple
dynamics, nodes in realistic highly connected networks, such as neurons in the
brain, exhibit intrinsic dynamics over multiple timescales. We analyze how the
dynamical properties of nodes (such as single neurons) and recurrent
connections interact to shape the effective dynamics in large randomly
connected networks. A novel dynamical mean-field theory for strongly connected
networks of multi-dimensional rate units shows that the power spectrum of the
network activity in the chaotic phase emerges from a nonlinear sharpening of
the frequency response function of single units. For the case of
two-dimensional rate units with strong adaptation, we find that the network
exhibits a state of "resonant chaos", characterized by robust, narrow-band
stochastic oscillations. The coherence of stochastic oscillations is maximal at
the onset of chaos and their correlation time scales with the adaptation
timescale of single units. Surprisingly, the resonance frequency can be
predicted from the properties of isolated units, even in the presence of
heterogeneity in the adaptation parameters. In the presence of these
internally-generated chaotic fluctuations, the transmission of weak,
low-frequency signals is strongly enhanced by adaptation, whereas signal
transmission is not influenced by adaptation in the non-chaotic regime. Our
theoretical framework can be applied to other mechanisms at the level of single
nodes, such as synaptic filtering, refractoriness or spike synchronization.
These results advance our understanding of the interaction between the dynamics
of single units and recurrent connectivity, which is a fundamental step toward
the description of biologically realistic network models in the brain, or, more
generally, networks of other physical or man-made complex dynamical units.
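A minimal numerical illustration of this setup (with assumed parameters, not the paper's exact model): each unit carries a rate variable plus an adaptation variable, the recurrent weights are i.i.d. Gaussian, and in the chaotic regime the single-unit power spectrum develops the narrow peak characteristic of resonant chaos.

```python
import numpy as np

# Random network of two-dimensional rate units with adaptation:
#   dx_i/dt = -x_i - beta*a_i + sum_j J_ij * tanh(x_j)
#   tau_a * da_i/dt = -a_i + x_i
# with J_ij ~ N(0, g^2 / N). Parameters are illustrative.

rng = np.random.default_rng(0)
N, g = 500, 2.0           # network size, coupling gain (g > 1: chaotic regime)
beta, tau_a = 1.0, 10.0   # adaptation strength and timescale
J = rng.normal(0.0, g / np.sqrt(N), (N, N))

dt, steps = 0.05, 10000
x = rng.normal(0, 1, N)
a = np.zeros(N)
trace = np.empty(steps)

for t in range(steps):
    phi = np.tanh(x)
    x += dt * (-x - beta * a + J @ phi)   # Euler step for the rate variable
    a += dt * (-a + x) / tau_a            # slow adaptation variable
    trace[t] = x[0]

# Power spectrum of one unit: with strong adaptation it shows a narrow band of
# stochastic oscillations near the single-unit resonance frequency.
freqs = np.fft.rfftfreq(steps, d=dt)
power = np.abs(np.fft.rfft(trace - trace.mean()))**2
print("peak frequency:", freqs[power.argmax()])
```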
Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
Despite the widespread practical success of deep learning methods, our
theoretical understanding of the dynamics of learning in deep neural networks
remains quite sparse. We attempt to bridge the gap between the theory and
practice of deep learning by systematically analyzing learning dynamics for the
restricted case of deep linear neural networks. Despite the linearity of their
input-output map, such networks have nonlinear gradient descent dynamics on
weights that change with the addition of each new hidden layer. We show that
deep linear networks exhibit nonlinear learning phenomena similar to those seen
in simulations of nonlinear networks, including long plateaus followed by rapid
transitions to lower error solutions, and faster convergence from greedy
unsupervised pretraining initial conditions than from random initial
conditions. We provide an analytical description of these phenomena by finding
new exact solutions to the nonlinear dynamics of deep learning. Our theoretical
analysis also reveals the surprising finding that as the depth of a network
approaches infinity, learning speed can nevertheless remain finite: for a
special class of initial conditions on the weights, very deep networks incur
only a finite, depth-independent delay in learning speed relative to shallow
networks. We show that, under certain conditions on the training data,
unsupervised pretraining can find this special class of initial conditions,
while scaled random Gaussian initializations cannot. We further exhibit a new
class of random orthogonal initial conditions on weights that, like
unsupervised pretraining, enjoys depth-independent learning times. We further
show that these initial conditions also lead to faithful propagation of
gradients even in deep nonlinear networks, as long as they operate in a special
regime known as the edge of chaos.
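The learning dynamics and the role of initialization are easy to reproduce in a small sketch. The following assumes an arbitrary random linear teacher and illustrative sizes and learning rate; it trains a deep linear chain by plain gradient descent from a random orthogonal initialization of the kind the abstract describes.

```python
import numpy as np

# Train a deep *linear* network W_D ... W_1 by gradient descent on squared error.
# Despite the linear input-output map, the weight dynamics are nonlinear and show
# plateaus followed by rapid drops in error. Sizes and learning rate are illustrative.

rng = np.random.default_rng(1)
n, depth, lr, steps = 32, 4, 0.01, 3000
X = rng.normal(0, 1, (n, 500))                     # inputs
T = rng.normal(0, 1 / np.sqrt(n), (n, n)) @ X      # teacher: a random linear map

def orthogonal(n):
    q, _ = np.linalg.qr(rng.normal(0, 1, (n, n)))
    return q

# Random orthogonal initialization (depth-independent learning times per the paper).
Ws = [orthogonal(n) for _ in range(depth)]

for step in range(steps):
    # Forward pass, keeping per-layer activations for the gradients.
    acts = [X]
    for W in Ws:
        acts.append(W @ acts[-1])
    err = acts[-1] - T
    # Backpropagate the linear error signal through the chain of weights.
    delta = err
    for i in reversed(range(depth)):
        grad = delta @ acts[i].T / X.shape[1]
        delta = Ws[i].T @ delta    # propagate with pre-update weights
        Ws[i] -= lr * grad
    if step % 500 == 0:
        print(step, 0.5 * np.mean(err**2))
```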
A geometric method for model reduction of biochemical networks with polynomial rate functions
Model reduction of biochemical networks relies on the knowledge of slow and
fast variables. We provide a geometric method, based on the Newton polytope, to
identify slow variables of a biochemical network with polynomial rate
functions. The gist of the method is the notion of tropical equilibration that
provides approximate descriptions of slow invariant manifolds. Compared to
extant numerical algorithms such as the intrinsic low-dimensional manifold
method, our approach is symbolic and utilizes orders of magnitude instead of
precise values of the model parameters. Application of this method to a large
collection of biochemical network models supports the idea that the number of
dynamical variables in minimal models of cell physiology can be small, in spite
of the large number of molecular regulatory actors.
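The tropical equilibration condition itself is simple to state: write parameters and concentrations as powers of a small parameter eps, assign each monomial the order gamma + <exponents, a>, and require the minimal orders of the positive and negative monomials in each equation to coincide, so that dominant production and consumption can balance on a slow manifold. A brute-force sketch on a toy two-species system (assumed parameter orders, not taken from the paper's benchmark collection):

```python
from itertools import product

# Toy polynomial ODE system (illustrative, not from the paper):
#   dx1/dt = k1 - k2*x1*x2
#   dx2/dt = k2*x1*x2 - k3*x2
# Write k_i = eps**gamma_i and x_j = eps**a_j with eps << 1. Each monomial then
# has order gamma + <exponents, a>; tropical equilibration requires the minimal
# positive order to equal the minimal negative order in every equation.

gamma = {"k1": 2, "k2": 0, "k3": 1}   # assumed parameter orders of magnitude

# Each equation: list of (sign, parameter, exponent vector over (x1, x2)).
equations = [
    [(+1, "k1", (0, 0)), (-1, "k2", (1, 1))],
    [(+1, "k2", (1, 1)), (-1, "k3", (0, 1))],
]

def order(param, exps, a):
    return gamma[param] + sum(e * ai for e, ai in zip(exps, a))

def equilibrated(a):
    for eq in equations:
        pos = min(order(p, e, a) for s, p, e in eq if s > 0)
        neg = min(order(p, e, a) for s, p, e in eq if s < 0)
        if pos != neg:
            return False
    return True

# Brute-force search over integer orders (a1, a2); symbolic methods based on the
# Newton polytope do this at scale for large models.
solutions = [a for a in product(range(-3, 4), repeat=2) if equilibrated(a)]
print("tropical equilibrations (a1, a2):", solutions)   # expect [(1, 1)]
```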