Delineating Parameter Unidentifiabilities in Complex Models
Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and for appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast-timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher Information Matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even for simple (non)linear systems. Our algorithm provides a tractable alternative. We finally apply our methods to a large-scale, benchmark Systems Biology model of NF-κB, uncovering previously unknown unidentifiabilities.
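As a reference point for this abstract's argument, here is a minimal sketch of the purely local, FIM-based sensitivity analysis that the paper argues is insufficient on its own; the toy two-exponential model, noise level, and helper names are illustrative assumptions, not the paper's benchmark.

```python
# Minimal sketch: local sloppiness via the Fisher Information Matrix (FIM).
# Model, parameters, and noise variance are illustrative; the paper's
# multiscale method goes beyond this purely local analysis.
import numpy as np

def model(theta, t):
    """Toy two-exponential model y(t) = exp(-k1*t) + exp(-k2*t)."""
    k1, k2 = theta
    return np.exp(-k1 * t) + np.exp(-k2 * t)

def sensitivity_jacobian(theta, t, eps=1e-6):
    """Finite-difference sensitivities dy/dtheta_i at each time point."""
    y0 = model(theta, t)
    J = np.empty((t.size, theta.size))
    for i in range(theta.size):
        tp = theta.copy()
        tp[i] += eps
        J[:, i] = (model(tp, t) - y0) / eps
    return J

theta = np.array([1.0, 1.1])       # nearly redundant decay rates
t = np.linspace(0.0, 5.0, 50)
J = sensitivity_jacobian(theta, t)
fim = J.T @ J / 0.01               # local FIM under Gaussian noise, var 0.01
eigvals = np.linalg.eigvalsh(fim)
print("FIM eigenvalue spread:", eigvals.min(), eigvals.max())
# A many-orders-of-magnitude spread between eigenvalues flags 'sloppy'
# parameter combinations that are locally close to unidentifiable.
```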
The geometry of sloppiness
The use of mathematical models in the sciences often involves the estimation of unknown parameter values from data. Sloppiness provides information about the uncertainty of this task. In this paper, we develop a precise mathematical foundation for sloppiness as initially introduced and rigorously define key concepts, such as 'model manifold', in relation to concepts of structural identifiability. We redefine sloppiness conceptually as a comparison between the premetric on parameter space induced by measurement noise and a reference metric. This opens up the possibility of alternative quantification of sloppiness, beyond the standard use of the Fisher Information Matrix, which assumes that parameter space is equipped with the usual Euclidean metric and that the measurement error is infinitesimal. Applications include parametric statistical models, explicit time-dependent models, and ordinary differential equation models.
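In assumed notation (a sketch, not the paper's exact definitions), the comparison the abstract describes can be written down compactly: measurement noise induces a premetric on parameter space via a divergence between observation distributions, and the FIM appears as its infinitesimal quadratic form, which is exactly the limit in which the standard analysis is valid.

```latex
% Sketch in assumed notation: the noise-induced premetric and its local limit.
% d is a premetric (possibly asymmetric, no triangle inequality), matching
% the abstract's terminology; I(\theta) is the Fisher Information Matrix.
d(\theta, \theta') = D_{\mathrm{KL}}\!\left( p(y \mid \theta) \,\middle\|\, p(y \mid \theta') \right),
\qquad
d(\theta, \theta + \delta\theta) \approx \tfrac{1}{2}\, \delta\theta^{\top} I(\theta)\, \delta\theta .
```

Comparing this induced structure against a reference metric, rather than defaulting to the Euclidean one, is what opens up the alternative quantifications of sloppiness the abstract mentions.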
On the performance of nonlinear dynamical systems under parameter perturbation
We present a method for analysing the deviation in transient behaviour between two parameterised families of nonlinear ODEs, as initial conditions and parameters are varied within compact sets over which stability is guaranteed. This deviation is taken to be the integral over time of a user-specified, positive definite function of the difference between the trajectories, for instance the L2 norm. We use sum-of-squares programming to obtain two polynomials, which take as inputs the (possibly differing) initial conditions and parameters of the two families of ODEs, and output upper and lower bounds on this transient deviation. Equality can be achieved using symbolic methods in a special case involving Linear Time-Invariant Parameter-Dependent systems. We demonstrate the utility of the proposed methods in the problems of model discrimination and location of the worst-case parameter perturbation for a single parameterised family of ODE models.
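The paper's bounds come from sum-of-squares programming, which requires a dedicated SOS solver; the sketch below only evaluates the transient-deviation functional itself, on an illustrative scalar family (the model, parameter names, and tolerances are assumptions).

```python
# Sketch: numerically evaluating the transient-deviation functional
# J = integral_0^T ||x1(t) - x2(t)||^2 dt between two parameterised ODEs.
# This illustrates the quantity the paper bounds, not the SOS bounds.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def make_rhs(a):
    """Illustrative stable scalar family dx/dt = -a*x + sin(t)."""
    return lambda t, x: -a * x + np.sin(t)

def transient_deviation(a1, a2, x0_1, x0_2, T=10.0, n=2001):
    t = np.linspace(0.0, T, n)
    s1 = solve_ivp(make_rhs(a1), (0.0, T), [x0_1], t_eval=t, rtol=1e-8)
    s2 = solve_ivp(make_rhs(a2), (0.0, T), [x0_2], t_eval=t, rtol=1e-8)
    diff2 = np.sum((s1.y - s2.y) ** 2, axis=0)  # ||x1 - x2||^2 at each t
    return trapezoid(diff2, t)                  # integral over [0, T]

# Deviation under a small parameter/initial-condition perturbation:
print(transient_deviation(a1=1.0, a2=1.2, x0_1=1.0, x0_2=0.9))
```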
Optimal plasticity for memory maintenance during ongoing synaptic change
Synaptic connections in many brain circuits fluctuate, exhibiting substantial turnover and remodelling over hours to days. Surprisingly, experiments show that most of this flux in connectivity persists in the absence of learning or known plasticity signals. How can neural circuits retain learned information despite a large proportion of ongoing and potentially disruptive synaptic changes? We address this question from first principles by analysing how much compensatory plasticity would be required to optimally counteract ongoing fluctuations, regardless of whether the fluctuations are random or systematic. Remarkably, we find that the answer is largely independent of plasticity mechanisms and circuit architectures: compensatory plasticity should be at most equal in magnitude to the fluctuations, and often less, in direct agreement with previously unexplained experimental observations. Moreover, our analysis shows that a high proportion of learning-independent synaptic change is consistent with plasticity mechanisms that accurately compute error gradients.
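A toy numerical illustration of the headline claim (our construction, not the paper's derivation): if each compensatory update is capped at the magnitude of that step's fluctuation, the stored weights stay pinned to their target, whereas uncompensated weights diffuse away.

```python
# Toy illustration (not the paper's derivation): weights fluctuate each
# step; a compensatory update no larger than the fluctuation suffices.
import numpy as np

rng = np.random.default_rng(0)
n, steps, sigma = 100, 1000, 0.05
w_target = rng.standard_normal(n)      # weights storing the memory
w_comp = w_target.copy()               # with bounded compensation
w_free = w_target.copy()               # no compensation

for _ in range(steps):
    xi = sigma * rng.standard_normal(n)         # ongoing fluctuation
    w_free += xi                                # error diffuses away
    drift = (w_comp + xi) - w_target            # deviation after fluctuation
    scale = min(1.0, np.linalg.norm(xi) / (np.linalg.norm(drift) + 1e-12))
    w_comp += xi - scale * drift                # compensation, ||u|| <= ||xi||

print("error, no compensation:      %.3f" % np.linalg.norm(w_free - w_target))
print("error, bounded compensation: %.3f" % np.linalg.norm(w_comp - w_target))
```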
Stable task information from an unstable neural population
Over days and weeks, neural activity representing an animal's position and movement in sensorimotor cortex has been found to continually reconfigure or 'drift' during repeated trials of learned tasks, with no obvious change in behavior. This challenges classical theories, which assume stable engrams underlie stable behavior. However, it is not known whether this drift occurs systematically, allowing downstream circuits to extract consistent information. Analyzing long-term calcium imaging recordings from posterior parietal cortex in mice (Mus musculus), we show that drift is systematically constrained far above chance, facilitating a linear weighted readout of behavioral variables. However, a significant component of drift continually degrades a fixed readout, implying that drift is not confined to a null coding space. We calculate the amount of plasticity required to compensate drift independently of any learning rule, and find that this is within physiologically achievable bounds. We demonstrate that a simple, biologically plausible local learning rule can achieve these bounds, accurately decoding behavior over many days.
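A toy sketch of the decoding comparison (simulated data, not the paper's recordings): a fixed linear readout degrades as the coding direction drifts, while small daily error-driven updates, in the spirit of the local learning rule mentioned, keep decoding accurate.

```python
# Toy sketch: a linear population code drifts day by day; a decoder fixed
# on day 0 degrades, while small daily error-driven corrections track it.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials, days, drift = 50, 200, 60, 0.05
x = rng.standard_normal((n_trials, n_neurons))   # trial activity patterns
c = rng.standard_normal(n_neurons)               # true coding direction

w_fixed = c.copy()                               # readout fit on day 0
w_adapt = c.copy()                               # readout with daily updates
for day in range(1, days + 1):
    c += drift * rng.standard_normal(n_neurons)  # representational drift
    y = x @ c                                    # behaviour on this day
    for _ in range(5):                           # a few small local steps
        grad = x.T @ (x @ w_adapt - y) / n_trials
        w_adapt -= 0.2 * grad
    if day % 20 == 0:
        e_fix = np.mean((x @ w_fixed - y) ** 2) / np.var(y)
        e_ad = np.mean((x @ w_adapt - y) ** 2) / np.var(y)
        print(f"day {day}: fixed readout error {e_fix:.3f}, adaptive {e_ad:.3f}")
```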
Human latent-state generalization through prototype learning with discriminative attention
Latent causes that give rise to experience are encountered in complex, high-dimensional feature spaces. How then do people approximate the external world with lower-dimensional internal representations that generalize to novel examples or contexts? Theories suggest internal representations could be determined by discriminative boundaries, or based on the distance from prototypes/exemplars. We developed theoretical models that use both discriminative and prototype/exemplar components to form internal representations via action-reward feedback. We then developed three new latent-state learning tasks to test human use of discriminative attention and prototypes/exemplars. The majority of subjects attended to discriminative features, as well as to the covariance of features within a prototype. A minority of subjects relied on a single discriminative feature. The behavior of all subjects was captured by a model that forms prototype representations and deploys context-specific discriminative attention. These results provide insights into the human ability to generalize across causal latent states learned in high-dimensional environments.
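A minimal sketch of the model class described (our simplification, not the paper's fitted model): prototype representations combined with attention weights that emphasize discriminative feature dimensions.

```python
# Illustrative sketch: a prototype classifier whose similarity uses
# attention weights that up-weight discriminative feature dimensions.
# The attention rule and example values are assumptions for illustration.
import numpy as np

def attention_weights(protos):
    """Up-weight dimensions on which the prototypes differ most."""
    spread = np.var(protos, axis=0)              # between-prototype variance
    return spread / spread.sum()

def classify(x, protos):
    """Assign x to the latent state with the nearest attended prototype."""
    a = attention_weights(protos)
    d = np.sqrt(((x - protos) ** 2 * a).sum(axis=1))  # attended distance
    return int(np.argmin(d))

# Two latent states in a 4-feature space; only feature 0 discriminates.
protos = np.array([[0.0, 1.0, 1.0, 1.0],
                   [2.0, 1.0, 1.0, 1.0]])
x = np.array([1.6, 0.2, 0.2, 0.2])   # noisy on shared features
print(classify(x, protos))            # -> 1, driven by the attended feature
```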