
    Modifying TALYS to Implement Custom Nuclear Level Densities

    Abstract Many codes exist that model nuclear reactions, and they often use different models to simulate those reactions. We are working to use a more modern code (TALYS), which incorporates newer models, to replicate calculations made by an older code (STAPRE). This project focused on modifying TALYS to read nuclear level densities from a file and implement them correctly. We hoped that if TALYS used the same level densities as STAPRE, it would produce results within 5% of STAPRE's. However, modifying only the level densities did not produce the desired results, and after adjusting several input parameters we still find large differences between the two codes.
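
    As a rough illustration of the kind of change described (a sketch under assumptions, not the actual TALYS modification or its input format), reading a tabulated level density from a two-column text file and interpolating it could look like the following; the file name and column layout are hypothetical.

        # Hypothetical sketch: read a tabulated level density rho(E) from a plain-text
        # file with two columns (excitation energy in MeV, level density in 1/MeV),
        # as a stand-in for the kind of table a modified TALYS might read.
        import numpy as np

        def load_level_density(path):
            table = np.loadtxt(path)                         # columns: E (MeV), rho (1/MeV)
            energies, densities = table[:, 0], table[:, 1]   # assumes rows sorted by energy
            log_rho = np.log(densities)                      # level densities grow roughly
            def rho(e):                                      # exponentially, so interpolate
                return np.exp(np.interp(e, energies, log_rho))   # in log space
            return rho

        # rho = load_level_density("ld_table.dat")   # hypothetical file name
        # print(rho(5.0))                            # level density at 5 MeV excitation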

    The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
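
    As a concrete toy illustration of the sloppiness discussed above (not the paper's EGFR or DNA-repair models), the following Python sketch builds the Fisher Information Matrix (FIM) of a sum-of-exponentials model in log-parameters by finite differences; its eigenvalues typically spread over many orders of magnitude with roughly uniform log-spacing, the hallmark of a sloppy model.

        # Toy sloppy model: y(t) = sum_k exp(-theta_k t), with similar decay rates.
        import numpy as np

        def model(log_theta, t):
            rates = np.exp(log_theta)
            return np.exp(-np.outer(t, rates)).sum(axis=1)

        def fisher_information(log_theta, t, sigma=0.1, h=1e-6):
            """FIM = J^T J / sigma^2, with J the Jacobian w.r.t. log-parameters."""
            base = model(log_theta, t)
            J = np.empty((t.size, log_theta.size))
            for k in range(log_theta.size):
                step = np.zeros_like(log_theta)
                step[k] = h
                J[:, k] = (model(log_theta + step, t) - base) / h
            return J.T @ J / sigma**2

        t = np.linspace(0.1, 5.0, 30)
        log_theta = np.log([1.0, 1.3, 1.7, 2.2])    # four nearly degenerate rates
        print(np.linalg.eigvalsh(fisher_information(log_theta, t)))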

    FIM for the four EGFR models.

    Both the approximate Michaelis-Menten kinetics and mechanistic mass-action kinetics are unidentifiable when fit to the data in reference [7]. Although the optimal experiments in reference [13] lead to an identifiable (but still sloppy) model for the approximate Michaelis-Menten kinetics, the mechanistic mass-action kinetics remain unidentifiable. Furthermore, the FIM of the mass-action model suggests that a minimal model should include at least 60 parameters to explain the expanded observations, i.e., the manifold has approximately 60 widths larger than the experimental noise. The approximate Michaelis-Menten kinetics do not contain all of the relevant physics. The red dashed line corresponds to a relative standard error of 1/e in the inferred parameters.
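
    One way to read that threshold (an assumption about the convention, not a statement from the paper): when the model is parameterized in natural (log) parameters, the standard error along FIM eigendirection μ is roughly the inverse square root of its eigenvalue, and a small change in log θ is approximately a relative change in θ,

        \sigma_\mu \approx \frac{1}{\sqrt{\lambda_\mu}}, \qquad
        \delta \log\theta \approx \frac{\delta\theta}{\theta},

    so a chosen bound on the relative standard error of an inferred parameter combination translates directly into a cutoff on the corresponding FIM eigenvalue.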

    Model manifold widths define relevant and irrelevant parameters.

    (Left) The set of all possible model outputs defines a manifold of predictions. The true model ideally corresponds to a point near the manifold (red dot). For typical sloppy models, the manifold is bounded by a hierarchy of widths that are approximately given by the square roots of the FIM eigenvalues (when parameterized in natural units). Widths of the model manifold are measured in units of the standard deviation of the data, so that widths much less than one are practically indistinguishable from noise. Widths larger than one, on the other hand, are distinguishable from noise and must be tuned to reproduce the observations. This suggests describing parameter combinations corresponding to large eigenvalues and large widths as relevant or important for the model. In contrast, those parameters corresponding to small eigenvalues and widths are irrelevant or unimportant. We describe widths comparable to the experimental noise as marginal.
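
    A small sketch of this width criterion (the square-root relation and the relevant/irrelevant/marginal labels are from the caption; the factor-of-ten margin separating "much less/greater than one" is my own choice):

        # Classify FIM eigendirections by the manifold width sqrt(lambda), measured
        # in units of the data's standard deviation.
        import numpy as np

        def classify_directions(fim_eigenvalues, margin=10.0):
            widths = np.sqrt(np.asarray(fim_eigenvalues, dtype=float))
            labels = []
            for w in widths:
                if w > margin:
                    labels.append("relevant")      # clearly distinguishable from noise
                elif w < 1.0 / margin:
                    labels.append("irrelevant")    # indistinguishable from noise
                else:
                    labels.append("marginal")      # comparable to the noise level
            return list(zip(widths, labels))

        for width, label in classify_directions([1e4, 25.0, 1.2, 3e-3]):
            print(f"width {width:9.3g}  ->  {label}")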

    Quantifying model error.

    As in Fig 2, the model of interest forms a statistical manifold in data space, represented by the black dashed line. Another, more realistic model also forms a statistical manifold of higher dimension (red surface). Experimental observations (blue dot) are generated by adding Gaussian noise of size σ to a “true” model (red dot). The least squares estimate is the point on the approximate model (black dot) nearest to the experimental observations. However, the distance from the best fit to the observed data has contributions from both the experimental noise and the model error.
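
    Schematically, writing d for the observed data, y_true for the "true" model output, and y(θ̂) for the least-squares fit on the approximate model, the misfit splits into the two contributions named above:

        d - y(\hat\theta)
        \;=\; \underbrace{\bigl(d - y_{\mathrm{true}}\bigr)}_{\text{experimental noise of size } \sigma}
        \;+\; \underbrace{\bigl(y_{\mathrm{true}} - y(\hat\theta)\bigr)}_{\text{model error}}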

    An example of a sloppy system.

    Observations of an EGFR signaling network can be explained by a model that is identifiable and not sloppy. The 18-parameter model has FIM eigenvalues that span fewer than 4 orders of magnitude and are all larger than one. By including additional mechanisms in the model (more parameters), the models become increasingly sloppy and less identifiable. The FIM eigenvalues ultimately span more than 16 orders of magnitude, leading to the large parameter uncertainties reported in reference [7].

    Sloppiness vs. identifiability.

    Although sloppiness and parameter identifiability are closely related, they are actually two distinct concepts. Sloppiness refers to an approximately uniform spacing of FIM eigenvalues spread over many orders of magnitude. In the most common case (first column) this means that many eigenvalues will be small and also correspond to unidentifiable parameter combinations. However, it is possible (in principle) for all the eigenvalues to be large (second column), so that sloppy models can be identifiable (as in references [13, 14]). It is also possible for model parameters to be unidentifiable and not sloppy (third column) or identifiable and not sloppy (fourth column). We here take λ ∼ 1 as the cutoff between identifiable and unidentifiable, motivated by arguments in Fig 2.
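
    As a toy illustration of the four cases (the λ ∼ 1 cutoff is taken from the caption; the six-decade threshold for calling a spectrum sloppy is my own assumption):

        # Characterize an FIM eigenvalue spectrum as sloppy and/or identifiable.
        import numpy as np

        def characterize_spectrum(eigenvalues, sloppy_decades=6.0):
            lam = np.sort(np.asarray(eigenvalues, dtype=float))
            span = np.log10(lam[-1] / lam[0])    # spread in orders of magnitude
            return {"decades spanned": round(span, 1),
                    "sloppy": span > sloppy_decades,   # assumed threshold on the spread
                    "identifiable": lam[0] > 1.0}      # lambda ~ 1 cutoff from the caption

        print(characterize_spectrum([1e8, 1e5, 1e2, 1e-1, 1e-4]))   # sloppy, unidentifiable
        print(characterize_spectrum([1e12, 1e9, 1e6, 1e3, 1e1]))    # sloppy, identifiable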