
    Reply to Comment on "Sloppy models, parameter uncertainty, and the role of experimental design"

    We welcome the commentary from Chachra, Transtrum, and Sethna [1] regarding our paper "Sloppy models, parameter uncertainty, and the role of experimental design" [2], as their intriguing work shaped our thinking in this area [3]. Sethna and colleagues introduced the notion of sloppy models, in which the uncertainty in the values of some combinations of parameters is many orders of magnitude greater than in others [4]. In our work we explored the extent to which large parameter uncertainties are an intrinsic characteristic of systems biology network models, or whether uncertainties are instead closely tied to the particular collection of experiments used for model estimation. We were gratified to find the latter: parameters are in principle knowable, which is important for the field of systems biology. The work also showed that small parameter uncertainties can be achieved, and that the process can be greatly accelerated by using computational experimental design approaches [5–9] deployed to select sets of experiments that effectively exercise the system in complementary directions.
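
    The sloppiness diagnosis referenced here comes from the spectrum of the model's Fisher information (or Gauss-Newton Hessian), whose eigenvalues span many orders of magnitude. Below is a minimal sketch, assuming a toy two-exponential model with nearly degenerate rates; the model, rates, and time points are illustrative, not taken from the cited papers.

```python
import numpy as np

def model(t, k):
    """Toy observable: sum of two decaying exponentials (illustrative)."""
    return np.exp(-k[0] * t) + np.exp(-k[1] * t)

def jacobian(t, k, eps=1e-6):
    """Central finite-difference sensitivities dy/dk at each time point."""
    J = np.empty((t.size, k.size))
    for j in range(k.size):
        dk = np.zeros_like(k)
        dk[j] = eps
        J[:, j] = (model(t, k + dk) - model(t, k - dk)) / (2 * eps)
    return J

t = np.linspace(0.1, 5.0, 50)
k = np.array([1.0, 1.1])        # nearly degenerate rates -> sloppiness
J = jacobian(t, k)
H = J.T @ J                     # Gauss-Newton Hessian ~ Fisher information
eigvals = np.linalg.eigvalsh(H)
print("eigenvalue spread (orders of magnitude):",
      np.log10(eigvals.max() / eigvals.min()))
```

    In this toy case the stiff eigendirection is roughly k1 + k2 while k1 - k2 is sloppy; additional, complementary experiments are what shrink the sloppy directions.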

    Sloppy Models, Parameter Uncertainty, and the Role of Experimental Design

    Computational models are increasingly used to understand and predict complex biological phenomena. These models contain many unknown parameters, at least some of which are difficult to measure directly and are instead estimated by fitting to time-course data. Previous work has suggested that even with precise data sets, many parameters are unknowable from trajectory measurements. We examined this question in the context of a pathway model of epidermal growth factor (EGF) and nerve growth factor (NGF) signaling. Computationally, we examined a palette of experimental perturbations that included different doses of EGF and NGF as well as single and multiple gene knockdowns and overexpressions. While no single experiment could accurately estimate all of the parameters, experimental design methodology identified a set of five complementary experiments that could. These results suggest optimism for the prospects of calibrating even large models, indicate that the success of parameter estimation is intimately linked to the experimental perturbations used, and show that experimental design methodology is important for parameter fitting of biological models and likely for the accuracy that can be expected from them.

    Funding: National Institutes of Health (U.S.) (U54 CA112967); MIT-Portugal Program; Singapore-MIT Alliance for Research and Technology.
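
    One common way to operationalize choosing "complementary experiments" is greedy D-optimal selection: from a palette of candidates, repeatedly add the experiment that most increases the determinant (information volume) of the pooled Fisher information. The sketch below uses random sensitivity matrices as stand-ins for real model sensitivities; the palette size, parameter count, and scoring are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_candidates = 8, 20

# Each candidate experiment contributes a (measurements x parameters)
# sensitivity matrix J; its information matrix is J.T @ J.
infos = []
for _ in range(n_candidates):
    J = rng.normal(size=(5, n_params))
    infos.append(J.T @ J)

chosen = []
remaining = set(range(n_candidates))
total = 1e-9 * np.eye(n_params)      # small ridge keeps the log-det finite
for _ in range(5):                   # pick five complementary experiments
    scores = {i: np.linalg.slogdet(total + infos[i])[1] for i in remaining}
    best = max(scores, key=scores.get)
    chosen.append(best)
    remaining.remove(best)
    total = total + infos[best]

print("selected experiments:", chosen)
print("log-det of pooled information:", round(np.linalg.slogdet(total)[1], 2))
```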

    Federated learning enables big data for rare cancer boundary detection.

    Although machine learning (ML) has shown promise across disciplines, out-of-sample generalizability remains a concern. This is currently addressed by sharing multi-site data, but such centralization is challenging or infeasible to scale due to various limitations. Federated ML (FL) provides an alternative paradigm for accurate and generalizable ML by sharing only numerical model updates. Here we present the largest FL study to date, involving data from 71 sites across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, reporting the largest such dataset in the literature (n = 6,314). We demonstrate a 33% delineation improvement for the surgically targetable tumor, and 23% for the complete tumor extent, over a publicly trained model. We anticipate our study to: 1) enable more healthcare studies informed by large and diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further analyses for glioblastoma by releasing our consensus model, and 3) demonstrate the effectiveness of FL at such scale and task complexity as a paradigm shift for multi-site collaborations, alleviating the need for data sharing.
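
    The "sharing only numerical model updates" mechanism is, at its core, federated averaging: each site trains locally and a server combines the resulting weights, weighted by site size. A minimal sketch follows, substituting linear-regression training for the real segmentation network; the site sizes, learning rate, and round counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])

def make_site(n):
    """Synthetic local dataset that never leaves the site."""
    X = rng.normal(size=(n, 3))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

sites = [make_site(n) for n in (50, 200, 80)]   # unequal site sizes
w_global = np.zeros(3)

for _ in range(20):                             # communication rounds
    updates, sizes = [], []
    for X, y in sites:
        w = w_global.copy()
        for _ in range(5):                      # local gradient steps
            grad = X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        updates.append(w)                       # only weights leave the site
        sizes.append(len(y))
    w_global = np.average(updates, axis=0, weights=sizes)

print("federated estimate:", np.round(w_global, 3))
```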

    Author Correction: Federated learning enables big data for rare cancer boundary detection.

    Nature Communications 14 (2023). DOI: 10.1038/s41467-023-36188-7

    Mechanistic PK/PD modeling to address early-stage biotherapeutic dosing feasibility questions

    Early assessment of dosing requirements should be an integral part of developability assessments for a discovery program. If a very high dose is required to achieve the desired pharmacological effect, it may not be clinically feasible or commercially desirable to develop the biotherapeutic for the selected target unless extra measures are taken to develop a high-concentration formulation or to maximize yield during manufacturing. A quantitative understanding of how target selection, biotherapeutic format, and drug properties affect the dosing requirements needed to achieve efficacy can inform many early decisions. Early prediction of dosing requirements is possible for biotherapeutics, as opposed to small molecules, because of the strong influence of target biology on pharmacokinetics and dosing. Mechanistic pharmacokinetic/pharmacodynamic (PK/PD) models leverage knowledge and competitor data available at an early stage of drug development, including the biophysics of the target(s) and disease physiology, to rationally inform drug design criteria. Here we review how mechanistic PK/PD modeling can be and has been applied to guide early drug development decisions.
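
    As a concrete illustration of this style of reasoning, the sketch below integrates a one-compartment drug with target-mediated binding and asks how long an assumed dose keeps target occupancy above 90%; all rate constants, the dose, and the occupancy threshold are illustrative assumptions, not values from the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_el = 0.1                 # 1/day linear drug elimination (assumed)
k_syn, k_deg = 1.0, 0.5    # nM/day target synthesis; 1/day turnover (assumed)
k_on, k_off = 10.0, 1.0    # 1/(nM*day) association; 1/day dissociation (assumed)

def rhs(t, y):
    """Free drug, free target, and drug-target complex dynamics."""
    drug, target, cmplx = y
    bind = k_on * drug * target - k_off * cmplx
    return [-k_el * drug - bind,
            k_syn - k_deg * target - bind,
            bind - k_deg * cmplx]

dose = 10.0                                   # nM initial dose (illustrative)
y0 = [dose, k_syn / k_deg, 0.0]               # target starts at baseline
sol = solve_ivp(rhs, (0.0, 28.0), y0, method="LSODA", dense_output=True)

t = np.linspace(0.0, 28.0, 400)
drug, target, cmplx = sol.sol(t)
occupancy = cmplx / (target + cmplx)
print(f"days with >90% target occupancy: {(occupancy > 0.9).mean() * 28:.1f}")
```

    If the computed coverage falls short of the dosing interval, the dose, affinity, or format would need to change, which is exactly the feasibility question the review describes.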

    Stimulus design for model selection and validation in cell signaling.

    Mechanism-based chemical kinetic models are increasingly being used to describe biological signaling. Such models serve to encapsulate current understanding of pathways and to enable insight into complex biological processes. One challenge in model development is that, with limited experimental data, multiple models can be consistent with known mechanisms and existing data. Here, we address the problem of model ambiguity by providing a method for designing dynamic stimuli that, in stimulus-response experiments, distinguish among parameterized models with different topologies, i.e., reaction mechanisms, in which only some of the species can be measured. We develop the approach by presenting two formulations of a model-based controller that is used to design the dynamic stimulus. In both formulations, an input signal is designed for each candidate model and parameterization so as to drive the model outputs through a target trajectory. The quality of a model is then assessed by the ability of the corresponding controller, informed by that model, to drive the experimental system. We evaluated our method on models of antibody-ligand binding, mitogen-activated protein kinase (MAPK) phosphorylation and dephosphorylation, and larger models of the epidermal growth factor receptor (EGFR) pathway. For each of these systems, the controller informed by the correct model is the most successful at designing a stimulus to produce the desired behavior. Using these stimuli, we were able to distinguish between models with subtle mechanistic differences or where inputs and outputs were multiple reactions removed from the model differences. An advantage of this method of model discrimination is that it does not require novel reagents or altered measurement techniques; the only change to the experiment is the time course of stimulation. Taken together, these results provide a strong basis for using designed input stimuli as a tool for the development of cell signaling models.
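
    The controller idea can be sketched simply: given a candidate model, numerically search for an input time course that drives the model output through a target trajectory; candidate models are then compared by how well their designed stimuli track the real system. The one-state model, ramp target, and input bounds below are illustrative assumptions, not the paper's signaling models or controller formulations.

```python
import numpy as np
from scipy.optimize import minimize

dt, n = 0.1, 50
target = np.linspace(0.0, 1.0, n)     # desired output trajectory (assumed)

def simulate(u, k_act=1.0, k_deact=2.0):
    """Euler integration of x' = k_act*u - k_deact*x for a candidate model."""
    x, traj = 0.0, np.empty(n)
    for i in range(n):
        x += dt * (k_act * u[i] - k_deact * x)
        traj[i] = x
    return traj

def tracking_cost(u):
    """Squared error between model output and the target trajectory."""
    return np.sum((simulate(u) - target) ** 2)

res = minimize(tracking_cost, np.ones(n),
               bounds=[(0.0, 5.0)] * n, method="L-BFGS-B")
print("residual tracking error:", round(res.fun, 4))
print("designed stimulus (first 5 inputs):", np.round(res.x[:5], 2))
# In the paper's scheme, this designed stimulus is applied to the real system;
# the candidate model whose controller tracks best is favored.
```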

    Systematic in silico analysis of clinically tested drugs for reducing amyloid‐beta plaque accumulation in Alzheimer's disease

    INTRODUCTION: Despite strong evidence linking amyloid beta (Aβ) to Alzheimer's disease, most clinical trials have shown no clinical efficacy for reasons that remain unclear. To understand why, we developed a quantitative systems pharmacology (QSP) model for seven therapeutics: aducanumab, crenezumab, solanezumab, bapineuzumab, elenbecestat, verubecestat, and semagacestat. METHODS: Ordinary differential equations were used to model the production, transport, and aggregation of Aβ; the pharmacology of the drugs; and their impact on plaque. RESULTS: The calibrated model predicts that endogenous plaque turnover is slow, with an estimated half-life of 2.75 years, which is likely why beta-secretase inhibitors have a smaller effect on plaque reduction. Of the mechanisms tested, the model predicts that binding to plaque and inducing antibody-dependent cellular phagocytosis is the best approach for plaque reduction. DISCUSSION: A QSP model can provide novel insights into clinical results. Our model explains the results of clinical trials and provides guidance for future therapeutic development.
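
    The turnover argument can be checked with back-of-the-envelope arithmetic: a 2.75-year plaque half-life implies a first-order clearance rate k = ln 2 / 2.75 ≈ 0.25 per year, so even a complete block of new production (the best case for a beta-secretase inhibitor) removes only a modest fraction of existing plaque over a typical trial. The 18-month trial duration below is an illustrative assumption.

```python
import numpy as np

half_life = 2.75                       # years, from the calibrated model
k = np.log(2) / half_life              # first-order turnover rate, ~0.25/yr
trial = 1.5                            # years of treatment (assumed)

# Existing plaque decays as exp(-k*t) once production is fully blocked.
fraction_cleared = 1 - np.exp(-k * trial)
print(f"k = {k:.3f}/yr; plaque removed in {trial} yr "
      f"with production fully blocked: {fraction_cleared:.0%}")
```

    This yields roughly a 31% plaque reduction even under a complete production blockade, consistent with the model's conclusion that mechanisms actively clearing plaque outperform production inhibitors.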