
    Blinded predictions of standard binding free energies: lessons learned from the SAMPL6 challenge

    In the context of the SAMPL6 challenges, a series of blinded predictions of standard binding free energies was made with the SOMD software for a dataset of 27 host-guest systems featuring two octa-acid hosts (OA and TEMOA) and a cucurbituril ring (CB8) host. Three different models were used: ModelA computes the free energy of binding based on a double annihilation technique; ModelB additionally takes into account long-range dispersion and standard-state corrections; ModelC additionally introduces an empirical correction term derived from a regression analysis of SAMPL5 predictions previously made with SOMD. The performance of each model was evaluated with two different setups: buffer explicitly matches the ionic strength of the binding assays, whereas no-buffer merely neutralizes the host-guest net charge with counter-ions. ModelC/no-buffer shows the lowest mean unsigned error for the overall dataset (MUE 1.29 < 1.39 < 1.50 kcal mol⁻¹, 95% CI), while explicit modelling of the buffer significantly improves results for the CB8 host only. Correlation with experimental data ranges from excellent for the host TEMOA (R² 0.91 < 0.94 < 0.96) to poor for CB8 (R² 0.04 < 0.12 < 0.23). Further investigations indicate a pronounced dependence of the binding free energies on the modelled ionic strength, and variable reproducibility of the binding free energies between different simulation packages.
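
    As a rough illustration of how the three models described above relate to one another, the sketch below assembles ModelB from ModelA plus the two correction terms, and ModelC from ModelB plus an empirical linear correction. This is not the SOMD implementation; the sign convention and the default slope/intercept are illustrative placeholders only.

        # Minimal sketch of the ModelA/B/C relationship described in the
        # abstract above; NOT the SOMD implementation. Sign convention and
        # coefficients are placeholders.

        def model_a(dG_annihilate_solvent, dG_annihilate_complex):
            """ModelA: binding free energy from a double annihilation cycle."""
            return dG_annihilate_solvent - dG_annihilate_complex

        def model_b(dG_model_a, dG_longrange_dispersion, dG_standard_state):
            """ModelB: ModelA plus long-range dispersion and standard-state corrections."""
            return dG_model_a + dG_longrange_dispersion + dG_standard_state

        def model_c(dG_model_b, slope=1.0, intercept=0.0):
            """ModelC: ModelB with an empirical linear correction; the defaults
            are placeholders, not the SAMPL5 regression coefficients."""
            return slope * dG_model_b + intercept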

    Evaluating parameterization protocols for hydration free energy calculations with the AMOEBA polarizable force field

    Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed point charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard, but showed substantially worse results than those using the fixed point charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 dataset, we evaluate the cumulative effects of a series of incremental improvements in parameterization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.
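
    For readers unfamiliar with the error metrics mentioned above, the sketch below shows one common way to compute mean signed and unsigned errors and to check, via a paired bootstrap over compounds, whether two force fields differ significantly. The function names and inputs are illustrative and are not taken from the SAMPL4 study.

        import numpy as np

        # Hedged sketch: signed/unsigned error metrics and a paired bootstrap
        # on their difference; all inputs are hypothetical arrays of predicted
        # and experimental hydration free energies.

        def mean_signed_error(pred, expt):
            return np.mean(pred - expt)

        def mean_unsigned_error(pred, expt):
            return np.mean(np.abs(pred - expt))

        def paired_bootstrap_diff(pred_a, pred_b, expt, metric,
                                  n_boot=10000, seed=0):
            """95% CI on metric(A) - metric(B) over resampled compounds; an
            interval containing zero is consistent with no statistically
            significant difference between the two force fields."""
            rng = np.random.default_rng(seed)
            n = len(expt)
            diffs = np.empty(n_boot)
            for i in range(n_boot):
                idx = rng.integers(0, n, n)
                diffs[i] = (metric(pred_a[idx], expt[idx])
                            - metric(pred_b[idx], expt[idx]))
            return np.percentile(diffs, [2.5, 97.5])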

    Best practices for constructing, preparing, and evaluating protein-ligand binding affinity benchmarks

    Free energy calculations are rapidly becoming indispensable in structure-enabled drug discovery programs. As new methods, force fields, and implementations are developed, assessing their expected accuracy on real-world systems (benchmarking) becomes critical to provide users with an assessment of the accuracy expected when these methods are applied within their domain of applicability, and developers with a way to assess the expected impact of new methodologies. These assessments require construction of a benchmark: a set of well-prepared, high-quality systems with corresponding experimental measurements designed to ensure the resulting calculations provide a realistic assessment of expected performance when these methods are deployed within their domains of applicability. To date, the community has not yet adopted a common standardized benchmark, and existing benchmark reports suffer from a myriad of issues, including poor data quality, limited statistical power, and statistically deficient analyses, all of which can conspire to produce benchmarks that are poorly predictive of real-world performance. Here, we address these issues by presenting guidelines for (1) curating experimental data to develop meaningful benchmark sets, (2) preparing benchmark inputs according to best practices to facilitate widespread adoption, and (3) analyzing the resulting predictions to enable statistically meaningful comparisons among methods and force fields.
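
    As one concrete instance of point (3), a common practice is to report every accuracy metric with a bootstrap confidence interval over the benchmark compounds rather than a bare point estimate. The snippet below is a generic sketch of that idea under that assumption, not code from the paper.

        import numpy as np

        # Hedged sketch of percentile-bootstrap confidence intervals for
        # benchmark accuracy metrics; data and metric choices are illustrative.

        def bootstrap_ci(pred, expt, metric, n_boot=10000, alpha=0.05, seed=0):
            """Percentile bootstrap CI for any metric(pred, expt), resampling
            compounds with replacement."""
            rng = np.random.default_rng(seed)
            n = len(expt)
            samples = np.empty(n_boot)
            for i in range(n_boot):
                idx = rng.integers(0, n, n)
                samples[i] = metric(pred[idx], expt[idx])
            return np.percentile(samples, [100 * alpha / 2, 100 * (1 - alpha / 2)])

        # Example metrics one might report with CIs:
        mue = lambda p, e: np.mean(np.abs(p - e))
        r_squared = lambda p, e: np.corrcoef(p, e)[0, 1] ** 2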