
    A statistical framework to evaluate virtual screening

    Abstract

    Background: The receiver operating characteristic (ROC) curve is widely used to evaluate virtual screening (VS) studies. However, it fails to address the "early recognition" problem specific to VS. Although many other metrics that emphasize "early recognition", such as RIE, BEDROC, and pROC, have been proposed, there are no rigorous statistical guidelines for determining their thresholds or performing significance tests, and no comparisons have been made between these metrics under a common statistical framework to better understand their performance.

    Results: We propose a statistical framework for evaluating VS studies in which the threshold for deciding whether a ranking method is better than random ranking is derived by bootstrap simulation, and two ranking methods are compared by a permutation test. We found that different metrics emphasize "early recognition" to different degrees. BEDROC and RIE are statistically equivalent metrics, and our newly proposed metric SLR is superior to pROC. Through extensive simulations, we observed a "seesaw effect": overemphasizing early recognition reduces the statistical power of a metric to detect true early recognition.

    Conclusion: The statistical framework we developed and tested is applicable to any other metric as well, even when its exact distribution is unknown. Under this framework, a threshold can easily be selected according to a pre-specified type I error rate, and statistical comparison of two ranking methods becomes possible. The theoretical null distribution of the SLR metric is available, so the SLR threshold can be determined exactly without resorting to bootstrap simulations, which makes it easy to use in practical virtual screening studies.
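    The bootstrap-threshold and permutation-test procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: plain ROC AUC stands in for SLR/BEDROC/RIE, and all function names and sample sizes are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """ROC AUC: fraction of (active, decoy) pairs ranked correctly, ties count half."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

def random_ranking_threshold(labels, metric, alpha=0.05, n_boot=1000):
    """Threshold above which a metric beats random ranking at type I error rate alpha."""
    null = [metric(rng.random(labels.size), labels) for _ in range(n_boot)]
    return np.quantile(null, 1.0 - alpha)

def permutation_test(scores_a, scores_b, labels, metric, n_perm=500):
    """Two-sided paired permutation test for a difference in metric between two methods."""
    obs = metric(scores_a, labels) - metric(scores_b, labels)
    diffs = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly swap the two methods' scores compound by compound.
        swap = rng.random(labels.size) < 0.5
        a = np.where(swap, scores_b, scores_a)
        b = np.where(swap, scores_a, scores_b)
        diffs[i] = metric(a, labels) - metric(b, labels)
    return np.mean(np.abs(diffs) >= abs(obs))
```

    Any metric function with the same `(scores, labels)` signature can be plugged in, which is the point of the framework: the threshold and the significance test do not require knowing the metric's exact distribution.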

    Determination of sin²θ_W^eff using jet charge measurements in hadronic Z decays

    The electroweak mixing angle is determined with high precision from measurements of the mean difference between forward and backward hemisphere charges in hadronic decays of the Z. A data sample of 2.5 million hadronic Z decays recorded over the period 1990 to 1994 in the ALEPH detector at LEP is used. The mean charge separation between event hemispheres containing the original quark and antiquark is measured for bb̄ and cc̄ events in subsamples selected by their long lifetimes or using fast D*'s. The corresponding average charge separation for light quarks is measured in an inclusive sample from the anticorrelation between charges of opposite hemispheres and agrees with predictions of hadronisation models with a precision of 2%. It is shown that differences between light quark charge separations and the measured average can be determined using hadronisation models, with systematic uncertainties constrained by measurements of inclusive production of kaons, protons and Λ's. The separations are used to measure the electroweak mixing angle precisely as sin²θ_W^eff = 0.2322 ± 0.0008 (exp. stat.) ± 0.0007 (exp. syst.) ± 0.0008 (sep.). The first two errors are due to purely experimental sources, whereas the third stems from uncertainties in the quark charge separations.

    Monte Carlo Simulation of Spin Models with Long-Range Interactions

    Abstract. An efficient Monte Carlo algorithm for the simulation of spin models with long-range interactions is discussed. Its central feature is that the number of operations required to flip a spin is independent of the number of interactions between this spin and the other spins in the system. In addition, critical slowing down is strongly suppressed. In order to illustrate the range of applicability of the algorithm, two specific examples are presented. First, some aspects of the Kosterlitz–Thouless transition in the one-dimensional Ising chain with inverse-square interactions are calculated. Secondly, the crossover from Ising-like to classical critical behavior in two-dimensional systems is studied for several different interaction profiles.
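    The central trick behind such O(1)-per-interaction cluster updates can be sketched as follows. This is a simplified illustration, not the published algorithm: for a 1D chain with 1/r² couplings, the candidate bond distance is drawn from a precomputed cumulative table of bond-activation probabilities, so the cost per spin no longer scales with the N−1 couplings it has. The chain length, temperature, and normalization choice are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024      # chain length (assumption for the example)
beta = 1.0    # inverse temperature (assumption)

# Swendsen-Wang-type bond-activation probabilities p(r) = 1 - exp(-2*beta*J(r))
# for inverse-square couplings J(r) = 1/r^2.
r = np.arange(1, N)
p = 1.0 - np.exp(-2.0 * beta / r**2)

# Normalized cumulative table over "which distance fires": one inverse-CDF
# lookup replaces an explicit test of every one of the N-1 couplings.
weights = p / p.sum()
cdf = np.cumsum(weights)

def draw_distance():
    """Draw a partner distance r in O(log N) via binary-search inverse-CDF lookup."""
    return int(np.searchsorted(cdf, rng.random())) + 1
```

    Short distances dominate the table (p(r) falls off roughly as 2β/r²), so most draws stay local while arbitrarily long bonds remain possible, which is exactly what a naive per-bond sweep spends O(N) per spin to achieve.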

    Holistic assessment of situated cooking interactions: preliminary results of an observational study

    This study presents the preliminary results of in situ observations of two cooking moments in each of 16 households. The aim of the study was to map the domestic cooking ecosystem from the user's perspective and to define which components of that environment influence the user's cooking experience. Preliminary results show that contextual components, and situations in particular, shape cooking experiences in the domestic kitchen. Four pairs of contrasting situational contexts were distinguished: cooking for oneself vs. cooking for guests, cooking on a weekday vs. during the weekend, cooking routine dishes vs. cooking dishes for the first time, and cooking alone vs. cooking together. Situational context influences the temporal, social, physical, information, and task contexts of the cooking activity, and these in turn influence interactions with objects (i.e., ingredients, kitchen utensils), kitchen technology and its interfaces, content, and other people present during the cooking activity. This study suggests that future kitchen technologies can match or enhance current practices only if designers and user researchers understand and define their situational context. It goes beyond the state of the art as the first study that aims to provide a holistic analysis of the current state of domestic cooking experiences using in-situ observations in order to inform the design of future technologies. Implications for design are discussed.

    Sequence-specific prediction of the efficiencies of adenine and cytosine base editors

    © 2020, The Author(s), under exclusive licence to Springer Nature America, Inc. Base editors, including adenine base editors (ABEs) and cytosine base editors (CBEs), are widely used to induce point mutations. However, determining whether a specific nucleotide in its genomic context can be edited requires time-consuming experiments. Furthermore, when the editable window contains multiple target nucleotides, various genotypic products can be generated. To develop computational tools to predict base-editing efficiency and outcome product frequencies, we first evaluated the efficiencies of an ABE and a CBE and the outcome product frequencies at 13,504 and 14,157 target sequences, respectively, in human cells. We found that there were only modest asymmetric correlations between the activities of the base editors and Cas9 at the same targets. Using deep-learning-based computational modeling, we built tools to predict the efficiencies and outcome frequencies of ABE- and CBE-directed editing at any target sequence, with Pearson correlations ranging from 0.50 to 0.95. These tools and results will facilitate modeling and therapeutic correction of genetic diseases by base editing.
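    Model quality in this setting is reported as the Pearson correlation between predicted and measured editing efficiencies. A minimal sketch of that evaluation, with made-up efficiency values standing in for real data:

```python
import numpy as np

# Hypothetical measured vs. model-predicted editing efficiencies (%) at 8 targets.
measured  = np.array([12.0, 45.5, 3.2, 60.1, 27.8, 55.0, 8.9, 33.3])
predicted = np.array([10.5, 40.2, 5.0, 58.7, 30.1, 50.9, 12.0, 35.6])

# Pearson r: covariance normalized by the product of standard deviations,
# read off the off-diagonal of the 2x2 correlation matrix.
r = np.corrcoef(measured, predicted)[0, 1]
```

    An r of 1.0 would mean predictions track measurements perfectly up to a linear rescaling; the paper's reported range of 0.50 to 0.95 varies with editor and target-sequence subset.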

    Molecular modeling and synthesis of ZINC02765569 derivatives as protein tyrosine phosphatase 1B inhibitors: lead optimization study

    This article describes the design, synthesis, and molecular modeling of ZINC02765569 derivatives as potent protein tyrosine phosphatase 1B (PTP1B) inhibitors; ZINC02765569 was previously reported as a vHTS hit by our laboratory. Ten compounds were synthesized, characterized by IR, mass, and NMR spectroscopy, and then screened in vitro for PTP1B inhibition and for glucose uptake in skeletal muscle L6 myotubes. The most potent compound, 3j, showed 66.4% in vitro PTP1B inhibition and a 39.6% increase in glucose uptake. Glide was used to study the nature of the interactions governing binding of the designed molecules to the active site of the PTP1B enzyme.