
    Evaluation of a Bayesian inference network for ligand-based virtual screening

    Background: Bayesian inference networks enable the computation of the probability that an event will occur. They have previously been used to rank textual documents in order of decreasing relevance to a user-defined query. Here, we modify the approach so that a Bayesian inference network can be used for chemical similarity searching, where a database is ranked in order of decreasing probability of bioactivity. Results: Bayesian inference networks were implemented using two different types of network and four different types of belief function. Experiments with the MDDR and WOMBAT databases show that a Bayesian inference network can provide effective ligand-based screening, especially when the active molecules being sought have a high degree of structural homogeneity; in such cases, the network substantially outperforms a conventional, Tanimoto-based similarity searching system. However, the network is much less effective when structurally heterogeneous sets of actives are being sought. Conclusion: A Bayesian inference network provides an interesting alternative to existing tools for ligand-based virtual screening.
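    The baseline referred to above is a conventional Tanimoto-based similarity search over binary fingerprints. As a rough illustration of that baseline only (not of the Bayesian inference network itself), the sketch below ranks a toy database by decreasing Tanimoto similarity to a reference active; the fingerprints and molecule names are hypothetical placeholders, not MDDR or WOMBAT data.

```python
# Minimal sketch of Tanimoto-based similarity searching over binary fingerprints,
# here represented as sets of on-bit indices. All fingerprints are toy values.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two binary fingerprints given as sets of on-bits."""
    common = len(fp_a & fp_b)
    union = len(fp_a) + len(fp_b) - common
    return common / union if union else 0.0

def rank_by_similarity(query, database):
    """Rank database molecules by decreasing similarity to the query active."""
    return sorted(database.items(),
                  key=lambda item: tanimoto(query, item[1]),
                  reverse=True)

query_fp = {1, 4, 7, 9, 12}                    # hypothetical reference active
database = {"mol_A": {1, 4, 7, 9, 13},
            "mol_B": {2, 5, 8},
            "mol_C": {1, 4, 9, 12, 20}}
for name, fp in rank_by_similarity(query_fp, database):
    print(name, round(tanimoto(query_fp, fp), 3))
```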

    A quantitative approach for measuring the reservoir of latent HIV-1 proviruses.

    A stable latent reservoir for HIV-1 in resting CD4+ T cells is the principal barrier to a cure [1-3]. Curative strategies that target the reservoir are being tested [4,5] and require accurate, scalable reservoir assays. The reservoir was defined with quantitative viral outgrowth assays for cells that release infectious virus after one round of T cell activation [1]. However, these quantitative outgrowth assays and newer assays for cells that produce viral RNA after activation [6] may underestimate the reservoir size, because one round of activation does not induce all proviruses [7]. Many studies rely on simple assays based on polymerase chain reaction to detect proviral DNA regardless of transcriptional status, but the clinical relevance of these assays is unclear, as the vast majority of proviruses are defective [7-9]. Here we describe a more accurate method of measuring the HIV-1 reservoir that separately quantifies intact and defective proviruses. We show that the dynamics of cells that carry intact and defective proviruses are different in vitro and in vivo. These findings have implications for targeting the intact proviruses that are a barrier to curing HIV infection.

    Evaluation of machine-learning methods for ligand-based virtual screening

    Machine-learning methods can be used for virtual screening by analysing the structural characteristics of molecules of known (in)activity; here we discuss the use of kernel discrimination and naive Bayesian classifier (NBC) methods for this purpose. We report a kernel method that allows the processing of molecules represented by binary, integer and real-valued descriptors, and show that its screening performance differs little from that of a previously described kernel developed specifically for the analysis of binary fingerprint representations of molecular structure. We then evaluate the performance of an NBC when the training set contains only a few active molecules. In such cases, a simpler approach based on group fusion appears to provide superior screening performance, especially when structurally heterogeneous datasets are to be processed.
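    The group-fusion alternative mentioned in the conclusion can be sketched as scoring each database molecule by its maximum similarity to any of the few known actives (MAX fusion). The outline below illustrates that idea under the assumption of Tanimoto similarity on binary fingerprints; the fingerprints and names are hypothetical, and this is not the kernel or NBC implementation from the paper.

```python
# Illustrative sketch of group fusion (MAX fusion): each database molecule is
# scored by its highest Tanimoto similarity to any known active. Toy data only.

def tanimoto(a, b):
    common = len(a & b)
    union = len(a) + len(b) - common
    return common / union if union else 0.0

def group_fusion_score(fp, reference_actives):
    """Score a molecule by its best match among the few known actives."""
    return max(tanimoto(fp, ref) for ref in reference_actives)

references = [{1, 3, 5, 8}, {2, 3, 5, 9}]                         # small set of known actives
database = {"d1": {1, 3, 5, 7}, "d2": {4, 6, 10}, "d3": {2, 3, 9}}
ranking = sorted(database,
                 key=lambda name: group_fusion_score(database[name], references),
                 reverse=True)
print(ranking)
```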

    A relocatable ocean model in support of environmental emergencies

    During the Costa Concordia emergency, regional, subregional, and relocatable ocean models were used together with the oil spill model MEDSLIK-II to provide ocean current forecasts, possible oil spill scenarios, and drifter trajectory simulations. The model results, together with an evaluation of their performance, are presented in this paper. In particular, this work focuses on the implementation of the Interactive Relocatable Nested Ocean Model (IRENOM), based on the Harvard Ocean Prediction System (HOPS), for the Costa Concordia emergency, and on its validation using drifters released in the area of the accident. It is shown that, thanks to the ease and speed with which its configuration can be improved, the IRENOM results are more accurate than those obtained with regional or subregional model products. The model topography, the initialization procedures, and the horizontal resolution are the key model settings to be configured. Furthermore, the IRENOM currents and the MEDSLIK-II simulated trajectories proved sensitive to the spatial resolution of the meteorological fields used, with higher-resolution wind forcing providing higher prediction skill.

    Construction and Random Generation of Hypergraphs with Prescribed Degree and Dimension Sequences

    We propose algorithms for the construction and random generation of hypergraphs without loops and with prescribed degree and dimension sequences. The objective is to provide a starting point for, as well as an alternative to, Markov chain Monte Carlo approaches. Our algorithms transpose to hypergraphs properties and algorithms devised for zero-one matrices with prescribed row and column sums. The construction algorithm extends the applicability of Markov chain Monte Carlo approaches when the initial hypergraph is not provided. The random generation algorithm allows the development of a self-normalised importance sampling estimator for hypergraph properties such as the average clustering coefficient. We prove the correctness of the proposed algorithms. We also prove that the random generation algorithm generates any hypergraph following the prescribed degree and dimension sequences with non-zero probability. We empirically and comparatively evaluate the effectiveness and efficiency of the random generation algorithm. Experiments show that the random generation algorithm provides stable and accurate estimates of the average clustering coefficient, and also achieves a better effective sample size than the Markov chain Monte Carlo approaches.
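    The self-normalised importance sampling estimator mentioned above can be illustrated generically: draws from a proposal are reweighted by (unnormalised) target-to-proposal ratios, and the weights are normalised before averaging the property of interest. The sketch below is a generic SNIS estimator with a toy proposal and property; the paper's hypergraph generator and its actual importance weights are not reproduced here.

```python
import math
import random

# Generic self-normalised importance sampling (SNIS) sketch: estimate E_p[f]
# from draws made under a proposal q, using log-weights log p(x) - log q(x)
# known only up to a constant. The toy proposal and property below are
# placeholders, not the hypergraph generator from the paper.

def snis_estimate(samples, log_weight, f):
    logs = [log_weight(x) for x in samples]
    shift = max(logs)                                   # stabilise the exponentials
    weights = [math.exp(l - shift) for l in logs]
    total = sum(weights)
    return sum(w * f(x) for w, x in zip(weights, samples)) / total

random.seed(0)
draws = [random.randint(1, 10) for _ in range(10_000)]  # proposal: uniform on 1..10
log_w = lambda x: -0.2 * x                               # assumed unnormalised log target/proposal ratio
prop = lambda x: 1.0 if x % 2 == 0 else 0.0              # property whose expectation we estimate
print(round(snis_estimate(draws, log_w, prop), 3))
```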

    Development of a healthy biscuit: an alternative approach to biscuit manufacture

    OBJECTIVE: Obesity (BMI > 30) and related health problems, including coronary heart disease (CHD), are without question a public health concern. The purpose of this study was to modify a traditional biscuit by the addition of vitamin B6, vitamin B12, folic acid, vitamin C and prebiotic fibre, while reducing salt and sugar. DESIGN: Development and commercial manufacture of the functional biscuit was carried out in collaboration with a well-known and respected biscuit manufacturer of international reputation. The raw materials traditionally regarded as essential in biscuit manufacture, i.e. sugar and fat, were targeted for removal or reduction. In addition, salt was completely removed from the recipe. PARTICIPANTS: University students of both sexes (n = 25) agreed to act as subjects for the study. Ethical approval for the study was granted by the University ethics committee. The test was conducted as a single-blind crossover design, and the modified and traditional biscuits were presented to the subjects under the same experimental conditions in random order. RESULTS: No difference was observed between the original and the modified product for taste and consistency (P > 0.05). The modified biscuit was acceptable to the consumer in terms of eating quality, flavour and colour. Commercial acceptability was therefore established. CONCLUSION: This study has confirmed that traditional high-fat, high-sugar biscuits, which most consumers do not associate with healthy diets, can be modified to produce a healthy alternative that can be manufactured under strict commercial conditions.

    Comparing multiple competing interventions in the absence of randomized trials using clinical risk-benefit analysis

    Background: To demonstrate the use of risk-benefit analysis for comparing multiple competing interventions in the absence of randomized trials, we applied this approach to the evaluation of five anticoagulants to prevent thrombosis in patients undergoing orthopedic surgery. Methods: Using a cost-effectiveness approach from a clinical perspective (i.e. risk-benefit analysis), we compared thromboprophylaxis with warfarin, low molecular weight heparin, unfractionated heparin, fondaparinux or ximelagatran in patients undergoing major orthopedic surgery, with sub-analyses according to surgery type. Proportions and variances of the events defining risk (major bleeding) and benefit (thrombosis averted) were obtained through a meta-analysis and used to define beta distributions. Monte Carlo simulations were conducted and used to calculate incremental risks, benefits, and risk-benefit ratios. Finally, net clinical benefit was calculated for all replications across a range of risk-benefit acceptability thresholds, with a reference range obtained by estimating the case-fatality rate ratio of thrombosis to bleeding. Results: The analysis showed that, compared to placebo, ximelagatran was superior to the other options, but the final results were influenced by the type of surgery, since ximelagatran was superior in total knee replacement but not in total hip replacement. Conclusions: Using simulation and economic techniques, we demonstrate a method for comparing multiple competing interventions in the absence of randomized trials with multiple arms by determining the option with the best risk-benefit profile. It can be helpful in clinical decision making since it incorporates risk, benefit, and personal risk acceptance.
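    As a rough illustration of the simulation step described in the Methods (not the paper's data or model), the sketch below draws event probabilities for two hypothetical interventions from beta distributions, computes the thromboses averted and extra major bleeds per replication, and evaluates one common formulation of net clinical benefit at an assumed acceptability weight. All counts, the weight, and the net-clinical-benefit formula are assumptions made for illustration.

```python
import random

# Illustrative Monte Carlo sketch: event probabilities are drawn from beta
# distributions parameterised by event and non-event counts, and a net
# clinical benefit is computed per replication. All numbers are placeholders,
# not values from the study, and the NCB formulation is an assumption.

def draw_probabilities(events, non_events, n):
    return [random.betavariate(events, non_events) for _ in range(n)]

random.seed(1)
n = 10_000
# Hypothetical intervention A vs comparator B.
thrombosis_a = draw_probabilities(30, 970, n)
thrombosis_b = draw_probabilities(45, 955, n)
bleeding_a = draw_probabilities(20, 980, n)
bleeding_b = draw_probabilities(12, 988, n)

weight = 2.0  # assumed acceptability threshold: one extra bleed offsets this many thromboses averted
net_clinical_benefit = [
    (tb - ta) - weight * (ba - bb)   # thromboses averted minus penalised extra bleeds
    for ta, tb, ba, bb in zip(thrombosis_a, thrombosis_b, bleeding_a, bleeding_b)
]
print(sum(ncb > 0 for ncb in net_clinical_benefit) / n)  # fraction of replications favouring A
```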

    11th German Conference on Chemoinformatics (GCC 2015): Fulda, Germany, 8-10 November 2015.


    Spatial chemical distance based on atomic property fields

    Similarity of compound chemical structures often leads to close pharmacological profiles, including binding to the same protein targets. The opposite, however, is not always true, as distinct chemical scaffolds can exhibit similar pharmacology as well. Therefore, relying on chemical similarity to known binders when searching for novel chemicals targeting the same protein artificially narrows down the results and makes lead hopping impossible. In this study we attempt to design a compound similarity/distance measure that better captures the structural aspects of their pharmacology and molecular interactions. The measure is based on our recently published method for compound spatial alignment with atomic property fields as a generalized 3D pharmacophoric potential. Using Partial Least Squares regression, we optimized the contributions of the different atomic properties so as to better discriminate compound pairs with the same pharmacology from those with different pharmacology. The proposed similarity measure was then tested for its ability to discriminate pharmacologically similar pairs from decoys on a large, diverse dataset of 115 protein–ligand complexes. Compared to the 2D Tanimoto and Shape Tanimoto approaches, the new measure improved the area under the receiver operating characteristic curve in 66% and 58% of domains, respectively. The improvement was particularly large for the previously problematic cases (weak performance of the 2D Tanimoto and Shape Tanimoto measures) with original AUC values below 0.8: for these cases we obtained improvement in 86% of domains compared to the 2D Tanimoto measure and 85% compared to the Shape Tanimoto measure. The proposed spatial chemical distance measure can be used in virtual ligand screening.
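    The ROC-based evaluation described above reduces to asking how often a "same pharmacology" pair scores higher than a decoy pair. The sketch below computes an AUC from two lists of hypothetical similarity scores via the rank (Mann-Whitney) formulation; the scores are toy placeholders, not atomic-property-field distances from the study.

```python
# Illustrative ROC AUC via the Mann-Whitney formulation: the probability that a
# randomly chosen positive (same-pharmacology) pair scores above a randomly
# chosen decoy pair, with ties counted as half. Scores are hypothetical.

def roc_auc(positive_scores, decoy_scores):
    wins = 0.0
    for p in positive_scores:
        for d in decoy_scores:
            if p > d:
                wins += 1.0
            elif p == d:
                wins += 0.5
    return wins / (len(positive_scores) * len(decoy_scores))

same_pharmacology = [0.91, 0.82, 0.77, 0.64]   # similarity scores for true pairs (toy values)
decoys = [0.71, 0.55, 0.48, 0.30]              # similarity scores for decoy pairs (toy values)
print(roc_auc(same_pharmacology, decoys))
```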