
    Simulating Soil Organic Matter Transformations with the New Implementation of the Daisy Model

    Daisy is a well-tested, deterministic, dynamic soil-plant-atmosphere model capable of simulating the water balance, the nitrogen balance and nitrogen losses, the development of soil organic matter, and crop growth and production in crop rotations under alternative management strategies. It was originally developed as a system of single models, each describing one of the processes involved, but it has recently been restructured into a framework in which several different models of each process can be implemented. Thus, for example, a number of different models for simulating soil water dynamics can be chosen, depending on the purpose of the simulation and the availability of data for parameterisation. The sub-model simulating soil organic matter is still a fixed component in the Daisy terminology, which means that currently only one model can be used to simulate soil organic matter transformations. However, this sub-model can be modified considerably, and some examples are given.
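
    The framework idea described in this abstract, interchangeable process sub-models behind a common interface, can be pictured with a minimal sketch. All class and method names below are hypothetical illustrations and are not part of Daisy itself.

        # Minimal sketch of a pluggable process-model framework, loosely inspired by
        # the Daisy architecture described above. All names are hypothetical.
        from abc import ABC, abstractmethod


        class SoilWaterModel(ABC):
            """Common interface that any soil-water sub-model must implement."""

            @abstractmethod
            def step(self, state: dict, dt_hours: float) -> dict:
                """Advance the water balance by one time step and return the new state."""


        class SimpleBucketModel(SoilWaterModel):
            """A trivial bucket model: drainage occurs above field capacity."""

            def __init__(self, field_capacity_mm: float = 150.0):
                self.field_capacity_mm = field_capacity_mm

            def step(self, state, dt_hours):
                water = state["soil_water_mm"] + state.get("rain_mm", 0.0)
                drainage = max(0.0, water - self.field_capacity_mm)
                return {**state, "soil_water_mm": water - drainage, "drainage_mm": drainage}


        class Simulation:
            """The framework: process sub-models are chosen at configuration time."""

            def __init__(self, water_model: SoilWaterModel):
                self.water_model = water_model

            def run(self, state, steps, dt_hours=24.0):
                for _ in range(steps):
                    state = self.water_model.step(state, dt_hours)
                return state


        sim = Simulation(water_model=SimpleBucketModel())
        print(sim.run({"soil_water_mm": 140.0, "rain_mm": 20.0}, steps=1))

    A different water-dynamics model would simply be another SoilWaterModel subclass passed to the simulation at configuration time, which is the kind of interchangeability the abstract describes.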

    SQUIRRELnovo: de novo design of a PPARalpha agonist by bioisosteric replacement

    Shape complementarity is a prerequisite for molecular recognition. In our 3D ligand-based virtual screening approach, called SQUIRREL, we combine shape-based rigid-body alignment with fuzzy pharmacophore scoring. Retrospective validation studies on the family of peroxisome proliferator-activated receptors (PPARs) demonstrate the superiority of methods that combine both shape and pharmacophore information. We demonstrate the real-life applicability of SQUIRREL in a prospective virtual screening study, in which a potent PPARalpha agonist with an EC50 of 44 nM and 100-fold selectivity against PPARgamma was identified.
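
    The combination of shape and pharmacophore information mentioned above can be pictured as a weighted scoring scheme. The sketch below is only a hypothetical illustration of that idea, not the actual SQUIRREL scoring function, and all names and values are made up.

        # Hypothetical illustration of combining a shape-overlap score with a fuzzy
        # pharmacophore score into one ranking value; not the SQUIRREL function itself.
        def combined_score(shape_overlap: float, pharmacophore_match: float,
                           w_shape: float = 0.5) -> float:
            """Both inputs are assumed to be normalised to the range [0, 1]."""
            return w_shape * shape_overlap + (1.0 - w_shape) * pharmacophore_match


        # Rank hypothetical screening candidates by the combined score.
        candidates = {"mol_A": (0.91, 0.40), "mol_B": (0.72, 0.88), "mol_C": (0.55, 0.60)}
        ranked = sorted(candidates, key=lambda m: combined_score(*candidates[m]), reverse=True)
        print(ranked)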

    Sensor data management with probabilistic models

    The anticipated ‘sensing environments’ of the near future pose new requirements for the data management systems that mediate between the sensor data supply and demand sides. We identify and investigate one of them: the need to deal with the inherent uncertainty in sensor data, which stems from measurement noise, missing data, the semantic gap between the measured data and the relevant information, and the integration of data from different sensors.

    Probabilistic models of sensor data can be used to deal with these uncertainties in the well-understood and fruitful framework of probability theory. In particular, the Bayesian network formalism proves useful for modeling sensor data in a flexible environment because of its comprehensiveness and modularity. We provide extensive technical argumentation for this claim. As a demonstration case, we define a discrete Bayesian network for location tracking using Bluetooth transceivers.

    In order to scale up sensor models, efficient probabilistic inference on the Bayesian network is crucial. However, we observe that conventional inference methods do not scale well for our demonstration case. We propose several optimizations that make it possible to jointly scale up the number of locations and sensors in sublinear time, and to scale up the time resolution in linear time. Moreover, we define a theoretical framework in which these optimizations are derived by translating an inference query into relational algebra. This allows the query to be analyzed and optimized using insights and techniques from the database community, for example using cost metrics based on cardinality rather than dimensionality.

    An orthogonal research question investigates the possibility of collecting transition statistics in a local, clustered fashion, in which transitions between states of different clusters cannot be directly observed. We show that this problem can be written as a constrained system of linear equations, for which we describe a specialized solution method.
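
    As a rough illustration of the kind of discrete probabilistic location-tracking model the abstract refers to, the sketch below performs one filtering step with a transition matrix and Bluetooth observation likelihoods. It is not the authors' actual Bayesian network; the locations, probabilities, and sensor model are assumptions for illustration only.

        # Hypothetical sketch of one filtering step in a discrete location-tracking
        # model: prior belief over locations -> predict via transition matrix ->
        # update with the likelihood of a Bluetooth sighting -> normalise.
        import numpy as np

        locations = ["office", "corridor", "lab"]

        # P(next location | current location); rows sum to 1 (assumed values).
        transition = np.array([
            [0.8, 0.2, 0.0],
            [0.1, 0.7, 0.2],
            [0.0, 0.3, 0.7],
        ])

        # P(transceiver T1 detects the device | location) -- assumed sensor model.
        likelihood_t1 = np.array([0.9, 0.4, 0.05])

        belief = np.array([1 / 3, 1 / 3, 1 / 3])   # uniform prior over locations
        predicted = belief @ transition            # prediction step
        posterior = predicted * likelihood_t1      # observation update (T1 fired)
        posterior /= posterior.sum()               # normalise to a distribution

        for loc, p in zip(locations, posterior):
            print(f"P({loc}) = {p:.2f}")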

    Employment Seeking Under Consideration of Social Capital on Social Network Sites

    This paper presents a model to measure the social capital of individuals seeking employment. It explains the dimensions used to measure social capital and gives an overview of the influence of social capital on the employment search process. The paper further defines the population for the research and explains the hypotheses used to evaluate the existence of social capital on social network sites. Of interest to the scientific community is the existence of social capital on social networking sites and, where possible, the explanation of people's behavior on social network sites in terms of social capital theory. The paper describes the construct for measuring social capital in relation to the employment search process. Furthermore, it describes the channels through which available employment opportunities are identified and explains how social capital changes under the consideration of social network sites and social media. The paper provides the theoretical basis for testing the functionality of social capital in social networking sites.

    PainDroid: An Android-based virtual reality application for pain assessment

    Earlier studies in the field of pain research suggest that few effective interventions currently exist in response to the exponential increase in the prevalence of pain. In this paper, we present an Android application (PainDroid) with multimodal functionality, which could be enhanced with Virtual Reality (VR) technology and which has been designed to improve the assessment of this notoriously difficult medical concern. PainDroid has been evaluated for its usability and acceptability with a pilot group of potential users and clinicians, with initial results suggesting that it can be an effective and usable tool for improving the assessment of pain. Participant experiences indicated that the application was easy to use, and its potential was similarly appreciated by the clinicians involved in the evaluation. Our findings may be of considerable interest to healthcare providers, policy makers, and other parties actively involved in the area of pain and VR research.

    Appointing Women to Boards: Is There a Cultural Bias?

    Companies that are serious about corporate governance and business ethics are turning their attention to gender diversity at the most senior levels of business (Institute of Business Ethics, Business Ethics Briefing 21:1, 2011). Board gender diversity has been the subject of several studies carried out by international organizations such as Catalyst (Increasing gender diversity on boards: Current index of formal approaches, 2012), the World Economic Forum (Hausmann et al., The global gender gap report, 2010), and the European Board Diversity Analysis (Is it getting easier to find women on European boards?, 2010). They all lead to reports confirming the overall relatively low proportion of women on boards and the slow pace at which more women are being appointed. Furthermore, the proportion of women on corporate boards varies considerably across countries. Based on institutional theory, this study hypothesizes and tests whether this variation can be attributed to differences in cultural settings across countries. Our analysis of the representation of women on boards in 32 countries during 2010 reveals that two cultural characteristics are indeed associated with the observed differences. We use the cultural dimensions proposed by Hofstede (Culture’s consequences: International differences in work-related values, 1980) to measure this construct. Results show that countries with the greatest tolerance for inequalities in the distribution of power, and those that tend to value the role of men, generally exhibit lower representation of women on boards.

    Prediction error and accuracy of intraocular lens power calculation in pediatric patients comparing SRK II and Pediatric IOL Calculator

    Background: Despite the growing number of intraocular lens power calculation formulas, there is no evidence that these formulas have good predictive accuracy in pediatric patients, whose eyes are still undergoing rapid growth and refractive change. This study compares the prediction error and the accuracy of predictability of intraocular lens power calculation in pediatric patients at 3 months after cataract surgery with primary implantation of an intraocular lens, using SRK II versus the Pediatric IOL Calculator. The Pediatric IOL Calculator is a modification of SRK II using the Holladay algorithm; it attempts to predict the refraction of a pseudophakic child as he or she grows, using a Holladay algorithm model based on refraction measurements of pediatric aphakic eyes, and it uses computer software for the intraocular lens calculation.

    Methods: This comparative study consists of 31 eyes (24 patients) that successfully underwent cataract surgery and intraocular lens implantation. All patients were 12 years old or younger (range: 4 months to 12 years). Patients were randomized into two groups, the SRK II group and the Pediatric IOL Calculator group, using an envelope sampling technique. Intraocular lens power was calculated with either SRK II or the Pediatric IOL Calculator according to the technique assigned to each patient. Thirteen patients were assigned to the SRK II group and 11 patients to the Pediatric IOL Calculator group. For the SRK II group, the predicted postoperative refraction was based on the patient's axial length and aimed for emmetropia at the time of surgery, whereas for the Pediatric IOL Calculator group the predicted postoperative refraction aimed for an emmetropic spherical equivalent at the age of 2 years. The postoperative refractive outcome was taken as the spherical equivalent of the refraction at the 3-month postoperative follow-up. The data were analysed to compare the mean prediction error and the accuracy of predictability of intraocular lens power calculation between SRK II and the Pediatric IOL Calculator.

    Results: There were 16 eyes in the SRK II group and 15 eyes in the Pediatric IOL Calculator group. The mean prediction error in the SRK II group was 1.03 D (SD, 0.69 D), and in the Pediatric IOL Calculator group it was 1.14 D (SD, 1.19 D). The SRK II group showed a prediction error 0.11 D lower than the Pediatric IOL Calculator group, but this difference was not statistically significant (p = 0.74). Three eyes (18.75%) in the SRK II group achieved accurate predictability, with a postoperative refraction within ± 0.5 D of the predicted refraction, compared with 7 eyes (46.67%) in the Pediatric IOL Calculator group. However, the difference in the accuracy of predictability of postoperative refraction between the two formulas was also not statistically significant (p = 0.097).

    Conclusions: The prediction error and the accuracy of predictability of postoperative refraction in pediatric cataract surgery are comparable between SRK II and the Pediatric IOL Calculator. The Pediatric IOL Calculator provides the ophthalmologist with an alternative for intraocular lens calculation in pediatric patients. Limitations include the relatively small sample size, the unequal distribution of patients (especially of younger children below 3 years), the short follow-up (3 months), and the use of spherical equivalent only.
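
    For illustration, the two outcome measures used in this study, the prediction error per eye and the proportion of eyes whose postoperative refraction falls within ± 0.5 D of the predicted refraction, can be computed as in the sketch below. The refraction values are made-up examples, not the study data.

        # Hypothetical sketch of the outcome measures described above: prediction
        # error per eye and the "accuracy of predictability" criterion. The example
        # dioptre values are invented for illustration only.
        def prediction_errors(predicted_se, postoperative_se):
            """Absolute difference between predicted and achieved spherical equivalent (D)."""
            return [abs(post - pred) for pred, post in zip(predicted_se, postoperative_se)]


        def accuracy_of_predictability(errors, tolerance_d=0.5):
            """Fraction of eyes with prediction error within the given tolerance."""
            return sum(e <= tolerance_d for e in errors) / len(errors)


        predicted = [0.00, 0.00, -0.50, 0.25]   # hypothetical predicted SE, in dioptres
        achieved = [0.75, -0.25, 0.50, 1.50]    # hypothetical postoperative SE
        errors = prediction_errors(predicted, achieved)
        print(f"mean prediction error = {sum(errors) / len(errors):.2f} D")
        print(f"accuracy (within 0.5 D) = {accuracy_of_predictability(errors):.0%}")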

    Predicting cancer involvement of genes from heterogeneous data

    Background: Systematic approaches for identifying proteins involved in different types of cancer are needed. Experimental techniques such as microarrays are being used to characterize cancer, but validating their results can be a laborious task. Computational approaches are used to prioritize among genes putatively involved in cancer, usually by further analysis of experimental data.

    Results: We implemented a systematic method, using the PIANA software, that predicts the cancer involvement of genes by integrating heterogeneous datasets. Specifically, we produced lists of genes likely to be involved in cancer by relying on: (i) protein-protein interactions; (ii) differential expression data; and (iii) structural and functional properties of cancer genes. The integrative approach that combines multiple sources of data obtained positive predictive values ranging from 23% (on a list of 811 genes) to 73% (on a list of 22 genes), outperforming the use of any of the data sources alone. We analyze a list of 20 cancer gene predictions, finding that most of them have recently been linked to cancer in the literature.

    Conclusion: Our approach to identifying and prioritizing candidate cancer genes can be used to produce lists of genes likely to be involved in cancer. Our results suggest that differential expression studies yielding high numbers of candidate cancer genes can be filtered using protein interaction networks.
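
    The filtering idea in the conclusion, keeping differentially expressed genes that interact with known cancer genes and scoring the resulting list by its positive predictive value, can be sketched as below. The gene names, interaction network, and gold standard are hypothetical, and this is not the PIANA-based pipeline itself.

        # Hypothetical sketch of the integration idea described above: filter a
        # differential-expression candidate list with a protein-interaction network,
        # then compute the positive predictive value against a gold standard.
        known_cancer_genes = {"TP53", "BRCA1", "KRAS"}

        # Toy protein-protein interaction network (gene -> set of interaction partners).
        interactions = {
            "GENE_A": {"TP53", "GENE_B"},
            "GENE_B": {"GENE_A"},
            "GENE_C": {"KRAS"},
        }

        differentially_expressed = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]

        # Keep candidates that interact with at least one known cancer gene.
        candidates = [g for g in differentially_expressed
                      if interactions.get(g, set()) & known_cancer_genes]

        # Positive predictive value = true positives / all predictions.
        gold_standard_positives = {"GENE_A", "GENE_C"}
        ppv = len(set(candidates) & gold_standard_positives) / len(candidates)
        print(candidates, f"PPV = {ppv:.0%}")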

    Accurate and efficient gp120 V3 loop structure based models for the determination of HIV-1 co-receptor usage

    Background: HIV-1 targets human cells expressing both the CD4 receptor, which binds the viral envelope glycoprotein gp120, and either the CCR5 (R5) or CXCR4 (X4) co-receptor, which interacts primarily with the third hypervariable loop (V3 loop) of gp120. Determining HIV-1 affinity for either the R5 or the X4 co-receptor on host cells facilitates the inclusion of co-receptor antagonists as part of patient treatment strategies. A dataset of 1193 distinct gp120 V3 loop peptide sequences (989 R5-utilizing, 204 X4-capable) is used to train predictive classifiers based on implementations of random forest, support vector machine, boosted decision tree, and neural network machine learning algorithms. An in silico mutagenesis procedure, employing multibody statistical potentials, computational geometry, and threading of variant V3 sequences onto an experimental structure, is used to generate a feature vector representation for each variant whose components measure environmental perturbations at the corresponding structural positions.

    Results: Classifier performance is evaluated based on stratified 10-fold cross-validation, stratified dataset splits (2/3 training, 1/3 validation), and leave-one-out cross-validation. The best reported values of sensitivity (85%), specificity (100%), and precision (98%) for predicting X4-capable HIV-1 virus, overall accuracy (97%), Matthews correlation coefficient (89%), balanced error rate (0.08), and ROC area (0.97) all reach critical thresholds, suggesting that the models outperform six other state-of-the-art methods and come closer to competing with phenotype assays.

    Conclusions: The trained classifiers provide instantaneous and reliable predictions regarding HIV-1 co-receptor usage, requiring only translated V3 loop genotypes as input. Furthermore, the novelty of these computational mutagenesis-based predictor attributes distinguishes the models as orthogonal and complementary to previous methods that utilize sequence, structure, and/or evolutionary information. The classifiers are available online at http://proteins.gmu.edu/automute.
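
    One of the evaluation setups mentioned above, a random forest assessed with stratified 10-fold cross-validation, can be sketched with scikit-learn as follows. The feature matrix here is random placeholder data standing in for the computational-mutagenesis feature vectors, and the number of features is an arbitrary assumption, so the printed scores are not meaningful results.

        # Sketch of stratified 10-fold cross-validation of a random forest classifier.
        # X is random placeholder data, not the authors' mutagenesis features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)
        n_r5, n_x4 = 989, 204                            # class sizes from the abstract
        n_features = 35                                  # arbitrary placeholder dimension
        X = rng.normal(size=(n_r5 + n_x4, n_features))   # placeholder feature vectors
        y = np.array([0] * n_r5 + [1] * n_x4)            # 0 = R5-utilizing, 1 = X4-capable

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
        print(f"balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")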