    Context sensitivity in individual-based modeling

    Dinosolve: A Protein Disulfide Bonding Prediction Server Using Context-Based Features to Enhance Prediction Accuracy

    Background: Disulfide bonds play an important role in protein folding and structure stability. Accurately predicting disulfide bonds from protein sequences is important for modeling the structural and functional characteristics of many proteins. Methods: In this work, we introduce an approach for enhancing disulfide bonding prediction accuracy by taking advantage of context-based features. We first derive first-order and second-order mean-force potentials from a large number of cysteine samples, according to the amino acid environment around the cysteine residues. The mean-force potentials are integrated as context-based scores to estimate the favorability of a cysteine residue being in a disulfide bonding state, as well as of a cysteine pair being in disulfide bond connectivity. These context-based scores are then incorporated as features, together with other sequence and evolutionary information, to train neural networks for disulfide bonding state prediction and connectivity prediction. Results: The 10-fold cross-validated accuracy in classifying an individual cysteine residue as bonded or free is 90.8% at the residue level and 85.6% at the protein level, an improvement of around 2%. The average accuracy for disulfide bond connectivity prediction is also improved, yielding an overall sensitivity of 73.42% and specificity of 91.61%. Conclusions: Our computational results show that context-based scores are effective features for enhancing the accuracy of both disulfide bonding state and connectivity prediction. Our disulfide prediction algorithm is implemented in a web server named Dinosolve, available at http://hpcr.cs.odu.edu/dinosolve
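
    As a rough illustration of how a first-order context-based score of this kind might be derived, the sketch below estimates positional log-odds of each amino acid around bonded versus free cysteines and sums them into a score. The pseudocount, window handling, and input format are illustrative assumptions, not Dinosolve's actual parameters.

```python
from collections import Counter
import math

PSEUDO = 1.0  # Laplace pseudocount so unseen (position, residue) pairs stay finite

def train_first_order_potential(samples):
    """samples: (context, bonded) pairs, where context is the string of
    amino acids in a fixed window around a cysteine (hypothetical format)."""
    bonded, free = Counter(), Counter()
    for context, is_bonded in samples:
        counts = bonded if is_bonded else free
        for pos, aa in enumerate(context):
            counts[(pos, aa)] += 1
    n_b = sum(bonded.values()) or 1
    n_f = sum(free.values()) or 1
    keys = set(bonded) | set(free)
    # positional log-odds of residue aa at offset pos in the bonded
    # versus the free environment
    return {k: math.log((bonded[k] + PSEUDO) / n_b) -
               math.log((free[k] + PSEUDO) / n_f) for k in keys}

def context_score(potential, context):
    """Sum of positional log-odds: the kind of context-based score that
    could be fed to a classifier alongside sequence/evolutionary features."""
    return sum(potential.get((pos, aa), 0.0) for pos, aa in enumerate(context))

# toy usage with made-up windows
train = [("ACDEFGHIKL", True), ("MNPQRSTVWY", False), ("ACDEYGHIKL", True)]
pot = train_first_order_potential(train)
print(round(context_score(pot, "ACDEFGHIKL"), 3))
```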

    A combined sensitivity analysis and kriging surrogate modeling for early validation of health indicators

    To increase the dependability of complex systems, one solution is to assess their state of health continuously by monitoring variables sensitive to potential degradation modes. When computed in an operating environment, these variables, known as health indicators, are subject to many uncertainties. Hence, the stochastic nature of health assessment, combined with the lack of data in the design stages, makes it difficult to evaluate the efficiency of a health indicator before the system enters service. This paper introduces a method for early validation of health indicators during the design stages of a system development process. The method uses physics-based modeling and uncertainty propagation to create simulated stochastic data. However, because of the large number of parameters defining the model and its long computation time, the runtime needed for uncertainty propagation is prohibitive. Thus, kriging is used to obtain low-cost estimates of the model outputs. Moreover, sensitivity analysis techniques are applied upstream to rank the model parameters and to reduce the dimension of the input space. The validation is based on three types of numerical key performance indicators, corresponding to the detection, identification and prognostic processes. After introducing and formalizing the framework of uncertain systems modeling and the different performance metrics, the issues of sensitivity analysis and surrogate modeling are addressed. The method is then applied to the validation of a set of health indicators for the monitoring of an aircraft engine's pumping unit.
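
    A minimal sketch of the surrogate step, assuming a scikit-learn Gaussian process stands in for the kriging model and a toy function stands in for the expensive physics-based model. The kernel, design size, and the crude binning-based sensitivity indices at the end are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def physics_model(x):
    # stand-in for the expensive physics-based model of a health indicator
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# small design of experiments on the (already reduced) input space
X_train = rng.uniform(-1, 1, size=(40, 2))
y_train = physics_model(X_train)

# kriging surrogate: Gaussian process regression with an RBF kernel
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# cheap uncertainty propagation: Monte Carlo draws through the surrogate
X_mc = rng.uniform(-1, 1, size=(100_000, 2))
y_mc = gp.predict(X_mc)
print(f"indicator mean={y_mc.mean():.3f}, std={y_mc.std():.3f}")

# crude first-order sensitivity indices: variance of binned conditional means
for j in range(2):
    bins = np.digitize(X_mc[:, j], np.linspace(-1, 1, 20))
    cond = [y_mc[bins == b].mean() for b in np.unique(bins)]
    print(f"S{j + 1} ~ {np.var(cond) / y_mc.var():.2f}")
```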

    Modeling good research practices - overview: a report of the ISPOR-SMDM modeling good research practices task force - 1.

    Models—mathematical frameworks that facilitate estimation of the consequences of health care decisions—have become essential tools for health technology assessment. Evolution of the methods since the first ISPOR modeling task force reported in 2003 has led to a new task force, jointly convened with the Society for Medical Decision Making, and this series of seven papers presents the updated recommendations for best practices in conceptualizing models; implementing state–transition approaches, discrete event simulations, or dynamic transmission models; dealing with uncertainty; and validating and reporting models transparently. This overview introduces the work of the task force, provides all the recommendations, and discusses some quandaries that require further elucidation. The audience for these papers includes those who build models, stakeholders who utilize their results, and, indeed, anyone concerned with the use of models to support decision making.
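
    As a concrete illustration of one model type the series covers, here is a minimal state-transition (Markov) cohort sketch. The states, transition probabilities, costs, and utilities are invented for illustration and are not from the report.

```python
import numpy as np

# three health states: Well, Sick, Dead (illustrative values only)
P = np.array([[0.90, 0.08, 0.02],   # transition probabilities per annual cycle
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
cost = np.array([100.0, 2500.0, 0.0])   # cost per cycle in each state
qaly = np.array([1.00, 0.60, 0.0])      # utility per cycle in each state
disc = 0.03                              # annual discount rate

cohort = np.array([1.0, 0.0, 0.0])      # everyone starts in Well
total_cost = total_qaly = 0.0
for t in range(40):                      # run 40 annual cycles
    total_cost += cohort @ cost / (1 + disc) ** t
    total_qaly += cohort @ qaly / (1 + disc) ** t
    cohort = cohort @ P                  # advance the cohort one cycle
print(f"discounted cost={total_cost:.0f}, QALYs={total_qaly:.2f}")
```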

    The case for absolute ligand discrimination: modeling information processing and decision by immune T cells

    Some cells have to make decisions based on the quality of surrounding ligands, almost irrespective of their quantity, a problem we call "absolute discrimination". An example of absolute discrimination is the recognition of non-self by immune T cells. We show how the problem of absolute discrimination can be solved by a process called "adaptive sorting". We review several implementations of adaptive sorting, as well as its generic properties, such as antagonism. We show how kinetic proofreading with negative feedback implements an approximate version of adaptive sorting in the immune context. Finally, we revisit the decision problem at the cell population level, showing how phenotypic variability and feedback between the population and single cells are crucial for proper decision making.
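
    A cartoon of the contrast between plain kinetic proofreading and adaptive sorting, assuming a simple chain in which a receptor advances each proofreading step with a probability set by the ligand binding time; the negative feedback is idealized here as normalization by the first-step occupancy, which cancels ligand quantity. This is a sketch of the generic idea, not the paper's full model.

```python
import numpy as np

def proofread_occupancies(L, tau, phi=0.1, N=4):
    """Steady-state occupancy of proofreading steps C_1..C_N for L bound
    ligands with mean binding time tau (the ligand 'quality')."""
    p = phi * tau / (1 + phi * tau)  # chance of advancing a step before unbinding
    return np.array([L * p ** n for n in range(1, N + 1)])

for L in (10, 100, 1000):            # ligand quantity
    for tau in (1.0, 10.0):          # ligand quality
        C = proofread_occupancies(L, tau)
        plain = C[-1]                # plain proofreading output: scales with L
        adaptive = C[-1] / C[0]      # feedback-normalized output: quality only
        print(f"L={L:4d} tau={tau:4.1f}  KPR={plain:8.3f}  adaptive={adaptive:.4f}")
```

    The normalized output equals p to the power N-1, so it separates tau = 1 from tau = 10 at any ligand number, which is the signature of absolute discrimination.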

    A bi-dimensional finite mixture model for longitudinal data subject to dropout

    In longitudinal studies, subjects may be lost to follow-up or miss some of the planned visits, leading to incomplete response sequences. When the probability of non-response, conditional on the available covariates and the observed responses, still depends on unobserved outcomes, the dropout mechanism is said to be non-ignorable. A common objective is to build a reliable association structure to account for dependence between the longitudinal and the dropout processes. Starting from the existing literature, we introduce a random-coefficient-based dropout model where the association between outcomes is modeled through discrete latent effects. These effects are outcome-specific and account for heterogeneity in the univariate profiles. Dependence between profiles is introduced by using a bi-dimensional representation for the corresponding distribution. In this way, we define a flexible latent class structure which efficiently describes both the dependence within the two margins of interest and the dependence between them. Using this representation, we show that, unlike standard (unidimensional) finite mixture models, the non-ignorable dropout model properly nests its ignorable counterpart. We detail the proposed modeling approach by analyzing data from a longitudinal study on the dynamics of cognitive functioning in the elderly. Further, the effects of assumptions about non-ignorability of the dropout process on model parameter estimates are investigated locally using the index of local sensitivity to non-ignorability.
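
    A minimal simulation sketch of the kind of data-generating process such a model targets, assuming two binary latent effects with a bi-dimensional joint distribution: one margin drives the longitudinal profile, the other drives dropout, and their correlation induces non-ignorability. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# joint distribution of the two discrete latent effects: rows index the
# longitudinal class u, columns the dropout class v (numbers invented)
joint = np.array([[0.35, 0.10],
                  [0.05, 0.50]])
intercept = np.array([20.0, 26.0])     # class-specific cognitive level
drop_logit = np.array([-3.0, -1.0])    # class-specific dropout propensity

def simulate_subject(T=6):
    k = rng.choice(4, p=joint.ravel())  # draw the pair (u, v) jointly
    u, v = divmod(k, 2)
    y = []
    for t in range(T):
        y.append(intercept[u] - 0.5 * t + rng.normal(0.0, 1.0))
        # dropout depends on the latent class v; because v is correlated
        # with u, missingness is tied to the unobserved outcome process,
        # i.e. the dropout mechanism is non-ignorable
        if t < T - 1 and rng.random() < 1 / (1 + np.exp(-drop_logit[v])):
            break
    return u, v, y

lengths = [len(simulate_subject()[2]) for _ in range(5000)]
print("mean number of observed visits:", np.mean(lengths))
```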

    A Workflow for Software Development within Computational Epidemiology

    A critical investigation into computational models developed for studying the spread of communicable disease is presented. The case in point is a spatially explicit micro-meso-macro model for the entire Swedish population built on registry data, thus far used for smallpox and for influenza-like illnesses. The lessons learned from a software development project of more than 100 person-months are collected into a checklist. The list is intended for use by computational epidemiologists and policy makers, and the workflow incorporating these two roles is described in detail. A definitive version was published in Journal of Computational Science, Vol. 2, Issue 3, 6 June 2011, DOI 10.1016/j.jocs.2011.05.004.