
    Pilot study on developing a decision support tool for guiding re-administration of chemotherapeutic agent after a serious adverse drug reaction

    Background: Currently, there are no standard guidelines for recommending re-administration of a chemotherapeutic drug to a patient after a serious adverse drug reaction (ADR) incident. The decision on whether to rechallenge the patient is based on the experience of the clinician and is highly subjective. The aim of this study was therefore to develop a decision support tool to assist clinicians in this decision-making process.
    Methods: The inclusion criteria for patients in this study were: (1) had chemotherapy at the National Cancer Centre Singapore between 2004 and 2009, (2) suffered from serious ADRs, and (3) were rechallenged. A total of 46 patients fulfilled the inclusion criteria. A genetic algorithm attribute selection method was used to identify clinical predictors of patients' rechallenge status. A Naïve Bayes model was then developed using 35 patients and externally validated using 11 patients.
    Results: Eight patient attributes (age, chemotherapeutic drug, albumin level, red blood cell level, platelet level, abnormal white blood cell level, abnormal alkaline phosphatase level and abnormal alanine aminotransferase level) were identified as clinical predictors of rechallenge status. The Naïve Bayes model had an AUC of 0.767 and was found to be useful for assisting clinical decision making after clinicians had identified a group of patients for rechallenge. A platform-independent version and an online version of the model are available to facilitate independent validation of the model.
    Conclusion: Owing to the limited size of the validation set, a more extensive validation of the model is necessary before it can be adopted for routine clinical use. Once validated, the model can be used to assist clinicians in deciding whether to rechallenge patients by checking whether their initial assessment of a patient's rechallenge status is accurate.
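    The abstract does not give implementation details, so the following is only a minimal sketch of a Naïve Bayes rechallenge classifier of the kind described, assuming a Gaussian likelihood over the eight listed attributes. The feature names come from the abstract; the data and the train/validation split sizes are placeholders mirroring the reported cohort, not the study's patient records.

```python
# Illustrative sketch only: random placeholder data, NOT the study's cohort.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

FEATURES = [
    "age", "chemo_drug_code", "albumin", "rbc", "platelet",
    "abnormal_wbc", "abnormal_alp", "abnormal_alt",   # attributes listed in the abstract
]

# Placeholder cohort: 35 "training" and 11 "validation" patients, as in the abstract.
X_train = rng.normal(size=(35, len(FEATURES)))
y_train = rng.integers(0, 2, size=35)                 # 1 = rechallenged, 0 = not
X_valid = rng.normal(size=(11, len(FEATURES)))
y_valid = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1])  # placeholder outcomes

# Gaussian NB is a simplification; the binary "abnormal" flags could equally be
# modelled with a Bernoulli/categorical likelihood.
model = GaussianNB().fit(X_train, y_train)
probs = model.predict_proba(X_valid)[:, 1]            # P(rechallenge) per patient
print("validation AUC:", roc_auc_score(y_valid, probs))
```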

    Detection of subclinical keratoconus using biometric parameters

    The validation of innovative methodologies for diagnosing keratoconus in its earliest stages is of major interest in ophthalmology. So far, subclinical keratoconus diagnosis has been made by combining several clinical criteria that allowed the definition of indices and decision trees, which proved to be valuable diagnostic tools. However, further improvements need to be made in order to reduce the risk of ectasia in patients who undergo corneal refractive surgery. The purpose of this work is to report a new subclinical keratoconus detection method based on the analysis of certain biometric parameters extracted from a custom 3D corneal model. This retrospective study includes two groups: the first composed of 67 patients with healthy eyes and normal vision, and the second composed of 24 patients with subclinical keratoconus and normal vision as well. The proposed detection method generates a 3D custom corneal model using computer-aided graphic design (CAGD) tools and corneal surface data provided by a corneal tomographer. Defined bio-geometric parameters are then derived from the model and statistically analysed to detect any minimal corneal deformation. The metric that showed the highest area under the receiver operating characteristic (ROC) curve was the posterior apex deviation. This new method detected differences between healthy corneas and subclinical keratoconus corneas, which show abnormal corneal topography despite normal spectacle-corrected vision, enabling an integrated tool that facilitates easier diagnosis and follow-up of keratoconus.
    This publication has been carried out in the framework of the Thematic Network for Co-Operative Research in Health (RETICS), reference number RD16/0008/0012, financed by the Carlos III Health Institute-General Subdirection of Networks and Cooperative Investigation Centers (R&D&I National Plan 2013–2016) and the European Regional Development Fund (FEDER)
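    As a rough illustration of the screening step described above, the sketch below computes the AUC and a candidate cut-off for a single biometric parameter (posterior apex deviation, the best-performing metric in the abstract). The group sizes match the study (67 healthy, 24 subclinical), but the parameter distributions are assumed placeholders, not the measured values.

```python
# Illustrative ROC analysis of one biometric parameter; values are simulated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

healthy = rng.normal(loc=0.020, scale=0.010, size=67)   # mm, placeholder distribution
subclin = rng.normal(loc=0.045, scale=0.015, size=24)   # mm, placeholder distribution

values = np.concatenate([healthy, subclin])
labels = np.concatenate([np.zeros(67), np.ones(24)])     # 1 = subclinical keratoconus

auc = roc_auc_score(labels, values)
fpr, tpr, thresholds = roc_curve(labels, values)
best = np.argmax(tpr - fpr)                              # Youden's J for a cut-off
print(f"AUC = {auc:.3f}, suggested cut-off = {thresholds[best]:.3f} mm")
```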

    Automated systems to identify relevant documents in product risk management

    Background: Product risk management involves critical assessment of the risks and benefits of health products circulating in the market. One of the important sources of safety information is the primary literature, especially for newer products with which regulatory authorities have relatively little experience. Although the primary literature provides vast and diverse information, only a small proportion of it is useful for product risk assessment work. Hence, the aim of this study was to explore the possibility of using text mining to automate the identification of useful articles, which would reduce the time taken for literature searches and hence improve work efficiency. In this study, term-frequency inverse document-frequency values were computed for predictors extracted from the titles and abstracts of articles related to three tumour necrosis factor-alpha blockers. A general automated system was developed using only general predictors and was tested for its generalizability using articles related to four other drug classes. Several specific automated systems were developed using both general and specific predictors and training sets of different sizes, in order to determine the minimum number of articles required for developing such systems.
    Results: The general automated system had an area under the curve value of 0.731 and, when tested on the generalizability set, was able to rank 34.6% and 46.2% of the total number of 'useful' articles among the first 10% and 20% of the articles presented to the evaluators. However, its use may be limited by the subjective definition of useful articles. For the specific automated systems, it was found that only 20 articles were required to develop a system with a prediction performance (AUC 0.748) better than that of the general automated system.
    Conclusions: Specific automated systems can be developed rapidly and avoid problems caused by the subjective definition of useful articles. Thus the efficiency of product risk management can be improved with the use of specific automated systems.
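    A minimal sketch of the TF-IDF ranking idea described above: titles and abstracts are vectorised with term-frequency inverse document-frequency weights and a probabilistic classifier scores how likely each new article is to be useful. The toy corpus and labels are invented, and logistic regression stands in for whichever classifier the study actually used.

```python
# Illustrative TF-IDF + classifier ranking pipeline; corpus and labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "serious infection risk during anti-TNF therapy in rheumatoid arthritis",
    "case report of hepatotoxicity after biologic treatment",
    "cost-effectiveness analysis of biologic therapy",
    "pharmacokinetics of a monoclonal antibody in healthy volunteers",
]
train_labels = [1, 1, 0, 0]   # 1 = useful for product risk assessment (assumed labels)

pipeline = make_pipeline(
    TfidfVectorizer(stop_words="english"),   # term-frequency inverse document-frequency
    LogisticRegression(),                    # stand-in probabilistic classifier
)
pipeline.fit(train_texts, train_labels)

new_texts = ["post-marketing surveillance of adverse events with TNF-alpha blockers"]
scores = pipeline.predict_proba(new_texts)[:, 1]   # rank new articles by usefulness score
print(sorted(zip(scores, new_texts), reverse=True))
```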

    Prediction of Preterm Deliveries from EHG Signals Using Machine Learning

    There has been some improvement in the treatment of preterm infants, which has helped to increase their chance of survival. However, the rate of premature births is still increasing globally. As a result, this group of infants is most at risk of developing severe medical conditions that can affect the respiratory, gastrointestinal, immune, central nervous, auditory and visual systems. In extreme cases, this can also lead to long-term conditions such as cerebral palsy, mental retardation and learning difficulties, as well as poor health and growth. In the US alone, the societal and economic cost of preterm births was estimated at $26.2 billion per annum in 2005. In the UK, this value was close to £2.95 billion in 2009. Many believe that a better understanding of why preterm births occur, and a strategic focus on prevention, will help to improve the health of children and reduce healthcare costs. At present, most methods of preterm birth prediction are subjective. However, a strong body of evidence suggests that the analysis of uterine electrical signals (electrohysterography) could provide a viable way of diagnosing true labour and predicting preterm deliveries. Most electrohysterography studies focus on true labour detection during the final seven days before labour. The challenge is to utilise electrohysterography techniques to predict preterm delivery earlier in the pregnancy. This paper explores this idea further and presents a supervised machine learning approach that classifies term and preterm records, using an open source dataset containing 300 records (38 preterm and 262 term). The synthetic minority oversampling technique is used to oversample the minority preterm class, and cross-validation techniques are used to evaluate the dataset against other similar studies. Our approach shows an improvement on existing studies, with 96% sensitivity, 90% specificity, a 95% area under the curve value and 8% global error using the polynomial classifier
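    A sketch of the oversample-and-cross-validate workflow outlined above, under stated assumptions: the EHG feature vectors are simulated placeholders, a polynomial-kernel SVM stands in for the "polynomial classifier" named in the abstract, and SMOTE is applied inside each cross-validation fold via an imbalanced-learn pipeline so that synthetic samples never leak into the held-out fold.

```python
# Illustrative SMOTE + cross-validation sketch; requires scikit-learn and imbalanced-learn.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# 262 term and 38 preterm records, matching the open dataset sizes in the abstract.
X = rng.normal(size=(300, 12))            # 12 assumed EHG-derived features (placeholder)
y = np.array([0] * 262 + [1] * 38)        # 1 = preterm

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),                     # oversample minority class per fold
    ("clf", SVC(kernel="poly", degree=3, probability=True)),  # stand-in polynomial classifier
])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print("cross-validated AUC:", aucs.mean())
```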

    Exploiting Amino Acid Composition for Predicting Protein-Protein Interactions

    Computational prediction of protein interactions typically uses protein domains as classifier features because they capture conserved information about interaction surfaces. However, approaches relying on domains as features cannot be applied to proteins without any domain information. In this paper, we explore the contribution of pure amino acid composition (AAC) to protein interaction prediction. This simple feature, which is based on normalized counts of single or pairs of amino acids, is applicable to proteins from any sequenced organism and can be used to compensate for the lack of domain information. AAC performed on par with protein interaction prediction based on domains on three yeast protein interaction datasets. Similar behavior was obtained using different classifiers, indicating that our results are a function of the features and not of the classifiers. In addition to the yeast datasets, AAC performed comparably on worm and fly datasets. Prediction of interactions for the entire yeast proteome identified a large number of novel interactions, the majority of which co-localized or participated in the same processes. Our high-confidence interaction network included both well-studied and uncharacterized proteins. Proteins with known function were involved in actin assembly and cell budding. Uncharacterized proteins interacted with proteins involved in reproduction and cell budding, thus providing putative biological roles for the uncharacterized proteins. AAC is a simple, yet powerful feature for predicting protein interactions, and can be used alone or in conjunction with protein domains to predict new and validate existing interactions. More importantly, AAC alone performs on par with existing, but more complex, features, indicating the presence of sequence-level information that is predictive of interaction but not necessarily restricted to domains
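    The AAC feature itself is simple enough to show directly. The sketch below computes the normalised single-amino-acid composition for each protein of a candidate pair and concatenates the two vectors into a classifier-ready feature; the sequences are toy examples, and the choice of downstream classifier is left open, as in the abstract.

```python
# Illustrative amino acid composition (AAC) feature extraction; sequences are toy data.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(sequence: str) -> np.ndarray:
    """Length-20 vector of normalised single-amino-acid counts."""
    seq = sequence.upper()
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

def pair_features(seq_a: str, seq_b: str) -> np.ndarray:
    """Feature vector for a candidate interaction: the two AAC vectors concatenated."""
    return np.concatenate([aac(seq_a), aac(seq_b)])

# Toy protein pair; a real pipeline would feed vectors like this to any standard
# classifier (SVM, random forest, etc.), optionally alongside domain features.
x = pair_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MSDNGPQNQRNAPRITFGGPSDSTGSNQ")
print(x.shape)   # (40,)
```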

    Genome Wide Association Study to predict severe asthma exacerbations in children using random forests classifiers

    Background: Personalized health care promises tailored health-care solutions to individual patients based on their genetic background and/or environmental exposure history. To date, disease prediction has been based on a few environmental factors and/or single nucleotide polymorphisms (SNPs), while complex diseases are usually affected by many genetic and environmental factors, each contributing a small portion to the outcome. We hypothesized that the use of random forests classifiers to select SNPs would result in an improved predictive model of asthma exacerbations. We tested this hypothesis in a population of childhood asthmatics.
    Methods: In this study, using emergency room visits or hospitalizations as the definition of a severe asthma exacerbation, we first identified a list of top Genome Wide Association Study (GWAS) SNPs ranked by Random Forests (RF) importance score for the CAMP (Childhood Asthma Management Program) population of 127 exacerbation cases and 290 non-exacerbation controls. We then predicted severe asthma exacerbations using the top 10 to 320 SNPs together with age, sex, pre-bronchodilator FEV1 percentage predicted, and treatment group.
    Results: Testing in an independent set of the CAMP population shows that severe asthma exacerbations can be predicted with an Area Under the Curve (AUC) of 0.66 using 160-320 SNPs, compared with an AUC of 0.57 using 10 SNPs. Using the clinical traits alone yielded an AUC of 0.54, suggesting the phenotype is affected by genetic as well as environmental factors.
    Conclusions: Our study shows that a random forests algorithm can effectively extract and use the information contained in a small number of samples. Random forests, and other machine learning tools, can be used alongside GWAS to integrate large numbers of predictors simultaneously.
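    The following is a minimal sketch of the rank-then-refit pattern the abstract describes: SNPs are ranked by random forests importance on a training split, the top-k are kept, and a second forest is evaluated on held-out samples. The genotype matrix and outcomes are simulated placeholders sized to the reported case/control counts, and the clinical covariates (age, sex, FEV1, treatment group) are omitted for brevity.

```python
# Illustrative random-forest SNP selection; genotypes and outcomes are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

n_snps = 1000                                   # stand-in for a genome-wide panel
X = rng.integers(0, 3, size=(417, n_snps))      # 0/1/2 minor-allele counts (placeholder)
y = np.array([1] * 127 + [0] * 290)             # exacerbation cases vs controls

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: rank all SNPs by RF importance score on the training split only.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:160]   # keep the top 160 SNPs

# Step 2: refit on the selected SNPs and check discrimination on held-out samples.
rf_top = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[:, top], y_tr)
print("held-out AUC:", roc_auc_score(y_te, rf_top.predict_proba(X_te[:, top])[:, 1]))
```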

    Metabolomics-Based Discovery of Diagnostic Biomarkers for Onchocerciasis

    Onchocerciasis, caused by the filarial parasite Onchocerca volvulus, afflicts millions of people, causing such debilitating symptoms as blindness and acute dermatitis. There are no accurate, sensitive means of diagnosing O. volvulus infection. Clinical diagnostics are desperately needed in order to achieve the goals of controlling and eliminating onchocerciasis and neglected tropical diseases in general. In this study, a metabolomics approach is introduced for the discovery of small molecule biomarkers that can be used to diagnose O. volvulus infection. Blood samples from O. volvulus infected and uninfected individuals from different geographic regions were compared using liquid chromatography separation and mass spectrometry identification. Thousands of chromatographic mass features were statistically compared to discover 14 mass features that were significantly different between infected and uninfected individuals. Multivariate statistical analysis and machine learning algorithms demonstrated how these biomarkers could be used to differentiate between infected and uninfected individuals, and indicated that the diagnostic may even be sensitive enough to assess the viability of worms. This study suggests the future potential of these biomarkers for use in a field-based onchocerciasis diagnostic and how such an approach could be expanded for the development of diagnostics for other neglected tropical diseases
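    As a rough sketch of the feature-screening step such a study involves, the code below compares each LC-MS mass feature between infected and uninfected samples and applies a multiple-testing correction before shortlisting candidate biomarkers. The intensity matrices, sample counts, choice of test, and FDR procedure are assumptions for illustration, not the study's actual analysis.

```python
# Illustrative per-feature screening with multiple-testing correction; data is simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)

n_features = 2000                                  # "thousands of chromatographic mass features"
infected = rng.lognormal(size=(40, n_features))    # placeholder intensity matrix
uninfected = rng.lognormal(size=(40, n_features))  # placeholder intensity matrix

# Mann-Whitney U test per mass feature (robust to skewed intensity distributions).
pvals = np.array([
    stats.mannwhitneyu(infected[:, j], uninfected[:, j]).pvalue
    for j in range(n_features)
])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("candidate biomarker features:", np.flatnonzero(reject))
```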

    Routinely collected data for randomized trials: promises, barriers, and implications

    This work was supported by Stiftung Institut für klinische Epidemiologie. The Meta-Research Innovation Center at Stanford University is funded by a grant from the Laura and John Arnold Foundation. The funders had no role in the design and conduct of the study; the collection, management, analysis, or interpretation of the data; or the preparation, review, or approval of the manuscript or its submission for publication.

    A genomic biomarker signature can predict skin sensitizers using a cell-based in vitro alternative to animal tests

    Background: Allergic contact dermatitis is an inflammatory skin disease that affects a significant proportion of the population. The disease is caused by an adverse immune response towards chemical haptens and leads to a substantial economic burden for society. Current tests of sensitizing chemicals rely on animal experimentation. New legislation on the registration and use of chemicals within the pharmaceutical and cosmetic industries has stimulated significant research efforts to develop alternative, human cell-based assays for the prediction of sensitization. The aim is to replace animal experiments with in vitro tests displaying a higher predictive power.
    Results: We have developed a novel cell-based assay for the prediction of sensitizing chemicals. By analyzing the transcriptome of the human cell line MUTZ-3 after 24 h of stimulation with 20 different sensitizing chemicals, 20 non-sensitizing chemicals and vehicle controls, we identified a biomarker signature of 200 genes with potent discriminatory ability. Using a Support Vector Machine for supervised classification, the prediction performance of the assay gave an area under the ROC curve of 0.98. In addition, when the chemicals were categorized according to the LLNA assay, this gene signature could also predict sensitizing potency. The identified markers are involved in biological pathways with immunologically relevant functions, which can shed light on the process of human sensitization.
    Conclusions: A gene signature predicting sensitization, using a human cell line in vitro, has been identified. This simple and robust cell-based assay has the potential to completely replace or drastically reduce the use of test systems based on experimental animals. Being based on human biology, the assay is proposed to be more accurate for predicting sensitization in humans than the traditional animal-based tests.
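    A minimal sketch of the supervised-classification step described above: expression of a gene signature is fed to a Support Vector Machine and discrimination is summarised as area under the ROC curve. The expression matrix is random placeholder data sized to the abstract's design (20 sensitizers, 20 non-sensitizers, 200 signature genes); the kernel and cross-validation scheme are assumptions.

```python
# Illustrative SVM classification of a gene signature; expression data is simulated.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

X = rng.normal(size=(40, 200))            # 40 chemicals x 200 signature genes (placeholder)
y = np.array([1] * 20 + [0] * 20)         # 1 = sensitizing chemical

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, scores))
```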

    A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning

    Background: Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in the ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and the intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications.
    Methods: Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of the mathematical relationships is not necessary if the reader is only interested in the practical meaning of model assumptions, weaknesses and strengths from a user's point of view.
    Results: Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. k-nearest neighbour may be a valid non-parametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematic.
    Conclusion: Knowledge of model assumptions and of the theoretical strengths and weaknesses of the different approaches is fundamental for designing models to estimate the probability of morbidity after heart surgery. However, a rational choice also requires evaluation and comparison of the actual performance of locally developed competing models in the clinical scenario, to obtain satisfactory agreement between local needs and model response. In the second part of this study the above predictive models will therefore be tested on real data acquired in a specialized ICU.
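    To make the comparison concrete, the sketch below fits the main model families surveyed in this review (naïve Bayes for the Bayes-rule category, k-nearest neighbour, logistic regression, and a small neural network; scoring systems are omitted) to the same data and compares held-out AUC, which is the kind of head-to-head evaluation Part II performs. The data is a synthetic placeholder, not ICU records, and the ten "preoperative variables" are an assumption.

```python
# Illustrative side-by-side comparison of the reviewed model families; data is synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 10))                                          # 10 assumed preoperative variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 1).astype(int)   # synthetic morbidity flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "naive Bayes": GaussianNB(),
    "k-nearest neighbour": KNeighborsClassifier(n_neighbors=15),
    "logistic regression": LogisticRegression(max_iter=1000),
    "artificial neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
    print(f"{name}: held-out AUC = {auc:.3f}")
```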