
    Diabetes Prediction Using Artificial Neural Network

    Diabetes is one of the most common diseases worldwide, and no cure has yet been found for it. Caring for people with diabetes costs a great deal of money every year, so it is important that prediction be accurate and rely on a dependable method. One such method is the use of artificial intelligence systems, and in particular Artificial Neural Networks (ANN). In this paper, we used an artificial neural network to predict whether a person is diabetic. The criterion was to minimise the error function in neural network training. After training the ANN model, the average error function of the neural network was 0.01, and the accuracy of predicting whether a person is diabetic was 87.3%.
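A minimal sketch of the kind of training loop the abstract describes, assuming a one-hidden-layer network trained by gradient descent on a mean-squared error criterion; the features, labels, and layer sizes below are synthetic stand-ins, not the paper's data or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # 8 hypothetical clinical features
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)        # synthetic diabetic / non-diabetic labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 6                                     # hidden units (arbitrary choice)
W1 = rng.normal(scale=0.5, size=(8, H))
W2 = rng.normal(scale=0.5, size=H)

for _ in range(500):                      # plain batch gradient descent on MSE
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)
    err = p - y                           # dMSE/dp, up to a constant factor
    grad_W2 = h.T @ (err * p * (1 - p)) / len(y)
    grad_W1 = X.T @ ((err * p * (1 - p))[:, None] * W2 * h * (1 - h)) / len(y)
    W2 -= 1.0 * grad_W2
    W1 -= 1.0 * grad_W1

p = sigmoid(sigmoid(X @ W1) @ W2)         # predictions after training
mse = np.mean((p - y) ** 2)               # the "error function" being minimised
accuracy = np.mean((p > 0.5) == y)
```

The error function and accuracy play the same roles as the 0.01 and 87.3% figures reported in the abstract, though on invented data.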

    ARTMAP-IC and Medical Diagnosis: Instance Counting and Inconsistent Cases

    For complex database prediction problems such as medical diagnosis, the ARTMAP-IC neural network adds distributed prediction and category instance counting to the basic fuzzy ARTMAP system. For the ARTMAP match tracking algorithm, which controls search following a predictive error, a new version facilitates prediction with sparse or inconsistent data. Compared to the original match tracking algorithm (MT+), the new algorithm (MT-) better approximates the real-time network differential equations and further compresses memory without loss of performance. Simulations examine predictive accuracy on four medical databases: Pima Indian diabetes, breast cancer, heart disease, and gall bladder removal. ARTMAP-IC results are equal to or better than those of logistic regression, K nearest neighbor (KNN), the ADAP perceptron, multisurface pattern separation, CLASSIT, instance-based learning (IBL), and C4. ARTMAP dynamics are fast, stable, and scalable. A voting strategy improves prediction by training the system several times on different orderings of an input set. Voting, instance counting, and distributed representations combine to form confidence estimates for competing predictions. National Science Foundation (IRI 94-01659); Office of Naval Research (N00014-95-J-0409, N00014-95-0657)
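The voting strategy can be sketched independently of ARTMAP itself: any order-dependent online learner (a single-pass perceptron here, an assumed stand-in, since ARTMAP is also sensitive to presentation order) is trained on several shufflings of the same training set, and the trained copies vote, with the vote margin serving as a rough confidence estimate. Data and learner are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 5))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # synthetic binary labels

def train_perceptron(X, y, order):
    """One online pass in the given order; the result depends on the order."""
    w = np.zeros(X.shape[1])
    for i in order:
        if y[i] * (X[i] @ w) <= 0:           # mistake-driven update
            w += y[i] * X[i]
    return w

# Train several copies, each on a different ordering of the same input set.
voters = [train_perceptron(X, y, rng.permutation(len(y))) for _ in range(5)]
votes = np.sign(np.stack([X @ w for w in voters]))    # each copy's prediction
majority = np.sign(votes.sum(axis=0))                 # voted prediction
confidence = np.abs(votes.sum(axis=0)) / len(voters)  # agreement fraction
```

The `confidence` array mirrors the abstract's point that voting yields confidence estimates for competing predictions: unanimous voters give 1.0, a split vote gives a low value.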

    Use of International Classification of Diseases, Ninth Revision Codes for Obesity: Trends in the United States from an Electronic Health Record-Derived Database.

    Obesity is a potentially modifiable risk factor for many diseases, and a better understanding of its impact on health care utilization, costs, and medical outcomes is needed. The ability to accurately evaluate obesity outcomes depends on a correct identification of the population with obesity. The primary objective of this study was to determine the prevalence and accuracy of International Classification of Diseases, Ninth Revision (ICD-9) coding for overweight and obesity within a US primary care electronic health record (EHR) database compared against actual body mass index (BMI) values from recorded clinical patient data; characteristics of patients with obesity who did or did not receive ICD-9 codes for overweight/obesity also were evaluated. The study sample included 5,512,285 patients in the database with any BMI value recorded between January 1, 2014, and June 30, 2014. Based on BMI, 74.6% of patients were categorized as being overweight or obese, but only 15.1% of patients had relevant ICD-9 codes. ICD-9 coding prevalence increased with increasing BMI category. Among patients with obesity (BMI ≥30 kg/m²), those coded for obesity were younger, more often female, and had a greater comorbidity burden than those not coded; hypertension, dyslipidemia, type 2 diabetes mellitus, and gastroesophageal reflux disease were the most common comorbidities. KEY FINDINGS: US outpatients with overweight or obesity are not being reliably coded, making ICD-9 codes undependable sources for determining obesity prevalence and outcomes. BMI data available within EHR databases offer a more accurate and objective means of classifying overweight/obese status.
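The BMI-based classification the study compares against ICD-9 coding follows the standard adult BMI cut points; a sketch with invented records showing how BMI-derived prevalence and the coding rate diverge:

```python
def bmi_category(bmi):
    """Standard adult BMI categories (cut points in kg/m^2)."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

# Hypothetical EHR rows: (recorded BMI, has an overweight/obesity ICD-9 code)
records = [(22.4, False), (27.1, False), (31.8, True), (34.0, False), (29.3, False)]

overweight_or_obese = [r for r in records if r[0] >= 25.0]
coded = [r for r in overweight_or_obese if r[1]]
bmi_prevalence = len(overweight_or_obese) / len(records)  # BMI-derived prevalence
coding_rate = len(coded) / len(overweight_or_obese)       # fraction actually coded
```

On these invented rows BMI classifies 80% of patients as overweight or obese but only 25% of them carry a code, the same kind of gap (74.6% vs 15.1%) the study reports at scale.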

    Computer-assisted versus oral-and-written dietary history taking for diabetes mellitus

    Background: Diabetes is a chronic illness characterised by insulin resistance or deficiency, resulting in elevated glycosylated haemoglobin A1c (HbA1c) levels. Diet and adherence to dietary advice are associated with lower HbA1c levels and control of disease. Dietary history may be an effective clinical tool for diabetes management and has traditionally been taken by oral-and-written methods, although it can also be collected using computer-assisted history taking systems (CAHTS). Although CAHTS were first described in the 1960s, there remains uncertainty about the impact of these methods on dietary history collection, clinical care and patient outcomes such as quality of life. Objectives: To assess the effects of computer-assisted versus oral-and-written dietary history taking on patient outcomes for diabetes mellitus. Search methods: We searched The Cochrane Library (issue 6, 2011), MEDLINE (January 1985 to June 2011), EMBASE (January 1980 to June 2011) and CINAHL (January 1981 to June 2011). Reference lists of retrieved articles were also searched, and no limits were imposed on language or publication status. Selection criteria: Randomised controlled trials of computer-assisted versus oral-and-written history taking in patients with diabetes mellitus. Data collection and analysis: Two authors independently scanned the title and abstract of retrieved articles. Potentially relevant articles were investigated as full text. Studies that met the inclusion criteria were abstracted for relevant population and intervention characteristics, with any disagreements resolved by discussion or by a third party. Risk of bias was similarly assessed independently. Main results: Of the 2991 studies retrieved, only one study with 38 study participants compared the two methods of history taking over a total of eight weeks. The authors found that as patients became increasingly familiar with using CAHTS, the correlation between patients' food records and computer assessments improved. Reported fat intake decreased in the control group and increased when queried by the computer. The effect of the intervention on the management of diabetes mellitus and blood glucose levels was not reported. Risk of bias was considered moderate for this study. Authors' conclusions: Based on one small study judged to be of moderate risk of bias, we tentatively conclude that CAHTS may be well received by study participants and potentially offer time savings in practice. However, more robust studies with larger sample sizes are needed to confirm these findings. We cannot draw any conclusions about other clinical outcomes at this stage.
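The agreement check described in the review, comparing intake estimates from written food records with those from the computer interview, can be illustrated with a Pearson correlation on paired values; the numbers below are invented, not the trial's data:

```python
import numpy as np

# Paired daily fat-intake estimates (g/day) for six hypothetical patients:
food_record = np.array([62.0, 75.0, 58.0, 80.0, 66.0, 71.0])  # written records
computer = np.array([65.0, 73.0, 61.0, 77.0, 70.0, 69.0])     # CAHTS interview

# Pearson correlation between the two measurement methods.
r = np.corrcoef(food_record, computer)[0, 1]
```

A rising `r` over successive weeks would correspond to the review's observation that agreement improved as patients grew familiar with the system.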

    Risk models and scores for type 2 diabetes: Systematic review

    This article is published under a Creative Commons Attribution Non Commercial (CC BY-NC 3.0) licence that allows reuse subject only to the use being non-commercial and to the article being fully attributed (http://creativecommons.org/licenses/by-nc/3.0). Objective - To evaluate current risk models and scores for type 2 diabetes and inform selection and implementation of these in practice. Design - Systematic review using standard (quantitative) and realist (mainly qualitative) methodology. Inclusion criteria - Papers in any language describing the development or external validation, or both, of models and scores to predict the risk of an adult developing type 2 diabetes. Data sources - Medline, PreMedline, Embase, and Cochrane databases were searched. Included studies were citation tracked in Google Scholar to identify follow-on studies of usability or impact. Data extraction - Data were extracted on statistical properties of models, details of internal or external validation, and use of risk scores beyond the studies that developed them. Quantitative data were tabulated to compare model components and statistical properties. Qualitative data were analysed thematically to identify mechanisms by which use of the risk model or score might improve patient outcomes. Results - 8864 titles were scanned, 115 full text papers considered, and 43 papers included in the final sample. These described the prospective development or validation, or both, of 145 risk prediction models and scores, 94 of which were studied in detail here. They had been tested on 6.88 million participants followed for up to 28 years. Heterogeneity of primary studies precluded meta-analysis. Some but not all risk models or scores had robust statistical properties (for example, good discrimination and calibration) and had been externally validated on a different population. Genetic markers added nothing to models over clinical and sociodemographic factors. Most authors described their score as "simple" or "easily implemented," although few were specific about the intended users and under what circumstances. Ten mechanisms were identified by which measuring diabetes risk might improve outcomes. Follow-on studies that applied a risk score as part of an intervention aimed at reducing actual risk in people were sparse. Conclusion - Much work has been done to develop diabetes risk models and scores, but most are rarely used because they require tests not routinely available or they were developed without a specific user or clear use in mind. Encouragingly, recent research has begun to tackle usability and the impact of diabetes risk scores. Two promising areas for further research are interventions that prompt lay people to check their own diabetes risk and use of risk scores on population datasets to identify high risk "hotspots" for targeted public health interventions. Tower Hamlets, Newham, and City and Hackney primary care trusts and National Institute of Health Research
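The two statistical properties the review checks in each model, discrimination and calibration, can be sketched on a toy risk score. Here discrimination is the AUC computed by pairwise comparison (the probability a random case outranks a random non-case), and calibration compares mean predicted risk with the observed event rate in risk bands; all scores and outcomes are invented:

```python
import numpy as np

risk = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.60, 0.70, 0.85])  # predicted risks
event = np.array([0,    0,    0,    1,    0,    1,    1,    1])    # observed outcomes

# Discrimination: AUC as the fraction of case/non-case pairs ranked correctly,
# with ties counted as half.
cases, controls = risk[event == 1], risk[event == 0]
pairs = cases[:, None] - controls[None, :]
auc = np.mean(pairs > 0) + 0.5 * np.mean(pairs == 0)

# Calibration: observed event rate vs mean predicted risk in two crude bands.
low = risk < 0.5
observed = [event[low].mean(), event[~low].mean()]
predicted = [risk[low].mean(), risk[~low].mean()]
```

A well-calibrated score has `observed` close to `predicted` in every band; a discriminating score has AUC well above 0.5. A score can have either property without the other, which is why the review checks both.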