
    Prediction of dyslipidemia using gene mutations, family history of diseases and anthropometric indicators in children and adolescents: The CASPIAN-III study

    Dyslipidemia, a disorder of lipoprotein metabolism resulting in an abnormal lipid profile, is an important modifiable risk factor for coronary heart disease and is associated with more than four million deaths worldwide per year. Half of the children with dyslipidemia remain hyperlipidemic in adulthood, so its prediction and screening are critical. We designed a new dyslipidemia diagnosis system. A sample of 725 subjects (age 14.66 ± 2.61 years; 48% male; dyslipidemia prevalence of 42%) was selected by multistage random cluster sampling in Iran. Single nucleotide polymorphisms (rs1801177, rs708272, rs320, rs328, rs2066718, rs2230808, rs5880, rs5128, rs2893157, rs662799, and Apolipoprotein-E2/E3/E4), anthropometric and lifestyle attributes, and family history of diseases were analyzed. A framework for classifying mixed-type data in imbalanced datasets was proposed. It included internal feature mapping and selection, re-sampling, an optimized group method of data handling (GMDH) using convex and stochastic optimizations, a new cost function for imbalanced data, and internal validation. Its performance was assessed using hold-out and 4-fold cross-validation. Four other classifiers, namely support vector machines, decision tree, multilayer perceptron neural network, and multiple logistic regression, were also used. In cross-validation, the average sensitivity, specificity, precision, and accuracy of the proposed system were 93%, 94%, 94%, and 92%, respectively. It significantly outperformed the other classifiers and showed excellent agreement and high correlation with the gold standard. A non-invasive, economical version of the algorithm suitable for low- and middle-income countries was also implemented. It is thus a promising new tool for the prediction of dyslipidemia.
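    The classifier above is the authors' own (an optimized GMDH system with a custom cost function for imbalanced data), but the reported evaluation protocol is standard and can be sketched generically. The snippet below is a minimal illustration, assuming scikit-learn and synthetic data in place of the CASPIAN-III cohort; the placeholder logistic regression stands in for the published system.

```python
# Minimal sketch of the reported evaluation: stratified 4-fold cross-validation
# reporting sensitivity, specificity, precision, and accuracy on an imbalanced
# binary problem. The classifier and synthetic data are placeholders, NOT the
# authors' optimized GMDH framework.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the cohort: 725 subjects, ~42% positive prevalence.
X, y = make_classification(n_samples=725, weights=[0.58, 0.42], random_state=0)

sens, spec, prec, acc = [], [], [], []
for train, test in StratifiedKFold(n_splits=4, shuffle=True, random_state=0).split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    tn, fp, fn, tp = confusion_matrix(y[test], model.predict(X[test])).ravel()
    sens.append(tp / (tp + fn))                   # sensitivity (recall on positives)
    spec.append(tn / (tn + fp))                   # specificity (recall on negatives)
    prec.append(tp / (tp + fp))                   # precision
    acc.append((tp + tn) / (tp + tn + fp + fn))   # accuracy

print(f"sensitivity={np.mean(sens):.2f} specificity={np.mean(spec):.2f} "
      f"precision={np.mean(prec):.2f} accuracy={np.mean(acc):.2f}")
```

    Stratified folds are used here so that each fold preserves the 42% prevalence, which matters when accuracy alone would be misleading on imbalanced data.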

    7th Baltic Atherosclerosis Society Congress

    Eesti Arst 2018;97(Lisa1):1–48

    Outcome prediction in intensive care with special reference to cardiac surgery

    The development, use, and understanding of severity of illness scoring systems have advanced rapidly in the last decade; their weaknesses and limitations have also become apparent. This work follows some of this development and explores some of these aspects. It was undertaken in three stages and in two countries.

    The first study investigated three severity of illness scoring systems in a general Intensive Care Unit (ICU) in Cape Town, namely the Acute Physiology and Chronic Health Evaluation (APACHE II) score, the Therapeutic Intervention Scoring System (TISS), and a locally developed organ failure score. All of these showed a good relationship with mortality, with the organ failure score being the best predictor of outcome. The TISS score was felt to be more representative of the intensiveness of medical and nursing management than of severity of illness. The APACHE II score was already becoming widely used worldwide, and although it performed less well in some diagnostic categories (for example, Adult Respiratory Distress Syndrome) than had been hoped, it clearly warranted further investigation. Some of the diagnosis-specific problems were eliminated in the next study, which concentrated on the application of the APACHE II score in a cardiothoracic surgical ICU in London. Although group predictive ability was statistically impressive, the predictive ability of APACHE II in the individual patient was limited, as only very high APACHE II scores confidently predicted death, and then only in a small number of patients. However, there were no deaths associated with an APACHE II score of less than 5, and mortality was less than 1% when the APACHE II score was less than 10.

    Finally, having recognised the inadequacies in mortality prediction of the APACHE II score in this scenario, a study was undertaken to evaluate a novel concept: a combination of preoperative, intraoperative, and postoperative (including APACHE II and III) variables in cardiac surgery patients admitted to the same ICU. The aim was to develop a more precise method of predicting length of stay, incidence of complications, and ICU and hospital outcome for these patients. In total, 1008 patients were entered into the study. There was a statistically significant relationship between increasing Parsonnet (a cardiac surgery risk prediction score), APACHE II, and APACHE III scores and mortality. A model for the probability of hospital death was developed by forward stepwise logistic regression. This model included bypass time, need for inotropes, mean arterial pressure, urea, and Glasgow Coma Scale. Predictive performance was evaluated by calculating the area under the receiver operating characteristic (ROC) curve. The derived model had an area under the ROC curve of 0.87, while the Parsonnet score had an area of 0.82 and the APACHE II risk of dying an area of 0.84. It was concluded that a combination of intraoperative and postoperative variables can improve predictive ability.
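    As a rough illustration of the model-building step described above, the sketch below runs a greedy forward stepwise selection for a logistic regression scored by ROC AUC. The candidate variable names are those the abstract reports (bypass time, inotropes, mean arterial pressure, urea, Glasgow Coma Scale), but the data are synthetic and the in-sample AUC scoring is a simplification of what a proper analysis of the 1008-patient cohort would do.

```python
# Hedged sketch of forward stepwise logistic regression scored by ROC AUC.
# Data are synthetic placeholders; variable names come from the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1008
cols = ["bypass_time", "inotropes", "mean_arterial_pressure", "urea", "gcs"]
X = rng.normal(size=(n, len(cols)))                        # placeholder predictors
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))).astype(int)

selected, remaining, best_auc = [], list(range(len(cols))), 0.5
while remaining:
    # Try adding each remaining variable and keep the one that helps AUC most.
    scores = {}
    for j in remaining:
        model = LogisticRegression(max_iter=1000).fit(X[:, selected + [j]], y)
        scores[j] = roc_auc_score(y, model.predict_proba(X[:, selected + [j]])[:, 1])
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_auc + 0.005:                 # stop on marginal gain
        break
    selected.append(j_best)
    remaining.remove(j_best)
    best_auc = scores[j_best]

print("selected:", [cols[j] for j in selected], f"AUC={best_auc:.2f}")
```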

    Data mart based research in heart surgery

    Arnrich B. Data mart based research in heart surgery. Bielefeld (Germany): Bielefeld University; 2006.
    The proposed data mart based information system has proven to be useful and effective in the particular application domain of clinical research in heart surgery. In contrast to common data warehouse systems, which focus primarily on administrative, managerial, and executive decision making, the primary objective of the designed and implemented data mart was to provide an ongoing, consolidated, and stable research basis. Besides detail-oriented patient data, aggregated data are also incorporated in order to serve multiple purposes. The chosen concept integrates current and historical data from all relevant data sources without imposing any considerable operational or contractual liability risk on the existing hospital information systems (HIS). By this means, potential resistance from the persons in charge can be minimized and the project-specific goals effectively met. The challenges of isolated data sources, securing high data quality, partially redundant and inconsistent data, valuable legacy data in special file formats, and privacy protection regulations are met with the proposed data mart architecture.

    Its applicability was demonstrated in several fields, including (i) enabling comprehensive medical research, (ii) assessing preoperative risks of adverse surgical outcomes, (iii) gaining insights into historical performance changes, (iv) monitoring surgical results, (v) improving risk estimation, and (vi) generating new knowledge from observational studies. The data mart approach turns redundant data from the electronically available hospital data sources into valuable information. On the one hand, redundancies are used to detect inconsistencies within and across HIS; on the other hand, they are used to derive attributes from several data sources that originally did not carry the desired semantic meaning. Appropriate verification tools help to inspect the extraction and transformation processes in order to ensure high data quality. Based on the verification data stored during data mart assembly, various aspects can be inspected on the basis of an individual case, a group, or a specific rule. Invalid values or inconsistencies must be corrected in the primary source databases by the health professionals. Because all modifications are automatically transferred to the data mart in a subsequent cycle, a consolidated and stable research database is maintained throughout the system in a persistent manner.

    In the past, performing comprehensive observational studies at the Heart Institute Lahr had been extremely time consuming and therefore limited. Several attempts had already been made to extract and combine data from the electronically available data sources. Depending on the scientific task at hand, the processes to extract and connect the data were often rebuilt and modified; consequently, the semantics and definitions of the research data changed from one study to the next. It was also very difficult to maintain an overview of all data variants and derived research data sets. With the implementation of the presented data mart system, the most time- and effort-consuming steps in conducting observational studies could be replaced, and the research basis remains stable and leads to reliable results.
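    One concrete verification idea from this abstract, exploiting fields stored redundantly in several source systems to flag inconsistencies for correction in the primary databases, can be sketched as follows. The table and column names are invented for illustration, and the snippet assumes pandas; it is not the system's actual tooling.

```python
# Hedged sketch: cross-source consistency check on a redundantly stored field.
# Conflicting rows are reported for correction in the primary source systems,
# from which the data mart is then refreshed in the next load cycle.
import pandas as pd

# Hypothetical extracts of the same field from two source systems.
his = pd.DataFrame({"patient_id": [1, 2, 3],
                    "birth_date": ["1950-02-01", "1961-07-12", "1948-11-30"]})
lab = pd.DataFrame({"patient_id": [1, 2, 3],
                    "birth_date": ["1950-02-01", "1961-07-21", "1948-11-30"]})

merged = his.merge(lab, on="patient_id", suffixes=("_his", "_lab"))
conflicts = merged[merged["birth_date_his"] != merged["birth_date_lab"]]
print(conflicts)   # rows to be reviewed and corrected in the source systems
```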

    Non-communicable Diseases, Big Data and Artificial Intelligence

    This reprint includes 15 articles in the field of non-communicable diseases, big data, and artificial intelligence, overviewing the most recent advances in the field of AI and their application potential in 3P medicine.

    Big data analytics for preventive medicine

    © 2019, Springer-Verlag London Ltd., part of Springer Nature. Medical data is among the most rewarding and yet most complicated data to analyze. How can healthcare providers use modern data analytics tools and technologies to analyze and create value from complex data? Data analytics promises to efficiently discover valuable patterns by analyzing large amounts of unstructured, heterogeneous, non-standard, and incomplete healthcare data. It not only forecasts but also supports decision making, and it is increasingly seen as a breakthrough whose goal is to improve the quality of patient care and reduce healthcare costs. The aim of this study is to provide a comprehensive and structured overview of the extensive research on data analytics methods for disease prevention. This review first introduces disease prevention and its challenges, followed by traditional prevention methodologies. We summarize state-of-the-art data analytics algorithms used for disease classification, clustering (detecting unusually high incidence of a particular disease), anomaly detection (detection of disease), and association, together with their respective advantages, drawbacks, and guidelines for selecting a specific model, followed by a discussion of recent developments and successful applications of disease prevention methods. The article concludes with open research challenges and recommendations.
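    To make the taxonomy concrete, the sketch below applies an off-the-shelf model from three of the task families the review covers (classification, clustering, anomaly detection) to one toy dataset; association mining is omitted because it needs transactional data. The models and data are illustrative placeholders, assuming scikit-learn, and are not drawn from the review itself.

```python
# Hedged sketch of three task families on a toy dataset: classification
# (diagnosis), clustering (incidence grouping), anomaly detection (outlier
# cases). Models and data are illustrative only.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

clf = DecisionTreeClassifier().fit(X, y)                     # classification
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)    # clustering
outliers = IsolationForest(random_state=0).fit_predict(X)    # anomaly detection

print("train accuracy:", clf.score(X, y))
print("cluster sizes:", np.bincount(clusters))
print("flagged anomalies:", int((outliers == -1).sum()))
```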

    Risk stratification and outcome assessment in cardiac surgery and transcatheter interventions

