
    Development of an international standard set of outcome measures for patients with atrial fibrillation: a report of the International Consortium for Health Outcomes Measurement (ICHOM) atrial fibrillation working group.

    AIMS: As health systems around the world increasingly look to measure and improve the value of the care they provide, being able to measure the outcomes that matter most to patients is vital. To support the shift towards value-based health care in atrial fibrillation (AF), the International Consortium for Health Outcomes Measurement (ICHOM) assembled an international Working Group (WG) of 30 volunteers, including health professionals and patient representatives, to develop a standardized minimum set of outcomes for benchmarking care delivery in clinical settings. METHODS AND RESULTS: Using an online modified Delphi process, outcomes important to patients and health professionals were selected and categorized into (i) long-term consequences of disease, (ii) complications of treatment, and (iii) patient-reported outcomes. The WG also identified demographic and clinical variables for use as case-mix risk adjusters: baseline demographics, comorbidities, cognitive function, date of diagnosis, disease duration, medications prescribed, AF procedures, smoking, body mass index (BMI), alcohol intake, and physical activity. Where appropriate, and for ease of implementation, outcomes and case-mix variables were standardized using ICD codes. The standard set underwent an open review process in which over 80% of patients surveyed agreed with the outcomes it captures. CONCLUSION: Implementation of these consensus recommendations could help institutions to monitor, compare, and improve the quality and delivery of chronic AF care. Their consistent definition and collection, using ICD codes where applicable, could also broaden the implementation of more patient-centric clinical outcomes research in AF.

    Evolving Ensemble Fuzzy Classifier

    Ensemble learning offers a promising avenue for learning from data streams in complex environments because it addresses the bias-variance dilemma better than a single-model counterpart and features a reconfigurable structure well suited to the given context. While various extensions of ensemble learning for mining non-stationary data streams can be found in the literature, most are built on a static base classifier and revisit preceding samples in a sliding window for a retraining step. This makes their complexity computationally prohibitive and leaves them too inflexible to cope with rapidly changing environments. Their complexity is often demanding because they involve a large collection of offline classifiers, owing to the absence of a structural complexity reduction mechanism and the lack of an online feature selection mechanism. A novel evolving ensemble classifier, the Parsimonious Ensemble (pENsemble), is proposed in this paper. pENsemble differs from existing architectures in that it is built upon an evolving classifier for data streams, the Parsimonious Classifier (pClass). pENsemble is equipped with an ensemble pruning mechanism, which estimates a localized generalization error of each base classifier, and integrates a dynamic online feature selection scenario that allows input features to be selected and deselected on the fly. pENsemble adopts a dynamic ensemble structure to output the final classification decision and features a novel drift detection scenario to grow the ensemble structure. The efficacy of pENsemble has been demonstrated through rigorous numerical studies with dynamic and evolving data streams, where it delivers the most encouraging performance in attaining a trade-off between accuracy and complexity. Comment: this paper has been published in IEEE Transactions on Fuzzy Systems.
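    The pruning idea described in this abstract, dropping ensemble members whose running error grows too large, can be sketched in a few lines. This is a generic hedged illustration with made-up names and thresholds, not pENsemble's actual localized-generalization-error mechanism:

```python
class OnlineEnsemble:
    """Minimal online ensemble with error-based pruning (illustrative only).

    Members are plain callables mapping an input to a class label; the
    warm-up length and pruning threshold are arbitrary assumptions.
    """

    def __init__(self, prune_threshold=0.4, warmup=10):
        self.members = []  # each entry tracks a model and its online error
        self.prune_threshold = prune_threshold
        self.warmup = warmup

    def add_member(self, model):
        self.members.append({"model": model, "errors": 0, "seen": 0})

    def predict(self, x):
        # Unweighted majority vote over the surviving members.
        votes = [m["model"](x) for m in self.members]
        return max(set(votes), key=votes.count)

    def update(self, x, y):
        # Track each member's prequential error, then prune weak members
        # once they have seen enough samples to judge them.
        for m in self.members:
            m["seen"] += 1
            if m["model"](x) != y:
                m["errors"] += 1
        self.members = [
            m for m in self.members
            if m["seen"] < self.warmup
            or m["errors"] / m["seen"] <= self.prune_threshold
        ]
```

    A member that keeps misclassifying the stream is removed after the warm-up period, which mirrors, at a toy scale, how pruning keeps the ensemble structure parsimonious.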

    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbating those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
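    Of the five challenges this review lists, class imbalance admits a particularly compact illustration. The sketch below shows naive random oversampling, one simple remedy in this family (the function name, fixed seed, and toy data are assumptions for illustration; synthetic approaches such as SMOTE are often preferred in practice):

```python
import random


def oversample(X, y, seed=0):
    """Randomly duplicate minority-class rows until all classes match
    the majority-class count. A deliberately naive balancing sketch."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        # Pad each class with resampled copies of its own rows.
        rows = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        Xb.extend(rows)
        yb.extend([label] * target)
    return Xb, yb
```

    After balancing, every class contributes equally to the loss of a downstream classifier, at the cost of repeated minority samples.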

    A Voting Technique Of Multilayer Perceptron Ensemble For Classification Application

    The multilayer perceptron (MLP) is a simple artificial neural network model that has nevertheless been successfully applied in various applications. MLP performance is unstable: small changes in training parameters can produce different models, which inhibits the attainment of high accuracy in classification applications. In this research, an integrated system consisting of a Multi-Layer Perceptron Ensemble (MLPE) and a new voting algorithm has been developed to increase classification accuracy and reduce the number of reject-class cases. The MLPE is built from singular MLPs that are diverse in terms of training algorithm and initial weights. Three training algorithms are used: Levenberg-Marquardt (LM), Resilient Backpropagation (RP), and Bayesian Regularization (BR). To choose the final output of the MLPE, a new voting algorithm named Trust-Sum Voting (TSV) is proposed. The effectiveness of the MLPE with TSV (MLPE-TSV) has been tested on four classification case studies: Electrical Capacitance Tomography (ECT), Landsat Satellite Image (LSI), German Credit (GC), and Pima Indian Diabetes (PID). The performance of MLPE-TSV has been compared with that of the MLPE using existing voting algorithms, namely Majority Voting (MLPE-MV) and Trust Voting (MLPE-TV). The results show that the proposed MLPE-TSV increases classification accuracy compared with singular MLPs, MLPE-MV, and MLPE-TV, and also reduces the number of reject-class cases.
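    The abstract does not reproduce the TSV algorithm itself, so the following is only a generic sum-rule vote with a reject class, sketched under the assumption that each network outputs per-class probabilities; the function name and threshold are invented for illustration:

```python
def sum_vote(prob_outputs, reject_threshold=0.5):
    """Combine per-network class probabilities by averaging them, and
    emit a 'reject' label when the winning class's average support is
    weak. A generic sum-rule sketch, not the paper's Trust-Sum Voting.

    prob_outputs: list of dicts, one per network, mapping class -> prob.
    """
    n = len(prob_outputs)
    classes = prob_outputs[0].keys()
    # Average support for each class across all ensemble members.
    totals = {c: sum(p[c] for p in prob_outputs) / n for c in classes}
    winner = max(totals, key=totals.get)
    return winner if totals[winner] >= reject_threshold else "reject"
```

    The reject branch mirrors the reject-class idea above: when no class gathers enough combined support, the sample is flagged rather than force-classified.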

    Intelligent data analysis to interpret major risk factors for diabetic patients with and without ischemic stroke in a small population

    This study proposes an intelligent data analysis approach to investigate and interpret the distinctive factors of diabetes mellitus patients with and without ischemic (non-embolic) stroke in a small population. The database consists of 16 features collected from 44 diabetic patients, including age, gender, duration of diabetes, cholesterol, high-density lipoprotein, triglyceride levels, neuropathy, nephropathy, retinopathy, peripheral vascular disease, myocardial infarction rate, glucose level, medication, and blood pressure. Metric and non-metric features are distinguished. First, the mean and covariance of the data are estimated and the correlated components are observed. Second, major components are extracted by principal component analysis. Finally, as representative local and global classification approaches, a k-nearest neighbor classifier and a multilayer perceptron are employed for classification, both with all components and with the major components only. Macrovascular changes emerged as the principal distinctive factors of ischemic stroke in diabetes mellitus, whereas microvascular changes were generally ineffective discriminators. Recommendations were made according to the rules of evidence-based medicine. Briefly, this case study, based on a small population, supports theories of stroke in diabetes mellitus patients and concludes that the use of intelligent data analysis improves personalized preventive intervention.
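    As a minimal illustration of the local classification approach mentioned above, a k-nearest-neighbor classifier fits in a few lines of plain Python. The feature vectors and labels below are invented toy data, not the study's patient records:

```python
import math
from collections import Counter


def knn_predict(train, x, k=3):
    """Classify x by majority label among its k nearest training points.

    train: list of (feature_vector, label) pairs.
    Uses Euclidean distance; k is a free parameter chosen by the user.
    """
    # Sort all training points by distance to the query point.
    dists = sorted((math.dist(xi, x), yi) for xi, yi in train)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]
```

    Because the decision depends only on the nearest neighbors, k-NN is a local method, in contrast to the multilayer perceptron, whose decision surface is fit globally over all training data.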

    An Ensemble Method to Automatically Grade Diabetic Retinopathy with Optical Coherence Tomography Angiography Images

    Diabetic retinopathy (DR) is a complication of diabetes and one of the major causes of vision impairment in the global population. As the early-stage manifestation of DR is usually very mild and hard to detect, accurate diagnosis via eye screening is clinically important to prevent vision loss at later stages. In this work, we propose an ensemble method to automatically grade DR using ultra-wide optical coherence tomography angiography (UW-OCTA) images available from the Diabetic Retinopathy Analysis Challenge (DRAC) 2022. First, we adopt state-of-the-art classification networks, i.e., ResNet, DenseNet, EfficientNet, and VGG, and train them to grade UW-OCTA images with different splits of the available dataset. Ultimately, we obtain 25 models, of which the top 16 are selected and ensembled to generate the final predictions. During training, we also investigate a multi-task learning strategy and add an auxiliary classification task, Image Quality Assessment, to improve model performance. Our final ensemble achieved a quadratic weighted kappa (QWK) of 0.9346 and an Area Under the Curve (AUC) of 0.9766 on the internal testing dataset, and a QWK of 0.839 and an AUC of 0.8978 on the DRAC challenge testing dataset. Comment: 13 pages, 6 figures, 5 tables. To appear in Diabetic Retinopathy Analysis Challenge (DRAC), Bin Sheng et al., MICCAI 2022 Challenge, Lecture Notes in Computer Science, Springer.
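    The quadratic weighted kappa reported above is a standard metric for ordinal grading tasks; the sketch below implements its usual definition (observed versus chance-expected disagreement, with disagreements weighted by the squared distance between grades):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """QWK = 1 - (weighted observed disagreement) / (weighted expected
    disagreement), with weights w[i][j] = (i-j)^2 / (n_classes-1)^2."""
    # Observed confusion matrix of integer grades.
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    n = len(y_true)
    hist_t = [sum(row) for row in O]                  # true-grade histogram
    hist_p = [sum(O[i][j] for i in range(n_classes))  # predicted histogram
              for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = ((i - j) ** 2) / ((n_classes - 1) ** 2)
            num += w * O[i][j]                        # observed
            den += w * hist_t[i] * hist_p[j] / n      # expected by chance
    return 1.0 - num / den
```

    Perfect agreement yields 1.0, chance-level agreement yields 0.0, and systematic disagreement goes negative, which is why QWK is a stricter score than plain accuracy for DR grading.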