50 research outputs found
A simple plug-in bagging ensemble based on threshold-moving for classifying binary and multiclass imbalanced data
Class imbalance presents a major hurdle in the application of classification methods. A commonly taken approach is to learn ensembles of classifiers using rebalanced data; examples include bootstrap averaging (bagging) combined with either undersampling of the majority class or oversampling of the minority class. However, rebalancing methods entail asymmetric changes to the examples of different classes, which can introduce biases of their own. Furthermore, these methods often require specifying the performance measure of interest a priori, i.e., before learning. An alternative is the threshold-moving technique, which applies a threshold to the continuous output of a model, making it possible to adapt to a performance measure a posteriori, i.e., as a plug-in method. Surprisingly, little attention has been paid to this combination of a bagging ensemble and threshold moving. In this paper, we study this combination and demonstrate its competitiveness. In contrast to the other resampling methods, we preserve the natural class distribution of the data, resulting in well-calibrated posterior probabilities. Additionally, we extend the proposed method to handle multiclass data. We validated our method on binary and multiclass benchmark data sets using both decision trees and neural networks as base classifiers, and we perform analyses that provide insights into the proposed method.
Keywords: Imbalanced data; Binary classification; Multiclass classification; Bagging ensembles; Resampling; Posterior calibration. Funding: Burroughs Wellcome Fund (Grant 103811AI)
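To make the plug-in idea concrete, here is a minimal scikit-learn sketch (our illustration, not the authors' code): a bagging ensemble is trained on the natural class distribution, and a threshold on its predicted probabilities is then tuned a posteriori for the performance measure of interest, here F1.

```python
# A minimal sketch of the plug-in approach described above, using
# scikit-learn (the paper's own implementation may differ): bag decision
# trees on the natural class distribution, then pick a decision threshold
# a posteriori for the measure of interest (here F1).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

# No resampling: the bagging ensemble sees the natural class distribution.
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                             random_state=0).fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_val)[:, 1]

# Plug-in step: sweep candidate thresholds and keep the best one for F1.
thresholds = np.linspace(0.01, 0.99, 99)
scores = [f1_score(y_val, proba >= t) for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]
print(f"best threshold {best_t:.2f}, F1 {max(scores):.3f}")
```

Because the ensemble's probabilities are left untouched, the same fitted model can be re-thresholded for a different measure (e.g., balanced accuracy) without retraining.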
Verifiability as a Complement to AI Explainability: A Conceptual Proposal
Recent advances in the field of artificial intelligence (AI) are providing automated and in many cases improved decision-making. However, even very reliable AI systems can go terribly wrong without human users understanding the reason for it. Against this background, there are now widespread calls for models of "explainable AI". In this paper we point out some inherent problems of this concept and argue that explainability alone is probably not the solution. We therefore propose a complementary approach, which we call "verifiability". In essence, the idea is to design AI systems so that they make available multiple verifiable predictions (for which a ground truth exists) in addition to the one desired prediction, which cannot be verified because its ground truth is missing. Such verifiable AI could help to further minimize serious mistakes despite a lack of explainability, increase the trustworthiness of AI systems, and in turn improve the societal acceptance of AI.
Julearn: an easy-to-use library for leakage-free evaluation and inspection of ML models
The fast-paced development of machine learning (ML) methods, coupled with their increasing adoption in research, poses challenges for researchers without extensive training in ML. In neuroscience, for example, ML can help understand brain-behavior relationships, diagnose diseases, and develop biomarkers using various data sources such as magnetic resonance imaging and electroencephalography. The primary objective of ML is to build models that can make accurate predictions on unseen data. Researchers aim to demonstrate the existence of such generalizable models by evaluating performance with techniques such as cross-validation (CV), which uses systematic subsampling to estimate generalization performance. Choosing a CV scheme and evaluating an ML pipeline can be challenging and, if done improperly, can lead to overestimated results and incorrect interpretations.
We created julearn, an open-source Python library that allows researchers to design and evaluate complex ML pipelines without encountering common pitfalls. In this manuscript, we present the rationale behind julearn's design and its core features, and showcase three previously published research projects that can be easily implemented using this novel library. Julearn aims to simplify entry into the ML world by providing an easy-to-use environment with built-in guards against some of the most common ML pitfalls. With its design, unique features, and simple interface, it is a useful Python-based library for research projects.
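As a concrete illustration of the kind of pitfall such a library guards against, the following plain scikit-learn sketch (a generic example, not julearn's own API) contrasts a leaky evaluation, where preprocessing is fit on the whole data set before CV, with a leakage-free pipeline that refits the preprocessing inside each fold:

```python
# A generic scikit-learn illustration (not julearn's API) of the leakage
# pitfall described above: preprocessing must be fit inside each CV fold,
# not on the full data set beforehand.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=50, random_state=0)

# Leaky: the scaler sees the test folds before cross-validation starts.
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(SVC(), X_leaky, y, cv=5)

# Leakage-free: the scaler is refit on the training portion of every fold.
pipe = make_pipeline(StandardScaler(), SVC())
clean_scores = cross_val_score(pipe, X, y, cv=5)

print(leaky_scores.mean(), clean_scores.mean())
```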
The PhyloPythiaS Web Server for Taxonomic Assignment of Metagenome Sequences
Metagenome sequencing is becoming common, and there is an increasing need for easily accessible tools for data analysis. An essential step is the taxonomic classification of sequence fragments. We describe a web server for the taxonomic assignment of metagenome sequences with PhyloPythiaS. PhyloPythiaS is a fast and accurate sequence composition-based classifier that utilizes the hierarchical relationships between clades. Taxonomic assignments with the web server can be made with a generic model, or with sample-specific models that users can specify and create. Several interactive visualization modes and multiple download formats allow quick and convenient analysis and downstream processing of taxonomic assignments. Here, we demonstrate usage of our web server by the taxonomic assignment of metagenome samples from an acidophilic biofilm community of an acid mine and from a microbial community of cow rumen.
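Composition-based classifiers of this kind typically operate on oligonucleotide (k-mer) frequencies of a fragment. The following hypothetical sketch (not PhyloPythiaS's actual feature extraction) shows what such a feature vector looks like:

```python
# A hypothetical sketch of sequence-composition features of the kind used
# by composition-based classifiers (not PhyloPythiaS's actual pipeline):
# normalized k-mer frequencies of a DNA fragment.
from itertools import product

def kmer_frequencies(seq, k=4):
    """Return a dict mapping every possible k-mer to its frequency in seq."""
    counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
    total = 0
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:  # skips windows containing N or other ambiguity codes
            counts[kmer] += 1
            total += 1
    return {kmer: c / total for kmer, c in counts.items()} if total else counts

features = kmer_frequencies("ACGTACGTGGCCTTAAGCGT")
print(sum(features.values()))  # frequencies sum to 1.0
```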
Neurobiological Divergence of the Positive and Negative Schizophrenia Subtypes Identified on a New Factor Structure of Psychopathology Using Non-negative Factorization: An International Machine Learning Study
Objective: Disentangling psychopathological heterogeneity in schizophrenia is challenging and previous results remain inconclusive. We employed advanced machine learning to identify a stable and generalizable factorization of the Positive and Negative Syndrome Scale (PANSS), and used it to identify psychopathological subtypes as well as their neurobiological differentiation.
Methods: PANSS data from the Pharmacotherapy Monitoring and Outcome Survey cohort (1545 patients, 586 followed up after 1.35±0.70 years) were used to learn the factor structure by an orthonormal projective non-negative factorization. An international sample, pooled from nine medical centers across Europe, the USA, and Asia (490 patients), was used for validation. Patients were clustered into psychopathological subtypes based on the identified factor structure, and the neurobiological divergence between the subtypes was assessed by classification analysis on functional MRI connectivity patterns.
Results: A four-factor structure representing negative, positive, affective, and cognitive symptoms was identified as the most stable and generalizable representation of psychopathology. It showed higher internal consistency than the original PANSS subscales and previously proposed factor models. Based on this representation, the positive-negative dichotomy was confirmed as the only robust psychopathological subtyping, and these subtypes were longitudinally stable in about 80% of the repeatedly assessed patients. Finally, the individual subtype could be predicted with good accuracy from functional connectivity profiles of the ventromedial frontal cortex, temporoparietal junction, and precuneus.
Conclusions: Machine learning applied to multi-site data with cross-validation yielded a factorization that generalizes across populations and medical systems. Together with the subtyping and the demonstrated ability to predict subtype membership from neuroimaging data, this work further disentangles the heterogeneity of schizophrenia.
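To illustrate the shape of such a factorization, here is a minimal stand-in using scikit-learn's standard NMF in place of the paper's orthonormal projective variant, applied to a made-up patients-by-items score matrix:

```python
# A minimal stand-in for the factor analysis described above: scikit-learn's
# standard NMF instead of the paper's orthonormal projective variant,
# applied to a hypothetical patients-by-items PANSS score matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
panss = rng.integers(1, 8, size=(200, 30)).astype(float)  # 30 items, scores 1-7

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
patient_loadings = model.fit_transform(panss)   # (200, 4) factor expression per patient
item_weights = model.components_                # (4, 30) item weights per factor

# Each factor is a non-negative combination of items; patients could then be
# clustered on their 4-dimensional loading profiles to define subtypes.
print(patient_loadings.shape, item_weights.shape)
```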
Global burden of 288 causes of death and life expectancy decomposition in 204 countries and territories and 811 subnational locations, 1990–2021: a systematic analysis for the Global Burden of Disease Study 2021
BACKGROUND Regular, detailed reporting on population health by underlying cause of death is fundamental for public health decision making. Cause-specific estimates of mortality and the subsequent effects on life expectancy worldwide are valuable metrics to gauge progress in reducing mortality rates. These estimates are particularly important following large-scale mortality spikes, such as the COVID-19 pandemic. When systematically analysed, mortality rates and life expectancy allow comparisons of the consequences of causes of death globally and over time, providing a nuanced understanding of the effect of these causes on global populations.
METHODS The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2021 cause-of-death analysis estimated mortality and years of life lost (YLLs) from 288 causes of death by age-sex-location-year in 204 countries and territories and 811 subnational locations for each year from 1990 until 2021. The analysis used 56 604 data sources, including data from vital registration and verbal autopsy as well as surveys, censuses, surveillance systems, and cancer registries, among others. As with previous GBD rounds, cause-specific death rates for most causes were estimated using the Cause of Death Ensemble model (a modelling tool developed for GBD to assess the out-of-sample predictive validity of different statistical models and covariate permutations and to combine those results to produce cause-specific mortality estimates), with alternative strategies adapted to model causes with insufficient data, substantial changes in reporting over the study period, or unusual epidemiology. YLLs were computed as the product of the number of deaths for each cause-age-sex-location-year and the standard life expectancy at each age. As part of the modelling process, uncertainty intervals (UIs) were generated using the 2·5th and 97·5th percentiles from a 1000-draw distribution for each metric. We decomposed life expectancy by cause of death, location, and year to show cause-specific effects on life expectancy from 1990 to 2021. We also used the coefficient of variation and the fraction of the population affected by 90% of deaths to highlight concentrations of mortality. Findings are reported in counts and age-standardised rates. Methodological improvements for cause-of-death estimates in GBD 2021 include the expansion of the under-5-years age group to include four new age groups, enhanced methods to account for stochastic variation in sparse data, and the inclusion of COVID-19 and other pandemic-related mortality (which includes excess mortality associated with the pandemic, excluding COVID-19, lower respiratory infections, measles, malaria, and pertussis). For this analysis, 199 new country-years of vital registration cause-of-death data, 5 country-years of surveillance data, 21 country-years of verbal autopsy data, and 94 country-years of other data types were added to those used in previous GBD rounds.
FINDINGS The leading causes of age-standardised deaths globally were the same in 2019 as they were in 1990; in descending order, these were ischaemic heart disease, stroke, chronic obstructive pulmonary disease, and lower respiratory infections. In 2021, however, COVID-19 replaced stroke as the second-leading age-standardised cause of death, with 94·0 deaths (95% UI 89·2-100·0) per 100 000 population. The COVID-19 pandemic shifted the rankings of the leading five causes, lowering stroke to the third-leading and chronic obstructive pulmonary disease to the fourth-leading position. In 2021, the highest age-standardised death rates from COVID-19 occurred in sub-Saharan Africa (271·0 deaths [250·1-290·7] per 100 000 population) and Latin America and the Caribbean (195·4 deaths [182·1-211·4] per 100 000 population). The lowest age-standardised death rates from COVID-19 were in the high-income super-region (48·1 deaths [47·4-48·8] per 100 000 population) and southeast Asia, east Asia, and Oceania (23·2 deaths [16·3-37·2] per 100 000 population). Globally, life expectancy steadily improved between 1990 and 2019 for 18 of the 22 investigated causes. Decomposition of global and regional life expectancy showed the positive effect that reductions in deaths from enteric infections, lower respiratory infections, stroke, and neonatal deaths, among others, have had on survival over the study period. However, a net reduction of 1·6 years occurred in global life expectancy between 2019 and 2021, primarily due to increased death rates from COVID-19 and other pandemic-related mortality. Life expectancy was highly variable between super-regions over the study period, with southeast Asia, east Asia, and Oceania gaining 8·3 years (6·7-9·9) overall, while having the smallest reduction in life expectancy due to COVID-19 (0·4 years). The largest reduction in life expectancy due to COVID-19 occurred in Latin America and the Caribbean (3·6 years). Additionally, 53 of the 288 causes of death were highly concentrated in locations with less than 50% of the global population as of 2021, and these causes of death have become progressively more concentrated since 1990, when only 44 causes showed this pattern. The concentration phenomenon is discussed heuristically with respect to enteric and lower respiratory infections, malaria, HIV/AIDS, neonatal disorders, tuberculosis, and measles.
INTERPRETATION Long-standing gains in life expectancy and reductions in many of the leading causes of death have been disrupted by the COVID-19 pandemic, the adverse effects of which were spread unevenly among populations. Despite the pandemic, there has been continued progress in combatting several notable causes of death, leading to improved global life expectancy over the study period. Each of the seven GBD super-regions showed an overall improvement from 1990 to 2021, obscuring the negative effect in the years of the pandemic. Additionally, our findings regarding regional variation in the causes of death driving increases in life expectancy hold clear policy utility. Analyses of shifting mortality trends reveal that several causes, once widespread globally, are now increasingly concentrated geographically. These changes in mortality concentration, alongside further investigation of changing risks, interventions, and relevant policy, present an important opportunity to deepen our understanding of mortality-reduction strategies. Examining patterns in mortality concentration might reveal areas where successful public health interventions have been implemented. Translating these successes to locations where certain causes of death remain entrenched can inform policies that work to improve life expectancy for people everywhere.
FUNDING Bill & Melinda Gates Foundation
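The YLL definition in the methods is a simple product-sum over age groups. A minimal sketch (with made-up numbers, not GBD data) makes the computation explicit:

```python
# A minimal sketch of the YLL computation described above, with made-up
# numbers (not GBD data): YLLs are the sum over age groups of deaths
# multiplied by the standard life expectancy remaining at that age.
deaths_by_age = {0: 120.0, 15: 40.0, 50: 310.0, 70: 820.0}          # hypothetical deaths
standard_life_expectancy = {0: 88.9, 15: 74.1, 50: 39.6, 70: 21.3}  # remaining years

ylls = sum(deaths_by_age[age] * standard_life_expectancy[age]
           for age in deaths_by_age)
print(f"YLLs for this cause-location-year: {ylls:,.0f}")
```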
Confound Removal and Normalization in Practice: A Neuroimaging Based Sex Prediction Case Study
Machine learning (ML) methods are increasingly being used to predict pathologies and biological traits from neuroimaging data. Here, controlling for confounds is essential to obtain unbiased estimates of generalization performance and to identify the features driving predictions. However, a systematic evaluation of the advantages and disadvantages of the available alternatives is lacking. This makes it difficult to compare results across studies and to build deployment-quality models. We evaluated two commonly used confound removal schemes, whole data confound regression (WDCR) and cross-validated confound regression (CVCR), to understand their effectiveness and the biases they induce in generalization performance estimation. Additionally, we studied the interaction of the confound removal schemes with Z-score normalization, a common practice in ML modelling. We applied eight combinations of confound removal schemes and normalization (pipelines) to decode sex from resting-state functional MRI (rfMRI) data while controlling for two confounds, brain size and age. We show that both schemes effectively remove linear univariate and multivariate confounding effects, resulting in reduced model performance, with CVCR providing better generalization estimates, i.e., closer to out-of-sample performance, than WDCR. We found no effect of normalizing before or after confound removal. In the presence of dataset and confound shift, the four tested confound removal procedures yielded mixed results, raising new questions. We conclude that CVCR is a better method to control for confounding effects in neuroimaging studies. We believe that our in-depth analyses shed light on the choices associated with confound removal and hope that they generate more interest in this problem, which is instrumental to numerous applications.
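The distinction between the two schemes comes down to where the confound regression is fit. The following sketch (our illustration under simplified assumptions, not the paper's code) residualizes features on a confound either on the whole data set (WDCR) or within each CV training fold (CVCR):

```python
# A minimal sketch (not the paper's code) contrasting the two confound
# removal schemes: WDCR residualizes features on the whole data set before
# CV, while CVCR fits the confound model on each training fold only.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

class ConfoundRemover(BaseEstimator, TransformerMixin):
    """Residualize features on confounds; expects confounds in the last column(s)."""
    def __init__(self, n_confounds=1):
        self.n_confounds = n_confounds
    def fit(self, X, y=None):
        feats, conf = X[:, :-self.n_confounds], X[:, -self.n_confounds:]
        self.model_ = LinearRegression().fit(conf, feats)
        return self
    def transform(self, X):
        feats, conf = X[:, :-self.n_confounds], X[:, -self.n_confounds:]
        return feats - self.model_.predict(conf)

rng = np.random.default_rng(0)
confound = rng.normal(size=(300, 1))              # e.g. brain size or age
features = rng.normal(size=(300, 20)) + confound  # features contaminated by confound
y = (confound[:, 0] + rng.normal(size=300) > 0).astype(int)
Xc = np.hstack([features, confound])

# WDCR: confound regression fit on all data, then CV (optimistically biased).
X_wdcr = ConfoundRemover().fit_transform(Xc)
wdcr_scores = cross_val_score(SVC(), X_wdcr, y, cv=5)

# CVCR: confound regression refit inside every training fold.
cvcr_scores = cross_val_score(make_pipeline(ConfoundRemover(), SVC()), Xc, y, cv=5)
print(wdcr_scores.mean(), cvcr_scores.mean())
```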
Polarization of microbial communities between competitive and cooperative metabolism
Resource competition and metabolic cross-feeding are among the main drivers of microbial community assembly. Yet the degree to which these two conflicting forces are reflected in the composition of natural communities has not been systematically investigated. Here, we use genome-scale metabolic modelling to assess the potential for resource competition and metabolic cooperation in large co-occurring groups (up to 40 members) across thousands of habitats. Our analysis reveals two distinct community types, clustered at opposite ends of a spectrum in a trade-off between competition and cooperation. At one end are highly cooperative communities, characterized by smaller genomes and multiple auxotrophies. At the other end are highly competitive communities, which feature larger genomes and overlapping nutritional requirements, and harbour more genes related to antimicrobial activity. The latter are mainly present in soils, whereas the former are found in both free-living and host-associated habitats. Community-scale flux simulations show that, whereas competitive communities can better resist species invasion but not nutrient shift, cooperative communities are susceptible to species invasion but resilient to nutrient change. We also show, by analysing an additional data set, that colonization by probiotic species is positively associated with the presence of cooperative species in the recipient microbiome. Together, our results highlight the bifurcation between competitive and cooperative metabolism in the assembly of natural communities and its implications for community modulation.
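The flux simulations referred to here rest on flux balance analysis, which is a linear program: maximize an objective flux subject to steady-state mass balance and flux bounds. A toy example (a hypothetical three-reaction network, not one of the paper's community models) using scipy:

```python
# A toy flux balance analysis (FBA) in scipy, the kind of linear program
# underlying genome-scale metabolic models; the three-reaction network here
# is hypothetical, not one of the paper's community models.
import numpy as np
from scipy.optimize import linprog

# Reactions: R1 uptake (-> A), R2 conversion (A -> B), R3 biomass (B ->).
S = np.array([[1, -1,  0],    # metabolite A: produced by R1, consumed by R2
              [0,  1, -1]])   # metabolite B: produced by R2, consumed by R3
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake limited to 10 units

# Maximize biomass flux v3 subject to steady state S v = 0
# (linprog minimizes, hence the negated objective coefficient).
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)  # expected: [10, 10, 10]
```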
Evolving complex yet interpretable representations: application to Alzheimer’s diagnosis and prognosis
With increasing accuracy and availability of more data, the potential of using machine learning (ML) methods in medical and clinical applications has gained considerable interest. However, the main hurdle in the translational use of ML methods is the lack of explainability, especially when non-linear methods are used. Explainable (i.e., human-interpretable) methods can provide insights into disease mechanisms but can, equally importantly, promote clinician-patient trust, in turn helping wider social acceptance of ML methods. Here, we empirically test a method to engineer complex, yet interpretable, representations of base features via evolution of a context-free grammar (CFG). We show that, together with a simple ML algorithm, the evolved features provide higher accuracy on several benchmark datasets, and we then apply the method to a real-world problem: diagnosing Alzheimer's disease (AD) based on magnetic resonance imaging (MRI) data. We further demonstrate high performance on a hold-out dataset for the prognosis of AD.
Keywords: grammar evolution; feature representation; interpretability; Alzheimer's disease; machine learning
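To give a flavour of grammar-based feature construction, here is a minimal sketch in this spirit (not the authors' implementation): expressions over base features are sampled from a small hypothetical CFG, and a crude random search stands in for the evolutionary loop, scoring each candidate by how well a simple model performs with the engineered feature added.

```python
# A minimal sketch of grammar-based feature construction in the spirit of
# the approach above (not the authors' implementation). GRAMMAR and derive
# are hypothetical helpers; random search stands in for evolution.
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

GRAMMAR = {  # hypothetical CFG over base features X[:, 0]..X[:, 3]
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<func>", "(", "<var>", ")"], ["<var>"]],
    "<op>": [["+"], ["-"], ["*"]],
    "<func>": [["np.tanh"], ["np.abs"]],
    "<var>": [["X[:, 0]"], ["X[:, 1]"], ["X[:, 2]"], ["X[:, 3]"]],
}

def derive(symbol="<expr>", depth=0):
    """Randomly expand a grammar symbol into a Python expression string."""
    if symbol not in GRAMMAR:
        return symbol  # terminal token, e.g. "(" or "np.tanh"
    rules = GRAMMAR[symbol]
    # Past a depth limit, drop the recursive first rule to guarantee termination.
    rule = random.choice(rules[1:]) if depth > 3 else random.choice(rules)
    return "".join(derive(s, depth + 1) for s in rule)

random.seed(0)
X, y = make_classification(n_samples=400, n_features=4, random_state=0)
best_expr, best_score = None, -np.inf
for _ in range(50):  # crude random search standing in for an evolutionary loop
    expr = derive()
    feature = eval(expr)  # expressions only reference X and numpy
    X_aug = np.column_stack([X, feature])
    score = cross_val_score(LogisticRegression(max_iter=1000), X_aug, y, cv=3).mean()
    if score > best_score:
        best_expr, best_score = expr, score
print(f"best evolved feature: {best_expr}  (CV accuracy {best_score:.3f})")
```

The interpretability claim rests on the output being a readable expression over named base features, rather than an opaque learned transformation.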