
    Modelling Grocery Retail Topic Distributions: Evaluation, Interpretability and Stability

    Understanding the shopping motivations behind market baskets has high commercial value in the grocery retail industry. Analyzing shopping transactions demands techniques that can cope with the volume and dimensionality of grocery transactional data while keeping interpretable outcomes. Latent Dirichlet Allocation (LDA) provides a suitable framework to process grocery transactions and to discover a broad representation of customers' shopping motivations. However, summarizing the posterior distribution of an LDA model is challenging, while individual LDA draws may not be coherent and cannot capture topic uncertainty. Moreover, the evaluation of LDA models is dominated by model-fit measures, which may not adequately capture qualitative aspects such as the interpretability and stability of topics. In this paper, we introduce a clustering methodology that post-processes posterior LDA draws to summarize the entire posterior distribution and identify semantic modes represented as recurrent topics. Our approach is an alternative to standard label-switching techniques and provides a single posterior summary set of topics, as well as associated measures of uncertainty. Furthermore, we establish a more holistic definition of model evaluation, which assesses topic models based not only on their likelihood but also on their coherence, distinctiveness and stability. By means of a survey, we set thresholds for the interpretation of topic coherence and topic similarity in the domain of grocery retail data. We demonstrate that selecting recurrent topics through our clustering methodology not only improves model likelihood but also improves qualitative aspects of LDA such as interpretability and stability. We illustrate our methods on an example from a large UK supermarket chain. Comment: 20 pages, 9 figures.
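
    As a rough illustration of the post-processing idea (not the authors' exact methodology), the sketch below stacks topic-word vectors from several hypothetical LDA posterior draws, clusters them by cosine distance with average-linkage hierarchical clustering, and keeps sufficiently large clusters as recurrent topics; the distance and recurrence thresholds are arbitrary placeholders rather than the survey-calibrated values described in the paper.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.cluster.hierarchy import linkage, fcluster

        # Stand-in for topic-word vectors stacked from several LDA posterior draws
        # (hypothetical data; real draws would come from a fitted LDA model).
        rng = np.random.default_rng(0)
        topic_draws = rng.dirichlet(np.ones(500), size=200)

        # Pairwise cosine distances between topic vectors, then average-linkage
        # hierarchical clustering; the cut-off plays the role of a topic-similarity
        # threshold (0.3 is an arbitrary placeholder).
        dist = pdist(topic_draws, metric="cosine")
        labels = fcluster(linkage(dist, method="average"), t=0.3, criterion="distance")

        # Clusters that recur across many draws are kept as "recurrent topics" and
        # summarized by the mean of their member vectors.
        recurrent_topics = [
            topic_draws[labels == c].mean(axis=0)
            for c in np.unique(labels)
            if np.sum(labels == c) >= 10  # arbitrary recurrence threshold
        ]
        print(f"{len(recurrent_topics)} recurrent topics found")

    Averaging the members of each cluster yields a single posterior summary topic, and the within-cluster spread gives an associated measure of uncertainty.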

    Interpretability of radiomics models is improved when using feature group selection strategies for predicting molecular and clinical targets in clear-cell renal cell carcinoma: insights from the TRACERx Renal study

    BACKGROUND: The aim of this work is to evaluate the performance of radiomics predictions for a range of molecular, genomic and clinical targets in patients with clear cell renal cell carcinoma (ccRCC) and demonstrate the impact of novel feature selection strategies and sub-segmentations on model interpretability. METHODS: Contrast-enhanced CT scans from the first 101 patients recruited to the TRACERx Renal Cancer study (NCT03226886) were used to derive radiomics classification models to predict 20 molecular, histopathology and clinical target variables. Manual 3D segmentation was used in conjunction with automatic sub-segmentation to generate radiomics features from the core, rim, high and low enhancing sub-regions, and the whole tumour. Comparisons were made between two classification model pipelines: a Conventional pipeline reflecting common radiomics practice, and a Proposed pipeline including two novel feature selection steps designed to improve model interpretability. For both pipelines, nested cross-validation was used to estimate prediction performance and tune model hyper-parameters, and permutation testing was used to evaluate the statistical significance of the estimated performance measures. Further model robustness assessments were conducted by evaluating model variability across the cross-validation folds. RESULTS: Classification performance was significant (p  0.1. Five of these targets (necrosis on histology, presence of renal vein invasion, overall histological stage, linear evolutionary subtype and loss of 9p21.3 somatic alteration marker) had AUROC > 0.8. Models derived using the Proposed pipeline contained fewer feature groups than the Conventional pipeline, leading to more straightforward model interpretations without loss of performance. Sub-segmentations led to improved performance and/or improved interpretability when predicting the presence of sarcomatoid differentiation and tumour stage. CONCLUSIONS: Use of the Proposed pipeline, which includes the novel feature selection methods, leads to more interpretable models without compromising prediction performance. TRIAL REGISTRATION: NCT03226886 (TRACERx Renal).
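
    The two evaluation devices named in the methods, nested cross-validation and permutation testing, can be sketched generically with scikit-learn as below. This is not the study's pipeline (no radiomics features, sub-segmentations or feature group selection); the data, classifier and hyper-parameter grid are placeholders.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                             cross_val_score, permutation_test_score)

        # Synthetic stand-in for radiomics features and a binary target.
        X, y = make_classification(n_samples=100, n_features=50, n_informative=5,
                                   random_state=0)

        # Inner loop tunes the hyper-parameter; outer loop estimates performance.
        inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
        outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        clf = GridSearchCV(LogisticRegression(max_iter=1000),
                           param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
                           scoring="roc_auc", cv=inner)

        nested_auc = cross_val_score(clf, X, y, scoring="roc_auc", cv=outer)
        print(f"nested CV AUROC: {nested_auc.mean():.3f} +/- {nested_auc.std():.3f}")

        # Permutation test: how often do label-shuffled models match this score?
        score, perm_scores, p_value = permutation_test_score(
            clf, X, y, scoring="roc_auc", cv=outer, n_permutations=100, random_state=0)
        print("permutation p-value:", p_value)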

    Multi-Channel Stochastic Variational Inference for the Joint Analysis of Heterogeneous Biomedical Data in Alzheimer's Disease

    The joint analysis of biomedical data in Alzheimer's Disease (AD) is important for better clinical diagnosis and for understanding the relationship between biomarkers. However, jointly accounting for heterogeneous measures poses important challenges related to the modeling of variability and the interpretability of the results. These issues are addressed here by proposing a novel multi-channel stochastic generative model. We assume that a latent variable generates the data observed through different channels (e.g., clinical scores, imaging, ...) and describe an efficient way to jointly estimate the distribution of both the latent variable and the data generative process. Experiments on synthetic data show that the multi-channel formulation allows superior data reconstruction compared with the single-channel one. Moreover, the derived lower bound of the model evidence represents a promising model selection criterion. Experiments on AD data show that the model parameters can be used for unsupervised patient stratification and for the joint interpretation of the heterogeneous observations. Because of its general and flexible formulation, we believe that the proposed method can find important applications as a general data fusion technique. Comment: accepted for presentation at the MLCN 2018 workshop, in conjunction with MICCAI 2018, September 20, Granada, Spain.
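
    A minimal sketch of the shared-latent-variable idea is given below, assuming a variational autoencoder-style setup in PyTorch with one encoder/decoder pair per channel and a single latent code; the fusion rule (averaging channel-wise posterior statistics), the Gaussian reconstruction term, and all dimensions are placeholder choices, not the bound derived in the paper.

        import torch
        import torch.nn as nn

        class MultiChannelVAE(nn.Module):
            """Each channel gets its own encoder/decoder; all channels share one z."""
            def __init__(self, channel_dims, latent_dim=8):
                super().__init__()
                self.latent_dim = latent_dim
                self.encoders = nn.ModuleList(
                    [nn.Linear(d, 2 * latent_dim) for d in channel_dims])
                self.decoders = nn.ModuleList(
                    [nn.Linear(latent_dim, d) for d in channel_dims])

            def elbo(self, xs):
                # Encode each channel and fuse the posterior statistics by averaging
                # (a simple placeholder for the paper's joint posterior).
                stats = [enc(x) for enc, x in zip(self.encoders, xs)]
                mu = torch.stack([s[:, :self.latent_dim] for s in stats]).mean(0)
                logvar = torch.stack([s[:, self.latent_dim:] for s in stats]).mean(0)
                z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation
                # Gaussian reconstruction error summed over channels, plus KL to N(0, I).
                recon = sum(((dec(z) - x) ** 2).sum(dim=1).mean()
                            for dec, x in zip(self.decoders, xs))
                kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
                return -(recon + kl)  # higher is better

        # Two toy channels, e.g. "clinical scores" and "imaging features".
        xs = [torch.randn(32, 10), torch.randn(32, 40)]
        model = MultiChannelVAE([10, 40])
        loss = -model.elbo(xs)
        loss.backward()

    The value returned by elbo plays the role of the lower bound on the model evidence that the abstract proposes as a model selection criterion.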

    Evaluating Topic Modeling Interpretability Using Topic Labeled Gold-standard Sets

    The paucity of rigorous evaluation measures undermines the validity and trustworthiness of topic modeling results. Accordingly, we propose a method that researchers can use to select models when they assess topics' human interpretability. We show how they can evaluate different topic models using gold-standard sets that humans label. Our approach ensures that the topics extracted algorithmically from an entire corpus concur with the themes humans would have identified in the same documents. By doing so, we combine human coding's advantages for topic interpretability with algorithmic topic modeling's analytical efficiency and scalability. We demonstrate that one can rigorously identify optimal model parametrizations for maximum interpretability and justify model selection. We also contribute three open-access gold-standard sets in the hospitality context and make them available so other researchers can use them to benchmark their models or validate their results. Finally, we showcase a methodology for designing and developing gold-standard sets for validating topic models, which researchers can use to build gold-standard sets in domains and contexts appropriate for their own research.
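
    The comparison the abstract describes can be sketched as matching each human-labelled theme against the best-overlapping learned topic, for example by Jaccard similarity of the top topic words; the topics, theme names and threshold below are invented for illustration and are not the paper's gold-standard sets or scoring protocol.

        # Invented top words for three learned topics and three gold-standard themes.
        learned_topics = {
            0: {"pool", "spa", "sauna", "gym", "towel"},
            1: {"breakfast", "coffee", "buffet", "eggs", "juice"},
            2: {"staff", "friendly", "helpful", "reception", "smile"},
        }
        gold_themes = {
            "wellness":  {"pool", "spa", "sauna", "massage", "gym"},
            "breakfast": {"breakfast", "buffet", "coffee", "pastries", "juice"},
            "service":   {"staff", "friendly", "helpful", "polite", "reception"},
        }

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        # A theme counts as recovered if its best-matching topic overlaps enough;
        # the 0.3 threshold is an arbitrary placeholder.
        recovered = 0
        for theme, words in gold_themes.items():
            topic, score = max(((t, jaccard(tw, words)) for t, tw in learned_topics.items()),
                               key=lambda pair: pair[1])
            print(f"{theme}: best topic {topic}, Jaccard = {score:.2f}")
            recovered += score >= 0.3

        print(f"theme recall: {recovered}/{len(gold_themes)}")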

    Comparative Study of Variable Selection Methods for Genetic Data

    Association studies for genetic data are essential to understand the genetic basis of complex traits. However, analyzing such high-dimensional data requires suitable feature selection methods. For this reason, we compare three methods, Lasso Regression, Bayesian Lasso Regression, and Ridge Regression combined with significance tests, to identify the most effective method for modeling quantitative trait expression in genetic data. All methods are applied to both simulated and real genetic data and evaluated in terms of various measures of model performance, such as the mean absolute error, the mean squared error, the Akaike information criterion, and the Bayesian information criterion. The results show that all methods perform better than the ordinary least squares model in predicting future data. Moreover, Lasso Regression outperforms the other methods in terms of execution time and simplicity of the model, which leads to better interpretability and makes it the best choice for association studies. Overall, this thesis provides valuable insights into the strengths and limitations of existing feature selection methods for modeling quantitative trait expression and highlights the importance of feature selection in association studies for genetic data.
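
    A generic comparison along these lines can be sketched with scikit-learn as below, using synthetic data in place of real genotype matrices and omitting the Bayesian Lasso and the significance tests; the held-out error metrics and the count of non-zero coefficients illustrate the accuracy/simplicity trade-off the thesis discusses.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import LassoCV, LinearRegression, RidgeCV
        from sklearn.metrics import mean_absolute_error, mean_squared_error
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for genotype features and a quantitative trait.
        X, y = make_regression(n_samples=200, n_features=500, n_informative=10,
                               noise=5.0, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        models = {
            "OLS":   LinearRegression(),
            "Lasso": LassoCV(cv=5, random_state=0),
            "Ridge": RidgeCV(alphas=np.logspace(-3, 3, 13)),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            pred = model.predict(X_te)
            n_used = int(np.sum(np.abs(np.ravel(model.coef_)) > 1e-8))
            print(f"{name}: MAE = {mean_absolute_error(y_te, pred):.2f}, "
                  f"MSE = {mean_squared_error(y_te, pred):.2f}, "
                  f"non-zero coefficients = {n_used}")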

    On The Stability of Interpretable Models

    Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent. When considered in isolation, a decision tree, a set of classification rules, or a linear model is widely recognized as human-interpretable. However, such models are generated as part of a larger analytical process. Bias in data collection and preparation, or in the model's construction, may severely affect the accountability of the design process. We conduct an experimental study of the stability of interpretable models with respect to feature selection, instance selection, and model selection. Our conclusions should raise the scientific community's awareness of the need for a stability impact assessment of interpretable models.
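
    One simple notion of instance-selection stability can be sketched as follows: refit an interpretable model on bootstrap resamples and measure how consistent the set of features it actually uses remains across resamples. The data, model and Jaccard-based score below are illustrative placeholders, not the experimental protocol of the paper.

        import numpy as np
        from itertools import combinations
        from sklearn.datasets import make_classification
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                                   random_state=0)
        rng = np.random.default_rng(0)

        # Fit a shallow tree on each bootstrap resample and record the features it uses.
        feature_sets = []
        for _ in range(20):
            idx = rng.integers(0, len(X), size=len(X))
            tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[idx], y[idx])
            feature_sets.append(frozenset(np.flatnonzero(tree.feature_importances_ > 0)))

        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a | b) else 1.0

        # High mean pairwise overlap means the explanation is stable under resampling.
        pairwise = [jaccard(a, b) for a, b in combinations(feature_sets, 2)]
        print(f"mean pairwise Jaccard of used-feature sets: {np.mean(pairwise):.2f}")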

    A Performance-Explainability-Fairness Framework For Benchmarking ML Models

    Machine learning (ML) models have achieved remarkable success in various applications; however, ensuring their robustness and fairness remains a critical challenge. In this research, we present a comprehensive framework designed to evaluate and benchmark ML models through the lenses of performance, explainability, and fairness. This framework addresses the increasing need for a holistic assessment of ML models, considering not only their predictive power but also their interpretability and equitable deployment. The proposed framework leverages a multi-faceted evaluation approach, integrating performance metrics with explainability and fairness assessments. Performance evaluation incorporates standard measures such as accuracy, precision, and recall, and extends to the overall balanced error rate and the overall area under the receiver operating characteristic (ROC) curve (AUC) to capture model behavior across different performance aspects. Explainability assessment employs state-of-the-art techniques to quantify the interpretability of model decisions, ensuring that model behavior can be understood and trusted by stakeholders. The fairness evaluation examines model predictions in terms of demographic parity and equalized odds, thereby addressing concerns of bias and discrimination in the deployment of ML systems. To demonstrate the practical utility of the framework, we apply it to a diverse set of ML algorithms across various functional domains, including finance, criminology, education, and healthcare prediction. The results showcase the importance of a balanced evaluation approach, revealing trade-offs between performance, explainability, and fairness that can inform model selection and deployment decisions. Furthermore, we provide insights into the analysis of trade-offs in selecting the appropriate model for use cases where performance, interpretability, and fairness are all important. In summary, the Performance-Explainability-Fairness Framework offers a unified methodology for evaluating and benchmarking ML models, enabling practitioners and researchers to make informed decisions about model suitability and ensuring responsible and equitable AI deployment. We believe that this framework represents a crucial step towards building trustworthy and accountable ML systems in an era where AI plays an increasingly prominent role in decision-making processes.
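
    The performance and fairness measures named above (accuracy, balanced error rate, AUC, demographic parity, equalized odds) can be computed for a single model roughly as in the sketch below; the data, classifier and binary protected attribute are invented, and the explainability axis is omitted because the abstract does not name a specific metric for it.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score, balanced_accuracy_score, roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic data with an invented binary protected attribute for fairness checks.
        X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
        group = (X[:, 0] > 0).astype(int)
        X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        proba = clf.predict_proba(X_te)[:, 1]
        pred = (proba >= 0.5).astype(int)

        # Performance: accuracy, balanced error rate (1 - balanced accuracy), AUC.
        print("accuracy:           ", accuracy_score(y_te, pred))
        print("balanced error rate:", 1 - balanced_accuracy_score(y_te, pred))
        print("AUC:                ", roc_auc_score(y_te, proba))

        # Fairness: demographic parity gap and equalized-odds gap between the groups.
        pos_rate = lambda g: pred[g_te == g].mean()
        tpr = lambda g: pred[(g_te == g) & (y_te == 1)].mean()
        fpr = lambda g: pred[(g_te == g) & (y_te == 0)].mean()
        print("demographic parity gap:", abs(pos_rate(0) - pos_rate(1)))
        print("equalized odds gap:    ", max(abs(tpr(0) - tpr(1)), abs(fpr(0) - fpr(1))))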

    Mutual information for the selection of relevant variables in spectrometric nonlinear modelling

    Data from spectrophotometers form vectors of a large number of exploitable variables. Building quantitative models using these variables most often requires using a smaller set of variables than the initial one. Indeed, too large a number of input variables results in too many model parameters, leading to overfitting and poor generalization. In this paper, we suggest the use of the mutual information measure to select variables from the initial set. The mutual information measures the information content of input variables with respect to the model output, without making any assumption on the model that will be used; it is thus suitable for nonlinear modelling. In addition, it leads to the selection of variables from the initial set, rather than linear or nonlinear combinations of them. Without decreasing model performance compared with other variable projection methods, it therefore allows greater interpretability of the results.
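
    The selection step can be sketched with an off-the-shelf mutual-information estimator as below; this is not the authors' estimator or data, the downstream model is linear rather than nonlinear for brevity, and the number of retained variables (10) is arbitrary.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.feature_selection import mutual_info_regression
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for high-dimensional spectral variables and a target.
        X, y = make_regression(n_samples=300, n_features=200, n_informative=8,
                               noise=1.0, random_state=0)

        # Rank the original variables (not combinations of them) by mutual information
        # with the output; the estimator assumes nothing about the downstream model.
        mi = mutual_info_regression(X, y, random_state=0)
        selected = np.argsort(mi)[::-1][:10]
        print("selected variables:", selected)

        full = cross_val_score(LinearRegression(), X, y, cv=5).mean()
        reduced = cross_val_score(LinearRegression(), X[:, selected], y, cv=5).mean()
        print(f"R^2 with all variables: {full:.2f}, with the 10 selected: {reduced:.2f}")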

    Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability

    Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models. While these interpretation methods can be applied regardless of model complexity, they can produce misleading and verbose results if the model is too complex, especially with respect to feature interactions. To quantify the complexity of arbitrary machine learning models, we propose model-agnostic complexity measures based on functional decomposition: the number of features used, interaction strength, and main effect complexity. We show that post-hoc interpretation of models that minimize the three measures is more reliable and compact. Furthermore, we demonstrate the application of these measures in a multi-objective optimization approach which simultaneously minimizes loss and complexity.
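
    As a rough illustration of the first measure, the number of features used, the sketch below checks which features actually change a fitted model's predictions when permuted; the data and model are placeholders, and the other two measures (interaction strength and main effect complexity) require the functional decomposition itself and are not shown.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import GradientBoostingRegressor

        # Fit a black-box model on data where only a few features are informative.
        X, y = make_regression(n_samples=500, n_features=15, n_informative=4,
                               noise=1.0, random_state=0)
        model = GradientBoostingRegressor(random_state=0).fit(X, y)

        # A feature counts as "used" if permuting it changes the model's predictions.
        rng = np.random.default_rng(0)
        baseline = model.predict(X)
        used = []
        for j in range(X.shape[1]):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            if not np.allclose(model.predict(X_perm), baseline):
                used.append(j)

        print("number of features used:", len(used), "->", used)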