7 research outputs found

    Duraznos para industria en Argentina: prospectiva al 2030 [Peaches for industry in Argentina: outlook to 2030]

    Fil: Viera, Manuel. Universidad Nacional de Cuyo. Secretaría de Extensión y Vinculación. Área de Vinculación.
    Fil: Ramet, Eduardo. Universidad Nacional de Cuyo. Secretaría de Extensión y Vinculación. Área de Vinculación.
    Fil: Ojer, Miguel. Universidad Nacional de Cuyo. Facultad de Ciencias Agrarias.
    Fil: Vitale, Javier. Instituto Nacional de Tecnología Agropecuaria (Argentina). Centro Regional Mendoza-San Juan.
    Fil: Pescarmona, Bruno. Federación Plan Estratégico del Durazno para Industria (Argentina).
    Fil: Viard, José Luis. Federación Plan Estratégico del Durazno para Industria (Argentina).
    Fil: Giacinti Battistuzzi, Miguel Angel

    Error Variance, Fairness, and the Curse on Minorities

    Machine learning systems can make more errors for certain populations than for others, and thus create discrimination. To assess such fairness issues, errors are typically compared across populations. We argue that we also need to account for the variability of errors in practice, as the errors measured on test data may not be exactly the same on real-life data (called target data). We first introduce statistical methods for estimating random error variance in machine learning problems. The methods estimate how often errors would exceed certain magnitudes, and how often the errors of one population would exceed those of another (e.g., by more than a certain range). The methods are based on well-established sampling theory and the recently introduced Sample-to-Sample estimation. The latter shows that small target samples yield high error variance, even if the test sample is very large. We demonstrate that, in practice, minorities are bound to bear higher variance, and thus amplified error and bias. This can occur even if the test and training sets are accurate, representative, and extremely large. We call this statistical phenomenon the curse on minorities, and we show examples of its impact with basic classification and regression problems. Finally, we outline potential approaches to protect minorities from this curse and to develop variance-aware fairness assessments.
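    The sampling effect described above can be illustrated with a short simulation. The sketch below is not the paper's Sample-to-Sample estimator; it is a plain binomial simulation with an assumed common true error rate and illustrative group sizes, showing that the measured error rate of a small target group fluctuates far more than that of a large group.

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_error_rates(true_error, group_size, n_repeats=10_000):
    """Draw many target samples of a given size and record the error rate
    measured in each one, assuming the model errs independently on every
    individual with probability `true_error`."""
    errors = rng.binomial(n=group_size, p=true_error, size=n_repeats)
    return errors / group_size

TRUE_ERROR = 0.10  # same underlying error rate assumed for both groups
majority = measured_error_rates(TRUE_ERROR, group_size=10_000)
minority = measured_error_rates(TRUE_ERROR, group_size=100)

for name, rates in (("majority, n=10,000", majority), ("minority, n=100", minority)):
    print(f"{name}: std of measured error = {rates.std():.4f}, "
          f"99th percentile = {np.quantile(rates, 0.99):.3f}")
```

    With identical true error rates, the smaller group's measured error varies by roughly an order of magnitude more (its standard deviation scales as sqrt(p(1-p)/n)), which is the kind of variance amplification the abstract attributes to small target samples.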

    This Looks Like That, Because... Explaining Prototypes for Interpretable Image Recognition

    Image recognition with prototypes is considered an interpretable alternative for black box deep learning models. Classification depends on the extent to which a test image “looks like” a prototype. However, perceptual similarity for humans can be different from the similarity learned by the classification model. Hence, only visualising prototypes can be insufficient for a user to understand what a prototype exactly represents, and why the model considers a prototype and an image to be similar. We address this ambiguity and argue that prototypes should be explained. We improve interpretability by automatically enhancing visual prototypes with quantitative information about visual characteristics deemed important by the classification model. Specifically, our method clarifies the meaning of a prototype by quantifying the influence of colour hue, shape, texture, contrast and saturation and can generate both global and local explanations. Because of the generality of our approach, it can improve the interpretability of any similarity-based method for prototypical image recognition. In our experiments, we apply our method to the existing Prototypical Part Network (ProtoPNet). Our analysis confirms that the global explanations are generalisable, and often correspond to the visually perceptible properties of a prototype. Our explanations are especially relevant for prototypes which might have been interpreted incorrectly otherwise. By explaining such ‘misleading’ prototypes, we improve the interpretability and simulatability of a prototype-based classification model. We also use our method to check whether visually similar prototypes have similar explanations, and are able to discover redundancy. Code is available at https://github.com/M-Nauta/Explaining_Prototypes
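    The kind of quantification described above can be sketched, in rough form, by perturbing one visual characteristic of an image at a time and recording how much the model's patch-to-prototype similarity changes. This is only an illustrative approximation: the `similarity_to_prototype` callable is a placeholder for the score produced by a prototype-based model such as ProtoPNet, the perturbation factors are arbitrary, hue- and shape-specific perturbations are omitted, and the linked repository remains the authoritative implementation.

```python
from PIL import ImageEnhance

# Factor < 1 weakens the characteristic, factor > 1 strengthens it.
PERTURBATIONS = {
    "contrast":   lambda img: ImageEnhance.Contrast(img).enhance(0.5),
    "saturation": lambda img: ImageEnhance.Color(img).enhance(0.5),
    "brightness": lambda img: ImageEnhance.Brightness(img).enhance(0.5),
    "sharpness":  lambda img: ImageEnhance.Sharpness(img).enhance(0.5),  # rough proxy for texture
}

def characteristic_influence(image, prototype, similarity_to_prototype):
    """Estimate how much each visual characteristic contributes to the
    similarity between an image (patch) and a prototype.

    `similarity_to_prototype(img, prototype)` is a placeholder for the
    similarity score produced by a prototype-based model.
    """
    baseline = similarity_to_prototype(image, prototype)
    return {
        name: baseline - similarity_to_prototype(perturb(image), prototype)
        for name, perturb in PERTURBATIONS.items()
    }
```

    A large drop in similarity after, say, reducing saturation would suggest that the prototype mainly encodes colour saturation rather than shape or texture.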

    Interpretable Models via Pairwise Permutations Algorithm

    One of the most common pitfalls in high-dimensional biological data sets is correlation between features. This may lead statistical and machine learning methodologies to overvalue or undervalue the correlated predictors, while the truly relevant ones are ignored. In this paper, we define a new method called the pairwise permutation algorithm (PPA) with the aim of mitigating the correlation bias in feature importance values. Firstly, we provide a theoretical foundation, which builds upon previous work on permutation importance. PPA is then applied to a toy data set, where we demonstrate its ability to correct the correlation effect. We further test PPA on a microbiome shotgun data set, showing that PPA is already able to obtain biologically relevant biomarkers.
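    The abstract does not spell out the PPA itself, so the sketch below only reproduces the baseline it builds on: classical permutation importance, computed with scikit-learn on a hypothetical toy data set with two nearly duplicate features. This is the setting in which the correlation bias targeted by the paper shows up; the feature names and data are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000

# Toy data: x0 and x1 are near-duplicates, x2 is independent noise.
x0 = rng.normal(size=n)
x1 = x0 + 0.05 * rng.normal(size=n)   # strongly correlated with x0
x2 = rng.normal(size=n)               # irrelevant feature
X = np.column_stack([x0, x1, x2])
y = x0 + 0.1 * rng.normal(size=n)     # target depends only on the x0/x1 signal

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Classical permutation importance: permute one feature at a time and
# measure the drop in score.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["x0", "x1", "x2"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

    Because x1 is a near-copy of x0, permuting either one alone hurts the model less than their shared signal warrants, so their importances come out diluted; correcting this kind of distortion is what the abstract attributes to PPA.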

    Clinical features and prognostic factors of listeriosis: the MONALISA national prospective cohort study
