9 research outputs found
Towards the germanium-carbon triple bond: germynes
TOULOUSE3-BU Sciences (315552104) / Sudoc, France
Current developments in multiple sclerosis
AIX-MARSEILLE2-BU Pharmacie (130552105) / Sudoc, France
Ionic Schiff base dioxidomolybdenum(VI) complexes as catalysts in ionic liquid media for cyclooctene epoxidation
The preparation of several new molybdenum(VI) complexes containing Schiff base ligands tagged with sulfonate functionalities is presented. The title compounds have been characterized by standard analytical methods, including NMR and IR spectroscopy and mass spectrometry. X-ray structures of the N-salicylidene-2-aminophenolate dioxidomolybdenum complexes bearing sulfonato groups on the salicylidene moiety (as the sodium and ammonium salts) are described, accompanied by the structural characterization of the N-salicylidene-2-aminoethanolate sulfonate Schiff base ligand. The activity of the ionic catalysts for cyclooctene epoxidation is demonstrated in different room-temperature ionic liquids (i.e. [BMIM][NTf2], [BMIM][CF3SO3], and [EMIM][CH3C6H4SO3]).
Error Variance, Fairness, and the Curse on Minorities
Machine learning systems can make more errors for some populations than for others, thereby creating discrimination. To assess such fairness issues, errors are typically compared across populations. We argue that we also need to account for the variability of errors in practice, since the errors measured on test data may not be exactly the same on real-life data (called target data). We first introduce statistical methods for estimating random error variance in machine learning problems. The methods estimate how often errors would exceed certain magnitudes, and how often the errors of one population would exceed those of another (e.g., by more than a certain range). The methods are based on well-established sampling theory and the recently introduced Sample-to-Sample estimation. The latter shows that small target samples yield high error variance, even if the test sample is very large. We demonstrate that, in practice, minorities are bound to bear higher variance, and thus amplified error and bias. This can occur even if the test and training sets are accurate, representative, and extremely large. We call this statistical phenomenon the curse on minorities, and we show examples of its impact with basic classification and regression problems. Finally, we outline potential approaches to protect minorities from this curse and to develop variance-aware fairness assessments.
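Although the paper's Sample-to-Sample estimator is not reproduced here, the underlying sampling-theory argument can be illustrated with a short simulation. The sketch below assumes a true error rate shared by a large (majority) group and a small (minority) group and shows how much more the minority's observed error rate fluctuates across target samples; the group sizes, error rate, and thresholds are hypothetical.

```python
# Illustrative sketch only (not the paper's Sample-to-Sample estimator):
# basic binomial sampling theory shows that the observed error rate of a
# small (minority) target sample varies far more than that of a large
# (majority) sample, even when both groups share the same true error rate.
import numpy as np

rng = np.random.default_rng(0)

true_error = 0.10     # assumed true error rate, identical for both groups
n_majority = 10_000   # hypothetical number of majority individuals in target data
n_minority = 100      # hypothetical number of minority individuals in target data
n_draws = 20_000      # number of simulated target samples

# Observed error rate in each simulated target sample (binomial proportion).
err_major = rng.binomial(n_majority, true_error, n_draws) / n_majority
err_minor = rng.binomial(n_minority, true_error, n_draws) / n_minority

for name, err, n in [("majority", err_major, n_majority),
                     ("minority", err_minor, n_minority)]:
    theoretical_sd = np.sqrt(true_error * (1 - true_error) / n)
    print(f"{name}: empirical sd={err.std():.4f}, "
          f"theoretical sd={theoretical_sd:.4f}, "
          f"P(observed error > 0.15)={np.mean(err > 0.15):.3f}")

# How often does the minority's observed error exceed the majority's by more
# than 3 percentage points, despite identical true error rates?
print("P(minority error - majority error > 0.03):",
      np.mean(err_minor - err_major > 0.03))
```

The standard deviation of the observed error rate scales as 1/sqrt(n), so a group 100 times smaller sees roughly 10 times the spread, which is the amplified variance the abstract attributes to minorities.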
This Looks Like That, Because... Explaining Prototypes for Interpretable Image Recognition
Image recognition with prototypes is considered an interpretable alternative to black-box deep learning models. Classification depends on the extent to which a test image “looks like” a prototype. However, perceptual similarity for humans can differ from the similarity learned by the classification model. Hence, visualising prototypes alone can be insufficient for a user to understand what a prototype exactly represents, and why the model considers a prototype and an image to be similar. We address this ambiguity and argue that prototypes should be explained. We improve interpretability by automatically enhancing visual prototypes with quantitative information about the visual characteristics deemed important by the classification model. Specifically, our method clarifies the meaning of a prototype by quantifying the influence of colour hue, shape, texture, contrast and saturation, and can generate both global and local explanations. Because of the generality of our approach, it can improve the interpretability of any similarity-based method for prototypical image recognition. In our experiments, we apply our method to the existing Prototypical Part Network (ProtoPNet). Our analysis confirms that the global explanations are generalisable and often correspond to the visually perceptible properties of a prototype. Our explanations are especially relevant for prototypes that might otherwise have been interpreted incorrectly. By explaining such ‘misleading’ prototypes, we improve the interpretability and simulatability of a prototype-based classification model. We also use our method to check whether visually similar prototypes have similar explanations, and are able to discover redundancy. Code is available at https://github.com/M-Nauta/Explaining_Prototypes
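As a rough illustration of the kind of quantification described above (not the authors' implementation), one can perturb a single visual characteristic of an image and measure how much the model's similarity to a prototype drops. In the sketch below, `prototype_similarity` is a stand-in colour-based similarity rather than a trained ProtoPNet, and only hue, saturation, and contrast perturbations are shown; shape and texture would require more involved transformations.

```python
# Minimal sketch of explaining a prototype by perturbing visual
# characteristics and measuring the drop in similarity. The similarity
# function is a placeholder (per-channel colour statistics), NOT ProtoPNet;
# in practice it would be the trained model's prototype activation.
import torch
import torchvision.transforms.functional as TF

def prototype_similarity(img: torch.Tensor, prototype: torch.Tensor) -> float:
    """Placeholder similarity: cosine similarity of per-channel mean intensities."""
    f_img = img.mean(dim=(1, 2))
    f_proto = prototype.mean(dim=(1, 2))
    return torch.nn.functional.cosine_similarity(f_img, f_proto, dim=0).item()

# Hypothetical perturbations, each targeting one visual characteristic.
perturbations = {
    "hue":        lambda im: TF.adjust_hue(im, 0.2),
    "saturation": lambda im: TF.adjust_saturation(im, 0.2),
    "contrast":   lambda im: TF.adjust_contrast(im, 0.2),
}

def explain_prototype(image: torch.Tensor, prototype: torch.Tensor) -> dict:
    """Importance of a characteristic = similarity drop when it is perturbed."""
    baseline = prototype_similarity(image, prototype)
    return {name: baseline - prototype_similarity(perturb(image), prototype)
            for name, perturb in perturbations.items()}

if __name__ == "__main__":
    img = torch.rand(3, 64, 64)    # dummy image patch in [0, 1]
    proto = torch.rand(3, 64, 64)  # dummy prototype patch in [0, 1]
    print(explain_prototype(img, proto))
```

A large drop for, say, hue would suggest the model's notion of "looks like" for that prototype is driven by colour rather than by shape or texture, which is exactly the ambiguity the explanations are meant to resolve.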
Interpretable Models via Pairwise Permutations Algorithm
One of the most common pitfalls in high-dimensional biological data sets is correlation between features. This may lead statistical and machine learning methodologies to overvalue or undervalue these correlated predictors while the truly relevant ones are ignored. In this paper, we define a new method called the pairwise permutation algorithm (PPA), with the aim of mitigating the correlation bias in feature importance values. Firstly, we provide a theoretical foundation, which builds upon previous work on permutation importance. PPA is then applied to a toy data set, where we demonstrate its ability to correct for the correlation effect. We further test PPA on a microbiome shotgun dataset to show that PPA is already able to obtain biologically relevant biomarkers.
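The abstract does not spell out the PPA procedure, but the baseline it builds on, permutation importance, and the correlation bias it targets can be sketched briefly. The example below uses a hypothetical data set with two strongly correlated predictors and shows that permuting them one at a time understates their importance, whereas permuting the pair jointly recovers it; the joint permutation is only an illustration of the general idea, not the PPA itself.

```python
# Illustrative sketch of permutation importance and the correlation bias it
# suffers from. Not the PPA: the exact procedure is not given in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)            # x2 strongly correlated with x1
x3 = rng.normal(size=n)                        # irrelevant feature
y = (x1 + 0.5 * rng.normal(size=n) > 0).astype(int)
X = np.column_stack([x1, x2, x3])

model = LogisticRegression().fit(X, y)
base = accuracy_score(y, model.predict(X))

def drop_when_permuted(cols):
    """Accuracy drop when the given feature columns are shuffled jointly."""
    Xp = X.copy()
    for c in cols:
        Xp[:, c] = rng.permutation(Xp[:, c])
    return base - accuracy_score(y, model.predict(Xp))

# Single-feature importances: the correlated pair partially masks each other,
# so each feature's individual drop understates the pair's true relevance.
print("single:", {f"x{c + 1}": round(drop_when_permuted([c]), 3) for c in range(3)})
# Jointly permuting x1 and x2 exposes the importance they share.
print("pair x1+x2:", round(drop_when_permuted([0, 1]), 3))
```

For simplicity the sketch scores importance on the training data with a single shuffle; in practice one would use held-out data and average over repeated permutations.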