710 research outputs found

    Fine mapping of genes determining vicine-convicine concentration in faba bean

    Faba bean (Vicia faba L.) is an annual herbaceous cool-season food legume cultivated worldwide, valued especially for its high seed protein content. However, the major limitation to its use as food and feed is the presence of antinutritional factors in its seeds, in particular vicine and convicine (VC), two related compounds that may be harmful to livestock and to G6PD-deficient humans. The most sustainable way to remove VC is to breed low-VC faba bean cultivars, and breeders use marker-assisted selection (MAS) to improve the efficiency and speed of their breeding programs. Identifying the genes responsible for VC content allows the development of reliable DNA markers and a better understanding of the molecular basis of this trait. The major-effect QTL controlling VC content, named "VC1", was identified on faba bean chromosome 1, and a few minor-effect QTLs were detected in previous studies. In this study, a total of 165 RILs from the cross Mélodie/2 (low VC) x ILB 938/2 (high VC) were genotyped and evaluated for VC content. Composite interval mapping was run in the R/qtl software using accurate phenotypic data together with a high-density SNP-based genetic map. The results revealed two minor-effect QTLs in addition to VC1: one on chromosome 4, with about a 15% effect on convicine content, and one on chromosome 5, with a 15% effect on vicine and total VC content. This work also reports candidate genes for the newly detected minor-effect QTLs, identified through comparative genomics with the Medicago truncatula genome, and proposes hypotheses on their role in the VC biosynthetic pathway or in VC transport into the embryo, for further testing.
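
    To make the mapping step concrete, the sketch below runs a minimal single-marker QTL scan on simulated RIL data, converting a per-marker regression fit into a LOD score. It only illustrates the idea of linking marker genotypes to the VC phenotype; it is not the composite interval mapping performed in R/qtl in the study, and the marker count, simulated effect size, and all variable names are assumptions.

```python
# Minimal single-marker QTL scan on simulated RIL data.
# NOT the composite interval mapping (R/qtl) used in the study; it only
# illustrates linking marker genotypes to a quantitative phenotype.
import numpy as np

rng = np.random.default_rng(0)

n_rils, n_markers = 165, 500   # population size from the abstract; marker count is assumed
genotypes = rng.integers(0, 2, size=(n_rils, n_markers))  # RILs: two homozygous classes (0/1)

# Simulate a phenotype with one "true" QTL at marker 123 plus noise (illustrative only).
true_qtl = 123
phenotype = 2.0 * genotypes[:, true_qtl] + rng.normal(0, 1, n_rils)

def lod_score(geno, pheno):
    """LOD = (n/2) * log10(RSS_null / RSS_marker) for a single-marker model."""
    n = len(pheno)
    rss_null = np.sum((pheno - pheno.mean()) ** 2)
    fitted = np.array([pheno[geno == g].mean() for g in geno])  # genotype-class means
    rss_marker = np.sum((pheno - fitted) ** 2)
    return (n / 2) * np.log10(rss_null / rss_marker)

lod = np.array([lod_score(genotypes[:, m], phenotype) for m in range(n_markers)])
print("Top marker:", lod.argmax(), "LOD:", round(lod.max(), 2))
```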

    When Mitigating Bias is Unfair: A Comprehensive Study on the Impact of Bias Mitigation Algorithms

    Most works on the fairness of machine learning systems focus on the blind optimization of common fairness metrics, such as Demographic Parity and Equalized Odds. In this paper, we conduct a comparative study of several bias mitigation approaches to investigate their behavior at a fine grain: the prediction level. Our objective is to characterize the differences between fair models obtained with different approaches. With comparable performance in fairness and accuracy, do the different bias mitigation approaches impact a similar number of individuals? Do they mitigate bias in a similar way? Do they affect the same individuals when debiasing a model? Our findings show that bias mitigation approaches differ considerably in their strategies, both in the number of individuals impacted and in the populations targeted. More surprisingly, these results hold even across several runs of the same mitigation approach. These findings raise questions about the limitations of current group fairness metrics, as well as the arbitrariness, and hence unfairness, of the whole debiasing process.
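
    As a rough illustration of the prediction-level comparison described above, the sketch below computes the standard Demographic Parity and Equalized Odds gaps for two models and counts the individuals on which their predictions disagree. The two "debiased" predictors are random stand-ins on synthetic data, not the mitigation approaches evaluated in the paper.

```python
# Sketch: compare two "debiased" models at the prediction level.
# The metrics are the usual Demographic Parity and Equalized Odds gaps;
# the two models are random stand-ins, not the paper's mitigation approaches.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
sensitive = rng.integers(0, 2, n)   # protected attribute (0/1)
y_true = rng.integers(0, 2, n)      # ground-truth labels
pred_a = rng.integers(0, 2, n)      # predictions of debiased model A (stand-in)
pred_b = rng.integers(0, 2, n)      # predictions of debiased model B (stand-in)

def demographic_parity_gap(pred, s):
    # Difference in positive-prediction rates between the two groups.
    return abs(pred[s == 0].mean() - pred[s == 1].mean())

def equalized_odds_gap(pred, y, s):
    # Max gap over the two true-label slices (TPR and FPR differences).
    return max(abs(pred[(y == v) & (s == 0)].mean() - pred[(y == v) & (s == 1)].mean())
               for v in (0, 1))

for name, pred in [("A", pred_a), ("B", pred_b)]:
    print(name, "DP gap:", round(demographic_parity_gap(pred, sensitive), 3),
          "EO gap:", round(equalized_odds_gap(pred, y_true, sensitive), 3))

# Prediction-level view: even with similar global metrics,
# the two models may disagree on many individuals.
print("individuals with differing predictions:", int((pred_a != pred_b).sum()))
```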

    How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

    Explainability is becoming an important requirement for organizations that use automated decision-making, due to regulatory initiatives and a shift in public awareness. Various and significantly different algorithmic methods for providing this explainability have been introduced in the field, but the existing literature in the machine learning community has paid little attention to the stakeholder, whose needs are studied instead in the human-computer interaction community. Organizations that want or need to provide this explainability are therefore confronted with selecting an appropriate method for their use case. In this paper, we argue that there is a need for a methodology to bridge the gap between stakeholder needs and explanation methods. We present our ongoing work on creating this methodology to help data scientists in the process of providing explainability to stakeholders. In particular, our contributions include documents used to characterize XAI methods and user requirements (shown in the Appendix), on which our methodology builds.
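
    As a purely hypothetical illustration of the gap this methodology targets, the sketch below characterizes a few XAI methods and one stakeholder requirement along shared dimensions and ranks the methods by fit. The dimensions, method properties, and scoring are invented for illustration and are not the characterization documents from the paper.

```python
# Hypothetical illustration of the matching idea: describe XAI methods and
# stakeholder requirements along the same dimensions, then rank methods by fit.
# All dimensions and scores below are invented, not the paper's documents.
METHODS = {
    "feature attribution": {"scope": "local", "model_access": "black-box", "output": "importance scores"},
    "counterfactual":      {"scope": "local", "model_access": "black-box", "output": "actionable example"},
    "global surrogate":    {"scope": "global", "model_access": "black-box", "output": "rules"},
}

def rank_methods(requirements: dict) -> list:
    """Return method names sorted by how many requirement dimensions they satisfy."""
    def score(props):
        return sum(props.get(k) == v for k, v in requirements.items())
    return sorted(METHODS, key=lambda name: score(METHODS[name]), reverse=True)

# Example stakeholder need: a local, actionable explanation for an end user.
print(rank_methods({"scope": "local", "output": "actionable example"}))
```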

    On the Fairness ROAD: Robust Optimization for Adversarial Debiasing

    In the field of algorithmic fairness, significant attention has been devoted to group fairness criteria such as Demographic Parity and Equalized Odds. Nevertheless, these objectives, measured as global averages, have raised concerns about persistent local disparities between sensitive groups. In this work, we address the problem of local fairness, which requires that the predictor be unbiased not only in expectation over the whole population, but also within any subregion of the feature space, unknown at training time. To enforce this objective, we introduce ROAD, a novel approach that leverages the Distributionally Robust Optimization (DRO) framework within a fair adversarial learning objective, where an adversary tries to infer the sensitive attribute from the predictions. Using an instance-level re-weighting strategy, ROAD is designed to prioritize inputs that are likely to be locally unfair, i.e. those on which the adversary faces the least difficulty in reconstructing the sensitive attribute. Numerical experiments demonstrate the effectiveness of our method: it achieves Pareto dominance with respect to local fairness and accuracy for a given global fairness level across three standard datasets, and it also improves fairness generalization under distribution shift.
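
    The sketch below gives a loose PyTorch illustration of fair adversarial training with instance-level re-weighting: an adversary tries to recover the sensitive attribute from the predictions, and samples on which it succeeds easily receive higher weight when the predictor is updated. It is not the exact ROAD/DRO objective; the network sizes, the softmax weighting rule, and the hyperparameters are assumptions made for the example.

```python
# Loose sketch of fair adversarial training with instance-level re-weighting.
# Not the exact ROAD / DRO objective: the weighting rule and hyperparameters
# below are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 512, 20
x = torch.randn(n, d)
y = torch.randint(0, 2, (n,)).float()   # task labels
s = torch.randint(0, 2, (n,)).float()   # sensitive attribute

predictor = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")
lam, tau = 1.0, 0.5   # fairness trade-off and weighting temperature (assumed values)

for step in range(200):
    # 1) Train the adversary to recover s from the predictor's output.
    logits = predictor(x).squeeze(1)
    adv_loss = bce(adversary(logits.detach().unsqueeze(1)).squeeze(1), s).mean()
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Instance-level re-weighting: samples where the adversary recovers s
    #    easily (low per-sample loss) get more weight.
    per_sample_adv = bce(adversary(logits.unsqueeze(1)).squeeze(1), s)
    weights = torch.softmax(-per_sample_adv.detach() / tau, dim=0) * n  # mean weight ~ 1

    # 3) Update the predictor: fit the task while (weightedly) fooling the adversary.
    pred_loss = bce(logits, y).mean() - lam * (weights * per_sample_adv).mean()
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()

print("final task loss:", bce(predictor(x).squeeze(1), y).mean().item())
```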

    Explaining Deep Learning Models with Constrained Adversarial Examples

    Machine learning algorithms generally suffer from a problem of explainability. Given a classification result from a model, it is typically hard to determine what caused the decision and to give an informative explanation. We explore a new method of generating counterfactual explanations, which, instead of explaining why a particular classification was made, explain how a different outcome can be achieved. This gives the recipients of the explanation a better way to understand the outcome and provides an actionable suggestion. We show that the introduced method of Constrained Adversarial Examples (CADEX) can be used in real-world applications and yields explanations that incorporate business or domain constraints, such as the handling of categorical attributes and range constraints.
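
    The core mechanism, perturbing an input by gradient descent toward the opposite predicted class while enforcing constraints, can be sketched as below. This is a simplified gradient-based counterfactual search with box (range) constraints only; it omits CADEX's handling of categorical attributes, and the classifier and bounds are stand-ins.

```python
# Simplified counterfactual search: push an input across the decision boundary
# by gradient descent while clipping to per-feature range constraints.
# Not the full CADEX method (categorical-attribute handling is omitted);
# the classifier and bounds below are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 8
model = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))  # untrained stand-in classifier
x_orig = torch.randn(1, d)
lower, upper = x_orig - 1.0, x_orig + 1.0     # per-feature range constraints (assumed)

orig_class = float(torch.sigmoid(model(x_orig)).item() > 0.5)
target = torch.tensor([1.0 - orig_class])     # aim for the opposite outcome

x_cf = x_orig.clone().requires_grad_(True)
opt = torch.optim.Adam([x_cf], lr=0.05)
bce = nn.BCEWithLogitsLoss()

for step in range(300):
    opt.zero_grad()
    loss = bce(model(x_cf).squeeze(1), target)
    loss.backward()
    opt.step()
    with torch.no_grad():                     # enforce the range constraints after each step
        x_cf.clamp_(min=lower, max=upper)
    if round(torch.sigmoid(model(x_cf)).item()) == target.item():
        break                                 # stop once the predicted class has flipped

print("feature changes:", (x_cf.detach() - x_orig).squeeze(0))
```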