48 research outputs found

    The genetic architecture of the human cerebral cortex

    Get PDF
    The cerebral cortex underlies our complex cognitive capabilities, yet little is known about the specific genetic loci that influence human cortical structure. To identify genetic variants that affect cortical structure, we conducted a genome-wide association meta-analysis of brain magnetic resonance imaging data from 51,665 individuals. We analyzed the surface area and average thickness of the whole cortex and 34 regions with known functional specializations. We identified 199 significant loci and found significant enrichment for loci influencing total surface area within regulatory elements that are active during prenatal cortical development, supporting the radial unit hypothesis. Loci that affect regional surface area cluster near genes in Wnt signaling pathways, which influence progenitor expansion and areal identity. Variation in cortical structure is genetically correlated with cognitive function, Parkinson's disease, insomnia, depression, neuroticism, and attention deficit hyperactivity disorder
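
    The combination step of a GWAS meta-analysis like this one is commonly an inverse-variance-weighted fixed-effect model applied per variant, with genome-wide significance conventionally declared at p < 5e-8. The sketch below only illustrates that standard calculation on made-up per-cohort summary statistics; it is not the consortium's actual pipeline, and the function and numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def ivw_meta(betas, ses):
    """Fixed-effect inverse-variance-weighted meta-analysis of per-cohort
    effect sizes (betas) and standard errors (ses) for a single variant."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                      # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)  # pooled effect size
    se = np.sqrt(1.0 / np.sum(w))         # pooled standard error
    z = beta / se
    p = 2 * stats.norm.sf(abs(z))         # two-sided p-value
    return beta, se, p

# Hypothetical per-cohort summary statistics for one SNP
beta, se, p = ivw_meta([0.021, 0.018, 0.025], [0.004, 0.006, 0.005])
print(p < 5e-8)  # conventional genome-wide significance threshold
```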

    Albiglutide and cardiovascular outcomes in patients with type 2 diabetes and cardiovascular disease (Harmony Outcomes): a double-blind, randomised placebo-controlled trial

    Get PDF
    Background: Glucagon-like peptide 1 receptor agonists differ in chemical structure, duration of action, and in their effects on clinical outcomes. The cardiovascular effects of once-weekly albiglutide in type 2 diabetes are unknown. We aimed to determine the safety and efficacy of albiglutide in preventing cardiovascular death, myocardial infarction, or stroke. Methods: We did a double-blind, randomised, placebo-controlled trial in 610 sites across 28 countries. We randomly assigned patients aged 40 years and older with type 2 diabetes and cardiovascular disease (at a 1:1 ratio) to groups that either received a subcutaneous injection of albiglutide (30–50 mg, based on glycaemic response and tolerability) or of a matched volume of placebo once a week, in addition to their standard care. Investigators used an interactive voice or web response system to obtain treatment assignment, and patients and all study investigators were masked to their treatment allocation. We hypothesised that albiglutide would be non-inferior to placebo for the primary outcome of the first occurrence of cardiovascular death, myocardial infarction, or stroke, which was assessed in the intention-to-treat population. If non-inferiority was confirmed by an upper limit of the 95% CI for a hazard ratio of less than 1·30, closed testing for superiority was prespecified. This study is registered with ClinicalTrials.gov, number NCT02465515. Findings: Patients were screened between July 1, 2015, and Nov 24, 2016. 10 793 patients were screened and 9463 participants were enrolled and randomly assigned to groups: 4731 patients were assigned to receive albiglutide and 4732 patients to receive placebo. On Nov 8, 2017, it was determined that 611 primary endpoints and a median follow-up of at least 1·5 years had accrued, and participants returned for a final visit and discontinuation from study treatment; the last patient visit was on March 12, 2018. These 9463 patients, the intention-to-treat population, were evaluated for a median duration of 1·6 years and were assessed for the primary outcome. The primary composite outcome occurred in 338 (7%) of 4731 patients at an incidence rate of 4·6 events per 100 person-years in the albiglutide group and in 428 (9%) of 4732 patients at an incidence rate of 5·9 events per 100 person-years in the placebo group (hazard ratio 0·78, 95% CI 0·68–0·90), which indicated that albiglutide was superior to placebo (p<0·0001 for non-inferiority; p=0·0006 for superiority). The incidence of acute pancreatitis (ten patients in the albiglutide group and seven patients in the placebo group), pancreatic cancer (six patients in the albiglutide group and five patients in the placebo group), medullary thyroid carcinoma (zero patients in both groups), and other serious adverse events did not differ between the two groups. There were three (<1%) deaths in the placebo group that were assessed by investigators, who were masked to study drug assignment, to be treatment-related and two (<1%) deaths in the albiglutide group. Interpretation: In patients with type 2 diabetes and cardiovascular disease, albiglutide was superior to placebo with respect to major adverse cardiovascular events. Evidence-based glucagon-like peptide 1 receptor agonists should therefore be considered as part of a comprehensive strategy to reduce the risk of cardiovascular events in patients with type 2 diabetes. Funding: GlaxoSmithKline
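
    The prespecified decision rule reported above (non-inferiority if the upper limit of the 95% CI for the hazard ratio is below 1·30, followed by closed testing for superiority) can be traced with the published numbers. The sketch below is only an illustration of that logic, not the trial's statistical analysis code.

```python
def closed_test(hr, ci_lower, ci_upper, ni_margin=1.30):
    """Closed testing: declare non-inferiority if the upper confidence limit
    is below the margin; only then test superiority (CI entirely below 1)."""
    non_inferior = ci_upper < ni_margin
    superior = non_inferior and ci_upper < 1.0
    return non_inferior, superior

# Reported primary-outcome result: HR 0.78, 95% CI 0.68-0.90
print(closed_test(0.78, 0.68, 0.90))  # -> (True, True): non-inferior and superior
```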

    Dissecting the Shared Genetic Architecture of Suicide Attempt, Psychiatric Disorders, and Known Risk Factors

    Get PDF
    Background: Suicide is a leading cause of death worldwide, and nonfatal suicide attempts, which occur far more frequently, are a major source of disability and social and economic burden. Both have substantial genetic etiology, which is partially shared and partially distinct from that of related psychiatric disorders. Methods: We conducted a genome-wide association study (GWAS) of 29,782 suicide attempt (SA) cases and 519,961 controls in the International Suicide Genetics Consortium (ISGC). The GWAS of SA was conditioned on psychiatric disorders using GWAS summary statistics via multitrait-based conditional and joint analysis, to remove genetic effects on SA mediated by psychiatric disorders. We investigated the shared and divergent genetic architectures of SA, psychiatric disorders, and other known risk factors. Results: Two loci reached genome-wide significance for SA: the major histocompatibility complex and an intergenic locus on chromosome 7, the latter of which remained associated with SA after conditioning on psychiatric disorders and replicated in an independent cohort from the Million Veteran Program. This locus has been implicated in risk-taking behavior, smoking, and insomnia. SA showed strong genetic correlation with psychiatric disorders, particularly major depression, and also with smoking, pain, risk-taking behavior, sleep disturbances, lower educational attainment, reproductive traits, lower socioeconomic status, and poorer general health. After conditioning on psychiatric disorders, the genetic correlations between SA and psychiatric disorders decreased, whereas those with nonpsychiatric traits remained largely unchanged. Conclusions: Our results identify a risk locus that contributes more strongly to SA than other phenotypes and suggest a shared underlying biology between SA and known risk factors that is not mediated by psychiatric disorders.

    Critical empirical study on black-box explanations in AI

    No full text
    This paper raises empirical concerns about post-hoc explanations of black-box ML models, one of the major trends in AI explainability (XAI), by showing their lack of interpretability and their societal consequences. Using a representative consumer panel to test our assumptions, we report three main findings. First, we show that post-hoc explanations of black-box models tend to give partial and biased information on the underlying mechanism of the algorithm and can be subject to manipulation or information withholding by diverting users' attention. Second, we show the importance of tested behavioral indicators, in addition to self-reported perceived indicators, to provide a more comprehensive view of the dimensions of interpretability. This paper contributes to shedding new light on the current theoretical debate between intrinsically transparent AI models and post-hoc explanations of complex black-box models, a debate which is likely to play a highly influential role in the future development and operationalization of AI systems.

    Formaliser l'équité en Machine Learning. Revue des méthodes de "fairness" en apprentissage supervisé

    No full text
    Decisions produced by supervised learning algorithms are learned from historical examples. One of the major ethical problems posed by Machine Learning algorithms is the fairness of decisions with respect to certain groups of the population. In this poster, we present the sources of bias, the metrics of fairness, and the methods of bias mitigation in Machine Learning.
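
    As a rough illustration of the kind of group-fairness metrics such a review covers (the specific metrics and notation of the poster are not reproduced here), two common ones can be computed directly from predictions and a binary protected attribute; the data below are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups
    (a common group-fairness metric; 0 means parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Hypothetical labels, predictions, and a binary protected attribute
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```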

    Some critical and ethical perspectives on the empirical turn of AI interpretability

    No full text
    We consider two fundamental and related issues currently faced by Artificial Intelligence (AI) development: the lack of ethics and the interpretability of AI decisions. Can interpretable AI decisions help to address ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn in the production of explanations tends to select AI explanations with a low denunciatory power. Under certain conditions, interpretability tools are therefore not means but, paradoxically, obstacles to the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents. We also show that the denunciatory power of AI explanations is highly dependent on the context in which the explanation takes place, such as the gender or education level of the person for whom the explanation is intended. AI ethics tools are therefore sometimes too flexible, and self-regulation through the liberal production of explanations does not seem to be enough to address ethical issues. We then propose two scenarios for the future development of ethical AI: more external regulation or more liberalization of AI explanations. These two opposite paths will play a major role in the future development of ethical AI.

    A Critical Empirical Study of Black-box Explanations in AI

    No full text
    This paper raises empirical concerns about post-hoc explanations of black-box ML models, one of the major trends in AI explainability (XAI), by showing their lack of interpretability and their societal consequences. Using a representative consumer panel to test our assumptions, we report three main findings. First, we show that post-hoc explanations of black-box models tend to give partial and biased information on the underlying mechanism of the algorithm and can be subject to manipulation or information withholding by diverting users' attention. Second, we show the importance of tested behavioral indicators, in addition to self-reported perceived indicators, to provide a more comprehensive view of the dimensions of interpretability. This paper contributes to shedding new light on the current theoretical debate between intrinsically transparent AI models and post-hoc explanations of complex black-box models, a debate which is likely to play a highly influential role in the future development and operationalization of AI systems.
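
    To make the notion of a post-hoc explanation concrete, the sketch below applies two common model-agnostic techniques, permutation importance and a shallow global surrogate tree, to the same black-box classifier on synthetic data. It is an illustrative setup rather than the panel experiment described in the paper, and each output is exactly the kind of partial view of the model's mechanism the paper questions.

```python
# Two common post-hoc explanation styles applied to one black-box model:
# permutation importance and a global decision-tree surrogate (illustrative
# sketch only, not the authors' experimental setup).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Post-hoc explanation 1: permutation importance of each input feature
perm = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print(perm.importances_mean.round(3))

# Post-hoc explanation 2: shallow surrogate tree mimicking the black box
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate))
```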

    Un cadre d’autorĂ©gulation pour l’éthique de L’IA : opportunitĂ©s et dĂ©fis

    No full text
    We propose a self-regulatory tool for AI design that integrates societal metrics such as fairness, interpretability, and privacy. To do so, we create an interface that allows data scientists to visually choose the Machine Learning (ML) algorithm that best fits the AI designers' ethical preferences. Using a Design Science methodology, we test the artifact on data scientist users and show that the interface is easy to use, gives a better understanding of the ethical issues of AI, generates debate, makes the algorithms more ethical, and is operational for decision-making. Our first contribution is to build a bottom-up AI regulation tool that integrates not only users' ethical preferences, but also the singularities of the practical case learned by the algorithm. The method is independent of ML use cases and ML learning procedures. Our second contribution is to show that data scientists will freely choose to sacrifice some performance to reach more ethical algorithms if they use appropriate regulatory tools. We then provide the conditions under which this technical and self-regulatory approach can fail. This paper shows how it is possible to bridge the gap between theories and practices in AI Ethics using flexible and bottom-up tools.
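
    The selection step that such an interface supports can be sketched as a preference-weighted choice among candidate models scored on performance and societal metrics. The candidates, scores, and weights below are hypothetical; the paper's actual interface and metric definitions are not reproduced here.

```python
# Preference-weighted model selection: score each candidate model by a
# designer-chosen mix of predictive performance and societal metrics.
# Candidates, metric values, and weights are hypothetical placeholders.
candidates = {
    # model name: (accuracy, fairness, interpretability), all in [0, 1]
    "deep_net":      (0.91, 0.60, 0.20),
    "random_forest": (0.89, 0.70, 0.40),
    "logistic_reg":  (0.85, 0.85, 0.90),
}

def select(weights, candidates):
    """Return the candidate maximizing the weighted sum of its metrics."""
    def score(metrics):
        return sum(w * m for w, m in zip(weights, metrics))
    return max(candidates, key=lambda name: score(candidates[name]))

# A designer who weights fairness and interpretability heavily accepts some
# loss of accuracy, the kind of trade-off the paper reports data scientists
# are willing to make with appropriate tooling.
print(select((0.4, 0.3, 0.3), candidates))  # -> 'logistic_reg'
```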

    Interprétabilité en Machine Learning, revue de littérature et perspectives

    No full text
    Machine Learning algorithms, and particularly deep neural networks, have shown strong predictive performance in recent years in many areas such as image recognition and text and speech analysis. Nevertheless, these good predictive results generally come with difficulty in interpreting, on the one hand, the model-generation process and, on the other, the learned decision. There are many tools for interpreting Machine Learning, ranging from local, global, intrinsic, post-hoc, agnostic, and model-specific methods to methods for visualizing parts of the input or parts of the algorithm. The current literature tends to combine these different methods, with the interactivity between them becoming the key to interpretation. Just as there is a multiplicity of definitions of interpretability depending on the context of use (Doshi-Velez and Kim 2017), there is also a multiplicity of methods and tools for interpreting so-called "black box" algorithms.
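
    As one concrete instance of the "local, post-hoc, model-agnostic" family the review describes, a distance-weighted linear surrogate can be fitted around a single prediction of a black-box model. This is a simplified sketch in the spirit of surrogate methods such as LIME, not any particular library's implementation, and the model and data are stand-ins.

```python
# Minimal local, post-hoc, model-agnostic explanation: perturb one instance,
# query the black box, and fit a distance-weighted linear surrogate whose
# coefficients act as local feature attributions (simplified sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=400, n_features=5, random_state=1)
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

def local_explanation(x, n_samples=200, scale=0.3, rng=np.random.default_rng(1)):
    """Return per-feature coefficients of a weighted linear surrogate fitted
    to the black box's predicted probabilities around instance x."""
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    preds = black_box.predict_proba(perturbed)[:, 1]
    weights = np.exp(-np.linalg.norm(perturbed - x, axis=1) ** 2)
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

print(local_explanation(X[0]).round(3))  # local attributions for one instance
```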

    Some critical and ethical perspectives on the empirical turn of AI interpretability

    No full text
    We consider two fundamental and related issues currently facing the development of Artificial Intelligence (AI): the lack of ethics, and the interpretability of AI decisions. Can interpretable AI decisions help to address the issue of ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn in the production of explanations tends to select AI explanations with a low denunciatory power. Under certain conditions, interpretability tools are therefore not means but, paradoxically, obstacles to the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents. We also show that the denunciatory power of AI explanations is highly dependent on the context in which the explanation takes place, such as the gender or education of the person for whom the explanation is intended. AI ethics tools are therefore sometimes too flexible, and self-regulation through the liberal production of explanations does not seem to be enough to resolve ethical issues. By following an STS pragmatist program, we highlight the role of non-human actors (such as computational paradigms, testing environments, etc.) in the formation of structural power relations, such as sexism. We then propose two scenarios for the future development of ethical AI: more external regulation, or more liberalization of AI explanations. These two opposite paths will play a major role in the future development of ethical AI.