10 research outputs found

    Reconsidering the concept of ‘dual-use’ in the context of neuroscience research

    Commentary: The concept of 'dual-use' research usually refers to research with both civilian (e.g., therapeutic) and military applications. I argue here that this dichotomy can and should be reconsidered, and thus that the concept of dual-use can be helpful in examining other potential misuses, such as neuroenhancement or neuromarketing.

    The dual-use concept in research: a diagnostic tool for the ethical analysis of risks associated with neuroenhancement uses of neuroscience research

    Transcranial stimulation technologies – such as tDCS and TMS – currently offer promising therapeutic outcomes, as well as various cognitive improvements in healthy individuals, leading to different and more or less imminent neuroenhancement applications outside clinical or research contexts. In this thesis, a concept typically associated with research that carries national security implications and a high level of risk – the concept of "dual-use" – is deployed to analyze the risks of neuroscience applications being diverted from their primary research objectives, along with the related ethical concerns (e.g., autonomy, justice, physical integrity). Revising the dichotomous definition of dual-use research as involving either 'good' or 'bad' uses, I propose to extend the concept so as to consider neuroenhancement a misuse of neuroscience research, with reference to the conflict between protecting academic freedom and progress on the one hand and promoting security and public health on the other. The concept proves a pertinent diagnostic tool for evaluating the risks associated with neuroenhancement uses of these technologies – tDCS in particular – when considering how best to regulate such devices ahead of their use, following the precautionary principle inherent in dual-use research. It can also help in setting out proactive, contextualized governance mechanisms based on the shared responsibility of a broad range of stakeholders, something made necessary by the rapid advances in neuroscience and the imminence of such devices coming onto the market.

    Artificial intelligence systems and health: the challenges of responsible innovation

    The use of artificial intelligence (AI) systems in health is part of the advent of a new "high-definition" medicine – predictive, preventive, and personalized – that draws on the unprecedented amount of data now available. At the heart of digital health innovation, the development of AI systems promises an interconnected, self-learning healthcare system that could, among other things, redefine the classification of diseases, generate new medical knowledge, or predict individuals' health trajectories for prevention purposes. Various healthcare applications are being considered, ranging from medical decision support through expert systems to precision medicine (e.g., pharmacological targeting), as well as individualized prevention through health trajectories built on biological markers. However, pressing ethical concerns emerge with the growing use of algorithms to analyze ever more health-related data (often personal, if not sensitive) and with the reduced human supervision of many automated processes. The limitations of big data analysis, the need for data sharing, and the opacity of algorithmic decisions give rise to ethical concerns about the protection of privacy and intimacy, free and informed consent, social justice, the dehumanization of care and of patients, and security. To address these challenges, many initiatives have focused on defining and applying guiding principles for an ethical governance of AI. The operationalization of these principles, however, runs into the familiar difficulties of applied ethics, concerning both the scope (universal or plural) of the principles and the way they are put into practice (inductive or deductive methods). While context-sensitive, bottom-up approaches to applied ethics seem to answer these difficulties, those who adopt them still face several challenges. An analysis of citizens' fears and expectations emerging from the discussions held during the co-construction of the Montreal Declaration for a Responsible Development of AI makes it possible to outline these challenges. Three main challenges to the exercise of responsibility stand out, each of which could hinder the ethical governance of AI in health: the incapacitation of health professionals and patients, the problem of many hands, and artificial agency. Meeting these challenges calls for AI systems that empower people and preserve human agency, so as to foster a (pragmatic) shared responsibility among the various stakeholders involved in developing healthcare AI systems. Addressing them is essential to adapt existing governance mechanisms and to enable responsible digital innovation in healthcare and research that keeps the human being at the center of its development.

    Responsible conduct of research-creation: a portrait of an uncharted field of research

    Responsible conduct of research (RCR) is ubiquitous and present in most areas of research. One area that has received little attention is Research-Creation (RC), an emergent field at the interface of academic research and creative activities; in Quebec, Canada, RC is defined as "research activities or approaches that foster the creation or interpretation/performance of literary or artistic works of all types". Researcher-Creators – who are at once researchers and practising artists, musicians, or designers – may face very different issues or challenges from colleagues in the rest of academia. How are RCR issues articulated in RC? How does the heterogeneous RC community respond to institutional policies or provincial/national RCR guidelines? This review aimed to identify and categorize RCR issues and RC-specific factors. (FRQ Action concertée)

    Responsible Conduct of Research in Research-Creation: Moving into Uncharted Terrain

    Responsible conduct of research (RCR) is ubiquitous and present in most areas of research. One area that has received little attention is Research-Creation (RC), an emergent field at the interface of academic research and creative activities; in Quebec, Canada, RC is defined as "research activities or approaches that foster the creation or interpretation/performance of literary or artistic works of all types". Researcher-Creators – who are at once researchers and practising artists, musicians, or designers – may face very different issues or challenges from colleagues in the rest of academia. How do researcher-creators reconcile their dual obligations to creation and to research? Are the usual research ethics guidelines (e.g., TCPS2, ICH) relevant, and how do they apply? How do the creative/artistic dimensions of research affect evaluations by grant committees and REBs? To better understand how RCR issues are articulated in the very heterogeneous RC community, we combine here results from a literature review and an international survey on RCR in RC. (FRQ Action concertée)