
    Artificial Intelligence in the Medical System: Four Roles for Potential Transformation

    Artificial intelligence (AI) looks to transform the practice of medicine. As academics and policymakers alike turn to legal questions, including how to ensure high-quality performance by medical AI, a threshold issue involves what role AI will play in the larger medical system. This Article argues that AI can play at least four distinct roles in the medical system, each potentially transformative: pushing the frontiers of medical knowledge to increase the limits of medical performance, democratizing medical expertise by making specialist skills more available to non-specialists, automating drudgery within the medical system, and allocating scarce medical resources. Each role raises its own challenges, and an understanding of the four roles is necessary to identify and address major hurdles to the responsible development and deployment of medical AI.

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure that the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.

    Automating autism: Disability, discourse, and Artificial Intelligence

    As Artificial Intelligence (AI) systems shift to interact with new domains and populations, so does AI ethics: a relatively nascent subdiscipline that frequently concerns itself with questions of “fairness” and “accountability.” This fairness-centred approach has been criticized for (amongst other things) lacking the ability to address discursive, rather than distributional, injustices. In this paper I simultaneously validate these concerns, and work to correct the relative silence of both conventional and critical AI ethicists around disability, by exploring the narratives deployed by AI researchers in discussing and designing systems around autism. Demonstrating that these narratives frequently perpetuate a dangerously dehumanizing model of autistic people, I explore the material consequences this might have. More importantly, I highlight the ways in which discursive harms—particularly discursive harms around dehumanization—are not simply inadequately handled by conventional AI ethics approaches, but actively invisible to them. I urge AI ethicists to critically and immediately begin grappling with the likely consequences of an approach to ethics which focuses on personhood and agency, in a world in which many populations are treated as having neither. I suggest that this issue requires a substantial revisiting of the underlying premises of AI ethics, and point to some possible directions in which researchers and practitioners might look for inspiration.

    Automating Society: Taking Stock of Automated Decision-Making in the EU

    This is the first comprehensive study of the state of automated decision-making in Europe. Experts have looked at the situation at the EU level as well as in 12 Member States: Belgium, Denmark, Finland, France, Germany, Italy, the Netherlands, Poland, Slovenia, Spain, Sweden and the UK. They assessed not only the political discussions and initiatives in these countries but also presented a section "ADM in Action" for all states, listing examples of automated decision-making already in use.

    Automating Society: Taking Stock of Automated Decision-Making in the EU. Bertelsmann Stiftung Studies 2019

    Imagine you’re looking for a job. The company you are applying to says you can have a much easier application process if you provide them with your username and password for your personal email account. They can then just scan all your emails and develop a personality profile based on the result. No need to waste time filling out a boring questionnaire and, because it’s much harder to manipulate all your past emails than to try to give the ‘correct’ answers to a questionnaire, the results of the email scan will be much more accurate and truthful than any conventional personality profiling. Wouldn’t that be great? Everyone wins: the company looking for new personnel, because they can recruit people on the basis of more accurate profiles; you, because you save time and effort and don’t end up in a job you don’t like; and the company offering the profiling service, because they have a cool new business model.

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers directly carry over to the reasoning performance in LogiKEy. Case studies, in which the LogiKEy framework and methodology have been applied and tested, give evidence that HOL's undecidability often does not hinder efficient experimentation.
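    As a rough illustration of the kind of shallow semantical embedding the abstract refers to, the following sketch encodes standard deontic logic (SDL) inside higher-order logic, here written in Lean 4. This is not the paper's own development (LogiKEy works with off-the-shelf HOL provers and model finders), and every name in it (World, acc, O, valid, sdl_K) is an assumption introduced only for this sketch.

    ```lean
    -- Minimal sketch: a shallow semantical embedding of standard deontic logic
    -- (SDL) in HOL, in the spirit of the LogiKEy approach. Illustrative only;
    -- all names here (World, acc, O, valid) are assumptions of this sketch.

    axiom World : Type                         -- possible worlds
    def σ : Type := World → Prop               -- world-lifted propositions
    axiom acc : World → World → Prop           -- deontic accessibility: "ideal" worlds

    -- Lifted implication and the obligation operator.
    def limp (φ ψ : σ) : σ := fun w => φ w → ψ w
    def O (φ : σ) : σ := fun w => ∀ v, acc w v → φ v

    -- Global validity: truth in every world.
    def valid (φ : σ) : Prop := ∀ w, φ w

    -- The K axiom of SDL is recovered by ordinary HOL reasoning over the embedding.
    theorem sdl_K (φ ψ : σ) :
        valid (limp (O (limp φ ψ)) (limp (O φ) (O ψ))) :=
      fun _w hOimp hOφ v hv => hOimp v hv (hOφ v hv)
    ```

    In the same style, richer deontic logics, logic combinations, and ethico-legal domain theories can be stated as further definitions and axioms, after which automated provers or model finders can be asked to prove or refute candidate principles, which is the kind of flexible experimentation the abstract describes.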