
    Artificial Intelligence and Patient-Centered Decision-Making

    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.

    Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

    ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: they should use the system to fill in gaps in their ability, but recognize signals that the system might be incorrect. We measured how people's trust in ML recommendations differs by expertise and with more system information through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) people trust incorrect ML recommendations for tasks that they perform correctly the majority of the time, even if they have high prior knowledge about ML or are given information indicating the system is not confident in its prediction; (2) four different types of system information all increased people's trust in recommendations; and (3) math and logic skills may be as important as ML knowledge for decision-makers working with ML recommendations.

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.

    The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature.

    Ethical Implications of Predictive Risk Intelligence

    This paper presents a case study on the ethical issues that relate to the use of Smart Information Systems (SIS) in predictive risk intelligence. The case study is based on a company that is using SIS to provide predictive risk intelligence in supply chain management (SCM), insurance, finance and sustainability. The paper covers an assessment of how the company recognises ethical concerns related to SIS and the ways it deals with them. Data was collected through a document review and two in-depth semi-structured interviews. Results from the case study indicate that the main ethical concerns with the use of SIS in predictive risk intelligence include protection of the data being used in predicting risk, data privacy, and consent from those whose data has been collected from data providers such as social media sites. There are also issues relating to the transparency and accountability of the processes used in predictive intelligence. The interviews highlighted the issue of bias in using the SIS to make predictions for specific target clients. The last ethical issue concerned trust in, and the accuracy of, the predictions of the SIS. In response to these issues, the company has put in place different mechanisms to ensure responsible innovation through what it calls Responsible Data Science, under which the identified ethical issues are addressed by following a code of ethics and engaging with stakeholders and ethics committees. This paper is important because it provides lessons for the responsible implementation of SIS in industry, particularly for start-ups. The paper acknowledges ethical issues with the use of SIS in predictive risk intelligence and suggests that ethics should be a central consideration for companies and individuals developing SIS to create meaningful positive change for society.