
    Explainable Text Classification in Legal Document Review A Case Study of Explainable Predictive Coding

    In today's legal environment, lawsuits and regulatory investigations require companies to embark upon increasingly intensive data-focused engagements to identify, collect and analyze large quantities of data. When documents are staged for review, the process can require companies to dedicate an extraordinary level of resources, both in human effort and in technology-based techniques to intelligently sift through the data. For several years, attorneys have been using a variety of tools to conduct this exercise, and most recently they have come to accept machine learning techniques such as text classification to efficiently cull massive volumes of data and identify responsive documents for use in these matters. In recent years, AI and machine learning researchers have been actively investigating explainable AI. In an explainable AI system, actions or decisions are understandable to humans. In typical legal 'document review' scenarios, a document can be identified as responsive as long as one or more of its text snippets are deemed responsive. In these scenarios, if predictive coding can be used to locate these responsive snippets, then attorneys can easily evaluate the model's document classification decision. When deployed with well-defined and explainable results, predictive coding can drastically enhance the overall quality and speed of the document review process by reducing the time it takes to review documents. The authors of this paper propose the concept of explainable predictive coding and simple explainable predictive coding methods to locate responsive snippets within responsive documents. We also report preliminary experimental results using data from an actual legal matter that entailed this type of document review. Comment: 2018 IEEE International Conference on Big Data
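    The snippet-level mechanism described above lends itself to a simple illustration. The following is a minimal sketch, assuming a standard bag-of-words classifier trained on labeled documents is reused to score individual snippets; the function names, the sentence-splitting heuristic, and the 0.5 threshold are illustrative assumptions, not the authors' actual method.

    # Illustrative sketch of snippet-level "explainable predictive coding":
    # a document-level classifier is reused to score individual snippets, and
    # the highest-scoring snippets are surfaced as the explanation for why a
    # document was classified as responsive. All names are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_review_model(documents, labels):
        """Fit a responsive/non-responsive classifier on whole documents."""
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
        model.fit(documents, labels)
        return model

    def responsive_snippets(model, document, threshold=0.5):
        """Score each snippet (here: sentence) and return those likely responsive."""
        snippets = [s.strip() for s in document.split(".") if s.strip()]
        scores = model.predict_proba(snippets)[:, 1]
        ranked = sorted(zip(snippets, scores), key=lambda p: p[1], reverse=True)
        return [(snip, score) for snip, score in ranked if score >= threshold]

    A document would then be flagged as responsive whenever at least one snippet clears the threshold, and the top-scoring snippets serve as the human-readable rationale the abstract describes.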

    Fairness as a Determinant of AI Adoption in Recruiting: An Interview-based Study

    Traditional recruiting techniques are often characterized by discrimination, as human recruiters make biased decisions. To increase fairness in human resource management (HRM), organizations are increasingly adopting AI-based methods. Recruiting processes in particular are being restructured in order to find promising talent for vacant positions. However, the use of AI in recruiting is a double-edged sword, as the neutrality of AI-based decisions depends heavily on the quality of the underlying data. In this research-in-progress, we develop a research model explaining AI adoption in recruiting by defining and considering fairness as a determinant. Based on 21 semi-structured interviews, we identified dimensions of perceived fairness (diversity, ethics, discrimination and bias, explainable AI) that affect AI adoption. The proposed model addresses research gaps in AI recruiting research as well as emerging ethical questions concerning the use of AI in people management in general and in recruiting processes in particular. We also discuss implications for further research and the next steps of this research-in-progress work.

    Functional requirements to mitigate the Risk of Harm to Patients from Artificial Intelligence in Healthcare

    The Directorate General for Parliamentary Research Services of the European Parliament has prepared a report for the Members of the European Parliament in which it enumerates seven main risks of Artificial Intelligence (AI) in medicine and healthcare: patient harm due to AI errors, misuse of medical AI tools, bias in AI and the perpetuation of existing inequities, lack of transparency, privacy and security issues, gaps in accountability, and obstacles to implementation. In this study, we propose fourteen functional requirements that AI systems may implement to reduce the risks associated with their medical purpose: AI passport, user management, regulation check, academic-use-only disclaimer, data quality assessment, clinician double check, continuous performance evaluation, audit trail, continuous usability test, review of retrospective/simulated cases, bias check, eXplainable AI, encryption and use of field-tested libraries, and semantic interoperability. Our intention here is to provide specific high-level specifications of technical solutions that ensure continuously good performance and use of AI systems to benefit patients, in compliance with the future EU regulatory framework. Comment: 14 pages, 1 figure, 1 table
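    As a rough illustration of how such requirements could be made operational, the sketch below models them as a machine-checkable deployment checklist. The requirement names follow the abstract; the data structure, method names, and evidence fields are hypothetical, not specifications from the report or the proposed EU framework.

    # Illustrative sketch: the fourteen functional requirements as a deployment
    # checklist an operator could evaluate before enabling a medical AI tool.
    # Only the requirement names come from the abstract; the rest is hypothetical.
    from dataclasses import dataclass, field

    REQUIREMENTS = [
        "AI passport", "User management", "Regulation check",
        "Academic use only disclaimer", "Data quality assessment",
        "Clinician double check", "Continuous performance evaluation",
        "Audit trail", "Continuous usability test",
        "Review of retrospective/simulated cases", "Bias check",
        "eXplainable AI", "Encryption and use of field-tested libraries",
        "Semantic interoperability",
    ]

    @dataclass
    class DeploymentChecklist:
        system_name: str
        evidence: dict = field(default_factory=dict)  # requirement -> supporting evidence

        def record(self, requirement: str, proof: str) -> None:
            """Record evidence that a requirement has been implemented."""
            if requirement not in REQUIREMENTS:
                raise ValueError(f"Unknown requirement: {requirement}")
            self.evidence[requirement] = proof

        def missing(self) -> list:
            """Requirements with no recorded evidence; deployment should wait on these."""
            return [r for r in REQUIREMENTS if r not in self.evidence]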

    Artificial Intelligence System Development in terms of People-Process-Data-Technology (2PDT): Results from Government Case Studies

    Artificial Intelligence (AI) System Development (SD) is an organisational activity characterised by diverse opinions and industry-driven approaches, with limited research reporting the outcomes of proposed frameworks. This paper aims to analyse the outcomes of a previously proposed framework in terms of people, process, data and technology (2PDT). We argue that organisations can improve their effectiveness by promoting an agile organisational culture and better data management. The findings suggest that people and culture are critical to taking timely action, and that the use of quality data and the development of ethical and explainable AI systems are required. In this paper, we evaluate the use of 2PDT by investigating nine case studies and demonstrate its effectiveness in the formulation of research design, data collection and analysis. The findings highlight that 2PDT offers a useful conceptual framework for examining this phenomenon owing to its agility, rigour, dynamicity, and completeness.

    Explaining RL Decisions with Trajectories

    Explanation is a key component for the adoption of reinforcement learning (RL) in many real-world decision-making problems. In the literature, explanations are often provided by saliency attribution to the features of the RL agent's state. In this work, we propose a complementary approach to these explanations, particularly for offline RL, where we attribute the policy decisions of a trained RL agent to the trajectories encountered by it during training. To do so, we encode trajectories in the offline training data individually as well as collectively (encoding a set of trajectories). We then attribute policy decisions to a set of trajectories in this encoded space by estimating the sensitivity of the decision with respect to that set. Further, we demonstrate the effectiveness of the proposed approach in terms of quality of attributions as well as practical scalability in diverse environments that involve both discrete and continuous state and action spaces, such as grid-worlds, video games (Atari) and continuous control (MuJoCo). We also conduct a human study on a simple navigation task to observe how participants' understanding of the task compares with the data attributed for a trained RL policy. Keywords -- Explainable AI, Verifiability of AI Decisions, Explainable RL. Comment: Published at the International Conference on Learning Representations (ICLR), 202
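    The attribution idea above can be sketched with a deliberately simplified surrogate. In the sketch below, trajectories are embedded as mean state-action features, grouped into clusters, and each cluster's influence on a queried decision is approximated by refitting a simple value model with that cluster held out. The embedding, the surrogate model, and all names are simplifying assumptions for illustration, not the paper's learned trajectory encoder or its actual sensitivity estimator.

    # Illustrative sketch of trajectory-level attribution for an offline RL decision:
    # (1) embed each offline trajectory, (2) group trajectories into clusters, and
    # (3) score each cluster by how much a queried prediction changes when a
    # surrogate value model is refit without that cluster.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Ridge

    def embed_trajectory(states, actions):
        """Encode one trajectory as the mean of its concatenated state-action features."""
        return np.concatenate([states, actions], axis=1).mean(axis=0)

    def attribute_decision(trajectories, returns, query, n_clusters=5):
        """Attribute the value predicted at `query` to clusters of training trajectories.

        trajectories: list of (states, actions) array pairs from the offline dataset.
        returns: per-trajectory return used as the regression target.
        query: embedded state-action vector whose decision we want to explain.
        """
        embeddings = np.stack([embed_trajectory(s, a) for s, a in trajectories])
        clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)

        baseline = Ridge().fit(embeddings, returns).predict(query[None])[0]
        influence = {}
        for c in range(n_clusters):
            keep = clusters != c  # leave one cluster of trajectories out
            reduced = Ridge().fit(embeddings[keep], np.asarray(returns)[keep])
            influence[c] = baseline - reduced.predict(query[None])[0]
        return clusters, influence

    The cluster whose removal changes the prediction the most would then be reported as the set of training trajectories most responsible for the decision, mirroring the sensitivity-based attribution the abstract describes.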