12,601 research outputs found

    Ethical Implications of Predictive Risk Intelligence

    This paper presents a case study on the ethical issues that relate to the use of Smart Information Systems (SIS) in predictive risk intelligence. The case study is based on a company that uses SIS to provide predictive risk intelligence in supply chain management (SCM), insurance, finance and sustainability. The paper covers an assessment of how the company recognises ethical concerns related to SIS and the ways it deals with them. Data was collected through a document review and two in-depth semi-structured interviews. Results from the case study indicate that the main ethical concerns with the use of SIS in predictive risk intelligence include protection of the data being used to predict risk, data privacy, and consent from the individuals whose data is collected from data providers such as social media sites. There are also issues relating to the transparency and accountability of the processes used in predictive intelligence. The interviews highlighted the issue of bias in using SIS to make predictions for specific target clients. A final ethical issue concerned trust in, and the accuracy of, the SIS's predictions. In response to these issues, the company has put in place different mechanisms to ensure responsible innovation through what it calls Responsible Data Science, under which the identified ethical issues are addressed by following a code of ethics and engaging with stakeholders and ethics committees. This paper is important because it provides lessons for the responsible implementation of SIS in industry, particularly for start-ups. The paper acknowledges ethical issues with the use of SIS in predictive risk intelligence and suggests that ethics should be a central consideration for companies and individuals developing SIS if they are to create meaningful positive change for society.

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from the outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers’ worries about intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR, related (i) to the right to erasure (the “right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may contain the seeds we can use to make algorithms more responsible, explicable, and human-centered.
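
    As a concrete gloss on the pedagogical-versus-decompositional distinction in the abstract above: a pedagogical explanation learns an interpretable surrogate from a black-box model's input/output behaviour alone, without opening the model itself. The sketch below is purely illustrative and not drawn from the paper; it assumes scikit-learn is available, and uses a random forest as a stand-in for the opaque model and a shallow decision tree as the surrogate.

        # Minimal sketch of a "pedagogical" explanation: fit an interpretable
        # surrogate to a black-box model's predictions, using only its
        # input/output behaviour (the model's internals stay undisclosed).
        # All dataset and model choices here are illustrative assumptions.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        data = load_breast_cancer()
        X, y = data.data, data.target

        # The opaque model whose decisions we want explained.
        black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        # Pedagogical step: query the black box, then train a shallow tree
        # to mimic its predictions rather than the true labels.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        # "Fidelity": how often the surrogate agrees with the black box.
        print(f"fidelity: {surrogate.score(X, black_box.predict(X)):.3f}")

        # The tree's rules are the human-readable explanation.
        print(export_text(surrogate, feature_names=list(data.feature_names)))

    Restricting the surrogate's training data to perturbed samples around a single query point would give the "subject-centric" flavour the abstract describes, at the cost of the explanation holding only locally.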