926 research outputs found

    European Union regulations on algorithmic decision-making and a "right to explanation"

    We summarize the potential impact that the European Union's new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also effectively create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation. (Presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY.)

    The Role of the Right to Explanation and Its Safeguards in the Realization of Trustworthy AI

    This paper presents a relationship timeline diagram between the GDPR safeguards introduced to secure data subjects’ right to explanation and the ethical principles of the Trustworthy AI framework laid out by the High-Level Expert Group. To create the desired output, we first analyze the articles of the GDPR that establish the foundation of the right to explanation. Then, we cover the relevant safeguards introduced to secure the right to explanation, which should be regarded as an umbrella concept. We analyze the seven ethical principles required for the realization of trustworthy AI and associate them with the relevant safeguards. Finally, a relationship timeline diagram is presented that demonstrates the relationship between the safeguards, the articles creating these safeguards, and the corresponding ethical principles protected by these safeguards.

    Examination of Current AI Systems within the Scope of Right to Explanation and Designing Explainable AI Systems

    This research aims to explore explainable artificial intelligence, a new subfield of artificial intelligence that is gaining importance in academic and business literature due to the increased use of intelligent systems in our daily lives. As part of the research project, the necessity of explainability in AI systems will first be explained in terms of accountability, transparency, liability, and fundamental rights & freedoms. The latest explainable AI algorithms introduced by AI researchers will then be examined, first from a technical and then from a legal perspective, and their statistical and legal competencies will be analyzed. After identifying the deficiencies of current solutions, a comprehensive and technical AI system design will be proposed that satisfies not only the statistical requisites but also the legal, ethical, and logical requisites.

    DEEP LEARNING AND THE RIGHT TO EXPLANATION: TECHNOLOGICAL CHALLENGES TO LEGALITY AND DUE PROCESS OF LAW

    This article studies the right to explanation, which is extremely important in times of fast technological evolution and of the use of deep learning in the most varied decision-making procedures based on personal data. Its main hypothesis is that the right to explanation is closely linked to due process of law and legality, being a safeguard for those who need to contest automated decisions taken by algorithms, whether in judicial contexts, in general Public Administration contexts, or even in private entrepreneurial contexts. Through the hypothetical-deductive method, a qualitative and transdisciplinary approach, and a bibliographic review, it is concluded that the opacity characteristic of the most complex deep learning systems can impair access to justice, due process of law, and the adversarial principle. In addition, it is important to develop strategies to overcome opacity, mainly (but not only) through the work of experts. Finally, the Brazilian LGPD provides for the right to explanation, but the lack of clarity in its text demands that the Judiciary and researchers also make efforts to better build its regulation.

    Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation

    Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. Further, the ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raise questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative and policy steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.

    THE RIGHT TO EXPLANATION IN THE PROCESSING OF PERSONAL DATA WITH THE USE OF AI SYSTEMS

    Transparency is one of the basic principles enshrined in the General Data Protection Regulation (GDPR). Achieving transparency in automated decision-making, especially when artificial intelligence (AI) is involved, is a challenging task in many respects. The opaqueness of AI systems, usually referred to as the “black-box” phenomenon, is the main obstacle to explainable and accountable AI. Computer scientists are working on explainable AI (XAI) techniques in order to make AI more trustworthy. In the same vein, though from a different perspective, the European legislator provides in the GDPR a right to information when automated decision-making takes place. The data subject has the right to be informed of the logic involved and to challenge the automated decision. The GDPR therefore introduces a sui generis right to explanation in automated decision-making processes. In this light, the paper analyzes the legal basis of this right and the technical barriers involved.
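    On the technical side, the “logic involved” that such a right to information targets is often illustrated with feature-attribution explanations. The following is a minimal sketch, not drawn from any of the papers above; the model, feature names, and data are invented for illustration. It decomposes a single automated credit decision made by a linear classifier into per-feature contributions relative to an average applicant:

        # Minimal sketch of a feature-attribution explanation for one
        # automated decision. All names and data are hypothetical.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        feature_names = ["income", "debt_ratio", "late_payments"]  # hypothetical
        X = rng.normal(size=(500, 3))
        # Synthetic label: creditworthiness driven mainly by income and late payments.
        y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

        model = LogisticRegression().fit(X, y)

        applicant = X[0]           # the data subject asking "why?"
        baseline = X.mean(axis=0)  # an "average applicant" reference point
        # For a linear model the log-odds decompose additively, so each feature's
        # contribution relative to the baseline can be reported directly.
        contributions = model.coef_[0] * (applicant - baseline)

        decision = "approved" if model.predict(applicant.reshape(1, -1))[0] == 1 else "rejected"
        print("decision:", decision)
        for name, value in zip(feature_names, contributions):
            print(f"  {name:>15}: {value:+.3f} to the log-odds of approval")

    Black-box models need model-agnostic counterparts of this decomposition (for example, local surrogate or perturbation-based methods), which is where the technical barriers discussed in the paper arise.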