    Some HCI Priorities for GDPR-Compliant Machine Learning

    In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems, using the rights and obligations laid out in the 2016 EU General Data Protection Regulation (GDPR), a law which involves heavy interaction with people and systems. Focussing on those areas that relate to algorithmic systems in society, we propose roles for HCI in legal contexts in relation to fairness, bias and discrimination; data protection by design; data protection impact assessments; transparency and explanations; the mitigation and understanding of automation bias; and the communication of envisaged consequences of processing. (8 pages, 0 figures; presented at The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), a workshop at ACM CHI '18, 22 April 2018, Montreal, Canada.)

    Societal issues in machine learning: When learning from data is not enough

    It has been argued that Artificial Intelligence (AI) is experiencing a fast process of commodification. Such a characterization is in the interest of big IT companies, but it also correctly reflects the current industrialization of AI. This phenomenon means that AI systems and products are reaching society at large and, therefore, that societal issues related to the use of AI and Machine Learning (ML) cannot be ignored any longer. Designing ML models from this human-centered perspective means incorporating human-relevant requirements such as safety, fairness, privacy, and interpretability, but also considering broad societal issues such as ethics and legislation. These are essential aspects for fostering the acceptance of ML-based technologies, and for ensuring compliance with evolving legislation concerning the impact of digital technologies on ethically and privacy-sensitive matters. The ESANN special session for which this tutorial acts as an introduction aims to showcase the state of the art on these increasingly relevant topics among ML theoreticians and practitioners. For this purpose, we welcomed both solid contributions and preliminary results showing the potential, limitations, and challenges of new ideas, as well as refinements of, and hybridizations among, the different fields of research in which ML and related approaches face real-world problems involving societal issues.

    Societal issues concerning the application of artificial intelligence in medicine

    Medicine is becoming an increasingly data-centred discipline and, beyond classical statistical approaches, artificial intelligence (AI) and, in particular, machine learning (ML) are attracting much interest for the analysis of medical data. It has been argued that AI is experiencing a fast process of commodification. This characterization correctly reflects the current process of industrialization of AI and its reach into society. Therefore, societal issues related to the use of AI and ML should not be ignored any longer, and certainly not in the medical domain. These societal issues may take many forms, but they all entail the design of models from a human-centred perspective, incorporating human-relevant requirements and constraints. In this brief paper, we discuss a number of specific issues affecting the use of AI and ML in medicine, such as fairness, privacy and anonymity, and explainability and interpretability, but also some broader societal issues, such as ethics and legislation. We reckon that all of these are relevant aspects to consider in order to foster acceptance of AI- and ML-based technologies, as well as to comply with evolving legislation concerning the impact of digital technologies on ethically and privacy-sensitive matters. Our specific goal here is to reflect on how all these topics affect medical applications of AI and ML. This paper includes some of the contents of the "2nd Meeting of Science and Dialysis: Artificial Intelligence," organized at the Bellvitge University Hospital, Barcelona, Spain.

    Realising the right to data portability for the domestic Internet of Things

    There is an increasing role for the IT design community to play in the regulation of emerging IT. Article 25 of the EU General Data Protection Regulation (GDPR) 2016 puts this on a strict legal basis by establishing the need for information privacy by design and default (PbD) for personal data-driven technologies. Against this backdrop, we examine legal, commercial and technical perspectives around the newly created legal right to data portability (RTDP) in the GDPR. We are motivated by a pressing need to address regulatory challenges stemming from the Internet of Things (IoT), and to find channels to support the protection of these new legal rights for users in practice. In Part I we introduce the IoT and information PbD in more detail, briefly considering the regulatory challenges posed by the IoT and the nature and practical challenges of the regulatory response of information privacy by design. In Part II, we look in depth at the legal nature of the RTDP, determining what it requires from IT designers in practice, but also the limitations on the right and how it relates to the IoT. In Part III we focus on technical approaches that can support the realisation of the right, considering the state of the art in data management architectures, tools and platforms that can provide portability, increased transparency and user control over data flows. In Part IV, we bring these perspectives together to reflect on the technical, legal and business barriers and opportunities that will shape the implementation of the RTDP in practice, and how these relationships may shape emerging IoT innovation and business models. We finish with brief conclusions about the future for the RTDP and PbD in the IoT.
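
    To make the abstract's "portability" concrete: below is a minimal sketch, not taken from the paper, of what an Article 20-style export for a domestic IoT user might produce, i.e. personal data serialised in a structured, commonly used, machine-readable format. All names here (UserRecord, export_portable_data) are illustrative inventions, not an API from the article.

```python
# Hypothetical sketch of a data-portability export (GDPR Art. 20 style).
# Names and fields are illustrative, not the paper's design.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UserRecord:
    user_id: str
    name: str
    email: str
    device_readings: list  # e.g. IoT sensor events tied to the data subject

def export_portable_data(record: UserRecord) -> str:
    """Serialise a data subject's records to JSON, suitable for handing
    to the subject or transmitting directly to another controller."""
    payload = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "format": "application/json",
        "data": asdict(record),
    }
    return json.dumps(payload, indent=2)

if __name__ == "__main__":
    rec = UserRecord(
        "u-42", "Ada", "ada@example.com",
        [{"sensor": "thermostat", "ts": "2018-01-01T00:00:00Z", "value": 21.5}],
    )
    print(export_portable_data(rec))
```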

    Virtual assistants in customer interface

    This thesis covers the use of virtual assistants from a user organization's perspective, exploring challenges and opportunities related to introducing virtual assistants into an organization's customer interface. Research related to virtual assistants is spread over many distinct fields of research spanning several decades. However, widespread use of virtual assistants in organizations' customer interfaces is a relatively new and constantly evolving phenomenon, and scientific research on the current use of virtual assistants, and on the considerations user organizations face, is lacking. A qualitative, semi-systematic literature review method is used to analyse the progression of research related to virtual assistants, aiming to identify major trends. Several fields of research that cover virtual assistants from different perspectives are explored, focusing primarily on Human-Computer Interaction and Natural Language Processing. Additionally, a case study of a Finnish insurance company's use of virtual assistants supports the literature review and helps understand the user organization's perspective. This thesis describes how key technologies have progressed, gives insight into current issues that affect organizations, and points out future opportunities related to virtual assistants. Interviews related to the case study give a limited understanding of which challenges are currently at the forefront when using this new technology in the insurance industry. The case study and literature review clearly point out that the use of virtual assistants is hindered by various practical challenges. Some practical challenges related to making a virtual assistant useful for an organization seem to be industry-specific, for example issues related to giving advice about insurance products. Other challenges are more general, for example the unreliability of customer feedback. Different customer segments have different attitudes towards interacting with virtual assistants, from positive to negative, making the technology a clearly polarizing issue. However, customers in general seem to be becoming more accepting of the technology in the long term. More research is needed to understand the future potential of virtual assistants in customer interactions and customer relationship management.

    Privacy Labelling and the Story of Princess Privacy and the Seven Helpers

    Privacy is currently in 'distress' and in need of 'rescue', much like the princesses in all-familiar fairytales. We employ storytelling and metaphors from fairytales to streamline our arguments and make them reader-friendly, showing how the complex concept of Privacy Labelling (the 'knight in shining armour') can be a solution to the current state of Privacy (the 'princess in distress'). We give a precise definition of Privacy Labelling (PL), painting a panoptic portrait from seven different perspectives (the 'seven helpers'): Business, Legal, Regulatory, Usability and Human Factors, Educative, Technological, and Multidisciplinary. We describe a common vision, proposing several important 'traits of character' of PL as well as identifying 'undeveloped potentialities', i.e., open problems on which the community can focus. More specifically, this position paper identifies the stakeholders of PL and their needs with regard to privacy, describing what PL should be and look like in order to address these needs. Throughout the paper, we highlight goals, characteristics, open problems, and starting points for creating what we define as the ideal PL. In the end, we present three approaches to establishing and managing PL: through self-evaluations, certifications, or community endeavours. Based on these, we sketch a roadmap for future developments. (26 pages, 3 figures.)
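
    To make the notion of a Privacy Label more tangible, here is a hypothetical sketch of what a machine-readable PL could contain. The field names are the editor's invention for illustration, not the authors' specification; the "issued_by" field nods to the paper's three management approaches (self-evaluation, certification, community).

```python
# Hypothetical machine-readable Privacy Label; fields are illustrative only.
import json

privacy_label = {
    "service": "example-app",
    "issued_by": "self-evaluation",           # or "certification" / "community"
    "data_collected": ["email", "location"],
    "purposes": ["service provision", "analytics"],
    "retention_days": 365,
    "third_party_sharing": False,
    "usability_notes": "rendered to users as an at-a-glance icon set",
}

print(json.dumps(privacy_label, indent=2))
```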

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy, since it intuitively promises to open the algorithmic "black box" to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as "meaningful information about the logic of processing" may not be satisfied by the kind of ML "explanations" computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, "subject-centric explanations" (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations), which dodge developers' worries about intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy." But all is not lost. We argue that other parts of the GDPR, related (i) to the right to erasure ("right to be forgotten") and the right to data portability, and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may hold the seeds we can use to make algorithms more responsible, explicable, and human-centered.
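
    The pedagogical-versus-decompositional distinction drawn in this abstract can be illustrated with a generic surrogate-model recipe: an interpretable model is trained on the black box's inputs and outputs, learning its behaviour "from outside" without disassembling it. The sketch below is a common instance of this idea, not the specific SCE method the article cites.

```python
# A "pedagogical" explanation sketch: mimic a black box with a readable tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The opaque model whose internals we cannot (or may not) take apart.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Pedagogical step: fit a shallow tree to the black box's *predictions*,
# so no internals (or trade secrets) need to be disclosed.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
# Fidelity: how often the surrogate agrees with the black box.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
```

    A subject-centric variant would fit the surrogate only on samples drawn around a particular query point, trading global fidelity for a locally faithful explanation.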

    Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK

    Public attention towards the explainability of artificial intelligence (AI) systems has been rising in recent years, as explainability offers methodologies for human oversight. This has translated into a proliferation of research outputs, such as from Explainable AI, that aim to enhance transparency and control for system debugging and monitoring, and the intelligibility of system processes and outputs for user services. Yet such outputs are difficult to adopt on a practical level due to the lack of a common regulatory baseline and the contextual nature of explanations. Governmental policies are now attempting to tackle this exigency; however, it remains unclear to what extent published communications, regulations, and standards adopt an informed perspective to support research, industry, and civil interests. In this study, we perform the first thematic and gap analysis of the plethora of policies and standards on explainability in the EU, US, and UK. Through a rigorous survey of policy documents, we first contribute an overview of governmental regulatory trajectories within AI explainability and its sociotechnical impacts. We find that policies are often informed by coarse notions of, and requirements for, explanations. This might be due to a willingness to position explanations foremost as a risk-management tool for AI oversight, but also to the lack of consensus on what constitutes a valid algorithmic explanation and on how feasible the implementation and deployment of such explanations are across the stakeholders of an organization. Informed by AI explainability research, we then conduct a gap analysis of existing policies, which leads us to formulate a set of recommendations on how to address explainability in regulations for AI systems, especially discussing the definition, feasibility, and usability of explanations, as well as the allocation of accountability to explanation providers.