    Doing social network ethics: a critical, interdisciplinary approach

    Purpose: This paper proposes an inter-disciplinary approach to the ethics of social networking services (SNS) that connects critical analysis with the doing of ethics in terms of both pedagogic and technological practice. Design/methodology/approach: The approach is primarily conceptual and discursive, drawing on theoretical concepts from a broad, inter-disciplinary field. These concepts are integrated into a multi-dimensional framework that proceeds through four sequential stages: socio-economic, ethical, legal and practical/professional. Particular instances of SNS are used as illustrative examples. Findings: The evaluation of ethical issues can be enriched by broader, holistic approaches that take account of the socio-economic, technical and legal contexts in which SNS technologies are designed, deployed and used. Inter-disciplinary approaches have the potential to generate new connections and possibilities for both the teaching and the professional practice of ethics. Practical implications: Applied ethics is used to consider practical solutions that explore regulatory measures and envision alternative models of social networking. The approach proposed has practical value for teachers and students of computer ethics, as well as for IT practitioners. Originality/value: This paper synthesises elements from media, communication and cultural studies, science and technology, information systems and computer science. The paper offers a strategy of inquiry for understanding various aspects of SNS ethics—legal, socio-economic and technical. It presents a methodology for thinking about and doing ethics which can be used by IT practitioners. A sketch of the four-stage progression follows below.
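
    The four sequential stages can be read as a simple pipeline of inquiry. The sketch below is a hypothetical illustration, not code from the paper: the stage names follow the abstract, while the guiding questions and the sns_ethics_review function are assumptions added for clarity.

```python
# Hypothetical sketch of the four-stage framework named in the abstract.
# Stage names come from the abstract; the guiding questions and the review
# function are illustrative assumptions, not the authors' method.
STAGES = {
    "socio-economic": "Who owns, funds, and profits from the service, and how?",
    "ethical": "Which values and harms are at stake for users and non-users?",
    "legal": "Which regulations (e.g. data protection) apply, and are they met?",
    "practical/professional": "What can practitioners change in design or policy?",
}

def sns_ethics_review(service_name, notes_per_stage):
    """Walk the stages in order and collect notes recorded for each one."""
    report = [f"Review of {service_name}"]
    for stage, question in STAGES.items():
        report.append(f"[{stage}] {question}")
        report.append(f"  notes: {notes_per_stage.get(stage, 'not yet assessed')}")
    return "\n".join(report)

# Illustrative use: a partial review of a hypothetical service.
print(sns_ethics_review("an example SNS", {"legal": "terms of service reviewed"}))
```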

    Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns

    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that human–computer interaction and human–robot interaction literature do not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation in itself may be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.

    Managing the Ethical Dimensions of Brain-Computer Interfaces in eHealth: An SDLC-based Approach

    A growing range of brain-computer interface (BCI) technologies is being employed for purposes of therapy and human augmentation. While much thought has been given to the ethical implications of such technologies at the ‘macro’ level of social policy and ‘micro’ level of individual users, little attention has been given to the unique ethical issues that arise during the process of incorporating BCIs into eHealth ecosystems. In this text, a conceptual framework is developed that enables the operators of eHealth ecosystems to manage the ethical components of such processes in a more comprehensive and systematic way than has previously been possible. The framework’s first axis defines five ethical dimensions that must be successfully addressed by eHealth ecosystems: 1) beneficence; 2) consent; 3) privacy; 4) equity; and 5) liability. The second axis describes five stages of the systems development life cycle (SDLC) process whereby new technology is incorporated into an eHealth ecosystem: 1) analysis and planning; 2) design, development, and acquisition; 3) integration and activation; 4) operation and maintenance; and 5) disposal. Known ethical issues relating to the deployment of BCIs are mapped onto this matrix in order to demonstrate how it can be employed by the managers of eHealth ecosystems as a tool for fulfilling ethical requirements established by regulatory standards or stakeholders’ expectations. Beyond its immediate application in the case of BCIs, we suggest that this framework may also be utilized beneficially when incorporating other innovative forms of information and communications technology (ICT) into eHealth ecosystems. A sketch of the resulting matrix is given below.
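
    The two-axis framework lends itself to a 5 × 5 matrix representation. The sketch below is a hypothetical illustration, not the authors' implementation: the dimension and stage names are taken from the abstract, while the EthicsMatrix class, its methods, and the example issue entry are assumptions added to show how known issues might be mapped onto the grid.

```python
# Hypothetical sketch of the 5x5 ethics/SDLC matrix described in the abstract.
# Dimension and stage names come from the abstract; the mapping API and the
# example issue are illustrative assumptions, not the authors' implementation.
from collections import defaultdict

DIMENSIONS = ["beneficence", "consent", "privacy", "equity", "liability"]
STAGES = [
    "analysis and planning",
    "design, development, and acquisition",
    "integration and activation",
    "operation and maintenance",
    "disposal",
]

class EthicsMatrix:
    """Maps known ethical issues onto (dimension, SDLC stage) cells."""

    def __init__(self):
        self._cells = defaultdict(list)

    def add_issue(self, dimension, stage, issue):
        if dimension not in DIMENSIONS or stage not in STAGES:
            raise ValueError("unknown dimension or stage")
        self._cells[(dimension, stage)].append(issue)

    def issues(self, dimension, stage):
        return list(self._cells.get((dimension, stage), []))

    def unaddressed_cells(self):
        """Cells with no recorded issue yet, usable as a review checklist."""
        return [(d, s) for d in DIMENSIONS for s in STAGES
                if not self._cells.get((d, s))]

# Illustrative use: a privacy concern arising when a BCI is integrated
# into the wider eHealth ecosystem (hypothetical example).
matrix = EthicsMatrix()
matrix.add_issue("privacy", "integration and activation",
                 "neural data streams exposed to other ecosystem components")
print(len(matrix.unaddressed_cells()))  # 24 of 25 cells still to review
```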

    Online consultation on experts’ views on digital competence

    The objective of this investigation was to provide another perspective on what it means to be digitally competent today, in addition to reviews of literature and current frameworks for the development of digital competence, all of which constitute part of the wider IPTS Digital Competence Project (DIGCOMP). Some common ground exists at a general level in defining digital competence in terms of knowledge, skills, and attitudes, which may be hierarchically organised. However, this does not provide the clarity needed by teachers, employers, and citizens – all those who are responsible for digital competence development, be it their own or other people’s – to make informed decisions. Further work is needed to create a common language that helps to enhance understanding across the worlds of research, education, training, and work. This will make it easier for citizens and employers to see what digital competence entails and how it is relevant to their jobs and, more generally, their lives.

    Telecare: legal, ethical and socioeconomic factors
