200,897 research outputs found

    Beyond Research Ethics: Dialogues in Neuro-ICT Research

    The increasing use of information and communication technologies (ICTs) to facilitate neuroscience research adds a new level of complexity to the question of how the ethical issues of such research can be identified and addressed. Current research ethics practice, based on ethics reviews by institutional review boards (IRBs) and underpinned by ethical principlism, has been widely criticized. In this article, we develop an alternative approach to ethics in neuro-ICT research, based on discourse ethics, which implements Responsible Research and Innovation (RRI) through dialogues. We draw on our work in Ethics Support, using the Human Brain Project (HBP) as empirical evidence of the viability of this approach.

    Motivation, Design, and Ubiquity: A Discussion of Research Ethics and Computer Science

    Modern society is permeated with computers, and the software that controls them can have latent, long-term, and immediate effects that reach far beyond the actual users of these systems. This places researchers in Computer Science and Software Engineering in a position of influence and responsibility greater than that of any other field, because computer systems are vital research tools for other disciplines. This essay presents several key ethical concerns and responsibilities relating to research in computing. The goal is to promote awareness and discussion of ethical issues among computer science researchers. A hypothetical case study is provided, along with questions for reflection and discussion.

    Comment: Written as the central essay for the Computer Science module of the LANGURE model curriculum in Research Ethics

    Technofixing the Future: Ethical Side Effects of Using AI and Big Data to meet the SDGs

    While the use of smart information systems (SIS, the combination of AI and Big Data) offers great potential for meeting many of the UN’s Sustainable Development Goals (SDGs), it also raises a number of ethical challenges in implementation. Through six empirical case studies, this paper examines potential ethical issues relating to the use of SIS to meet the challenges in six of the SDGs (2, 3, 7, 8, 11, and 12). The paper shows that a simple “technofix”, such as the use of SIS, is often not sufficient and may exacerbate existing issues, or create new ones, for the development community using SIS.

    AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I illustrate the challenges and pitfalls of determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem? Who defines the problem? What is the role of knowledge? And what are important side effects and dynamics? The illustration uses an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions is known at an abstract level, they are not asked sufficiently often in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, I conclude by drawing on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs better contribute to the Common Good.

    Comment: to appear in Paladyn. Journal of Behavioral Robotics; accepted on 27-10-201

    Student compliance with ethical guidelines: the Glasgow ethics code

    While disciplines such as medicine and psychology have for several years followed strict procedures for the ethical approval of experiments involving humans, only recently has the use of human participants within Computing Science been subject to the same scrutiny. Although we may wish to argue that Computing Science should be exempt from such formal ethics procedures, funding bodies and universities typically insist that we seek the same approval as other disciplines for our experiments, including any use of human participants by our students during the course of their studies. We have introduced a simple, scalable ethics procedure for student assessments that identifies ethical concerns yet does not overwhelm the limited staff resources available to support this initiative. The process is based around a form of triage that filters the approximately 8,000 assessments submitted annually. This paper summarises the procedure, discusses its underlying assumptions, and outlines the problems encountered.

    BNCI systems as a potential assistive technology: ethical issues and participatory research in the BrainAble project

    This paper highlights aspects of current research and thinking about ethical issues in Brain-Computer Interface (BCI) and Brain-Neuronal Computer Interface (BNCI) research through the experience of one particular project, BrainAble, which is exploring and developing the potential of these technologies to enable people with complex disabilities to control computers. It describes how ethical practice has been developed both within the multidisciplinary research team and with participants.

    Results: The paper presents findings in which participants shared their views of the project prototypes, of the potential of BCI/BNCI systems as an assistive technology, and of their other possible applications. This draws attention to the importance of ethical practice in projects where high expectations of technologies, and representations of “ideal types” of disabled users, may reinforce stereotypes or drown out participant “voices”.

    Conclusions: Ethical frameworks for research and development in emergent areas such as BCI/BNCI systems should be based on broad notions of a “duty of care”, while being sufficiently flexible that researchers can adapt project procedures according to participant needs. They need to be revisited frequently, not only in the light of experience, but also to ensure they reflect new research findings and ever more complex and powerful technologies.
