Empowerment or Engagement? Digital Health Technologies for Mental Healthcare
We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.
Philosophy and theory of artificial intelligence 2017
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
Ethics in AI through the Developer's View: A Grounded Theory Literature Review
The term ethics is widely used, explored, and debated in the context of
developing Artificial Intelligence (AI) based software systems. In recent
years, numerous incidents have raised the profile of ethical issues in AI
development and led to public concerns about the proliferation of AI technology
in our everyday lives. But what do we know about the views and experiences of
those who develop these systems: the AI developers? We conducted a grounded
theory literature review (GTLR) of 38 primary empirical studies that included
AI developers' views on ethics in AI and analysed them to derive five
categories - developer awareness, perception, need, challenge, and approach.
These are underpinned by multiple codes and concepts that we explain with
evidence from the included studies. We present a taxonomy of ethics in AI from
developers' viewpoints to assist AI developers in identifying and understanding
the different aspects of AI ethics. The taxonomy provides a landscape view of
the key aspects that concern AI developers when it comes to ethics in AI. We
also share an agenda for future research studies and recommendations for
developers, managers, and organisations to help in their efforts to better
consider and implement ethics in AI.
Comment: 40 pages, 5 figures, 4 tables
Ethics of Artificial Intelligence Demarcations
In this paper we present a set of key demarcations, particularly important
when discussing ethical and societal issues of current AI research and
applications. Properly distinguishing between Artificial General Intelligence
and weak AI, between symbolic and connectionist AI, and among AI methods, data
and applications is a prerequisite for an informed debate. Such demarcations
would not only facilitate much-needed discussions on the ethics of current AI
technologies and research; sufficiently establishing them would also enhance
knowledge-sharing and support rigor in interdisciplinary research between the
technical and social sciences.
Comment: Proceedings of the Norwegian AI Symposium 2019 (NAIS 2019), Trondheim, Norway
Challenges for an Ontology of Artificial Intelligence
Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be assimilated and regarded as “normal,” and (3) the tendency of human beings to anthropomorphize. This list is not intended as exhaustive, nor is it seen to preclude entirely a clear ontology; however, these challenges are a necessary set of topics for consideration. Each of these factors is seen to present a “moving target” for discussion, which poses a challenge for both technical specialists and non-practitioners of AI systems development (e.g., philosophers and theologians) to speak meaningfully, given that the corpus of AI structures and capabilities evolves at a rapid pace. Finally, we present avenues for moving forward, including opportunities for collaborative synthesis for scholars in philosophy and science.
A Case for Machine Ethics in Modeling Human-Level Intelligent Agents
This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. Given the body of work they have contributed to the study of moral agency, philosophers are well placed to add to the growing literature on artificial moral agency. While doing so, they could also consider how the said concept could affect other important philosophical concepts.
Do Engineering Students Learn Ethics From an Ethics Course?
The goal of the present research is to develop machine-assisted methods for the analysis of students’ written compositions in ethics courses. As part of this research, we analyzed Social Impact Assessment (SIA) papers submitted by engineering undergraduates in a course on engineering ethics. The SIA papers required students to identify and discuss a contemporary engineering technology (e.g., autonomous tractor trailers) and to explicitly discuss the ethical issues involved in that technology. Here we describe the ability of three machine tools to discriminate differences between the technical and ethical portions of the SIA papers. First, using LIWC (Linguistic Inquiry and Word Count) we quantified differences in analytical thinking, expertise and self-confidence, disclosure, and affect in the technical and ethical portions of the papers. Next, we applied MEH (Meaning Extraction Helper) to examine differences in critical concepts in the technical and ethical portions of the papers. Finally, we used LDA (Latent Dirichlet Allocation) to examine differences in the topics in the technical and ethical portions of the papers. The results of these three tests demonstrate the ability of machine-based tools to discriminate conceptual, affective, and motivational differences in the texts that students compose that relate to engineering technology and to engineering ethics. We discuss the utility and future directions for this research.
Cockrell School of Engineering
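At their core, the word-count tools the abstract describes (LIWC, MEH) compare the frequency of characteristic terms in the technical versus ethical portions of a paper. A minimal sketch of that counting step, using invented two-sentence stand-ins for the two portions (the sample texts, stop-word list, and function name are illustrative, not taken from the study):

```python
from collections import Counter
import re

def top_terms(text, n=3):
    """Return the n most frequent content words in a text."""
    stop = {"the", "of", "and", "to", "a", "in", "is", "that", "when", "who"}
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop]
    return [w for w, _ in Counter(words).most_common(n)]

# Invented stand-ins for the technical and ethical portions of an SIA paper
# on autonomous tractor trailers.
technical = ("The tractor trailer uses lidar sensors and lidar maps. "
             "Sensors feed the control system; the control system steers.")
ethical = ("Autonomous trucking raises responsibility questions. "
           "Who bears responsibility when the autonomy fails? "
           "Responsibility matters.")

print(top_terms(technical))  # -> ['lidar', 'sensors', 'control']
print(top_terms(ethical))    # -> ['responsibility', 'autonomous', 'trucking']
```

Even this naive frequency comparison separates the two registers; LIWC and MEH extend the idea with validated category dictionaries, and LDA replaces raw counts with inferred topic distributions.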
Autonomous Systems as Legal Agents: Directly by the Recognition of Personhood or Indirectly by the Alchemy of Algorithmic Entities
The clinical manifestations of platelet dense (δ) granule defects are easy bruising, as well as epistaxis and bleeding after delivery, tooth extractions and surgical procedures. The observed symptoms may be explained either by a decreased number of granules or by a defect in the uptake/release of granule contents. We have developed a method to study platelet dense granule storage and release. The uptake of the fluorescent marker, mepacrine, into the platelet dense granule was measured using flow cytometry. The platelet population was identified by the size and binding of a phycoerythrin-conjugated antibody against GPIb. Cells within the discrimination frame were analysed for green (mepacrine) fluorescence. Both resting platelets and platelets previously stimulated with collagen and the thrombin receptor agonist peptide SFLLRN were analysed for mepacrine uptake. By subtracting the value for mepacrine uptake after stimulation from the value for uptake without stimulation for each individual, the platelet dense granule release capacity could be estimated. Whole blood samples from 22 healthy individuals were analysed. Mepacrine incubation without previous stimulation gave mean fluorescence intensity (MFI) values of 83±6 (mean ± 1 SD, range 69–91). The difference in MFI between resting and stimulated platelets was 28±7 (range 17–40). Six members of a family, of whom one had a known δ-storage pool disease, were analysed. The two members (mother and son) who had prolonged bleeding times also had MFI values disparate from the normal population in this analysis. The values of one daughter with mild bleeding problems but a normal bleeding time were in the lower part of the reference interval.
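The release-capacity estimate described above is a per-individual subtraction of two MFI measurements. A minimal sketch of that arithmetic (the stimulated value of 55 is an illustrative number consistent with the reported healthy ranges, not a measurement from the study):

```python
def release_capacity(mfi_resting, mfi_stimulated):
    """Dense-granule release capacity: the drop in mepacrine mean
    fluorescence intensity (MFI) from resting to agonist-stimulated
    platelets for one individual."""
    return mfi_resting - mfi_stimulated

# Healthy reference values from the abstract: resting MFI 83 +/- 6
# (range 69-91); resting-minus-stimulated difference 28 +/- 7 (range 17-40).
delta = release_capacity(83, 55)   # 55 is an illustrative stimulated MFI
print(delta)                       # -> 28
print(17 <= delta <= 40)           # within the healthy reference range
```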