
    Embryonic Stem Cell Research as an Ethical Issue: On the Emptiness of Symbolic Value

    The debate over human embryonic stem cell research (its scientific and clinical prospects as well as its ethical implications) became front-page news only after two teams of university researchers reported in November 1998 that they had isolated and cultured human pluripotent stem cells. The discovery caused a flurry of excitement among patients and researchers and drew attention from President Clinton, who instructed the National Bioethics Advisory Commission (NBAC) to conduct a thorough review of the issues associated with ... human stem cell research, balancing all medical and ethical issues.

    Comparative Philosophies in Intercultural Information Ethics

    The following review explores Intercultural Information Ethics (IIE) in terms of comparative philosophy, supporting IIE as the most relevant and significant development in the field of Information Ethics (IE). The focus of the review is threefold. First, it examines the core presumption of the field of IIE, namely the demand for a pause in the pursuit of a founding philosophy for IE so that the western philosophical biases of IE can first be addressed. Second, it outlines a history of the various philosophical streams of IIE, including its literature and pioneering contributors. Lastly, it offers a new synthesis of comparative philosophies in IIE, looking towards a future evolution of the field. Examining the interchange between contemporary information ethicists regarding the discipline of IIE, the review first outlines the previously established presumptions of the field that posit the need for an IE grounded in western sensibilities. The author then addresses the implications of the foregoing presumption from several non-western viewpoints, arguing that IIE does in fact find roots in non-western philosophies, as established in the concluding synthesis of western and eastern philosophical traditions.

    AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem? Who defines the problem? What is the role of knowledge? And what are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions may be known at an abstract level, they do not get asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, as a conclusion I will draw on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good.
    Comment: to appear in Paladyn. Journal of Behavioral Robotics; accepted on 27-10-201

    Judging Myopia in Hindsight: Bivens Actions, National Security Decisions, and the Rule of Law

    Liability in national security matters hinges on curbing both official myopia and hindsight bias. The Framers knew that officials could be short-sighted, prioritizing expedience over abiding values. Judicial review emerged as an antidote to myopia of this kind. However, the Framers recognized that ubiquitous second-guessing of government decisions would also breed instability. Balancing these conflicting impulses has produced judicial oscillation between intervention and deference. Recent decisions on Bivens claims in the war on terror have defined extremes of deference or intervention. Cases like Ashcroft v. Iqbal and Arar v. Ashcroft display a categorical deference that rewards officials' myopia. On the other hand, courts in Padilla v. Yoo and al-Kidd v. Ashcroft manifest an equally categorical interventionism that institutionalizes hindsight bias. To break with the categorical cast of both deferential and interventionist decisions, this Article proposes an innovation-eliciting approach. Inspired by remedies for cognitive bias and regulatory failure, it gives officials a stake in developing alternatives to both overreaching and abdication. Officials who can demonstrate they have implemented alternatives in other contexts that are both proportional and proximate in time to the instant case buy flexibility and dismissal of the lawsuit before the qualified immunity phase. By leveraging officials' experiences and expertise, the innovation-eliciting approach tames the pendular swings in policy that Justice Kennedy in Boumediene v. Bush viewed as undermining both liberty and security.

    Ethics in Alternative Dispute Resolution: New Issues, No Answers from the Adversary Conception of Lawyers’ Responsibilities

    The romantic days of ADR appear to be over. To the extent that proponents of ADR, like myself, were attracted to it because of its promise of flexibility, adaptability, and creativity, we now see the need for ethics, standards of practice, and rules as potentially limiting and containing the promise of alternatives to rigid adversarial modes of dispute resolution. It is almost as if we thought that anyone who would engage in ADR must of necessity be a moral, good, creative, and, of course, ethical person. That we are here today is deeply ironic and yet also necessary, as appropriate dispute resolution struggles to define itself and ensure its legitimacy against a variety of theoretical and practical challenges.

    Regulation of Television Advertising

    Regulation of television advertising typically covers both the time devoted to commercials and restrictions on the commodities or services that can be publicized to various audiences (stricter laws often apply to children’s programming). Time restrictions (advertising caps) may improve welfare when advertising is overprovided in the market system. Even then, such caps may reduce the diversity of programming by curtailing revenues from programs. They may also decrease program net quality (including the direct benefit to viewers). Restricting advertising of particular products (such as cigarettes) likely reflects paternalistic altruism, but restrictions may be less efficient than appropriate taxes.
    Keywords: television, advertising, regulation, length caps, advertising content

    From Computer Ethics and the Ethics of AI towards an Ethics of Digital Ecosystems

    Ethical, social and human rights aspects of computing technologies have been discussed since the inception of these technologies. In the 1980s this led to the development of a discourse often referred to as computer ethics. More recently, since the middle of the 2010s, a highly visible discourse on the ethics of artificial intelligence (AI) has developed. This paper discusses the relationship between these two discourses and compares their scopes, the topics and issues they cover, their theoretical bases and reference disciplines, the solutions and mitigation options they propose, and their societal impact. The paper argues that an understanding of the similarities and differences of the discourses can benefit the respective discourses individually. More importantly, by reviewing them, one can draw conclusions about relevant features of the next discourse, the one we can reasonably expect to follow after the ethics of AI. The paper suggests that instead of focusing on a technical artefact such as computers or AI, one should focus on the fact that ethical and related issues arise in the context of socio-technical systems. Drawing on the metaphor of ecosystems, which is widely applied to digital technologies, it suggests preparing for a discussion of the ethics of digital ecosystems. Such a discussion can build on and benefit from a more detailed understanding of its predecessors in computer ethics and the ethics of AI.

    To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property (e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics). Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers, engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.

    Artificial moral experts: asking for ethical advice to artificial intelligent assistants

    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this article, we begin by arguing that the objections that have tried to deny the existence (and convenience) of moral expertise are unsatisfactory. After that, we show that people have ethical reasons to ask for moral advice in daily-life situations. Then, we argue that some Artificial Intelligence (AI) systems can play an increasing role in human morality by becoming moral experts. Some AI-based moral assistants can qualify as artificial moral experts, and we would have good ethical reasons to use them.
    This article is part of the research project EthAI+3 (Digital Ethics. Moral Enhancement through an Interactive Use of Artificial Intelligence), funded by the State Research Agency of the Spanish Government (PID2019-104943RB-I00), and the project SOCRAI3 (Moral Enhancement and Artificial Intelligence. Ethical aspects of a virtual Socratic assistant), funded by FEDER Junta de Andalucía (B-HUM-64-UGR20). Jon Rueda thanks the funding of an INPhINIT Retaining Fellowship of the La Caixa Foundation (Grant number LCF/BQ/DR20/11790005).