
    Artificial Intelligence and Public Trust

    The future is here. With the exploding commercial market for high-powered, cloud-computing AI services provided by the likes of Amazon, Microsoft, and Google, the reach of artificial intelligence technologies is virtually unlimited. What does this mean for humans? How will we adapt to a world in which we increasingly find ourselves in economic, creative, and cognitive competition with machines? Will we embrace these new technologies with the same fervor as we embraced televisions and smartphones? Will we trust them? Should we trust them?

    The Future of Military Virtue: Autonomous Systems and the Moral Deskilling of the Military

    Autonomous systems, including unmanned aerial vehicles (UAVs), anti-munitions systems, armed robots, and cyber attack and cyber defense systems, are projected to become the centerpiece of 21st-century military and counter-terrorism operations. This trend has challenged legal experts, policymakers, and military ethicists to make sense of these developments within existing normative frameworks of international law and just war theory. This paper highlights a different yet equally profound ethical challenge: understanding how this trend may lead to a moral deskilling of the military profession, potentially destabilizing traditional norms of military virtue and their power to motivate ethical restraint in the conduct of war. Employing the normative framework of virtue ethics, I argue that professional ideals of military virtue such as courage, integrity, honor, and compassion help to distinguish legitimate uses of military force from amoral, criminal, or mercenary violence, while also preserving the conception of moral community needed to secure a meaningful peace in war's aftermath. The cultivation of these virtues in a human being, however, presupposes repeated practice and development of skills of moral analysis, deliberation, and action, especially in the ethical use of force. As in the historical deskilling of other professions, human practices critical to cultivating these skills can be made redundant by autonomous or semi-autonomous machines, with a resulting devaluation and/or loss of these skills and the virtues they facilitate. This paper explores the circumstances under which automated methods of warfare, including automated weapons and cyber systems, could lead to a dangerous 'moral deskilling' of the military profession. I point out that this deskilling remains a significant risk even with a commitment to 'human on the loop' protocols. I conclude by summarizing the potentially deleterious consequences of such an outcome, and reflecting on possible strategies for its prevention.

    Introduction: Envisioning the good life In the 21st century and beyond

    In May 2014 cosmologist Stephen Hawking, computer scientist Stuart Russell, and physicists Max Tegmark and Frank Wilczek published an open letter in the UK news outlet The Independent, sounding the alarm about the grave risks to humanity posed by emerging technologies of artificial intelligence. They invited readers to imagine these technologies outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. The authors note that while the successful creation of artificial intelligence (AI) has the potential to bring huge benefits to our world, and would undoubtedly be the biggest event in human history ... it might also be the last. Hawking echoed the warning later that year, telling the BBC that unrestricted AI development could spell the end of the human race. While some AI enthusiasts dismiss such warnings as fearmongering hype, celebrated high-tech inventors Elon Musk, Steve Wozniak, Bill Gates, and thousands of AI and robotics researchers have joined the chorus of voices calling for wiser and more effective human oversight of these new technologies. How worried should we be? More importantly: what should we do? AI is only one of many emerging technologies, from genome editing and 3D printing to a globally networked Internet of Things, shaping a future unparalleled in human history in its promise and its peril. Are we up to the challenge this future presents? If not, how can we get there? How can humans hope to live well in a world made increasingly complex and unpredictable by emerging technologies? Though it will require the remainder of the book to fully respond to that question, in essence my answer is this: we need to cultivate in ourselves, collectively, a special kind of moral character, one that expresses what I will call the technomoral virtues.

    Why Obesity is not a Disability under Tennessee Law and How the Legislature can Address the Obesity Epidemic

    Bigger is better. This old adage rings true for paychecks and televisions but not pant size. Now, some lawmakers and courts seek to protect obesity under disability law. Obesity currently plagues 35.7% of Americans and 29.2% of Tennesseans, and it is growing at epidemic rates. However, the bigger-is-better argument rings false in this instance considering obesity's severe complications and side effects. In the same vein, more people are also considering the consequences of obesity in the workforce, in health care, and in the medical profession. Indeed, tackling the issue of obesity demands sympathy because of the stigma and stereotypes associated with the condition, including the thoughts that obese people are lazy, unintelligent, or lacking in self-respect. In a society that is highly focused on appearance, the outlook for combating these stereotypes seems gloomy. However, while compassion is a must, legal protection under disability law is not.

    Artificial Intelligence and the Ethics of Self-learning Robots

    The convergence of robotics technology with the science of artificial intelligence (or AI) is rapidly enabling the development of robots that emulate a wide range of intelligent human behaviors. Recent advances in machine learning techniques have produced significant gains in the ability of artificial agents to perform or even excel in activities formerly thought to be the exclusive province of human intelligence, including abstract problem-solving, perceptual recognition, social interaction, and natural language use. These developments raise a host of new ethical concerns about the responsible design, manufacture, and use of robots enabled with artificial intelligence, particularly those equipped with self-learning capacities. The potential public benefits of self-learning robots are immense. Driverless cars promise to vastly reduce human fatalities on the road while boosting transportation efficiency and reducing energy use. Robot medics with access to a virtual ocean of medical case data might one day be able to diagnose patients with far greater speed and reliability than even the best-trained human counterparts. Robots tasked with crowd control could predict the actions of a dangerous mob well before the signs are recognizable to law enforcement officers. Such applications, and many more that will emerge, have the potential to serve vital moral interests in protecting human life, health, and well-being. Yet as this chapter will show, the ethical risks posed by AI-enabled robots are equally serious, especially since self-learning systems behave in ways that cannot always be anticipated or fully understood, even by their programmers. Some warn of a future where AI escapes our control, or even turns against humanity (Standage 2016); but other, far less cinematic dangers are much nearer to hand and are virtually certain to cause great harms if not promptly addressed by technologists, lawmakers, and other stakeholders. The task of ensuring the ethical design, manufacture, use, and governance of AI-enabled robots and other artificial agents is thus as critically important as it is vast.

    Artificial moral advisors: A new perspective from moral psychology


    Super Soldiers: The Ethical, Legal and Operational Implications (Part 2)

    This is the second chapter of two on military human enhancement. In the first chapter, the authors outlined past and present efforts aimed at enhancing the minds and bodies of our warfighters with the broader goal of creating the “super soldiers” of tomorrow, all before exploring a number of distinctions—natural vs. artificial, external vs. internal, enhancement vs. therapy, enhancement vs. disenhancement, and enhancement vs. engineering—that are critical to defining military human enhancement and understanding the problems it poses. The chapter then advanced a working definition of enhancement as efforts that aim to “improve performance, appearance, or capability besides what is necessary to achieve, sustain, or restore health.” It then discussed a number of variables that must be taken into consideration when applying this definition in a military context. In this second chapter, drawing on that definition and some of the controversies already mentioned, the authors set out the relevant ethical, legal, and operational challenges posed by military enhancement. They begin by considering some of the implications for international humanitarian law and then shift to US domestic law. Following that, the authors examine military human enhancement from a virtue ethics approach, and finally outline some potential consequences for military operations more generally.

    The ethics of digital well-being: a multidisciplinary perspective

    This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multidisciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.