
    A Case for Competent AI Systems – A Concept Note

    The efficiency of an AI system is contingent upon its ability to align with the specified requirements of a given task. However, the inherent complexity of tasks often introduces the potential for harmful implications or adverse actions. This note explores the critical concept of capability within AI systems, representing what the system is expected to deliver. The articulation of capability involves specifying well-defined outcomes. Yet, the achievement of this capability may be hindered by deficiencies in implementation and testing, reflecting a gap in the system's competency (what it can do vs. what it does successfully). A central challenge arises in elucidating the competency of an AI system to execute tasks effectively. The exploration of system competency in AI remains in its early stages, occasionally manifesting as confidence intervals denoting the probability of success. Trust in an AI system hinges on the explicit modeling and detailed specification of its competency, connected intricately to the system's capability. This note explores this gap by proposing a framework for articulating the competency of AI systems. Motivated by practical scenarios such as the Glass Door problem, where an individual inadvertently encounters a glass obstacle due to a failure in their competency, this research underscores the imperative of delving into competency dynamics. Bridging the gap between capability and competency at a detailed level, this note contributes to advancing the discourse on bolstering the reliability of AI systems in real-world applications.

    Moral Competence and Moral Orientation in Robots

    Two major strategies (the top-down and bottom-up strategies) are currently discussed in robot ethics for moral integration. I will argue that neither strategy is sufficient. Instead, I agree with Bertram F. Malle and Matthias Scheutz that robots need to be equipped with moral competence if we don’t want them to be a potential risk in society, causing harm, social problems or conflicts. However, I claim that we should not define moral competence merely as a result of different “elements” or “components” that we can arbitrarily change. My suggestion is to follow Georg Lind’s dual-aspect, dual-layer theory of the moral self, which provides a broader perspective and another vocabulary for the discussion in robot ethics. According to Lind, moral competence is only one aspect of moral behavior that we cannot separate from its second aspect: moral orientation. As a result, the thesis of this paper is that integrating morality into robots has to include both moral orientation and moral competence.

    A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government

    Artificial intelligence (AI) has proven to be superior to human decision-making in certain areas. This is particularly the case whenever there is a need for advanced strategic reasoning and analysis of vast amounts of data in order to solve complex problems. Few human activities fit this description better than politics. In politics we deal with some of the most complex issues humans face, short-term and long-term consequences have to be balanced, and we make decisions knowing that we do not fully understand their consequences. I examine an extreme case of the application of AI in the domain of government, and use this case to examine a subset of the potential harms associated with algorithmic governance. I focus on five objections based on political theoretical considerations and the potential political harms of an AI technocracy. These are objections based on the ideas of ‘political man’ and participation as a prerequisite for legitimacy, the non-morality of machines, and the value of transparency and accountability. I conclude that these objections do not successfully derail AI technocracy, if we make sure that mechanisms for control and backup are in place, and if we design a system in which humans have control over the direction and fundamental goals of society. Such a technocracy, if the AI capabilities for policy formation assumed here become reality, may, in theory, provide us with better means of participation, greater legitimacy, and more efficient government.

    Autonomous Vehicles and Ethical Settings: Who Should Decide?

    While autonomous vehicles (AVs) are not designed to harm people, harming people is an inevitable by-product of their operation. How are AVs to deal ethically with situations where harming people is inevitable? Rather than focus on the much-discussed question of what choices AVs should make, we can also ask the much less discussed question of who gets to decide what AVs should do in such cases. Here there are two key options: AVs with a personal ethics setting (PES) or an “ethical knob” that end users can control, or AVs with a mandatory ethics setting (MES) that end users cannot control. Which option, a PES or an MES, is best and why? This chapter argues, by drawing on the choice architecture literature, in favor of a hybrid view that requires mandated default choice settings while allowing for limited end user control.

    A neo-Aristotelian perspective on the need for artificial moral agents (AMAs)

    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.

    The Ghost in the Machine: Being Human in the Age of AI and Machine Learning

    Human beings have used technology to improve their efficiency throughout history. We continue to do so today, but we are no longer only using technology to perform physical tasks. Today, we make computers that are smart enough to challenge, and even surpass, us in many areas. Artificial intelligence, embodied or not, now drives our cars, trades stocks, socialises with our children, keeps the elderly company and the lonely warm. At the same time, we use technology to gather vast amounts of data on ourselves. This, in turn, we use to train intelligent computers that ease and customise ever more of our lives. The change that occurs in our relations to other people, and to computers, changes both how we act and how we are. What sort of challenges does this development pose for human beings? I argue that we are seeing an emerging challenge to the concept of what it means to be human, as (a) we struggle to define what makes us special and try to come to terms with being surpassed in various ways by computers, and (b) the way we use and interact with technology changes us in ways we do not yet fully understand.

    The Role of Accounts and Apologies in Mitigating Blame toward Human and Machine Agents

    Would you trust a machine to make life-or-death decisions about your health and safety? Machines today are capable of achieving much more than they could 30 years ago, and the same will be said for machines that exist 30 years from now. The rise of intelligence in machines has resulted in humans entrusting them with ever-increasing responsibility. With this has arisen the question of whether machines should be given equal responsibility to humans, or whether humans will ever perceive machines as being accountable for such responsibility. For example, if an intelligent machine accidentally harms a person, should it be blamed for its mistake? Should it be trusted to continue interacting with humans? Furthermore, how does the assignment of moral blame and trustworthiness toward machines compare to such assignment to humans who harm others? I answer these questions by exploring differences in moral blame and trustworthiness attributed to human and machine agents who make harmful moral mistakes. Additionally, I examine whether the knowledge and type of reason, as well as apology, for the harmful incident affects perceptions of the parties involved. In order to fill the gaps in understanding between topics in moral psychology, cognitive psychology, and artificial intelligence, valuable information from each of these fields has been combined to guide the research study presented herein.

    Between Fear and Trust: Factors Influencing Older Adults' Evaluation of Socially Assistive Robots

    Socially Assistive Robots (SARs) are expected to support autonomy, aging in place, and wellbeing in later life. For successful assimilation, it is necessary to understand factors affecting older adults' Quality Evaluations (QEs) of SARs, including the pragmatic and hedonic evaluations and overall attractiveness. Previous studies showed that trust in robots significantly enhances QE, while technophobia considerably decreases it. The current study aimed to examine the relative impact of these two factors on older persons' QE of SARs. The study was based on an online survey of 384 individuals aged 65 and above. Respondents were presented with a video of a robotic system for physical and cognitive training and filled out a questionnaire relating to that system. The results indicated a positive association between trust and QE and a negative association between technophobia and QE. A simultaneous exploration demonstrated that the relative impact of technophobia is significantly more substantial than that of trust. In addition, the pragmatic qualities of the robot were found to be more crucial to its QE than the social aspects of use. The findings suggest that implementing robotics technology in later life strongly depends on reducing older adults' technophobia regarding the convenience of using SARs, and they highlight the importance of simultaneous explorations of facilitators and inhibitors.