    From killer machines to doctrines and swarms; or why ethics of military robotics is not (necessarily) about robots.

    Ethical reflections on military robotics can be enriched by a better understanding of the nature and role of these technologies and by putting robotics into context in various ways. Discussing a range of ethical questions, this paper challenges the prevalent assumptions that military robotics is about military technology as a mere means to an end, about single killer machines, and about “military” developments. It recommends that the ethics of robotics attend to how military technology changes our aims, concern itself not only with individual robots but also, and especially, with networks and swarms, and adapt its conceptions of responsibility to the rise of such cloudy and unpredictable systems, which rely on decentralized control and buzz across many spheres of human activity.

    Towards European Anticipatory Governance for Artificial Intelligence

    This report presents the findings of the Interdisciplinary Research Group “Responsibility: Machine Learning and Artificial Intelligence” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Technology and Global Affairs research area of DGAP. In September 2019, they brought leading experts from research and academia together with policymakers and representatives of standardization authorities and technology organizations to set framework conditions for a European anticipatory governance regime for artificial intelligence (AI).

    Multi-layered discourse shaping military AI: the cases of Germany and the UK

    Artificial intelligence (AI) is being increasingly utilized by militaries across the globe, with major powers like the USA and China leading the way. Indeed, from the perspective of various realist theories, it can be expected that all countries with sufficient resources for developing military AI capabilities will do so. However, there are instances of countries with sufficient resources not showing any substantial military AI practices, defying realist expectations. This study proposes an alternative explanation to realist theories for the differences in the scope of military AI practices by states, arguing that ideational conditions like norms, ethics, and identity are decisive rather than structural pressures. To answer the research question “What explains the difference in the scope of military AI practices by states?”, the study formulates a theoretical framework integrating Strategic Culture and Sociotechnical Imaginaries as a country’s deeper discourse layers within Ole Wæver’s multi-layered discourse analysis model. This framework is then applied within a most similar systems design, controlling for realist conditions and selecting Germany and the UK as case studies with differing dominant discourses on military AI. Thereafter, detailed discourse analysis is conducted for both cases, and their scope of military AI practices is determined based on the number of military AI applications, expert assessments, and specific instructions, policies, and doctrines for military AI. Germany showed a cautious dominant discourse on military AI and a limited scope of military AI practices, while the UK showed an embracing dominant discourse on military AI and a comprehensive scope of military AI practices. Hence, the discourse-theoretical framework developed in this study offers an explanation superior to realist accounts and contributes to the literature on military and security studies more broadly by offering an innovative approach to studying general enabling technologies. It also has important policy implications for AI arms control and diplomacy.

    The Mechanical Turkness: Tactical Media Art and the Critique of Corporate AI

    The extensive industrialization of artificial intelligence (AI) since the mid-2010s has increasingly motivated artists to address its economic and sociopolitical consequences. In this chapter, I discuss interrelated art practices that thematize creative agency, crowdsourced labor, and delegated artmaking to reveal the social rootage of AI technologies and underline the productive human roles in their development. I focus on works whose poetic features indicate broader issues of contemporary AI-influenced science, technology, economy, and society. By exploring the conceptual, methodological, and ethical aspects of their effectiveness in disrupting the political regime of corporate AI, I identify several problems that affect their tactical impact and outline potential avenues for tackling the challenges and advancing the field. Comment: Matthes, Jörg, Damian Trilling, Ljubiša Bojić, and Simona Žikić, eds. 2024. Navigating the Digital Age: An In-Depth Exploration into the Intersection of Modern Technologies and Societal Transformation. Vienna and Belgrade: Institute for Philosophy and Social Theory, University of Belgrade, and Department of Communication, University of Vienna.

    Employed Algorithms: A Labor Model of Corporate Liability for AI

    The workforce is digitizing. Leading consultancies estimate that algorithmic systems will replace 45 percent of human-held jobs by 2030. One feature that algorithms share with the human employees they are replacing is their capacity to cause harm. Even today, corporate algorithms discriminate against loan applicants, manipulate stock markets, collude over prices, and cause traffic deaths. Ordinarily, corporate employers would be responsible for these injuries, but the rules for assessing corporate liability arose at a time when only humans could act on behalf of corporations. Those rules apply awkwardly, if at all, to silicon. Some corporations have already discovered this legal loophole and are rapidly automating business functions to limit their own liability risk. This Article seeks a way to hold corporations accountable for the harms of their digital workforce: some algorithms should be treated, for liability purposes, as corporate employees. Drawing on existing functional characterizations of employment, the Article defines the concept of an “employed algorithm” as one over which a corporation exercises substantial control and from which it derives substantial benefits. If a corporation employs an algorithm that causes criminal or civil harm, the corporation should be liable just as if the algorithm were a human employee. Plaintiffs and prosecutors could then leverage existing, employee-focused liability rules to hold corporations accountable when the digital workforce transgresses.
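
    The Article's two-prong definition reads naturally as a decision procedure. Below is a minimal Python sketch of that reading; the class, field names, and example deployment are illustrative assumptions, not drawn from the Article itself.

    from dataclasses import dataclass

    @dataclass
    class AlgorithmDeployment:
        """One corporation's relationship to one algorithmic system (hypothetical model)."""
        corporation: str
        algorithm: str
        substantial_control: bool  # prong 1: the firm configures, monitors, and can halt it
        substantial_benefit: bool  # prong 2: the firm derives revenue or savings from it
        caused_harm: bool          # the algorithm caused a criminal or civil harm

    def is_employed_algorithm(d: AlgorithmDeployment) -> bool:
        # Both prongs must hold for the algorithm to count as "employed".
        return d.substantial_control and d.substantial_benefit

    def corporation_liable(d: AlgorithmDeployment) -> bool:
        # Liability then attaches as if a human employee had caused the harm.
        return is_employed_algorithm(d) and d.caused_harm

    # Illustrative use: a pricing model the firm controls and profits from.
    pricing_bot = AlgorithmDeployment(
        corporation="Acme Corp", algorithm="dynamic-pricing model",
        substantial_control=True, substantial_benefit=True, caused_harm=True)
    print(corporation_liable(pricing_bot))  # True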

    Limitations Of Artificial Intelligence

    Artificial Intelligence is a groundbreaking technology that is now an established field. It is used to mimic human capabilities such as speaking, listening, learning, and planning, applying different algorithms to process data and produce results based on the information a user provides. Artificial Intelligence is employed across several industries for data processing and decision making, and it was conceived to support decision- and solution-making through a problem-solving approach. Artificial Intelligence software brings efficiency and acceleration to many kinds of workflows, helping organizations increase profit and reduce the waste and costs caused by poor productivity. Many applications are already powered by Artificial Intelligence, including web search, cybersecurity, and machine translation, and its benefits now reach people's daily lives and businesses alike. Among the most common Artificial Intelligence technologies used by industry are robots and virtual assistants, which are typically powered by Natural Language Processing (NLP) and Speech Recognition Platforms (SRP), though not limited to these two; such branches help systems interpret and act on the commands they receive. Indeed, Artificial Intelligence is advancing rapidly, and many organizations are willing to test what is available in the market. Others, however, remain unconvinced because of ethical issues and open questions of accountability. This thesis explains how Artificial Intelligence is used in fields such as Law, Medicine, and the Military while discussing the limitations present.

    Machine ethics via logic programming

    Machine ethics is an interdisciplinary field of inquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. While some approaches provide implementations in Logic Programming (LP) systems, they have not exploited LP-based reasoning features that appear essential for moral reasoning. This PhD thesis investigates further the appropriateness of LP, notably a combination of LP-based reasoning features, including techniques available in LP systems, for machine ethics. Moral facets, as studied in moral philosophy and psychology, that are amenable to computational modeling are identified and mapped to appropriate LP concepts for representing and reasoning about them. The main contributions of the thesis are twofold. First, novel approaches are proposed for employing tabling in contextual abduction and updating – individually and combined – plus an LP approach to counterfactual reasoning; the latter is implemented on top of the aforementioned combined abduction and updating technique with tabling. These are all important for modeling various issues of the aforementioned moral facets. Second, a variety of LP-based reasoning features are applied to model the identified moral facets, through moral examples taken off the shelf from the morality literature. These applications include: (1) modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) modeling moral updating (which allows other – possibly overriding – moral rules to be adopted by an agent, on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used to formulate DDE. Funding: Fundação para a Ciência e a Tecnologia (FCT) grant SFRH/BD/72795/2010; CENTRIA and DI/FCT/UNL for supplementary funding.
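
    To make application (1) concrete: the following is a minimal Python sketch, an assumed illustration rather than the thesis's actual LP code, of DDE-style permissibility checking. It treats candidate courses of action as abduced scenarios and rejects, in the manner of an integrity constraint, any scenario in which the harm is the means to the good end.

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        """A candidate (abduced) course of action in a trolley-style dilemma."""
        action: str
        saves: int           # lives saved: the intended good effect
        kills: int           # lives lost
        harm_is_means: bool  # True if the killing itself brings about the saving

    def dde_permissible(s: Scenario) -> bool:
        # Integrity-constraint analogue: harm may be a foreseen side effect,
        # never the means to the good end, and the good must outweigh the harm.
        return not s.harm_is_means and s.saves > s.kills

    # Classic examples from the machine-ethics literature.
    scenarios = [
        Scenario("divert the trolley to a side track", saves=5, kills=1, harm_is_means=False),
        Scenario("push a bystander to stop the trolley", saves=5, kills=1, harm_is_means=True),
    ]
    for s in scenarios:
        print(s.action, "->", "permissible" if dde_permissible(s) else "impermissible")

    In the thesis's actual LP setting, the enumeration above corresponds to abductive scenario generation, and the boolean check plays the role of an integrity constraint over the abduced literals.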

    A Rule of Persons, Not Machines: The Limits of Legal Automation
