Recommender systems and their ethical challenges
This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
Philosophy and Computing in Information Societies
Philosophy and computing have often been related in the history of human culture. In the age of the information revolution, this relation has grown to define an entire area of philosophical enquiry. Over the past decades, we have thought of this area as delineated by two Cartesian axes: conceptual and methodological.
Ethical aspects of multi-stakeholder recommendation systems
This article analyses the ethical aspects of multistakeholder recommendation systems (RSs). Following the most common approach in the literature, we assume a consequentialist framework to introduce the main concepts of multistakeholder recommendation. We then consider three research questions: who are the stakeholders in an RS? How are their interests taken into account when formulating a recommendation? And what is the scientific paradigm underlying RSs? Our main finding is that multistakeholder RSs (MRSs) are designed and theorised, methodologically, according to neoclassical welfare economics. We consider and reply to some methodological objections to MRSs on this basis, concluding that the multistakeholder approach offers the resources to understand the normative social dimension of RSs.
Accountability in artificial intelligence: what it is and how it works
Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, and implications). We analyze this architecture through four accountability goals (compliance, report, oversight, and enforcement). We argue that these goals are often complementary and that policy-makers emphasize or prioritize some over others depending on the proactive or reactive use of accountability and the missions of AI governance.
Open source intelligence and AI: a systematic review of the GELSI literature
Today, open source intelligence (OSINT), i.e., information derived from publicly available sources, makes up between 80 and 90 percent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West. Developments in data mining, machine learning, visual forensics and, most importantly, the growing computing power available for commercial use have enabled OSINT practitioners to speed up, and sometimes even automate, intelligence collection and analysis, obtaining more accurate results more quickly. As the infosphere expands to accommodate ever-increasing online presence, so does the pool of actionable OSINT. These developments raise important concerns in terms of governance, ethical, legal, and social implications (GELSI). New and crucial oversight concerns emerge alongside standard privacy concerns, as some of the more advanced data analysis tools require little to no supervision. This article offers a systematic review of the relevant literature. It analyzes 571 publications to assess the current state of the literature on the use of AI-powered OSINT (and the development of OSINT software) as it relates to the GELSI framework, highlighting potential gaps and suggesting new research directions.
Supporting Trustworthy AI Through Machine Unlearning
Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.
What is data ethics?
This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information Ethics but, at the same time, it refines the approach endorsed so far in this research field by shifting the Level of Abstraction of ethical enquiries from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used, for example, to support actions or generate behaviours. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations — the interactions among hardware, software, and data — rather than on the variety of digital technologies that enables them. And it emphasises the complexity of the ethical challenges posed by Data Science. Because of such complexity, Data Ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of Data Science and its applications within a consistent, holistic, and inclusive framework. Only as a macroethics will Data Ethics provide the solutions that can maximise the value of Data Science for our societies, for all of us, and for our environments.
Deterrence by Norms to Stop Interstate Cyber Attacks
In April 2017, the foreign ministers of the G7 countries approved a ‘Declaration on Responsible States Behaviour in Cyberspace’ (G7 Declaration 2017). The Declaration addresses a mounting concern about international stability and the security of our societies, following the fast-paced escalation of cyber attacks over the past decade. In the opening statement, the G7 ministers stress their concern

[…] about the risk of escalation and retaliation in cyberspace […]. Such activities could have a destabilizing effect on international peace and security. We stress that the risk of interstate conflict as a result of ICT incidents has emerged as a pressing issue for consideration. […] (G7 Declaration 2017, 1).

Paradoxically, state actors often play a central role in the escalation of cyber attacks. State-run cyber attacks have been launched for espionage and sabotage purposes since 2003. Well-known examples include Titan Rain (2003), the Russian attacks against Estonia (2006) and Georgia (2008), Red October, targeting mostly Russia and Eastern European countries (2007), and Stuxnet and Operation Olympic Games against Iran (2006–2012). In 2016, a new wave of state-run (or state-sponsored) cyber attacks ranged from the Russian cyber attack against a Ukrainian power plant, to the Chinese and Russian infiltrations of US Federal Offices, to the Shamoon/Greenbag cyber attacks on government infrastructures in Saudi Arabia.

This trend will continue. The relatively low entry cost and the high chances of success mean that states will keep developing, relying on, and deploying cyber attacks. At the same time, the ever more likely AI leap of cyber capabilities (Cath et al. 2017)—the use of AI and Machine Learning techniques for cyber offence and defence—indicates that cyber attacks will escalate in frequency, impact, and sophistication.

Historically, the escalation of interstate conflicts has been arrested using offensive or political strategies, sometimes in combination. Both have been deployed in cyberspace. The first failed; the second needs to be consolidated and enforced (Taddeo and Glorioso 2016a, b).