The Contribution of Society to the Construction of Individual Intelligence
It is argued that society is a crucial factor in the construction of individual intelligence; in other words, that intelligence is socially situated in a way analogous to the physical situatedness of robots. Evidence that this may be the case is drawn from developmental linguistics, the social intelligence hypothesis, the complexity of society, the need for self-reflection, and autism. The consequences for the development of artificial social agents are briefly considered. Finally, some challenges for research into socially situated intelligence are highlighted.
Designing Women: Essentializing Femininity in AI Linguistics
Since the eighties, feminists have considered technology a force capable of subverting sexism because of technology's ability to produce unbiased logic. Most famously, Donna Haraway's "A Cyborg Manifesto" posits that the cyborg has the inherent capability to transcend gender because of its removal from social construct and lack of loyalty to the natural world. But while humanoids and artificial intelligence have been imagined as inherently subversive to gender, current artificial intelligence perpetuates gender divides in labor and language as their programmers imbue them with traits considered "feminine." A majority of 21st-century AI and humanoids are programmed to fit female stereotypes as they fulfill emotional labor and perform pink-collar tasks, whether through roles as therapists, query-fillers, or companions. This paper examines four specific chat-based AI (ELIZA, XiaoIce, Sophia, and Erica) and how their feminine linguistic patterns are used to maintain the illusion of emotional understanding in the tasks that they perform. Overall, chat-based AI fails to subvert gender roles, as feminine AI are relegated to the realm of emotional intelligence and labor.
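ELIZA, the earliest of the four systems discussed in this abstract, illustrates how thin the machinery behind the "illusion of emotional understanding" can be. The sketch below is a minimal, hypothetical reconstruction of ELIZA-style keyword matching and pronoun reflection in Python; the rules and patterns are illustrative, not Weizenbaum's original script. The program simply mirrors the user's own words back as a question, simulating attentive listening with no model of meaning at all.

```python
import re

# Hypothetical pronoun-reflection table: swap first- and second-person
# words so the user's statement can be echoed back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Reflect pronouns word by word, leaving everything else intact."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(user: str) -> str:
    """Match one illustrative keyword pattern and mirror it as a question."""
    match = re.match(r"i feel (.*)", user, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    # Fallback keeps the conversation going without understanding anything.
    return "Please, tell me more."

print(respond("I feel ignored by my colleagues"))
# -> Why do you feel ignored by your colleagues?
```

Even this toy version displays the conversational posture the paper analyses: the system performs listening and care while contributing no content of its own.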
Challenges for an Ontology of Artificial Intelligence
Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What "are" these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be assimilated and regarded as "normal," and (3) the tendency of human beings to anthropomorphize. This list is not intended as exhaustive, nor is it seen to preclude entirely a clear ontology; these challenges are, however, a necessary set of topics for consideration. Each of these factors is seen to present a "moving target" for discussion, which poses a challenge for both technical specialists and non-practitioners of AI systems development (e.g., philosophers and theologians) to speak meaningfully, given that the corpus of AI structures and capabilities evolves at a rapid pace. Finally, we present avenues for moving forward, including opportunities for collaborative synthesis for scholars in philosophy and science.
First Steps Towards an Ethics of Robots and Artificial Intelligence
This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance, given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.
Universal Intelligence: A Definition of Machine Intelligence
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: we take a number of well-known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
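For orientation, the equation this abstract refers to is, in the published version of the paper (Legg and Hutter, 2007), the universal intelligence of an agent, defined as its complexity-weighted expected performance across all computable environments; the rendering below is a sketch of that definition, not a substitute for the paper's full construction.

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here E is the class of computable, reward-bounded environments, K(mu) is the Kolmogorov complexity of environment mu (so simpler environments carry more weight), and V(pi, mu) is the expected cumulative reward agent pi obtains when interacting with mu. Intelligence is thus cast as the ability to achieve goals in a wide, simplicity-weighted range of environments.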
Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making
The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep "humans in the loop" ("HITL"). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values (legitimacy, dignity, and so forth) are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority?
Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating "behind the curtain." Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.
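The formal model the abstract describes is, at its core, a two-by-two grid crossing actual against apparent human involvement. A minimal, hypothetical rendering of that grid in Python (the class and function names are illustrative, not the authors') makes the two aligned and two misaligned cells explicit:

```python
from enum import Enum

# Illustrative encoding of the article's 2x2 model: a system is classified
# by whether a human actually holds decision authority and by whether it
# appears to hold it from the outside.
class Actual(Enum):
    HUMAN = "human holds ultimate authority"
    AUTOMATED = "decision is fully automated"

class Apparent(Enum):
    HUMAN = "seems human-driven"
    AUTOMATED = "seems automated"

def classify(actual: Actual, apparent: Apparent) -> str:
    """Name the cell of the grid a given system occupies."""
    aligned = (actual is Actual.HUMAN) == (apparent is Apparent.HUMAN)
    if aligned:
        return f"aligned: {apparent.value} and {actual.value}"
    if apparent is Apparent.HUMAN:
        return "misaligned: seems human-driven but is fully automated"
    return "misaligned: seems automated but relies on hidden human labor"

# Enumerate all four cells of the grid.
for actual in Actual:
    for apparent in Apparent:
        print(classify(actual, apparent))
```

The two misaligned cells are the paper's focus: automation wearing a human face, and human labor hidden "behind the curtain" of an apparently automated system.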