384,277 research outputs found

    Understanding social machines

    The framework introduced in this paper aims to reflect the characteristics that social machines have been described as having. It uses a mixed methods approach underpinned by social theory to provide a detailed and rich understanding of the socio-technical nature of a social machine. Its strength lies in the diversity of the data being used: whilst the quantitative approach brings mathematical rigor to the structure, properties, and scale of the networks, the qualitative approach examines the 'social relations' and the context in which the social machine enables humans and technologies to interact and shape each other. Like many empirical studies, this framework takes advantage of the complementary nature of mixed methods, and pushes it further by applying an analytical socio-technical lens.
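    As a loose illustration of the quantitative side described above, the sketch below computes a few basic structural properties of a small human-technology interaction network. It is a minimal sketch only: the use of networkx, the participant names, and the edge list are assumptions for illustration, not data or methods from the paper.

```python
import networkx as nx

# Hypothetical interaction edges between human and technological participants;
# a real study would derive these from observed platform activity.
edges = [
    ("user_a", "bot_1"), ("user_a", "user_b"),
    ("user_b", "bot_1"), ("user_c", "bot_2"),
    ("user_b", "user_c"),
]

G = nx.Graph(edges)

# Basic structural properties of the interaction network.
print("nodes:", G.number_of_nodes())
print("edges:", G.number_of_edges())
print("density:", nx.density(G))
print("average clustering:", nx.average_clustering(G))
print("connected components:", nx.number_connected_components(G))
print("degree per participant:", dict(G.degree()))
```

    Measures like these would sit on the quantitative side of the framework; the qualitative side would then interpret what those interactions mean for the people and technologies involved.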

    Language Without Words: A Pointillist Model for Natural Language Processing

    This paper explores two separate questions: can we perform natural language processing tasks without a lexicon, and should we? Existing natural language processing techniques are either based on words as units or use units such as grams only for basic classification tasks. How close can a machine come to reasoning about the meanings of words and phrases in a corpus without using any lexicon, based only on grams? Our own motivation for posing this question is based on our efforts to find popular trends in words and phrases from online Chinese social media. This form of written Chinese uses so many neologisms, creative character placements, and combinations of writing systems that it has been dubbed the "Martian Language." Readers must often use visual cues, audible cues from reading out loud, and their knowledge and understanding of current events to understand a post. For analysis of popular trends, the specific problem is that it is difficult to build a lexicon when the invention of new ways to refer to a word or concept is easy and common. For natural language processing in general, we argue in this paper that new uses of language in social media will challenge machines' abilities to operate with words as the basic unit of understanding, not only in Chinese but potentially in other languages.
    Comment: 5 pages, 2 figures
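    As a rough illustration of gram-based, lexicon-free processing of the kind the questions above point to (not the paper's Pointillist model itself), the sketch below counts overlapping character grams in a few placeholder posts to surface frequently recurring strings without any dictionary or word segmentation. The sample posts and gram sizes are assumptions for illustration only.

```python
from collections import Counter

def char_ngrams(text, n):
    """Return overlapping character n-grams; no word segmentation or lexicon needed."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# Hypothetical sample posts; real input would be a stream of social-media text.
posts = [
    "火星文真的很难懂",
    "这段火星文看不懂",
    "看不懂的火星文越来越多",
]

counts = Counter()
for post in posts:
    for n in (2, 3):  # character bigrams and trigrams
        counts.update(char_ngrams(post, n))

# Frequently recurring grams surface candidate trends without any dictionary.
for gram, freq in counts.most_common(5):
    print(gram, freq)
```

    Because the units are raw character grams rather than dictionary words, newly coined spellings and mixed writing systems are picked up as soon as they recur, which is the property the trend-detection motivation relies on.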

    This machine could bite: On the role of non-benign art robots

    The social robot's current and anticipated roles as butler, teacher, receptionist or carer for the elderly share a fundamental anthropocentric bias: they are designed to be benign, to facilitate a transaction that aims to be both useful to and simple for the human. At a time when intelligent machines are becoming a tangible prospect, such a bias does not leave much room for exploring and understanding the ongoing changes affecting the relation between humans and our technological environment. Can art robots – robots invented by artists – offer a non-benign-by-default perspective that opens the field for a machine to express its machinic potential beyond the limits imposed by an anthropocentric and market-driven approach? The paper addresses these questions by considering and contextualising early cybernetic machines, current developments in social robotics, and art robots by the author and other artists

    Robots, language, and meaning

    People use language to exchange ideas and influence the actions of others through shared conceptions of word meanings, and through a shared understanding of how word meanings are combined. Under the surface form of words lie complex networks of mental structures and processes that give rise to the richly textured semantics of natural language. Machines, in contrast, are unable to use language in human-like ways due to fundamental limitations of current computational approaches to semantic representation. To address these limitations, and to serve as a catalyst for exploring alternative approaches to language and meaning, we are developing conversational robots. The problem of endowing robots with language highlights the impossibility of isolating language from other cognitive processes. Instead, we embrace a holistic approach in which various non-linguistic elements of perception, action, and memory provide the foundations for grounding word meaning. I will review recent results in grounding language in perception and action and sketch ongoing work on grounding a wider range of words, including social terms such as "I" and "my"

    Human-agent collectives

    We live in a world where a host of computer systems, distributed throughout our physical and information environments, are increasingly implicated in our everyday actions. Computer technologies impact all aspects of our lives, and our relationship with the digital has fundamentally altered as computers have moved out of the workplace and away from the desktop. Networked computers, tablets, phones and personal devices are now commonplace, as are an increasingly diverse set of digital devices built into the world around us. Data and information are generated at unprecedented speeds and volumes from an increasingly diverse range of sources. They are then combined in unforeseen ways, limited only by human imagination. People's activities and collaborations are becoming ever more dependent upon and intertwined with this ubiquitous information substrate. As these trends continue apace, it is becoming apparent that many endeavours involve the symbiotic interleaving of humans and computers. Moreover, the emergence of these close-knit partnerships is inducing profound change. Rather than issuing instructions to passive machines that wait until they are asked before doing anything, we will work in tandem with highly inter-connected computational components that act autonomously and intelligently (aka agents). As a consequence, greater attention needs to be given to the balance of control between people and machines. In many situations, humans will be in charge and agents will predominantly act in a supporting role. In other cases, however, the agents will be in control and humans will play the supporting role. We term this emerging class of systems human-agent collectives (HACs) to reflect the close partnership and the flexible social interactions between the humans and the computers. As well as exhibiting increased autonomy, such systems will be inherently open and social. This means the participants will need to continually and flexibly establish and manage a range of social relationships. Thus, depending on the task at hand, different constellations of people, resources, and information will need to come together, operate in a coordinated fashion, and then disband. The openness and presence of many distinct stakeholders means participation will be motivated by a broad range of incentives rather than diktat. This article outlines the key research challenges involved in developing a comprehensive understanding of HACs. To illuminate this agenda, a nascent application in the domain of disaster response is presented.

    How Humans Judge Machines

    How people judge humans and machines differently, in scenarios involving natural disasters, labor displacement, policing, privacy, algorithmic bias, and more. How would you feel about losing your job to a machine? How about a tsunami alert system that fails? Would you react differently to acts of discrimination depending on whether they were carried out by a machine or by a human? What about public surveillance? How Humans Judge Machines compares people's reactions to actions performed by humans and machines. Using data collected in dozens of experiments, this book reveals the biases that permeate human-machine interactions. Are there conditions in which we judge machines unfairly? Is our judgment of machines affected by the moral dimensions of a scenario? Is our judgment of machines correlated with demographic factors such as education or gender? César Hidalgo and colleagues use hard science to take on these pressing technological questions. Using randomized experiments, they create revealing counterfactuals and build statistical models to explain how people judge artificial intelligence and whether they do it fairly. Through original research, How Humans Judge Machines brings us one step closer to understanding the ethical consequences of AI. Written by César A. Hidalgo, the author of Why Information Grows and coauthor of The Atlas of Economic Complexity (MIT Press), together with a team of social psychologists (Diana Orghian and Filipa de Almeida) and roboticists (Jordi Albo-Canals), How Humans Judge Machines presents a unique perspective on the nexus between artificial intelligence and society. Anyone interested in the future of AI ethics should explore the experiments and theories in How Humans Judge Machines.
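    As a hedged illustration of the kind of randomized, between-condition comparison the book describes (not its actual data, scales, or models), the sketch below compares hypothetical acceptability ratings for the same scenario attributed to a human actor versus a machine actor. The ratings, sample sizes, and use of scipy are assumptions for illustration only.

```python
from scipy import stats

# Hypothetical 1-5 acceptability ratings for one scenario, with participants
# randomly assigned to judge either a human actor or a machine actor;
# real studies would use much larger randomized samples.
human_condition   = [4, 5, 3, 4, 4, 5, 3, 4]
machine_condition = [3, 2, 4, 2, 3, 3, 2, 3]

t, p = stats.ttest_ind(human_condition, machine_condition)
print(f"t = {t:.2f}, p = {p:.3f}")  # a large gap would suggest the same act is judged differently
```

    In practice the book's analyses also account for scenario content and respondent demographics, which a single two-sample comparison like this does not capture.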