353 research outputs found
In conversation with Artificial Intelligence: aligning language models with human values
Large-scale language technologies are increasingly used in various forms of
communication with humans across different contexts. One particular use case
for these technologies is conversational agents, which output natural language
text in response to prompts and queries. This mode of engagement raises a
number of social and ethical questions. For example, what does it mean to align
conversational agents with human norms or values? Which norms or values should
they be aligned with? And how can this be accomplished? In this paper, we
propose a number of steps that help answer these questions. We start by
developing a philosophical analysis of the building blocks of linguistic
communication between conversational agents and human interlocutors. We then
use this analysis to identify and formulate ideal norms of conversation that
can govern successful linguistic communication between humans and
conversational agents. Furthermore, we explore how these norms can be used to
align conversational agents with human values across a range of different
discursive domains. We conclude by discussing the practical implications of our
proposal for the design of conversational agents that are aligned with these
norms and values.
Sticks and Stones May Break My Bones but Words Will Never Hurt Me...Until I See Them: A Qualitative Content Analysis of Trolls in Relation to the Gricean Maxims and (IM)Polite Virtual Speech Acts
The troll is one of the most obtrusive and disruptive bad actors on the internet. Unlike other bad actors, the troll interacts on a more personal and intimate level with other internet users. Social media platforms, online communities, comment boards, and chatroom forums provide them with this opportunity. What distinguishes these social provocateurs from other bad actors are their virtual speech acts and online behaviors. These acts aim to incite anger, shame, or frustration in others through the weaponization of words, phrases, and other rhetoric. Online trolls come in all forms and use various speech tactics to insult and demean their target audiences. The goal of this research is to investigate trolls' virtual speech acts and the impact of troll-like behaviors on online communities. Using Gricean maxims and politeness theory, this study seeks to identify common vernacular, word usage, and other language behaviors that trolls use to divert the conversation, insult others, and possibly affect fellow internet users' mental health and well-being.
Context-aware Captions from Context-agnostic Supervision
We introduce an inference technique to produce discriminative context-aware
image captions (captions that describe differences between images or visual
concepts) using only generic context-agnostic training data (captions that
describe a concept or an image in isolation). For example, given images and
captions of "siamese cat" and "tiger cat", we generate language that describes
the "siamese cat" in a way that distinguishes it from "tiger cat". Our key
novelty is that we show how to do joint inference over a language model that is
context-agnostic and a listener which distinguishes closely-related concepts.
We first apply our technique to a justification task, namely to describe why an
image contains a particular fine-grained category as opposed to another
closely-related category of the CUB-200-2011 dataset. We then study
discriminative image captioning to generate language that uniquely refers to
one of two semantically-similar images in the COCO dataset. Evaluations with
discriminative ground truth for justification and human studies for
discriminative image captioning reveal that our approach outperforms baseline
generative and speaker-listener approaches for discrimination. (Comment: Accepted to CVPR 2017, Spotlight.)
Reasoning About Pragmatics with Neural Listeners and Speakers
We present a model for pragmatically describing scenes, in which contrastive
behavior results from a combination of inference-driven pragmatics and learned
semantics. Like previous learned approaches to language generation, our model
uses a simple feature-driven architecture (here a pair of neural "listener" and
"speaker" models) to ground language in the world. Like inference-driven
approaches to pragmatics, our model actively reasons about listener behavior
when selecting utterances. For training, our approach requires only ordinary
captions, annotated _without_ demonstration of the pragmatic behavior the model
ultimately exhibits. In human evaluations on a referring expression game, our
approach succeeds 81% of the time, compared to a 69% success rate using
existing techniques.
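The pragmatic selection the abstract describes (a speaker that reasons about a simulated listener before choosing an utterance) can be sketched in a few lines. The listener table, utterance names, and scores below are purely illustrative toys, not the paper's learned neural models:

```python
# Minimal sketch of inference-driven pragmatic utterance selection.
# L0[utterance][referent] holds toy "literal listener" scores: how strongly
# a naive listener associates each utterance with each referent.
L0 = {
    "cat":         {"siamese": 0.5, "tiger_cat": 0.5},
    "pointy_ears": {"siamese": 0.9, "tiger_cat": 0.1},
}

def pragmatic_speaker(target, utterances, listener=L0):
    """Pick the utterance the simulated listener is most likely to resolve to `target`."""
    def listener_prob(u):
        scores = listener[u]
        # Normalize over referents: P(target | utterance) under the literal listener.
        return scores[target] / sum(scores.values())
    return max(utterances, key=listener_prob)

# The speaker avoids the ambiguous "cat" and picks the distinguishing utterance.
best = pragmatic_speaker("siamese", ["cat", "pointy_ears"])
```

The contrastive behavior falls out of the inference step alone: nothing in the listener table mentions contrast, yet maximizing listener accuracy selects the most discriminative description.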
From partners to populations: A hierarchical Bayesian account of coordination and convention
Languages are powerful solutions to coordination problems: they provide
stable, shared expectations about how the words we say correspond to the
beliefs and intentions in our heads. Yet language use in a variable and
non-stationary social environment requires linguistic representations to be
flexible: old words acquire new ad hoc or partner-specific meanings on the fly.
In this paper, we introduce CHAI (Continual Hierarchical Adaptation through
Inference), a hierarchical Bayesian theory of coordination and convention
formation that aims to reconcile the long-standing tension between these two
basic observations. We argue that the central computational problem of
communication is not simply transmission, as in classical formulations, but
continual learning and adaptation over multiple timescales. Partner-specific
common ground quickly emerges from social inferences within dyadic
interactions, while community-wide social conventions are stable priors that
have been abstracted away from interactions with multiple partners. We present
new empirical data alongside simulations showing how our model provides a
computational foundation for several phenomena that have posed a challenge for
previous accounts: (1) the convergence to more efficient referring expressions
across repeated interaction with the same partner, (2) the gradual transfer of
partner-specific common ground to strangers, and (3) the influence of
communicative context on which conventions eventually form. (Comment: In press at Psychological Review.)
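The core idea of community-wide conventions acting as stable priors that partner-specific evidence then updates can be illustrated with a toy conjugate (Beta-Binomial) update. This is only a hedged sketch of the prior-versus-evidence dynamic, not the paper's actual CHAI model; all numbers are made up:

```python
# Toy sketch: a community-level Beta(a, b) prior over whether a word carries
# a conventional meaning, updated into a partner-specific posterior from
# dyadic interaction evidence. Illustrative only.

def posterior_mean(prior_a, prior_b, successes, failures):
    """Posterior mean of a Beta-Binomial after observing usage evidence."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Community convention: strong prior that the word means A (a=9, b=1).
community = posterior_mean(9, 1, 0, 0)   # belief before any interaction
# After 5 dyadic observations where this partner uses the word differently,
# the estimate shifts away from the community convention but is still
# anchored by the prior.
partner = posterior_mean(9, 1, 0, 5)
```

The hierarchy in the paper's account corresponds to pooling such partner-specific posteriors back into the community-level prior over many partners; the sketch above shows only the downward direction, from shared prior to dyad-specific belief.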
Editors' Review and Introduction: Lying in Logic, Language, and Cognition
We describe some recent trends in research on lying from a multidisciplinary perspective, including logic, philosophy, linguistics, psychology, cognitive science, behavioral economics, and artificial intelligence. Furthermore, we outline the seven contributions to this special issue of topiCS.