Dialogue Systems Specialized in Social Influence: Systems, Methods, and Ethics

Abstract

This thesis concerns how to develop dialogue systems specialized in social influence and the problems around deploying such systems. Dialogue systems have become widely adopted in our daily lives, but most focus on information-seeking tasks or social companionship and cannot apply strategies in complex and critical social influence tasks, such as healthy habit promotion and emotional support. In this work, we formally define social influence dialogue systems as systems that influence users’ behaviors, feelings, thoughts, or opinions through natural conversations. We also present methods to make such systems intelligible, privacy-preserving, and thus deployable in real life. Finally, we acknowledge potential ethical issues around social influence systems and propose solutions to mitigate them in Chapter 6.

Social influence dialogues span various domains, such as persuasion, negotiation, and recommendation. We first propose a donation persuasion task, PERSUASIONFORGOOD, and ground our study on this persuasion task for social good. We then build a persuasive dialogue system by refining the dialogue model for intelligibility and imitating human experts for persuasiveness, as well as a negotiation agent that plays the game of Diplomacy by decoupling the planning engine from the dialogue generation module to improve the controllability of social influence systems. To deploy such systems in the wild, we examine how humans perceive the AI agent’s identity and how their perceptions affect social influence outcomes. Moreover, dialogue models are trained on conversations in which people may share personal information, which raises privacy concerns for deployment because the models may memorize private information. To protect user privacy in the training data, we develop privacy-preserving learning algorithms that ensure deployed models are safe under privacy attacks.
Finally, deployed dialogue agents can integrate human feedback to continuously improve themselves. We therefore propose JUICER, a framework that uses both binary and free-form textual human feedback to augment the training data and keep improving dialogue model performance after deployment. Building social influence dialogue systems enables us to research future expert-level AI systems that are accessible via natural language, accountable with domain knowledge, and privacy-preserving with privacy guarantees.