4,962 research outputs found
Designing Women: Essentializing Femininity in AI Linguistics
Since the eighties, feminists have considered technology a force capable of subverting sexism because of its ability to produce unbiased logic. Most famously, Donna Haraway's "A Cyborg Manifesto" posits that the cyborg has the inherent capability to transcend gender because of its removal from social constructs and its lack of loyalty to the natural world. But while humanoids and artificial intelligence have been imagined as inherently subversive to gender, current artificial intelligence perpetuates gender divides in labor and language as programmers imbue it with traits considered "feminine." A majority of 21st-century AI and humanoids are programmed to fit female stereotypes as they fulfill emotional labor and perform pink-collar tasks, whether as therapists, query-fillers, or companions. This paper examines four chat-based AI (ELIZA, XiaoIce, Sophia, and Erica) and how their feminine linguistic patterns are used to maintain the illusion of emotional understanding in the tasks that they perform. Overall, chat-based AI fails to subvert gender roles, as feminine AI are relegated to the realm of emotional intelligence and labor.
Smart Conversational Agents for Reminiscence
In this paper we describe the requirements and early system design for a
smart conversational agent that can assist older adults in the reminiscence
process. The practice of reminiscence has well-documented benefits for the
mental, social, and emotional well-being of older adults. However, technology
support, while valuable in many ways, is still limited by the need for
co-located human presence, restricted data collection capabilities, and a
limited ability to sustain engagement, thus missing key opportunities to
improve care practices, facilitate social interactions, and bring the
reminiscence practice closer to those with fewer opportunities to engage in
co-located sessions with a (trained) companion. We discuss conversational
agents and cognitive services as the platform for building the next
generation of reminiscence applications, and introduce the concept
application of a smart reminiscence agent.
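The agent the abstract envisions could be sketched minimally as follows; the prompt themes, wording, and class names here are illustrative assumptions, not taken from the paper:

```python
import random

# Hypothetical reminiscence prompts grouped by life theme (illustrative only).
PROMPTS = {
    "family": ["Tell me about a family gathering you remember fondly."],
    "work": ["What was your first job like?"],
    "places": ["Is there a place from your childhood you still think about?"],
}

class ReminiscenceAgent:
    """Minimal sketch: pick a themed prompt, store the user's story, and
    re-surface it later to support sustained engagement across sessions."""

    def __init__(self):
        self.memories = []  # collected (theme, story) pairs

    def next_prompt(self, theme):
        # Choose a conversation opener for the given life theme.
        return random.choice(PROMPTS[theme])

    def record(self, theme, story):
        # Persist the elicited memory for later recall.
        self.memories.append((theme, story))

    def recall(self, theme):
        # Return the most recent story on a theme, or None if none exists.
        stories = [s for t, s in self.memories if t == theme]
        return stories[-1] if stories else None
```

A real system would back `record` with the data-collection services the paper discusses; the in-memory list simply marks where that integration would go.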
The Medical Authority of AI: A Study of AI-enabled Consumer-facing Health Technology
Recently, consumer-facing health technologies such as Artificial Intelligence
(AI)-based symptom checkers (AISCs) have sprung up in everyday healthcare
practice. AISCs solicit symptom information from users and provide medical
suggestions and possible diagnoses, a responsibility that people usually
entrust to real-person authorities such as physicians and expert patients.
Thus, the advent of AISCs raises the question of whether and how they transform the
notion of medical authority in everyday healthcare practice. To answer this
question, we conducted an interview study with thirty AISC users. We found that
users assess the medical authority of AISCs using various factors including
automated decisions and interaction design patterns of AISC apps, associations
with established medical authorities like hospitals, and comparisons with other
health technologies. We reveal how AISCs are used in healthcare delivery,
discuss how AI transforms conventional understandings of medical authority, and
derive implications for designing AI-enabled health technology.
Neurocognitive Informatics Manifesto.
Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation, and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the match between artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.
People's Perceptions Toward Bias and Related Concepts in Large Language Models: A Systematic Review
Large language models (LLMs) have brought breakthroughs in tasks including
translation, summarization, information retrieval, and language generation,
gaining growing interest in the CHI community. Meanwhile, the literature
shows researchers' conflicting perceptions of the efficacy, ethics, and
intellectual abilities of LLMs. However, we do not know how lay people perceive
LLMs that are pervasive in everyday tools, specifically regarding their
experience with LLMs around bias, stereotypes, social norms, or safety. In this
study, we conducted a systematic review to understand what empirical insights
papers have gathered about people's perceptions toward LLMs. From a total of
231 retrieved papers, we full-text reviewed 15 papers that recruited human
evaluators to assess their experiences with LLMs. We report the different
biases and related concepts investigated by these studies, four broader LLM
application areas, the evaluators' perceptions of LLMs' performance
(including advantages, biases, and conflicting perceptions), the factors
influencing these perceptions, and concerns about LLM applications.
Towards Emotion-Sensitive Conversational User Interfaces in Healthcare Applications
Perceiving emotions and responding to them adequately are key capabilities of a successful conversational agent. However, determining emotions in a healthcare setting depends on multiple factors, such as context and medical condition. Given the increasing interest in conversational agents integrated into mobile health applications, our objective in this work is to introduce a concept for analyzing emotions and sentiments expressed by a person in a mobile health application with a conversational user interface. The approach is based on bot technology (Synthetic Intelligence Markup Language) and deep learning for emotion analysis. More specifically, expressions referring to sentiments or emotions are classified along seven categories and three levels of strength using treebank annotation and recursive neural networks. The classification result is used by the chatbot to select an appropriate response. In this way, the concerns of a user can be better addressed. We describe three use cases where the approach could be integrated to make the chatbot emotion-sensitive.
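The pipeline described above (classify emotion and strength, then select a response) can be sketched as follows. The tiny keyword lexicon stands in for the paper's treebank-trained recursive neural network, and all category labels, lexicon entries, and response templates are illustrative assumptions:

```python
# Seven emotion categories and strength levels 1-3, mirroring the scheme the
# abstract describes; the concrete labels here are assumed, not the paper's.
EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "disgust", "neutral"]

# Toy lexicon: word -> (emotion, strength). A stand-in for the learned model.
LEXICON = {
    "happy": ("joy", 2), "great": ("joy", 3),
    "sad": ("sadness", 2), "hopeless": ("sadness", 3),
    "angry": ("anger", 3),
}

# Response templates keyed by (emotion, strength); hypothetical wording.
RESPONSES = {
    ("sadness", 3): "That sounds really hard. Would you like to talk about it?",
    ("sadness", 2): "I'm sorry to hear that. What happened?",
    ("joy", 3): "Wonderful! Tell me more.",
}

def classify(utterance):
    """Return (emotion, strength), where strength 1 is weak and 3 is strong."""
    for token in utterance.lower().split():
        if token in LEXICON:
            return LEXICON[token]
    return ("neutral", 1)

def respond(utterance):
    # The chatbot selects its reply from the classification result.
    return RESPONSES.get(classify(utterance), "I see. Please go on.")
```

The point of the sketch is the control flow, not the classifier: swapping the lexicon lookup for a trained recursive neural network leaves `respond` unchanged.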
Conversational Agents for Mental Health and Well-being: Discovering Design Recommendations Using Text Mining
Conversational agents are increasingly being used by the general population due to shortages of healthcare providers and specialists and limited access to treatments. They are also used by people to cope with loneliness and lack of companionship. As these apps increasingly stand in for real humans, there is a need to explore their design features and limitations to inform the better design of conversational apps. Using text mining and topic modeling, this study analyzed a total of 126,610 reviews of Replika, a popular and well-established conversational agent mobile app. Our results highlight current practices for designing conversational apps while also shedding light on the limitations associated with these apps. These limitations relate to the need for better conversations and more intelligent responses, more advanced AI chatbots, avoiding questionable and inappropriate content, inclusive design, and addressing some technical limitations.
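A much-simplified version of this kind of review mining can be sketched with plain term frequencies; the reviews, stopword list, and function names below are invented for illustration, whereas the study itself applied topic modeling to 126,610 real reviews:

```python
from collections import Counter
import re

# Toy stand-in for app-store reviews as (star_rating, text) pairs.
REVIEWS = [
    (5, "Great conversations, it really helps with loneliness"),
    (5, "Feels like a real companion, great conversations"),
    (2, "Repetitive responses and inappropriate content"),
    (1, "Responses are repetitive, needs smarter AI"),
]
STOPWORDS = {"and", "with", "it", "a", "are", "like", "really", "needs"}

def top_terms(reviews, min_rating, max_rating, k=3):
    """Most frequent content words in reviews within a star-rating band."""
    counts = Counter()
    for rating, text in reviews:
        if min_rating <= rating <= max_rating:
            counts.update(w for w in re.findall(r"[a-z]+", text.lower())
                          if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]
```

Contrasting the frequent terms in high- versus low-rated reviews surfaces the same kind of signal the study extracted with topic modeling: praised features in one band, recurring complaints (e.g. repetitive responses) in the other.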
- …