Extending Chatbots to Probe Users: Enhancing Complex Decision-Making Through Probing Conversations
Chatbots have become commonplace: they can provide customer support, take orders, collect feedback, and even provide (mental) health support. Despite this diversity, the opportunities for designing chatbots for more complex decision-making tasks remain largely underexplored. Bearing this in mind leads us to ask: how can chatbots be embedded into software tools used for complex decision-making and designed to scaffold and probe human cognition? The goal of our research was to explore possible uses of such "probing bots". The domain we examined was stock investment, where many complex decisions need to be made. In our study, different types of investors interacted with a prototype, which we called "ProberBot", and subsequently took part in in-depth interviews. They generally found our ProberBot effective at supporting their thinking, but when this is desirable depends on the type of task and activity. We discuss these and other findings as well as design considerations for developing ProberBots for similar types of decision-making tasks.
May I Interrupt? Diverging Opinions on Proactive Smart Speakers
Although smart speakers support increasingly complex multi-turn dialogues, they still play a mostly reactive role, responding to users’ questions or requests. With rapid technological advances, they are becoming more capable of initiating conversations by themselves. However, before developing such proactive features, it is important to understand how people perceive different types of agent-initiated interactions. We conducted an online survey in which participants rated 8 scenarios around proactive smart speakers on different aspects. Despite some controversy around proactive systems, we found that participants’ ratings were surprisingly positive. However, they also commented on potential issues around user privacy and agency as well as undesirable interference with ongoing (social) activities. We discuss these findings and their implications for future avenues of research on proactive smart speakers.
It's Good to Talk: A Comparison of Using Voice Versus Screen-Based Interactions for Agent-Assisted Tasks
Voice assistants have become hugely popular in the home as domestic and entertainment devices. Recently, there has been a move towards developing them for work settings. For example, Alexa for Business and IBM Watson for Business were designed to improve productivity by assisting with various tasks, such as scheduling meetings and taking minutes. However, this kind of assistance is largely limited to planning and managing users' work. How might they be developed to do more by way of empowering people at work? Our research is concerned with achieving this by developing an agent with the role of a facilitator that assists users during an ongoing task. Specifically, we were interested in whether the modality in which the agent interacts with users makes a difference: how does a voice versus screen-based agent interaction affect user behavior? We hypothesized that voice would be more immediate and emotive, resulting in more fluid conversations and interactions. Here, we describe a user study that compared the benefits of voice versus screen-based interactions with a system incorporating an agent, involving pairs of participants doing an exploratory data analysis task that required them to make sense of a series of data visualizations. The findings from the study show marked differences between the two conditions, with voice resulting in more turn-taking in discussions, more questions asked, more interactions with the system, and a tendency towards more immediate, faster-paced discussions following agent prompts. We discuss possible reasons why talking and being prompted by a voice assistant may be preferable and more effective at mediating human-human conversations, and we translate some of the key insights of this research into design implications.
Understanding Circumstances for Desirable Proactive Behaviour of Voice Assistants: The Proactivity Dilemma
The next major evolutionary stage for voice assistants will be their capability to initiate interactions by themselves. However, to design proactive interactions, it is crucial to understand whether and when this behaviour is considered useful and how desirable it is perceived to be for different social contexts or ongoing activities. To investigate people's perspectives on proactivity and appropriate circumstances for it, we designed a set of storyboards depicting a variety of proactive actions in everyday situations and social settings and presented them to 15 participants in interactive interviews. Our findings suggest that, although many participants see benefits in agent proactivity, such as for urgent or critical issues, there are concerns about interference with social activities in multi-party settings, potential loss of agency, and intrusiveness. We discuss the implications of our findings for designing voice assistants with desirable proactive features.
From C-3PO to HAL: Opening The Discourse About The Dark Side of Multi-Modal Social Agents
The increasing prevalence of communicative agents raises questions about human-agent communication and the impact of such interaction on people's behavior in society and on human-human communication. This workshop aims to address three of those questions: (i) How can we identify malicious design strategies, known as dark patterns, in social agents? (ii) What is the necessity for, and what are the effects of, present and future design features, across different modalities and social contexts, in social agents? (iii) How can we incorporate the findings of the first two questions into the design of social agents? This workshop seeks to conjoin ongoing discourses of the CUI and wider HCI communities, including recent trends focusing on ethical designs. Out of the collaborative discussion, the workshop will produce a document distilling possible research lines and topics, encouraging future collaborations.
Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies
Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to (1) quantify the extent to which relationship quality is predictable and (2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner’s ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exerts its influence on relationship quality via a person’s own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.
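To illustrate the kind of analysis described (a minimal sketch with synthetic data and invented predictor names, not the study's actual pipeline or datasets), a Random Forest can both fit an outcome from self-report predictors and rank those predictors by importance:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for self-report predictors (hypothetical names).
n = 500
X = rng.normal(size=(n, 3))  # columns: commitment, appreciation, conflict

# Simulated "relationship quality" driven mostly by the first predictor.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Feature importances (summing to 1) approximate each predictor's
# contribution to the model's predictions.
for name, imp in zip(["commitment", "appreciation", "conflict"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

In this toy setup the predictor with the strongest simulated effect receives the largest importance score, mirroring how the project ranked constructs such as perceived-partner commitment at the top.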
Do Make me Think! How CUIs Can Support Cognitive Processes
Traditionally, a key concern of HCI has been to design interfaces that do not make the user think. While this is, and will continue to be, desirable for most systems, there are also situations in which a system that prompts and questions the user may be more appropriate. In educational systems, for instance, tasks are often intentionally made more challenging to enable "deeper" thinking and more thorough learning. Although conversational interfaces are still relatively limited in their capabilities, they seem very promising for contexts where questioning is needed, such as learning, analytics, or sensemaking, as well as safety-critical systems. Overly simple interactions, such as when the user can just tap or drag and drop, may not be beneficial in these contexts or may even be risky. In this position paper, we discuss previous work as well as opportunities where questioning users through conversation can be beneficial, based on insights from our own research.
Number processing after stroke: anatomoclinical correlations in oral and written codes.
Calculation and number-processing abilities were studied in 49 patients with chronic single vascular brain lesions by means of a standardized multitask assessment battery (EC301), as well as through other tasks testing functions thought to be implicated in calculation, such as language, visuo-perceptive abilities, verbal and spatial working memory, planning, and attention. The results show that (1) lesions involving parietal areas, particularly left parietal areas, are prone to alter calculation processing. A more detailed analysis showed that patients with lesions involving left parietal areas were impaired in both digital (i.e., comprehension and production of numbers written in Arabic code) and oral (i.e., comprehension and production of numbers heard or expressed orally) processing, while lesions involving right parietal areas led to an impairment in digital processing only. However, linguistically related alphanumerical processing (i.e., comprehension and production of numbers written orthographically) was not influenced by parietal lesions. (2) Semantic representations (knowledge of the magnitude related to a given number) as well as rote arithmetical knowledge are also impaired following damage to parietal, and particularly left parietal, areas, suggesting that these areas are also implicated in magnitude comparisons and in the retrieval of arithmetical facts. (3) Performance in calculation is highly correlated with language. (4) Moreover, we found a highly significant correlation between performance in oral calculation and verbal working memory, and between written-digit calculation and visuospatial working memory. Performance in visuo-perceptive abilities, planning, and attention was less consistently correlated with calculation. These results stress the close correlation, but relative independence, between calculation and language, as well as a dissociated sensitivity of oral and digital processing to brain lesions.