193 research outputs found
CUI@CHI: Mapping Grand Challenges for the Conversational User Interface Community
The aim of this workshop is twofold. First, it aims to grow critical mass in Conversational User Interfaces (CUI) research by mapping the grand challenges in designing and researching these interactions. Second, this workshop is intended to further build the CUI community with these challenges in mind, whilst also growing the CUI research presence at CHI. In particular, the workshop will survey and map topics such as: interaction design for text- and voice-based CUIs; the interplay between engineering efforts such as Natural Language Processing (NLP) and the design of CUIs; practical CUI applications (e.g. human-robot interaction, public spaces, hands-free and wearable devices); and social, contextual, and cultural aspects of CUI design (e.g. ethics, privacy, trust, information exploration, persuasion, well-being, decision-making, and marginalized users). By drawing on the diverse interdisciplinary expertise that defines CHI, we propose this workshop as a platform on which to build a community that is better equipped to tackle an emerging field that is rapidly evolving yet under-studied, especially as commercial advances seem to outpace scholarly research in this space.
Who are CUIs Really For? Representation and Accessibility in the Conversational User Interface Literature
The theme for CUI 2023 is 'designing for inclusive conversation', but who are
CUIs really designed for? The field has its roots in computer science, which
has a long-acknowledged diversity problem. Inspired by studies mapping out the
diversity of the CHI and voice assistant literature, we set out to investigate
how these issues have (or have not) shaped the CUI literature. To do this we
reviewed the 46 full-length research papers that have been published at CUI
since its inception in 2019. After detailing the eight papers that engage with
accessibility, social interaction, and performance of gender, we show that 90%
of papers published at CUI with user studies recruit participants from Europe
and North America (or do not specify). To complement existing work in the
community towards diversity we discuss the factors that have contributed to the
current status quo, and offer some initial suggestions as to how we as a CUI
community can continue to improve. We hope that this will form the beginning of
a wider discussion at the conference.
Comment: To appear in the Proceedings of the 2023 ACM Conference on
Conversational User Interfaces (CUI '23).
A Review of Evaluation Techniques for Social Dialogue Systems
In contrast with goal-oriented dialogue, social dialogue has no clear measure
of task success. Consequently, evaluation of these systems is notoriously hard.
In this paper, we review current evaluation methods, focusing on automatic
metrics. We conclude that turn-based metrics often ignore the context and do
not account for the fact that several replies are valid, while end-of-dialogue
rewards are mainly hand-crafted. Both lack grounding in human perceptions.
Comment: 2 pages
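The weakness of turn-based metrics described above can be illustrated with a short sketch: a simple unigram-F1 overlap metric (an illustrative stand-in for common automatic metrics, not one the paper defines) gives a perfectly valid social reply a score of zero because it is compared against a single reference, while a reply that merely parrots the reference scores highly.

```python
# Minimal unigram-F1 turn-level metric: it compares a system reply
# against a single reference reply, ignoring the dialogue context and
# the fact that many other replies would also be valid.
def unigram_f1(reply: str, reference: str) -> float:
    reply_tokens = set(reply.lower().split())
    ref_tokens = set(reference.lower().split())
    if not reply_tokens or not ref_tokens:
        return 0.0
    overlap = len(reply_tokens & ref_tokens)
    if overlap == 0:
        return 0.0
    precision = overlap / len(reply_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

reference = "i love hiking in the mountains"

# A perfectly reasonable social reply that shares no words with the
# reference scores 0.0 ...
print(unigram_f1("sounds fun , do you hike often ?", reference))

# ... while an uninteresting echo of the reference scores near 1.0.
print(unigram_f1("i love hiking in the mountains too", reference))
```

This is exactly the failure mode the review points to: several replies are valid, but a single-reference overlap metric rewards only one of them.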
eMedication Meets eHealth with the Electronic Medication Management Assistant (eMMA).
Background: A patient's healthcare team often lacks a complete overview of the
prescribed and dispensed medication. This is due to an inconsistent information
flow between the different actors of the healthcare system. Often, only the
patients themselves know exactly which drugs they are actually taking.
Objectives: Our objective is to exploit the different eHealth technologies
available or planned in Switzerland to improve the flow of medication data
among the stakeholders and to support patients in managing their medication.
Methods: This work is embedded in the "Hospital of the Future Live" project,
involving 16 companies and 6 hospitals, which aims to develop IT solutions for
future optimized health care processes. A comprehensive set of requirements was
collected from the different actors and project partners. Further,
specifications of the available or planned eHealth infrastructure were reviewed
to integrate the relevant technologies into a coherent concept. Results: We
developed a concept that combines the medication list with an eHealth platform.
The resulting electronic medication management assistant (eMMA), designed for
the patient, provides the current medication plan at any time and supports the
patient by providing relevant information through a conversational user
interface. Conclusion: In Switzerland, a bridging technology is still needed to
combine the medication information from the electronic patient record with the
medication plan's associated QR code. The developed app is intended to provide
such a bridge and demonstrates the usefulness of the eMediplan. It enables
patients to keep all data regarding their medication on their personal mobile
phone and, if necessary, to show their current medication to a health
professional.
Keywords: eHealth, Electronic Prescription, Medication Safety, Medication
System, Conversational UI, mHealth
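The bridging idea in the eMMA concept, reading a medication plan from a QR code so the app can display it, can be sketched as follows. This is a hypothetical illustration only: the field names ("Medicaments", "Id", "Dosage") and the plain-JSON payload are placeholders, not the actual eMediplan/CHMED schema, which uses its own encoding.

```python
import json

# Hypothetical sketch: decode a medication-plan payload, such as might
# be carried in a medication plan's QR code, into a list the app could
# show to the patient or a health professional. Field names are
# illustrative placeholders, not the real eMediplan format.
def parse_medication_plan(payload: str) -> list[dict]:
    plan = json.loads(payload)
    meds = []
    for entry in plan.get("Medicaments", []):
        meds.append({
            "name": entry.get("Id", "unknown"),
            "dosage": entry.get("Dosage", ""),
        })
    return meds

sample = json.dumps({
    "Medicaments": [
        {"Id": "Aspirin Cardio 100", "Dosage": "1-0-0-0"},
        {"Id": "Metformin 500", "Dosage": "1-0-1-0"},
    ]
})

for med in parse_medication_plan(sample):
    print(med["name"], med["dosage"])
```

In the concept described above, such a parsed list would then be merged with data from the electronic patient record and surfaced through the conversational user interface.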
A Mobile System for Music Anamnesis and Receptive Music Therapy in the Personal Home.
Receptive music therapy is the active listening to music that is specifically selected to produce a certain effect on a person, such as pain reduction, mental opening, or confrontation. This active, guided listening could serve as a supporting ritual for patients at home and could extend traditional therapy.
However, patients are often unable to select the music pieces that might be helpful for them in their current situation. We propose a self-learning decision support system that allows a patient to answer questions on music anamnesis, is ready for inclusion in an electronic health record, and enables a therapist to compile a therapeutic music program for the patient at home. Beyond this, the system also suggests appropriate music and a listening duration based on the patient's reported current mental state. In this paper, a concept for such a mobile system for receptive music therapy is proposed.
Keywords: Music Therapy; Decision Support Techniques; Mobile Application
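The suggestion step described above, mapping a patient's reported mental state to a piece of music and a listening duration, can be sketched minimally. The mapping, categories, and durations here are illustrative placeholders, not clinical guidance; in the proposed system they would come from the therapist's compiled program and the patient's music anamnesis.

```python
# Hypothetical rule table: reported mental state -> (music category,
# listening duration in minutes). Values are illustrative only.
RULES = {
    "anxious":  ("calming", 20),
    "in pain":  ("distracting", 30),
    "low mood": ("uplifting", 15),
}

def suggest_music(mental_state: str) -> tuple[str, int]:
    """Return a (category, minutes) suggestion for the reported state,
    falling back to a neutral default when the state is unknown."""
    return RULES.get(mental_state.strip().lower(), ("neutral", 10))
```

For example, `suggest_music("Anxious")` returns `("calming", 20)`. A "self-learning" version, as envisioned in the paper, would adjust such a table from the patient's feedback rather than keep it fixed.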
Conceptual Model Interpreter for Large Language Models
Large Language Models (LLMs) recently demonstrated capabilities for
generating source code in common programming languages. Additionally,
commercial products such as ChatGPT 4 started to provide code interpreters,
allowing for the automatic execution of generated code fragments, instant
feedback, and the possibility to develop and refine in a conversational
fashion. With an exploratory research approach, this paper applies code
generation and interpretation to conceptual models. The concept and prototype
of a conceptual model interpreter is explored, capable of rendering visual
models generated in textual syntax by state-of-the-art LLMs such as Llama 2 and
ChatGPT 4. In particular, these LLMs can generate textual syntax for the
PlantUML and Graphviz modeling software that is automatically rendered within a
conversational user interface. The first result is an architecture describing
the components necessary to interact with interpreters and LLMs through APIs or
locally, providing support for many commercial and open source LLMs and
interpreters. Secondly, experimental results for models generated with ChatGPT
4 and Llama 2 are discussed in two cases covering UML and, on an instance
level, graphs created from custom data. The results indicate the possibility of
modeling iteratively in a conversational fashion.
Comment: ER Forum 2023, 42nd International Conference on Conceptual Modeling
(ER 2023), November 6-9, 2023, Lisbon, Portugal
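The second case above, graphs created from custom data, can be sketched as follows: textual Graphviz syntax of the kind the paper has LLMs generate is assembled and then handed to an interpreter for rendering inside the conversational interface. The helper name and sample data are illustrative, not from the paper's prototype.

```python
# Sketch: build Graphviz DOT text (the textual syntax the paper's
# LLMs emit) from custom edge data, ready to be passed to a dot
# interpreter for rendering.
def to_dot(edges: list[tuple[str, str]], name: str = "G") -> str:
    lines = [f"digraph {name} {{"]
    for src, dst in edges:
        lines.append(f'    "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

edges = [("Order", "Customer"), ("Order", "Product")]
dot_text = to_dot(edges)
print(dot_text)

# Rendering would then be delegated to the Graphviz interpreter, e.g.
# by piping dot_text to the "dot" command with an SVG output flag, and
# the resulting image displayed in the conversational user interface.
```

In the paper's setup this generation step is performed by the LLM itself; the architecture's job is to route such text to the right interpreter, locally or via an API, and return the rendered model to the chat.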
- …