
    Effective Natural Language Interfaces for Data Visualization Tools

    How many Covid cases and deaths are there in my hometown? How much money was invested into renewable energy projects across states in the last 5 years? How large was the biggest investment in solar energy projects in the previous year? These questions and others are of interest to users and can often be answered by data visualization tools (e.g., COVID-19 dashboards) provided by governmental organizations or other institutions. However, users in organizations or private life who have limited expertise with data visualization tools (hereafter referred to as end users) are also interested in these topics, yet they do not necessarily know how to use these tools effectively to answer such questions. This challenge is highlighted by previous research providing evidence that while business analysts and other experts can use data visualization tools effectively, end users with limited expertise are still impeded in their interactions. One approach to tackling this problem is natural language interfaces (NLIs), which provide end users with a more intuitive way of interacting with data visualization tools. End users would then be able to interact with the tool both through graphical user interface (GUI) elements and by simply typing or speaking natural language (NL) input. While NLIs for data visualization tools are regarded as a promising approach to improving this interaction, two design challenges remain. First, existing NLIs for data visualization tools still target users who are familiar with the technology, such as business analysts. Consequently, they lack the unique design that end users require, one that addresses end users' specific characteristics and enables their effective use of data visualization tools. Second, developers of NLIs for data visualization tools cannot foresee all NL inputs and tasks that end users will want to perform. Consequently, errors still occur in current NLIs for data visualization tools, and end users therefore need to be enabled to continuously improve and personalize the NLI themselves by addressing these errors. However, only limited work focuses on enabling end users to teach NLIs for data visualization tools how to correctly respond to new NL inputs. This thesis addresses these design challenges and provides insights into the related research questions. Furthermore, it contributes prescriptive knowledge on how to design effective NLIs for data visualization tools. Specifically, it provides insights into how data visualization tools can be extended through NLIs to improve their effective use by end users, and how end users can be enabled to effectively teach NLIs to respond to new NL inputs. Finally, this thesis provides high-level guidance that developers and providers of data visualization tools can use as a blueprint for developing data visualization tools with NLIs for end users, and it outlines future research opportunities for supporting end users in effectively using data visualization tools.
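
    To make the kind of mapping such an NLI performs concrete, the sketch below translates a free-form question into a minimal chart specification using keyword rules. It is a hypothetical illustration only: the rule set, field names, and spec format are assumptions, not the design developed in the thesis.

```python
# Hypothetical sketch: mapping a natural-language question to a chart
# specification, as an NLI for a data visualization tool might do.
# Rules, fields, and spec format are illustrative assumptions.

def parse_nl_query(query: str) -> dict:
    """Map an NL question to a minimal chart spec via keyword rules."""
    q = query.lower()
    spec = {"mark": "bar", "aggregate": "sum", "field": None}  # assumed defaults

    # Choose the aggregation from question words.
    if "how many" in q or "count" in q:
        spec["aggregate"] = "count"
    elif "biggest" in q or "largest" in q or "how large" in q:
        spec["aggregate"] = "max"

    # Prefer a line chart when the question has a temporal framing.
    if ("last" in q and "years" in q) or "over time" in q:
        spec["mark"] = "line"

    # Pick out the first known data field mentioned in the question.
    known_fields = ["cases", "deaths", "investment", "energy"]  # assumed schema
    spec["field"] = next((f for f in known_fields if f in q), None)
    return spec


print(parse_nl_query("How many Covid cases are there in my hometown?"))
# -> {'mark': 'bar', 'aggregate': 'count', 'field': 'cases'}
```

    A rule set like this inevitably fails on unforeseen phrasings, which is exactly the error-handling and end-user-teaching problem the thesis addresses.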

    Presence and rehabilitation: toward second-generation virtual reality applications in neuropsychology

    Virtual Reality (VR) offers a blend of attractive attributes for rehabilitation. The most exploited is its ability to create a 3D simulation of reality that can be explored by patients under the supervision of a therapist. In fact, VR can be defined as an advanced communication interface based on interactive 3D visualization, able to collect and integrate different inputs and data sets in a single real-like experience. However, "treatment is not just fixing what is broken; it is nurturing what is best" (Seligman & Csikszentmihalyi). For rehabilitators, this statement supports the growing interest in the influence of positive psychological states on objective health care outcomes. This paper introduces a bio-cultural theory of presence linking the state of optimal experience defined as "flow" to a virtual reality experience. This suggests the possibility of using VR for a new breed of rehabilitative applications focused on a strategy defined as transformation of flow. In this view, VR can be used to trigger a broad empowerment process within the flow experience induced by a high sense of presence. The link between its experiential and simulative capabilities may transform VR into the ultimate rehabilitative device. Nevertheless, further research is required to explore in more depth the link between cognitive processes, motor activities, presence and flow.

    Measuring readiness-to-hand through differences in attention to the task vs. attention to the tool

    New interaction techniques, like multi-touch, tangible interaction, and mid-air gestures, often promise to be more intuitive and natural; however, there is little work on how to measure these constructs. One way is to leverage the phenomenon of tool embodiment: when a tool becomes an extension of one’s body, attention shifts to the task at hand, rather than the tool itself. In this work, we constructed a framework to measure tool embodiment by incorporating philosophical and psychological concepts. We applied this framework to design and conduct a study that uses attention to measure readiness-to-hand with both a physical tool and a virtual tool. We introduce a novel task where participants use a tool to rotate an object while simultaneously responding to visual stimuli both near their hand and near the task. Our results showed that participants paid more attention to the task than to both kinds of tool. We also discuss how this evaluation framework can be used to investigate whether novel interaction techniques allow for this kind of tool embodiment.
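
    As an illustration of how the attention data from such a dual task might be reduced to a single measure, the following sketch compares mean reaction times to stimuli near the tool and near the task. The function, the difference score, and the placeholder values are assumptions for illustration, not the authors' actual analysis or data.

```python
# Hypothetical sketch: reducing reaction times to visual stimuli into a
# simple attention measure. Slower responses to stimuli near the tool
# than near the task would suggest attention sits on the task,
# consistent with readiness-to-hand. Values below are placeholders.

from statistics import mean

def attention_shift_ms(rt_near_task: list, rt_near_tool: list) -> float:
    """Difference in mean reaction time (ms): positive values mean
    participants responded faster near the task than near the tool."""
    return mean(rt_near_tool) - mean(rt_near_task)

# Illustrative placeholder values (not data from the study).
shift = attention_shift_ms(
    rt_near_task=[312, 290, 305, 298],
    rt_near_tool=[355, 341, 362, 349],
)
print(f"Attention shift toward task: {shift:.1f} ms")  # -> 50.5 ms
```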

    Voice-controlled in-vehicle infotainment system

    Speech is a form of human-to-human communication that can convey information in a context-rich way that is natural to humans. This naturalness enables us to speak while doing other things, such as driving a vehicle. With the advancement of computing technologies, more and more personal services are being introduced into the in-vehicle environment. A limiting factor for these advancements is the driver distraction they cause through increased cognitive load. This has led to developing in-vehicle devices and applications with a heightened focus on lessening distraction. Amazon Alexa is a natural language processing system that enables its users to receive information and operate smart devices with their voices. This Master’s thesis aims to demonstrate how Alexa could be utilized to operate in-vehicle infotainment (IVI) systems. The research was conducted using the design science research methodology. The feasibility of voice-based interaction was assessed by implementing the system as a demonstrable use case in collaboration with the APPSTACLE project. Prior research was gathered by conducting a literature review on voice-based interaction and its integration into the vehicular domain. The system was designed by applying existing theories together with the requirements of the application domain. The designed system utilized the Amazon Alexa ecosystem and AWS services to provide the vehicular environment with new functionalities. Access to cloud-based speech processing and decision-making makes it possible to design an extendable speech interface through which the driver can carry out secondary tasks by voice, such as requesting navigation information. The evaluation was done by comparing the system’s performance against the derived requirements. With the results of the evaluation process, the feasibility of the system could be assessed against the objectives of the study: the resulting artefact enables the user to operate the in-vehicle infotainment system while focusing on a separate task. The research showed that speech interfaces built with modern technology can improve the handling of secondary tasks while driving, and the resulting system was operable without introducing additional distractions to the driver. The resulting artefact can be integrated into similar systems and used as a base tool for future research on voice-controlled interfaces.
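
    To make the described architecture concrete, the following sketch shows a minimal AWS Lambda handler answering a custom Alexa intent with navigation information, using the standard Alexa request/response JSON format. The intent name and the spoken responses are assumptions for illustration, not the skill implemented in the thesis.

```python
# Hypothetical sketch of an AWS Lambda handler for a custom Alexa skill,
# illustrating how a voice request could reach IVI functionality. The
# intent name "GetNavigationInfoIntent" and the response texts are
# illustrative assumptions, not the thesis's actual skill.

def lambda_handler(event, context):
    """Entry point invoked by Alexa via AWS Lambda."""
    request = event["request"]

    if request["type"] == "LaunchRequest":
        return _speak("In-vehicle assistant ready. What do you need?")

    if request["type"] == "IntentRequest":
        intent_name = request["intent"]["name"]
        if intent_name == "GetNavigationInfoIntent":  # assumed custom intent
            # A real system would query the IVI navigation service here.
            return _speak("Your next turn is in two kilometers.")

    return _speak("Sorry, I did not understand that.")


def _speak(text):
    """Build a minimal Alexa JSON response with plain-text speech."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

    Keeping the speech processing in the cloud, as the thesis describes, is what makes such a handler extendable: new secondary tasks become new intents rather than new in-vehicle hardware.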

    Exploring the Affective Loop

    Research in psychology and neurology shows that both body and mind are involved when experiencing emotions (Damasio 1994, Davidson et al. 2003). People are also very physical when they try to communicate their emotions. Somewhere in between being consciously and unconsciously aware of it, we produce both verbal and physical signs to make other people understand how we feel. Simultaneously, this production of signs involves us in a stronger personal experience of the emotions we express. Emotions are also communicated in the digital world, but there is little focus on users' personal and physical experience of emotions in the available digital media. In order to explore whether and how we can expand existing media, we have designed, implemented and evaluated eMoto, a mobile service for sending affective messages to others. With eMoto, we explicitly aim to address both cognitive and physical experiences of human emotions. Through combining affective gestures for input with affective expressions that make use of colors, shapes and animations for the background of messages, the interaction "pulls" the user into an affective loop. In this thesis we define what we mean by affective loop and present a user-centered design approach expressed through four design principles inspired by previous work within Human Computer Interaction (HCI) but adjusted to our purposes: embodiment (Dourish 2001) as a means to address how people communicate emotions in real life; flow (Csikszentmihalyi 1990) to reach a state of involvement that goes further than the current context; ambiguity of the designed expressions (Gaver et al. 2003) to allow for open-ended interpretation by the end users instead of simplistic one-emotion, one-expression pairs; and natural but designed expressions to address people's natural couplings between cognitively and physically experienced emotions. We also present results from an end-user study of eMoto indicating that subjects got both physically and emotionally involved in the interaction and that the designed "openness" and ambiguity of the expressions was appreciated and understood by our subjects. Through the user study, we identified four potential design problems that have to be tackled in order to achieve an affective loop effect: the extent to which users feel in control of the interaction; harmony and coherence between cognitive and physical expressions; timing of expressions and feedback in a communicational setting; and effects of users' personality on their emotional expressions and experiences of the interaction.

    Usability of vision-based interfaces

    Vision-based interfaces can employ gestures to interact with an interactive system without touching it. Gestures are frequently modelled in laboratories, where usability testing should be carried out. However, these interfaces often present usability issues, and the great diversity of their uses and of the applications where they are deployed makes it difficult to decide which factors to take into account in a usability test. In this paper, we review the literature to compile and analyze the usability factors and metrics used for vision-based interfaces.

    Implementing Choices in Chatbot-initiated Service Interactions: Helpful or Harmful?

    Chatbots are increasingly equipped to provide choices for customers to click and choose from when communicating with them. This research investigates when and why implementing choices enhances or impairs customers’ service experience. Based on the concept of fluency, we posit that the implementation of choices is beneficial only after a conversational breakdown occurs, because the value of choice provision for facilitating fluency may not be recognized or realized in the absence of service breakdowns. We further propose that the implementation of choices is counterproductive when the choice set is perceived as incomplete, because it decreases the perception of fluency. We conducted several experiments to test these hypotheses. By illuminating when and why choice implementation may help or harm customers during a chatbot-initiated service interaction, we augment the current understanding of a chatbot’s role in customers’ service experience and provide insights for the deployment of choice-equipped chatbots in customer service.
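
    The conditional logic implied by these hypotheses can be sketched as follows: offer choices only once a breakdown is detected. The breakdown signal (a low-confidence intent classification), the threshold, and the choice set are illustrative assumptions, not the authors' experimental setup.

```python
# Hypothetical sketch: surface clickable choices only after a
# conversational breakdown, here approximated by a low-confidence
# intent classification. Threshold and choices are illustrative.

CONFIDENCE_THRESHOLD = 0.5  # assumed breakdown signal

def respond(intent: str, confidence: float) -> dict:
    """Reply normally while the conversation flows; provide a choice
    set only after a breakdown, when choices can restore fluency."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"text": f"Sure, I can help with {intent}.", "choices": []}

    # Breakdown detected: a comprehensive choice set lets the customer
    # recover by clicking instead of rephrasing.
    return {
        "text": "Sorry, I didn't quite get that. Did you mean one of these?",
        "choices": ["Track my order", "Request a refund", "Talk to an agent"],
    }

# A low-confidence turn triggers the choice set.
print(respond("order_status", 0.32))
```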

    A Study of Non-Linguistic Utterances for Social Human-Robot Interaction

    The world of animation has painted an inspiring image of what the robots of the future could be. Taking the robots R2D2 and C3PO from the Star Wars films as representative examples, these robots are portrayed as being more than just machines; rather, they are presented as intelligent and capable social peers, exhibiting many of the traits that people have too. These robots have the ability to interact with people, understand us, and even relate to us in very personal ways through a wide repertoire of social cues. As robotic technologies continue to make their way into society at large, there is a growing trend toward making social robots. The field of Human-Robot Interaction concerns itself with studying, developing and realising these socially capable machines, equipping them with a very rich variety of capabilities that allow them to interact with people in natural and intuitive ways, ranging from the use of natural language, body language and facial gestures, to more unique ways such as expression through colours and abstract sounds. This thesis studies the use of abstract, expressive sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them. This work presents a series of experiments aimed at understanding how NLUs can be utilised by a social robot in order to convey affective meaning to people both young and old, and what factors impact the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next, it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people's affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.

    Game-Play Breakdowns and Breakthroughs: Exploring the Relationship Between Action, Understanding, and Involvement

    Game developers have to ensure their games are appealing to, and playable by, a range of people. However, although there has been interest in the game-play experience, we know little about how learning relates to player involvement. This is despite challenge being an integral part of game-play, providing players with potential opportunities to learn. This article reports on a multiple case-study approach that explored how learning and involvement come together in practice. Participants consisted of a mix of gamers and casual players. Data included interviews, multiple observations of game-play, post-play cued interviews, and diary entries. A set of theoretical claims representing suggested relationships between involvement and learning was developed on the basis of previous literature; these claims were then assessed through a critical examination of the data set. The resulting theory is presented as 14 refined claims that relate to micro and macro involvement; breakdowns and breakthroughs in action, understanding, and involvement; progress; and agency, meaning, and compelling game-play. The claims emphasize how players experience learning via breakthroughs in understanding, where involvement is increased when the player feels responsible for progress. Supporting the relationship between learning and involvement is important for ensuring the success of commercial and educational games.