    Adapting a humanoid robot for use with children with profound and multiple disabilities

    Despite the many developments in information technology (IT) for people with disabilities, few interventions have been designed for people with profound and multiple disabilities, as there is little incentive for companies to design and manufacture technology purely for a group of consumers without much buying power. A possible solution is therefore to identify mainstream technology that, with adaptation, could serve the purposes required by those with profound and multiple disabilities. Because of its ability to engage the attention of young children with autism, the role of a humanoid robot was investigated. After viewing a demonstration, teachers of pupils with profound and multiple disabilities described actions they wished the robot to make in order to help nominated pupils achieve learning objectives. They proposed a much wider range of suggestions for using the robot than it could currently provide. The adaptations they required fell into two groups: either increasing the methods through which the robot could be controlled, or increasing the range of behaviours that the robot emitted. These were met in a variety of ways, but most would require a degree of programming expertise beyond that possessed by most schoolteachers.

    Working with troubles and failures in conversation between humans and robots

    In order to carry out human-robot collaborative tasks efficiently, robots have to be able to communicate with their human counterparts. In many applications, speech interfaces are deployed as a way to empower robots with the ability to communicate. Despite the progress made in speech recognition and (multi-modal) dialogue systems, such interfaces continue to be brittle in a number of ways, and the experience of their failure is commonplace amongst roboticists. Surprisingly, a rigorous and complete analysis of communicative failures is still missing, and the technical literature is skewed towards reporting the success and good performance of speech interfaces. In order to address this blind spot and investigate failures in conversations between humans and robots, an interdisciplinary effort is necessary. This workshop aims to raise awareness of this blind spot and to provide a platform for discussing communicative troubles and failures in human-robot interactions, and potentially related failures in non-robotic speech interfaces. We aim to bring together researchers studying communication in different fields, to start a scrupulous investigation into communicative failures, to begin working on a taxonomy of such failures, and to enable a preliminary discussion of possible mitigation strategies. This workshop is intended to be a venue where participants can freely discuss the failures they have encountered and learn from them positively and constructively.

    ALTCAI: Enabling the Use of Embodied Conversational Agents to Deliver Informal Health Advice during Wizard of Oz Studies

    We present ALTCAI, a Wizard of Oz embodied conversational agent that has been developed to explore the use of interactive agents as an effective and engaging tool for delivering health and well-being advice to expectant and nursing mothers in Nigeria. This paper briefly describes the motivation and context for its creation and ALTCAI's various components, and discusses its adaptability, its potential uses in other contexts, and future work on extending its functionality.

    Designing an adaptive embodied conversational agent for health literacy

    Access to healthcare advice is crucial to promote healthy societies. Many factors shape how access might be constrained, such as economic status, education or, as the COVID-19 pandemic has shown, the shift to remote consultations with health practitioners. Our work focuses on providing pre/post-natal advice to maternal women. A salient factor of our work concerns the design and deployment of embodied conversational agents (ECAs) which can sense the (health) literacy of users and adapt to scaffold user engagement in this setting. We present an account of a Wizard of Oz user study of 'ALTCAI', an ECA with three modes of interaction (i.e., adaptive speech and text, adaptive ECA, and non-adaptive ECA). We compare reported engagement with these modes from 44 maternal women who have differing levels of literacy. The study shows that a combination of embodiment and adaptivity scaffolds reported engagement, but matters of health literacy and language introduce nuanced considerations for the design of ECAs.

    Identifying interaction types and functionality for automated vehicle virtual assistants: An exploratory study using speech acts cluster analysis

    Onboard virtual assistants with the ability to converse with users are gaining favour in supporting effective human-machine interaction to meet safe standards of operation in automated vehicles (AVs). Previous studies have highlighted the need to communicate situation information to effectively support the transfer of control and responsibility of the driving task. This study explores the 'interaction types' used in this complex human-machine transaction by analysing how situation information is conveyed and reciprocated during a transfer-of-control scenario. Two human drivers alternated control in a bespoke, dual-controlled driving simulator, with the transfer of control relying entirely on verbal communication. Handover dialogues were coded using speech-act classifications, and a cluster analysis was conducted. Four interaction types were identified for virtual assistants (i.e., the agent handing over control): Supervisor, Information Desk, Interrogator and Converser; and four for drivers (i.e., the agent taking control): Coordinator, Perceiver, Inquirer and Silent Receiver. Each interaction type provides a framework of characteristics that can be used to define driver requirements and implemented in the design of future virtual assistants, supporting the driver in maintaining and rebuilding timely situation awareness whilst ensuring a positive user experience. The study also provides insight into the role of dialogue turns and takeover time, and offers recommendations for future virtual assistant designs in AVs.
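
    As an illustration of the kind of analysis the abstract describes, the sketch below clusters dialogues by the relative frequency of the speech-act types they contain. The speech-act labels, the counts, and the choice of Ward-linkage hierarchical clustering are assumptions made for the example; they are not the coding scheme, data, or clustering method reported in the study.

        # Illustrative sketch only: cluster handover dialogues by their speech-act
        # frequency profiles. Labels and counts below are hypothetical.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        # Each row is one dialogue; columns are counts of hypothetical speech-act
        # types: [inform, question, directive, acknowledge].
        dialogues = np.array([
            [9, 1, 2, 3], [8, 2, 1, 4],   # information-heavy handovers
            [2, 7, 1, 2], [1, 8, 2, 1],   # question-driven handovers
            [3, 2, 8, 1], [2, 1, 9, 2],   # directive-heavy handovers
            [1, 1, 1, 7], [2, 1, 1, 8],   # acknowledgement-heavy handovers
        ], dtype=float)

        # Normalise to proportions so dialogue length does not dominate distances.
        profiles = dialogues / dialogues.sum(axis=1, keepdims=True)

        # Ward-linkage hierarchical clustering, cut into four clusters to mirror
        # the four interaction types reported for each agent role.
        labels = fcluster(linkage(profiles, method="ward"), t=4, criterion="maxclust")
        print(labels)  # one cluster id per dialogue, e.g. [1 1 2 2 3 3 4 4]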

    Design and evaluation of virtual human mediated tasks for assessment of depression and anxiety

    Virtual human technologies are now being widely explored as therapy tools for mental health disorders, including depression and anxiety. These technologies leverage the ability of virtual agents to engage in naturalistic social interactions with a user to elicit behavioural expressions which are indicative of depression and anxiety. Research efforts have focused on optimising the human-like expressive capabilities of the virtual human, but less attention has been given to investigating the effect of virtual human mediation on the expressivity of the user. In addition, it is still not clear what an optimal task is, or which task characteristics are likely to sustain long-term user engagement. To this end, this paper describes the design and evaluation of virtual human-mediated tasks in a user study of 56 participants. Half the participants complete tasks guided by a virtual human, while the other half are guided by text on screen. Self-reported PHQ-9 scores, biosignals and participants' ratings of the tasks are collected. Findings show that virtual human mediation influences behavioural expressiveness and that this effect differs across depression severity levels. The findings further show that virtual human mediation improves users' disposition towards the tasks.
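
    The abstract notes that results differ across depression severity levels. PHQ-9 totals range from 0 to 27 and are conventionally binned at cut-points of 5, 10, 15 and 20; the sketch below shows that standard binning. The function name and example scores are illustrative, and the study's exact grouping is not specified in the abstract.

        # Illustrative sketch: bin self-reported PHQ-9 totals (0-27) into the
        # conventional severity bands often used when comparing groups.

        def phq9_severity(total: int) -> str:
            """Map a PHQ-9 total score to its conventional severity band."""
            if not 0 <= total <= 27:
                raise ValueError("PHQ-9 totals range from 0 to 27")
            if total <= 4:
                return "minimal"
            if total <= 9:
                return "mild"
            if total <= 14:
                return "moderate"
            if total <= 19:
                return "moderately severe"
            return "severe"

        # Hypothetical scores for a handful of participants.
        scores = [3, 7, 12, 18, 22]
        print([phq9_severity(s) for s in scores])
        # ['minimal', 'mild', 'moderate', 'moderately severe', 'severe']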

    Remote Operation of Robots via Mobile Devices to Help People with Intellectual Disabilities

    Despite rapid advances in technology, relatively few developments are focused on special education, that is, the education of people with profound and multiple disabilities. One of the main reasons is that there is little incentive for companies to design and manufacture technology for a target group of customers without major buying power, or one that makes up a relatively small proportion of the population. Therefore, the most reasonable option is to take the technological tools that are already available and adapt them to meet the requirements of those with profound and multiple disabilities. This paper presents the design and development of an application for mobile devices that enables easy and intuitive use of the humanoid robot NAO (manufactured by Aldebaran Robotics) by people with profound and multiple disabilities, as well as by educators in the area of special education. This also includes the adaptation and development of several behaviours and modules for this specific robot.
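
    The abstract does not describe the app's internals. As a minimal sketch, and assuming the classic NAOqi Python SDK (Python 2.7 era), the snippet below shows the kind of command a simplified mobile front end might relay to NAO. The robot address and behaviour name are placeholders, and the actual application may be architected quite differently.

        # Minimal sketch of relaying a command to NAO via the NAOqi Python SDK.
        # The IP address and behaviour name are placeholders, not the paper's setup.
        from naoqi import ALProxy

        ROBOT_IP = "192.168.1.10"   # placeholder address of the NAO robot
        PORT = 9559                 # default NAOqi port

        tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
        behaviours = ALProxy("ALBehaviorManager", ROBOT_IP, PORT)

        # Speak a greeting, then run one of the behaviours installed on the robot,
        # e.g. a wave gesture pre-built for a pupil's learning objective.
        tts.say("Hello!")
        if behaviours.isBehaviorInstalled("wave"):   # "wave" is a placeholder name
            behaviours.runBehavior("wave")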

    Augmented Reality to Reduce Cognitive Load in Operational Decision-Making

    Augmented reality (AR) technologies can overlay digital information onto the real world. This makes them well suited for decision support by providing contextually relevant information to decision-makers. However, processing large amounts of information simultaneously, particularly in time-pressured conditions, can result in poor decision-making due to excess cognitive load. This paper presents the results of an exploratory study investigating the effects of AR on cognitive load. A within-subjects experiment was conducted in which participants were asked to complete a variable-sized bin packing task with and without the assistance of an augmented reality decision support system (AR DSS). Semi-structured interviews were conducted to elicit perceptions of the ease of the task with and without the AR DSS. This was supplemented by collecting quantitative data to investigate whether any changes in the perceived ease of the task translated into changes in task performance. The qualitative data suggest that the presence of the AR DSS made the task feel easier to participants; however, there was only a statistically non-significant increase in mean task performance. Analysing the data at the individual level does not provide evidence that increased perceived ease translated into increased task performance.
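
    The bin packing task is not specified beyond its name. The sketch below shows a first-fit decreasing heuristic over bins of differing capacities, one simple way a decision support overlay could propose packings. The item sizes, bin capacities, and choice of heuristic are assumptions made for illustration, not a description of the study's AR DSS.

        # Illustrative sketch: first-fit decreasing for variable-sized bin packing.

        def first_fit_decreasing(items, bin_capacities):
            """Place items (largest first) into the first bin with enough space.

            Returns the per-bin item lists and any items that did not fit.
            """
            remaining = list(bin_capacities)
            bins = [[] for _ in bin_capacities]
            unplaced = []
            for item in sorted(items, reverse=True):
                for i, space in enumerate(remaining):
                    if item <= space:
                        bins[i].append(item)
                        remaining[i] -= item
                        break
                else:
                    unplaced.append(item)
            return bins, unplaced

        packing, leftover = first_fit_decreasing(items=[4, 8, 1, 4, 2, 1],
                                                 bin_capacities=[10, 6, 4])
        print(packing, leftover)  # [[8, 2], [4, 1, 1], [4]] []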