1,114 research outputs found

    Evaluation of affective state estimations using an on-line reporting device during human-robot interactions


    Defining, measuring, and modeling passenger's in-vehicle experience and acceptance of automated vehicles

    Automated vehicle acceptance (AVA) has been measured mostly subjectively, by questionnaires and interviews, with a main focus on drivers inside automated vehicles (AVs). To ensure that AVs are widely accepted by the public, the acceptance of both drivers and passengers is key. The in-vehicle experience of passengers will determine the extent to which AVs are accepted by passengers. A comprehensive understanding of potential assessment methods for measuring the passenger experience in AVs is needed to improve that in-vehicle experience and thereby acceptance. The present work provides an overview of assessment methods that have been used to measure a driver's behavior, and cognitive and emotional states, during (automated) driving. The results of the review show that these assessment methods can be classified by type of data-collection method (e.g., questionnaires, interviews, direct input devices, sensors), object of measurement (i.e., perception, behavior, state), time of measurement, and degree of objectivity of the data collected. A conceptual model synthesizes the results of the literature review, formulating relationships between the factors constituting the in-vehicle experience and AVA. It is theorized that the in-vehicle experience influences the intention to use, with intention to use serving as a predictor of actual use. The model also formulates relationships between actual use and well-being. A combined approach using both subjective and objective assessment methods is needed to provide more accurate estimates of AVA and to advance the uptake and use of AVs. Comment: 22 pages, 1 figure

    A Physiological Computing System to Improve Human-Robot Collaboration by Using Human Comfort Index

    Fluent human-robot collaboration requires a robot teammate to understand, learn, and adapt to the human's psycho-physiological state. Such collaborations require a physiological computing system that monitors human biological signals during human-robot collaboration (HRC) to quantitatively estimate a human's level of comfort, which we have termed in this research the comfortability index (CI) and uncomfortability index (UnCI). We proposed a human comfort index estimation system (CIES) that uses biological signals and subjective metrics. Subjective metrics (surprise, anxiety, boredom, calmness, and comfortability) and physiological signals were collected during a human-robot collaboration experiment that varied the robot's behavior. The emotion circumplex model is adapted to calculate the CI from the participant's subjective ratings as well as physiological data. This thesis developed a physiological computing system that estimates human comfort levels from physiological signals by using the circumplex model approach. Data were collected from multiple experiments, machine learning models were trained, and their performance was evaluated. A subject-independent model was then tested to determine the robot's behavior based on human comfort level. The results from multiple experiments indicate that the proposed CIES model improves human comfort by providing feedback to the robot. In conclusion, physiological signals can be used for personalized robots, with the potential to improve safety for humans and increase the fluency of collaboration.
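The circumplex model places affective states on a valence-arousal plane, so a comfort score can be read off as proximity to a "calm and positive" region. The abstract does not give the exact formula, so the mapping below is a minimal sketch under the assumption that comfort peaks at high valence and neutral arousal, with inputs normalised to [-1, 1]:

```python
import math

def comfort_index(valence: float, arousal: float) -> float:
    """Map a point on the valence-arousal circumplex to a comfort score.

    Hypothetical mapping (not the thesis's exact CI): comfort is highest
    near the calm/positive anchor (valence=1, arousal=0) and falls off
    with distance from it. Inputs in [-1, 1]; output in [0, 1].
    """
    # Distance from the "calm and comfortable" anchor.
    d = math.hypot(valence - 1.0, arousal)
    # Largest possible distance within the [-1, 1] x [-1, 1] square.
    d_max = math.hypot(2.0, 1.0)
    return 1.0 - d / d_max

# A calm, positive state scores high; a tense, negative state scores low.
print(comfort_index(0.8, 0.1))   # close to 1
print(comfort_index(-0.7, 0.9))  # close to 0
```

A real system would fit this mapping to the collected subjective ratings rather than fixing the anchor by hand.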

    Confirmation Report: Modelling Interlocutor Confusion in Situated Human Robot Interaction

    Human-Robot Interaction (HRI) is an important but challenging field focused on improving the interaction between humans and robots so as to make the interaction more intelligent and effective. However, building natural conversational HRI is an interdisciplinary challenge for scholars, engineers, and designers. It is generally assumed that the pinnacle of human-robot interaction will be fluid, naturalistic conversational interaction that in important ways mimics how humans interact with each other. This is challenging on a number of levels, and in particular there are considerable difficulties when it comes to naturally monitoring and responding to the user's mental state. Among mental states, one that has received little attention to date is confusion. Confusion is a non-trivial mental state which can be seen as having at least two substates, associated with either positive or negative emotions. In the former, when people are productively confused, they have a passion to solve their current difficulties. Meanwhile, people in unproductive confusion may lose their engagement and motivation to overcome those difficulties, which in turn may even lead them to drop the current conversation. While there has been some research on confusion monitoring and detection, it has been limited, with most work focused on evaluating confusion states in online learning tasks. The central hypothesis of this research is that the monitoring and detection of confusion states in users is essential to fluid task-centric HRI, and that it should be possible to detect such confusion and adjust policies to mitigate it. In this report, I expand on this hypothesis and set out several research questions.
    I also provide a comprehensive literature review before outlining work done to date towards my research hypothesis and setting out plans for future experimental work.
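The productive/unproductive distinction above naturally maps to different mitigation policies. The report does not specify its policies, so the following is a hypothetical sketch of how a detected confusion state might drive the robot's dialogue strategy (state names and policy labels are illustrative):

```python
from enum import Enum

class ConfusionState(Enum):
    NONE = "none"
    PRODUCTIVE = "productive"      # engaged, actively trying to resolve difficulty
    UNPRODUCTIVE = "unproductive"  # disengaging, at risk of abandoning the task

def choose_policy(state: ConfusionState) -> str:
    """Hypothetical mitigation policies keyed on the detected state."""
    return {
        ConfusionState.NONE: "continue_task",
        ConfusionState.PRODUCTIVE: "offer_hint",              # light support, keep momentum
        ConfusionState.UNPRODUCTIVE: "simplify_and_reengage", # rephrase, slow down
    }[state]

print(choose_policy(ConfusionState.UNPRODUCTIVE))  # simplify_and_reengage
```

The key design point is that productive confusion is left largely intact (it drives learning), while unproductive confusion triggers an active intervention.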

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavioral analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to parents who need it, and how to support and motivate practitioners after training. Evaluation of the parents' fidelity of implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process, and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for providing clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos. The relationship between the parent and the clinician is important: the clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures.
    Facilitating this relationship alongside automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can help address the uncertainty in the classification models by providing additional labeled samples. This allows the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes. Doctoral Dissertation, Computer Science, 201

    Attention and Social Cognition in Virtual Reality:The effect of engagement mode and character eye-gaze

    Technical developments in virtual humans are manifest in modern character design. Specifically, eye gaze offers a significant aspect of such design. There is also a need to consider the contribution of participant control of engagement. In the current study, we manipulated participants’ engagement with an interactive virtual reality narrative called Coffee without Words. Participants sat over coffee opposite a character in a virtual café, where they waited for their bus to be repaired. We manipulated character eye-contact with the participant. For half the participants in each condition, the character made no eye-contact for the duration of the story. For the other half, the character responded to participant eye-gaze by making and holding eye contact in return. To explore how participant engagement interacted with this manipulation, half the participants in each condition were instructed to appraise their experience as an artefact (i.e., drawing attention to technical features), while the other half were introduced to the fictional character, the narrative, and the setting as though they were real. This study allowed us to explore the contributions of character features (interactivity through eye-gaze) and cognition (attention/engagement) to the participants’ perception of realism, feelings of presence, judgements of time duration, and the extent to which they engaged with the character and represented their mental states (Theory of Mind). Importantly, it does so using a highly controlled yet ecologically valid virtual experience.

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    The development of a human-robot interface for industrial collaborative system

    Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, there are a number of manufacturing applications involving complex tasks and inconstant components which prohibit the use of fully automated solutions in the foreseeable future. A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot can perform simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such a system will operate as “intelligent assistants”. In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective means of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components.
    The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI will be developed using a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
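A gesture command system of the kind described typically sits behind a recogniser as a thin mapping from recognised gestures to robot actions. The thesis's actual gesture set is not listed here, so the gesture names and commands below are illustrative assumptions, sketching only the dispatch layer:

```python
# Minimal sketch of a gesture-command layer for a collaborative robot.
# Gesture names and robot actions are illustrative, not from the thesis.

GESTURE_COMMANDS = {
    "open_palm": "pause_motion",    # operator requests a safe stop
    "thumbs_up": "resume_task",     # operator confirms it is safe to continue
    "point":     "move_to_target",  # operator indicates a work location
}

def dispatch(gesture: str) -> str:
    """Translate a recognised gesture into a robot command; unknown
    gestures fall back to holding position, a conservative default."""
    return GESTURE_COMMANDS.get(gesture, "hold_position")

print(dispatch("open_palm"))  # pause_motion
print(dispatch("wave"))       # hold_position (unrecognised gesture)
```

Defaulting unrecognised gestures to a hold rather than to motion is the safety-relevant design choice: misclassification can then delay the robot but never drive it.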

    Chapter From the Lab to the Real World: Affect Recognition Using Multiple Cues and Modalities

    The interdisciplinary concept of the dissipative soliton is unfolded in connection with ultrafast fibre lasers. The different mode-locking techniques as well as experimental realizations of dissipative soliton fibre lasers are surveyed briefly, with an emphasis on their energy scalability. Basic topics of dissipative soliton theory are elucidated in connection with the concepts of energy scalability and stability. It is shown that the parametric space of the dissipative soliton has reduced dimension and a comparatively simple structure, which simplifies the analysis and optimization of ultrafast fibre lasers. The main destabilization scenarios are described, and the limits of energy scalability are connected with the impact of optical turbulence and stimulated Raman scattering. The fast and slow dynamics of vector dissipative solitons are exposed.