4,373 research outputs found

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    Deploying the NASA Valkyrie Humanoid for IED Response: An Initial Approach and Evaluation Summary

    As part of a feasibility study, this paper shows the NASA Valkyrie humanoid robot performing an end-to-end improvised explosive device (IED) response task. To demonstrate and evaluate robot capabilities, sub-tasks highlight different locomotion, manipulation, and perception requirements: traversing uneven terrain, passing through a narrow passageway, opening a car door, retrieving a suspected IED, and securing the IED in a total containment vessel (TCV). For each sub-task, a description of the technical approach and the hidden challenges that were overcome during development is presented. The discussion of results, which explicitly includes existing limitations, is aimed at motivating continued research and development to enable practical deployment of humanoid robots for IED response. For instance, the data shows that operator pauses contribute 50% of the total completion time, which implies that further work is needed on user interfaces to increase task completion efficiency. Comment: 2019 IEEE-RAS International Conference on Humanoid Robots.
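
    The 50% figure refers to the share of total completion time spent in operator pauses. A minimal, purely illustrative way to compute such a share from a timestamped sub-task log is sketched below; the log format, phase names, and numbers are assumptions for illustration, not data or code from the Valkyrie study.

```python
# Illustrative only: compute the fraction of total task time attributable to
# operator pauses from a timestamped event log. Phases and times are hypothetical.

events = [
    # (phase, start_s, end_s)
    ("traverse_terrain", 0, 150),
    ("operator_pause", 150, 350),
    ("open_car_door", 350, 500),
    ("operator_pause", 500, 700),
    ("retrieve_ied", 700, 800),
]

total = events[-1][2] - events[0][1]
paused = sum(end - start for phase, start, end in events if phase == "operator_pause")
print(f"operator pauses: {paused / total:.0%} of total completion time")
```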

    Reactive Video: Adaptive Video Playback Based on User Motion for Supporting Physical Activity

    Videos are a convenient platform to begin, maintain, or improve a fitness program or physical activity. Traditional video systems allow users to manipulate videos through specific user interface actions such as button clicks or mouse drags, but have no model of what the user is doing and are unable to adapt in useful ways. We present adaptive video playback, which seamlessly synchronises video playback with the user's movements, building upon the principle of direct manipulation video navigation. We implement adaptive video playback in Reactive Video, a vision-based system which supports users learning or practising a physical skill. The use of pre-existing videos removes the need to create bespoke content or specially authored videos, and the system can provide real-time guidance and feedback to better support users when learning new movements. Adaptive video playback using a discrete Bayes filter and a particle filter is evaluated on a data set of participants performing tai chi and radio exercises. Results show that both approaches can accurately adapt to the user's movements; however, reversing playback can be problematic.
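
    To make the discrete-Bayes approach concrete, the sketch below is a minimal, hypothetical illustration (not the Reactive Video implementation): the video timeline is discretised into frames, a belief over the user's current frame is diffused to allow forward or backward progress, and it is then weighted by how well the observed pose matches each frame's reference pose. The pose descriptor, motion kernel, and similarity measure are assumptions.

```python
import numpy as np

# Minimal sketch of a discrete Bayes filter over video frames (illustrative only).
# belief[i] is the probability that the user is currently at frame i.

def predict(belief, motion_kernel):
    """Diffuse the belief to allow playback to advance or reverse slightly."""
    return np.convolve(belief, motion_kernel, mode="same")

def update(belief, likelihood):
    """Weight the belief by the per-frame observation likelihood and renormalise."""
    posterior = belief * likelihood
    s = posterior.sum()
    return posterior / s if s > 0 else np.full_like(belief, 1.0 / len(belief))

def pose_likelihood(observed_pose, reference_poses, sigma=1.0):
    """Gaussian likelihood from the distance between the observed pose vector
    and each frame's reference pose vector (hypothetical similarity measure)."""
    d = np.linalg.norm(reference_poses - observed_pose, axis=1)
    return np.exp(-0.5 * (d / sigma) ** 2)

# Toy example: 100-frame video, 3-D pose descriptor per frame.
rng = np.random.default_rng(0)
reference_poses = rng.normal(size=(100, 3))
belief = np.full(100, 1.0 / 100)                      # uniform prior
motion_kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # slight forward/backward drift

observed_pose = reference_poses[42] + rng.normal(scale=0.1, size=3)
belief = update(predict(belief, motion_kernel), pose_likelihood(observed_pose, reference_poses))
print("estimated frame:", int(np.argmax(belief)))     # playback holds near this frame
```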

    Multimodality for comprehensive communication in the classroom: Questions in guest lectures

    In recent years there have been many studies of lecture discourse (Pérez-Llantada & Ferguson, 2006; Csomay, 2007; Deroey & Taverniers, 2011). Lecturing is the most common speech event in university classrooms around the world. Bamford (2005) describes the lecturing style as conversational, stressing the interactive nature of the lecture, whose main goal is to establish contact with the students and to co-opt them into a discourse community. However, most studies published to date have focused exclusively on the language used by the lecturer, and little attention has been paid to the role of multimodality in this particular genre. In our research, we try to identify the non-verbal behaviour that can be of special relevance for comprehensive communication in the classroom, focusing on questions in two guest lectures delivered in English to a group of Spanish students. Results indicate that both lecturers use different verbal and non-verbal strategies to foster interaction, adapting to the characteristics of their audience. The final objective of this study is twofold: i) to use the results in our courses for training Spanish lecturers to teach in English; and ii) to use them in EAP undergraduate courses, as it has been observed that body language requires awareness raising in order to facilitate transfer from the mother tongue to another language.

    A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness

    People increasingly use videos on the Web as a source for learning. To support this way of learning, researchers and developers are continuously developing tools, proposing guidelines, analyzing data, and conducting experiments. However, it is still not clear what characteristics a video should have to be an effective learning medium. In this paper, we present a comprehensive review of 257 articles on video-based learning published between 2016 and 2021. One of the aims of the review is to identify the video characteristics that have been explored by previous work. Based on our analysis, we suggest a taxonomy which organizes video characteristics and contextual aspects into eight categories: (1) audio features, (2) visual features, (3) textual features, (4) instructor behavior, (5) learner activities, (6) interactive features (quizzes, etc.), (7) production style, and (8) instructional design. We also identify four representative research directions: (1) proposals of tools to support video-based learning, (2) studies with controlled experiments, (3) data analysis studies, and (4) proposals of design guidelines for learning videos. We find that the most explored characteristics are textual features, followed by visual features, learner activities, and interactive features. Transcript text, video frames, and images (figures and illustrations) are most frequently used by tools that support learning through videos. Learner activities are heavily explored through log files in data analysis studies, and interactive features have been frequently scrutinized in controlled experiments. We complement our review by contrasting research findings on the impact of video characteristics on learning effectiveness, reporting on the tasks and technologies used to develop tools that support learning, and summarizing trends in design guidelines for producing learning videos.
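
    The eight-category taxonomy is essentially a coding scheme for annotating reviewed articles. A minimal sketch of how such a scheme could be represented and queried is given below; the category names follow the abstract, while the Article fields, the example entry, and the helper function are hypothetical and not taken from the review.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Illustrative encoding of the review's eight video-characteristic categories and
# four research directions as an article-annotation scheme (hypothetical code).

class VideoCharacteristic(Enum):
    AUDIO_FEATURES = auto()
    VISUAL_FEATURES = auto()
    TEXTUAL_FEATURES = auto()
    INSTRUCTOR_BEHAVIOR = auto()
    LEARNER_ACTIVITIES = auto()
    INTERACTIVE_FEATURES = auto()   # quizzes, etc.
    PRODUCTION_STYLE = auto()
    INSTRUCTIONAL_DESIGN = auto()

class ResearchDirection(Enum):
    TOOL_PROPOSAL = auto()
    CONTROLLED_EXPERIMENT = auto()
    DATA_ANALYSIS = auto()
    DESIGN_GUIDELINES = auto()

@dataclass
class Article:
    title: str
    year: int
    directions: set[ResearchDirection] = field(default_factory=set)
    characteristics: set[VideoCharacteristic] = field(default_factory=set)

def count_characteristics(articles):
    """Tally how often each characteristic is explored across a coded corpus."""
    counts = {c: 0 for c in VideoCharacteristic}
    for a in articles:
        for c in a.characteristics:
            counts[c] += 1
    return counts

# Example annotation of a hypothetical article.
example = Article(
    title="Transcript-based navigation for lecture videos",
    year=2019,
    directions={ResearchDirection.TOOL_PROPOSAL},
    characteristics={VideoCharacteristic.TEXTUAL_FEATURES,
                     VideoCharacteristic.INTERACTIVE_FEATURES},
)
print(count_characteristics([example]))
```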

    Digital technologies for innovative mental health rehabilitation

    Schizophrenia is a chronic mental illness characterized by the loss of the notion of reality, failing to distinguish it from the imaginary. It affects the patient in life's major areas, such as work, interpersonal relationships, or self-care, and the usual treatment is performed with the help of antipsychotic medication, which targets primarily the hallucinations, delirium, etc. Other symptoms, such as decreased emotional expression or avolition, require a multidisciplinary approach, including psychopharmacology, cognitive training, and many forms of therapy. In this context, this paper addresses the use of digital technologies to design and develop innovative rehabilitation techniques, focusing particularly on mental health rehabilitation and contributing to the promotion of well-being and health from a holistic perspective. Serious games and virtual reality allow the creation of immersive environments that contribute to a more effective and lasting recovery, with improvements in terms of quality of life. The use of machine learning techniques will allow real-time analysis of the data collected during the execution of the rehabilitation procedures, as well as their dynamic and automatic adaptation according to the profile and performance of the patients, by increasing or reducing the exercises' difficulty. The system relies on the acquisition of biometric and physiological signals, such as voice, heart rate, and game performance, to estimate the stress level and thus adapt the difficulty of the experience to the skills of the patient. The system described in this paper is currently in development, in collaboration with a health unit, and is an engineering effort that combines hardware and software to develop a rehabilitation tool for patients with schizophrenia. A clinical trial is also planned to assess the effectiveness of the system on negative symptoms in patients with schizophrenia. This work is funded by the European Regional Development Fund (ERDF) through the Regional Operational Program North 2020, within the scope of Project GreenHealth - Digital strategies in biological assets to improve well-being and promote green health, Norte-01-0145-FEDER-000042.
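
    The adaptation loop described above (estimate a stress level from voice, heart rate, and game performance, then raise or lower exercise difficulty) can be sketched as follows. This is a minimal illustration under stated assumptions; the signal names, weighting, and thresholds are hypothetical and not the project's actual model.

```python
from dataclasses import dataclass

# Illustrative sketch of an adaptive-difficulty loop: estimate stress from
# biometric/performance signals, then adjust the exercise difficulty.
# Weights and thresholds are hypothetical.

@dataclass
class SessionSignals:
    heart_rate_bpm: float      # e.g. from a wearable sensor
    voice_arousal: float       # 0..1, e.g. from a prosody classifier
    game_performance: float    # 0..1, fraction of tasks completed correctly

def estimate_stress(s: SessionSignals) -> float:
    """Combine signals into a 0..1 stress estimate (hypothetical weighting)."""
    hr_norm = min(max((s.heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    return 0.4 * hr_norm + 0.4 * s.voice_arousal + 0.2 * (1.0 - s.game_performance)

def adapt_difficulty(current_level: int, stress: float,
                     low: float = 0.3, high: float = 0.7) -> int:
    """Step difficulty down when stress is high, up when the patient copes well."""
    if stress > high and current_level > 1:
        return current_level - 1
    if stress < low and current_level < 10:
        return current_level + 1
    return current_level

# Example: elevated heart rate and vocal arousal lower the difficulty one step.
signals = SessionSignals(heart_rate_bpm=110, voice_arousal=0.8, game_performance=0.5)
level = adapt_difficulty(current_level=5, stress=estimate_stress(signals))
print("next difficulty level:", level)
```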

    An assistive robot to support dressing: strategies for planning and error handling

    © 2016 IEEE. Assistive robots are emerging to address a social need arising from changing demographic trends such as an ageing population. The main emphasis is to offer independence to those in need and to fill a potential labour gap in response to the increasing demand for caregiving. This paper presents work undertaken as part of a dressing task using a compliant robotic arm on a mannequin. Several strategies for undertaking this task with minimal complexity and a mix of sensors are explored. A Vicon tracking system is used to determine the arm position of the mannequin for trajectory planning by means of waypoints. Methods of failure detection were explored through torque feedback and sensor tag data. A fixed vocabulary of recognised speech commands was implemented, allowing the user to successfully correct detected dressing errors. This work indicates that low-cost sensors and simple HRI strategies, without complex learning algorithms, can be used successfully in a robot-assisted dressing task.
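
    As a rough illustration of the error-handling strategy described (follow a waypoint trajectory, flag a failure from torque feedback, and let a small fixed vocabulary of speech commands drive recovery), the sketch below is hypothetical and not the paper's implementation; the thresholds, command set, and robot/sensor interfaces are assumptions.

```python
# Hypothetical sketch of a dressing-assistance control loop: follow waypoints,
# flag a garment snag when measured joint torque exceeds a threshold, and use a
# fixed speech-command vocabulary for recovery. Names and values are assumptions.

TORQUE_LIMIT_NM = 5.0
COMMANDS = {"stop", "continue", "retry", "move back"}   # fixed speech vocabulary

def detect_failure(joint_torques_nm):
    """A snag is assumed when any joint torque exceeds the limit."""
    return any(abs(t) > TORQUE_LIMIT_NM for t in joint_torques_nm)

def execute_dressing(waypoints, move_to, read_torques, listen):
    """Move through waypoints; on a detected failure, ask the user for a command.
    move_to/read_torques/listen stand in for the robot arm, torque sensing, and
    the speech recogniser."""
    i = 0
    while i < len(waypoints):
        move_to(waypoints[i])
        if detect_failure(read_torques()):
            cmd = listen()                     # expected to be one of COMMANDS
            if cmd == "stop":
                return False
            if cmd == "move back" and i > 0:
                move_to(waypoints[i - 1])      # back off, then re-attempt waypoint i
                continue
            if cmd == "retry":
                continue                       # re-attempt the same waypoint
            # "continue": accept the current state and carry on
        i += 1
    return True

# Example with stubbed hardware: torques stay below the limit, so it completes.
ok = execute_dressing(
    waypoints=[(0.1, 0.0, 0.4), (0.2, 0.1, 0.5), (0.3, 0.2, 0.6)],
    move_to=lambda wp: None,
    read_torques=lambda: [1.2, 0.8, 2.1],
    listen=lambda: "continue",
)
print("dressing completed:", ok)
```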