
    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    No full text
    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. A synthetic rendering driven by these parameters is then turned, through careful adversarial training, into a videorealistic animation. Our problem is challenging because the human visual system is sensitive to the smallest facial irregularities that could occur in the final result, and this sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage, in a supervised manner and without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and movements, handles people of different ethnicities, and operates in real time.
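
    As a rough illustration of the pipeline described above (not the authors' code), the sketch below projects an egocentric frame to a low-dimensional expression-parameter vector and renders a front-view frame from it; in EgoFace the renderer would be trained adversarially against real video, which is omitted here. All layer sizes and the 64-dimensional parameter space are assumptions.

```python
# Minimal sketch (PyTorch), assuming a 64-D expression space and toy layer
# sizes: an encoder projects an egocentric frame to expression parameters,
# and a generator renders a front-view frame from them. The adversarial
# training loop that EgoFace relies on is not shown.
import torch
import torch.nn as nn

EXPR_DIM = 64  # assumed dimensionality of the expression parameter space

class ExpressionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, EXPR_DIM),
        )

    def forward(self, frame):       # frame: (B, 3, H, W) egocentric image
        return self.net(frame)      # -> (B, EXPR_DIM) expression parameters

class FrontViewGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(EXPR_DIM, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, params):      # params: (B, EXPR_DIM)
        x = self.fc(params).view(-1, 128, 8, 8)
        return self.deconv(x)       # -> (B, 3, 32, 32) front-view frame

encoder, generator = ExpressionEncoder(), FrontViewGenerator()
frame = torch.randn(1, 3, 64, 64)       # stand-in egocentric frame
print(generator(encoder(frame)).shape)  # torch.Size([1, 3, 32, 32])
```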

    CGAMES'2009

    Get PDF

    How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality

    Get PDF
    Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of the associated equipment, have led scientists to consider VR as a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular, virtual body ownership, where a feeling of ownership over a virtual body is elicited in the participant, has become a useful tool in the study of body representation, the area of cognitive neuroscience and psychology concerned with how the brain represents the body. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system, and the integration of physiological and brain electrical activity recordings.
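
    One recurring integration detail in such a lab is time-aligning VR events with the physiological and brain-activity recordings. The sketch below is a minimal illustration (not the authors' setup) that stamps both streams against a single shared clock so they can be aligned offline; the stream names and sample values are placeholders.

```python
# Minimal sketch, assuming nothing about the authors' software: stamp VR
# events and physiological samples against one shared monotonic clock so
# the streams can be aligned offline. Names and values are placeholders.
import csv
import time

class SharedClock:
    """Single monotonic clock shared by every recording component."""
    def __init__(self):
        self._t0 = time.monotonic()

    def now(self):
        return time.monotonic() - self._t0

clock = SharedClock()

with open("session_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t_seconds", "stream", "value"])
    # A VR event (e.g. a threat to the virtual hand) and a physio sample:
    writer.writerow([clock.now(), "vr_event", "threat_to_virtual_hand"])
    writer.writerow([clock.now(), "ecg", 0.42])  # placeholder sample
```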

    GUI3DXBot: An Interactive Software Tool for a Tour-Guide Mobile Robot

    Get PDF
    Mobile robots are beginning to appear in public places, and to perform their tasks properly they must interact with humans. This paper presents the development of GUI3DXBot, a software tool for a tour-guide mobile robot, focusing on the software modules needed to guide users through an office building. GUI3DXBot is a client-server application: the server side runs on the robot, and the client side runs on a 10-inch Android tablet. The server side performs the perception, localization-mapping, and path-planning tasks. The client side implements the human-robot interface, which lets users request or cancel a tour-guide service, shows the robot's location on the map, interacts with users, and allows tele-operating the robot in case of emergency. The contributions of this paper are twofold: it proposes a software-module design for guiding users in an office building, and the whole robot system was fully integrated and tested. GUI3DXBot was validated through software integration tests and field tests. The field tests ran over a two-week period, and a user survey was conducted. The survey results show that users found GUI3DXBot friendly and intuitive, goal selection very easy, and the interactive messages very easy to understand; 90% of users found the robot icon on the map useful, users found drawing the planned path on the map useful, 90% found the local-global map view useful, and the guidance experience was rated very satisfactory (70%) or satisfactory (30%).
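
    To make the client-server split concrete, here is a minimal sketch (not the GUI3DXBot code) of a JSON-over-TCP request/reply of the kind the robot-side server could expose to the tablet client; the command names, message fields, and localhost demo are assumptions.

```python
# Minimal sketch, not the GUI3DXBot code: a JSON-over-TCP request/reply
# between a robot-side server and a tablet-side client. Command names,
# fields, and the localhost demo are assumptions.
import json
import socket
import threading

def handle(msg):
    """Robot side: act on a tablet request and report the new status."""
    if msg.get("cmd") == "request_tour":
        return {"ok": True, "goal": msg.get("goal"), "status": "guiding"}
    if msg.get("cmd") == "cancel_tour":
        return {"ok": True, "status": "idle"}
    return {"ok": False, "error": "unknown command"}

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # ephemeral port for the demo
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    with conn:
        reply = handle(json.loads(conn.recv(4096).decode()))
        conn.sendall(json.dumps(reply).encode())

threading.Thread(target=serve_once, daemon=True).start()

# Tablet side: request a tour to a named goal on the building map.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(json.dumps({"cmd": "request_tour", "goal": "office_12"}).encode())
print(json.loads(cli.recv(4096).decode()))  # {'ok': True, 'goal': ...}
cli.close()
```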

    User Training with Error Augmentation for Electromyogram-based Gesture Classification

    Full text link
    We designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wrist-band configuration. sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real time. After an initial model calibration, participants were presented with one of three types of feedback during a human-learning stage: veridical feedback, in which predicted probabilities from the gesture classification algorithm were displayed without alteration; modified feedback, in which we applied a hidden augmentation of error to these probabilities; and no feedback. User performance was then evaluated in a series of minigames, in which subjects were required to use eight gestures to manipulate their game avatar to complete a task. Experimental results indicated that, relative to baseline, the modified feedback condition led to significantly improved accuracy and improved gesture class separation. These findings suggest that real-time feedback in a gamified user interface with manipulated feedback may enable intuitive, rapid, and accurate task acquisition for sEMG-based gesture recognition applications.
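
    The abstract does not specify the exact form of the hidden error augmentation, but one plausible reading is a manipulation of the displayed class probabilities before they reach the user. The sketch below tempers the classifier's output so near-misses appear worse than they are; the temperature value and the specific transform are assumptions.

```python
# Minimal sketch of one plausible error augmentation (the paper's exact
# manipulation is not given in the abstract): temper the classifier's
# probabilities before display so near-misses appear worse than they are.
# The temperature value is an assumption; the true prediction is unchanged.
import numpy as np

def augment_error(probs, temperature=2.0):
    """Flatten the displayed distribution while keeping it normalized."""
    logits = np.log(np.clip(probs, 1e-9, 1.0))
    scaled = np.exp(logits / temperature)   # temperature > 1 flattens peaks
    return scaled / scaled.sum()

veridical = np.array([0.70, 0.15, 0.10, 0.05])  # classifier output
displayed = augment_error(veridical)            # what the user sees
print(displayed.round(3))                       # [0.474 0.22  0.179 0.127]
```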

    Flexible Virtual Reality System for Neurorehabilitation and Quality of Life Improvement

    Full text link
    As life expectancy increases, the incidence of many neurological disorders is also growing. Improving the physical functions affected by a neurological disorder requires rehabilitation procedures, and they must be performed regularly. Unfortunately, neurorehabilitation procedures have disadvantages in terms of cost, accessibility, and the limited availability of therapists. This paper presents Immersive Neurorehabilitation Exercises Using Virtual Reality (INREX-VR), our innovative immersive neurorehabilitation system using virtual reality. The system is based on a thorough research methodology and is able to capture real-time user movements and evaluate joint mobility for both upper and lower limbs, record training sessions, and save electromyography data. The first-person perspective increases immersion, and the joint range of motion is calculated with the help of both the HTC Vive system and inverse kinematics principles applied to skeleton rigs. Tutorial exercises, recorded with real-life physicians, are demonstrated by a virtual therapist, and sessions can be monitored and configured through tele-medicine. Complex movements are practiced in gamified settings, encouraging self-improvement and competition. Finally, we propose a training plan and preliminary tests, which show promising results in terms of accuracy and user feedback. As future developments, we plan to improve the system's accuracy and investigate a wireless alternative based on neural networks.
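
    As a worked example of the joint-mobility measure mentioned above, the sketch below (an illustration, not the INREX-VR code) computes an elbow angle from three tracked 3-D joint positions; the coordinates are made up.

```python
# Minimal sketch, not the INREX-VR code: the angle at a joint from three
# tracked 3-D positions, the kind of range-of-motion measure derived from
# headset/tracker poses. The coordinates below are made up.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder, elbow, wrist = [0.0, 1.4, 0.0], [0.25, 1.15, 0.0], [0.45, 1.35, 0.05]
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```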

    Bacteria Hunt: A multimodal, multiparadigm BCI game

    Get PDF
    Brain-Computer Interfaces (BCIs) allow users to control applications with brain activity. Among their possible applications for non-disabled people, games are promising candidates. BCIs can enrich gameplay with the mental and affective state information they carry. During the eNTERFACE’09 workshop we developed the Bacteria Hunt game, which can be played with keyboard and BCI, using steady-state visually evoked potentials (SSVEP) and relative alpha power. We conducted experiments to investigate what effect positive versus negative neurofeedback would have on subjects’ relaxation states and how well the two BCI paradigms can be used together. We observed no significant difference in mean alpha band power, and thus relaxation, or in user experience between the games applying positive and negative feedback. We also found that alpha power before SSVEP stimulation was significantly higher than alpha power during SSVEP stimulation, indicating that there is some interference between the two BCI paradigms.
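
    For concreteness, relative alpha power is typically the 8-12 Hz band power divided by broadband power. The sketch below (an illustration, not the eNTERFACE’09 code) computes it for one EEG channel with Welch's method; the sampling rate and band edges are assumptions.

```python
# Minimal sketch, not the eNTERFACE'09 code: relative alpha power for one
# EEG channel, i.e. 8-12 Hz band power over 1-40 Hz broadband power, via
# Welch's PSD estimate. Sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import welch

def relative_alpha(eeg, fs=256, alpha=(8.0, 12.0), broad=(1.0, 40.0)):
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    band = psd[(freqs >= alpha[0]) & (freqs <= alpha[1])].sum()
    total = psd[(freqs >= broad[0]) & (freqs <= broad[1])].sum()
    return band / total

rng = np.random.default_rng(0)
fake_eeg = rng.standard_normal(256 * 10)   # 10 s of synthetic "EEG"
print(f"relative alpha: {relative_alpha(fake_eeg):.3f}")
```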