
    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    No full text
    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training of the parameter-space synthetic rendering, a videorealistic animation is produced. Our problem is challenging because the human visual system is sensitive to the smallest facial irregularities that could occur in the final results, and this sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage, in a supervised manner without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and movement, handles people of different ethnicities, and can operate in real time.
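    The pipeline described above (egocentric frame → low-dimensional expression parameters → front-view rendering) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's method: the linear encoder and renderer, the names, and the dimensions (16 expression parameters, 64×64 RGB frames) are all assumptions; EgoFace uses learned networks with adversarial training in place of these linear maps.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def encode_expression(frame, projection):
        """Project a flattened RGB frame into latent expression parameters."""
        return projection @ frame.reshape(-1)

    def render_front_view(params, basis, mean_face):
        """Linear stand-in for the adversarially trained neural renderer."""
        return mean_face + basis @ params

    # Illustrative sizes: 64x64 RGB egocentric frame, 16 expression parameters.
    n_pixels = 64 * 64 * 3
    frame = rng.random((64, 64, 3))
    projection = rng.standard_normal((16, n_pixels)) / np.sqrt(n_pixels)
    basis = rng.standard_normal((n_pixels, 16))
    mean_face = rng.random(n_pixels)

    params = encode_expression(frame, projection)         # low-dimensional latent code
    output = render_front_view(params, basis, mean_face)  # front-view image vector

    print(params.shape)  # (16,)
    print(output.shape)  # (12288,)
    ```

    In the actual system, both mappings would be neural networks, and the renderer's output would be judged by a discriminator during training so that the reenacted video looks realistic frame to frame.
    
    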

    Video-Based Character Animation

    Full text link

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    Get PDF
    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law

    Get PDF
    Virtual reality and augmented reality present exceedingly complex privacy issues because of the enhanced user experience and reality-based models. Unlike the issues presented by traditional gaming and social media, immersive technology poses inherent risks, which our legal understanding of biometrics and online harassment is simply not prepared to address. This Article offers five important contributions to this emerging space. It begins by introducing a new area of legal and policy inquiry raised by immersive technology called “biometric psychography.” Second, it explains how immersive technology works to a legal audience and defines concepts that are essential to understanding the risks that the technology poses. Third, it analyzes the gaps in privacy law to address biometric psychography and other emerging challenges raised by immersive technology that most regulators and consumers incorrectly assume will be governed by existing law. Fourth, this Article sources firsthand interviews from early innovators and leading thinkers to highlight harassment and user experience risks posed by immersive technology. Finally, this Article compiles insights from each of these discussions to propose a framework that integrates privacy and human rights into the development of future immersive tech applications. It applies that framework to three specific scenarios and demonstrates how it can help navigate challenges, both old and new.