7,435 research outputs found

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    Face performance capture and reenactment techniques typically rely on multiple cameras and sensors positioned at a distance from the face or mounted on heavy wearable devices, which limits their use in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters, and through careful adversarial training of the parameter-space synthetic rendering, a videorealistic animation is produced. The problem is challenging because the human visual system is sensitive to the smallest facial irregularities that could occur in the final results, and this sensitivity is even stronger for video. Our solution is trained in a pre-processing stage in a supervised manner without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and motion, handles people of different ethnicities, and can operate in real time.
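
    To make the described pipeline concrete, the following minimal sketch (not the authors' code) shows an encoder that regresses low-dimensional expression parameters from an egocentric frame and a renderer that decodes them into a front-view image; the module names, layer sizes, and toy resolution are assumptions for illustration only. In the adversarial training the abstract refers to, such a rendered frame would be scored by a discriminator against real footage.

    # Illustrative sketch, not the EgoFace implementation: egocentric frame ->
    # low-dimensional expression code -> rendered front-view frame.
    import torch
    import torch.nn as nn

    class ExpressionEncoder(nn.Module):
        """Maps an egocentric RGB frame to a low-dimensional expression code."""
        def __init__(self, code_dim=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.to_code = nn.Linear(64, code_dim)

        def forward(self, frame):                   # frame: (B, 3, H, W)
            f = self.features(frame).flatten(1)     # (B, 64)
            return self.to_code(f)                  # (B, code_dim)

    class NeuralRenderer(nn.Module):
        """Decodes an expression code into a (toy-resolution) front-view image."""
        def __init__(self, code_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(code_dim, 8 * 8 * 128), nn.ReLU(),
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, code):
            return self.net(code)                   # (B, 3, 32, 32)

    encoder, renderer = ExpressionEncoder(), NeuralRenderer()
    rendered = renderer(encoder(torch.randn(1, 3, 64, 64)))  # fed to a discriminator during training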

    Conceptual design study for an advanced cab and visual system, volume 2

    The performance, design, construction, and testing requirements are defined for developing an advanced cab and visual system. The rotorcraft system integration simulator is composed of the advanced cab and visual system and the rotorcraft system motion generator, and is part of an existing simulation facility. User applications for the simulator include rotorcraft design development, product improvement, threat assessment, and accident investigation.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Virtual Try-On With Generative Adversarial Networks: A Taxonomical Survey

    This chapter elaborates on the use of generative adversarial networks (GANs) for virtual try-on applications and presents the first comprehensive survey on the topic. Virtual try-on is a practical application of GANs and pixel translation that improves on the virtual try-on techniques that preceded these discoveries. The survey details the importance and history of virtual try-on systems; shows how GANs, pixel translation, and perceptual losses have influenced the field; and summarizes the latest research on building virtual try-on systems. The authors also outline future research directions for making virtual try-on systems more usable, faster, and more effective. By walking through the steps of virtual try-on from start to finish, the chapter aims to expose readers to key concepts shared by many GAN applications and to give them a solid foundation for pursuing further topics in GANs.
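
    Since the chapter highlights perceptual losses alongside GANs and pixel translation, the sketch below shows a VGG-feature perceptual loss of the kind commonly paired with an adversarial loss in try-on generators; the chosen layers, the L1 distance, and the weighting in the usage comment are assumptions for the example, not details taken from the surveyed works.

    # Illustrative sketch of a VGG-feature perceptual loss as commonly used in
    # pixel-translation / virtual try-on GANs. Inputs are assumed to be
    # 3-channel images normalized with ImageNet statistics.
    import torch
    import torch.nn as nn
    from torchvision import models

    class PerceptualLoss(nn.Module):
        def __init__(self, layer_ids=(3, 8, 15)):   # early/mid VGG16 ReLU outputs (assumed choice)
            super().__init__()
            vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
            for p in vgg.parameters():
                p.requires_grad_(False)              # fixed feature extractor
            self.vgg, self.layer_ids = vgg, set(layer_ids)

        def forward(self, generated, target):
            loss, x, y = 0.0, generated, target
            for i, layer in enumerate(self.vgg):
                x, y = layer(x), layer(y)
                if i in self.layer_ids:              # compare feature maps, not raw pixels
                    loss = loss + nn.functional.l1_loss(x, y)
                if i >= max(self.layer_ids):
                    break
            return loss

    # Typical usage in training (weights are illustrative):
    #   total_loss = adversarial_loss + 10.0 * PerceptualLoss()(fake_tryon, real_photo)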