
    Enhanced dynamic reflectometry for relightable free-viewpoint video

    Free-viewpoint video of human actors allows photo-realistic rendering of real-world people under novel viewing conditions. Dynamic reflectometry extends the concept of free-viewpoint video and additionally allows rendering under novel lighting conditions. In this work, we present an enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a sparse set of input video streams. We augment our initial method for model-based relightable free-viewpoint video in several ways. Firstly, a single-skin mesh is introduced to give the model a continuous appearance. Moreover, an algorithm is presented that detects and compensates for lateral shifting of textiles in order to improve temporal texture registration. Finally, a structured resampling approach is introduced that enables reliable estimation of spatially varying surface reflectance despite a static recording setup. These new algorithmic ingredients, together with the relightable 3D video framework, enable us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware.
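
    The last contribution above, estimating spatially varying surface reflectance from multi-view video samples, can be illustrated with a much-simplified sketch: fitting a Lambertian-plus-Blinn-Phong model at one surface point from radiance samples observed under known light and view directions. This is an illustrative stand-in, not the paper's algorithm; the reflectance model, function names, and sample format are all assumptions.

    # Illustrative sketch only (not the paper's method): per-point
    # reflectance fitting from multiple observations via least squares.
    import numpy as np
    from scipy.optimize import least_squares

    def blinn_phong_radiance(params, n, l, v):
        # Predicted radiance for one observation: Lambertian diffuse term
        # plus a Blinn-Phong specular lobe. n, l, v are unit vectors.
        kd, ks, shininess = params
        h = (l + v) / np.linalg.norm(l + v)  # half vector of light and view
        diffuse = kd * max(np.dot(n, l), 0.0)
        specular = ks * max(np.dot(n, h), 0.0) ** shininess
        return diffuse + specular

    def fit_reflectance(samples):
        # samples: list of (normal, light_dir, view_dir, observed_radiance)
        # tuples gathered for one surface point across frames and cameras.
        def residuals(params):
            return [blinn_phong_radiance(params, n, l, v) - r
                    for n, l, v, r in samples]
        fit = least_squares(residuals, x0=[0.5, 0.5, 10.0],
                            bounds=([0.0, 0.0, 1.0], [1.0, 1.0, 200.0]))
        return fit.x  # estimated (kd, ks, shininess) for this point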

    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, much as talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much closer to a face-to-face meeting than what conventional teleconferencing systems offer.

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training of the synthesis from this parameter space, a videorealistic animation is produced. Our problem is challenging, as the human visual system is sensitive to the smallest face irregularities that could occur in the final results; this sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage in a supervised manner, without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetric expressions. It works under varying illumination, backgrounds and movements, handles people of different ethnicities, and can operate in real time.
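
    As a rough illustration of the first step described above, projecting an input frame into a low-dimensional space of expression parameters, here is a minimal convolutional encoder sketch. This is not the EgoFace architecture; the layer sizes, the 64-parameter latent dimension, and all names are assumptions for illustration.

    # Minimal sketch (assumed architecture, not EgoFace): a small CNN
    # that maps an egocentric RGB frame to expression parameters.
    import torch
    import torch.nn as nn

    class ExpressionEncoder(nn.Module):
        def __init__(self, num_params=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global average pooling to (B, 128, 1, 1)
            )
            self.head = nn.Linear(128, num_params)  # latent expression parameters

        def forward(self, frame):  # frame: (B, 3, H, W)
            z = self.features(frame).flatten(1)
            return self.head(z)

    # Usage: project a 256x256 frame into the assumed 64-D expression space.
    encoder = ExpressionEncoder()
    params = encoder(torch.randn(1, 3, 256, 256))  # shape (1, 64)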

    News Video Production at Wartapakwan (Pembuatan Video Berita di Wartapakwan)

    Abstract: Information is developing rapidly, and newspapers have begun publishing not only in print but also online via the internet. The public, however, also wants variety in how it consumes news. The application of multimedia technology such as video-based information is therefore needed by Wartapakwan, a new newspaper website, to produce TV-style news that attracts people to follow the news on its site. The research question of this study is: how can news videos be made to deliver more engaging information using multimedia technology? The benefit of producing news videos in this study is to give the general public engaging news coverage in video form, accessible at www.wartapakwan.com. The research method consisted of a literature review, observation, interviews, analysis, design, shooting and capturing, editing, rendering, testing, and implementation. Keywords: TV news production, multimedia.

    A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments

    We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The specific corpus scenario focuses on an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies under teacher guidance in an online virtual ballet studio. As the data corpus is focused on this scenario, it consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. In the corpus, each of the several dancers performs a number of fixed choreographies, each of which is graded according to a number of specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus also includes distinctive events for data stream synchronisation. Although the data corpus is tailored specifically to an online dance class application scenario, the data is free to download and use for any research and development purpose.
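
    The event-based synchronisation mentioned above can be sketched simply: once a shared distinctive event (e.g., a clap) is located in two unsynchronised streams, one stream's timestamps are shifted so the event coincides in both. The offset-only clock model, function names, and the example timings below are illustrative assumptions, not details from the corpus documentation.

    # Hedged sketch: aligning two unsynchronised sensor streams using a
    # shared distinctive event visible in both recordings.
    import numpy as np

    def align_streams(timestamps_a, timestamps_b, event_time_a, event_time_b):
        # Shift stream B's timestamps so the shared event coincides with
        # its occurrence in stream A (assumes equal clock rates).
        offset = event_time_a - event_time_b
        return timestamps_a, timestamps_b + offset

    # Usage: inertial samples at 100 Hz and depth frames at 30 Hz, with a
    # clap detected at 2.40 s in stream A and 5.15 s in stream B.
    t_imu = np.arange(0.0, 10.0, 0.01)
    t_depth = np.arange(0.0, 10.0, 1.0 / 30.0)
    t_imu_aligned, t_depth_aligned = align_streams(t_imu, t_depth, 2.40, 5.15)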