113 research outputs found

    Unsupervised Training for 3D Morphable Model Regression

    We present a method for training a regression network from image pixels to 3D morphable model coordinates using only unlabeled photographs. The training loss is based on features from a facial recognition network, computed on-the-fly by rendering the predicted faces with a differentiable renderer. To make training from features feasible and avoid network fooling effects, we introduce three objectives: a batch distribution loss that encourages the output distribution to match the distribution of the morphable model, a loopback loss that ensures the network can correctly reinterpret its own output, and a multi-view identity loss that compares the features of the predicted 3D face and the input photograph from multiple viewing angles. We train a regression network using these objectives, a set of unlabeled photographs, and the morphable model itself, and demonstrate state-of-the-art results. Comment: CVPR 2018 version with supplemental material (http://openaccess.thecvf.com/content_cvpr_2018/html/Genova_Unsupervised_Training_for_CVPR_2018_paper.html)
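
    As a rough illustration of how the three objectives could be combined, the following is a minimal sketch in PyTorch, not the authors' released code. The names regressor, feat_net, and render_views are hypothetical stand-ins for the regression network, the pretrained facial-recognition feature extractor, and the differentiable renderer described in the abstract.

        import torch
        import torch.nn.functional as F

        def batch_distribution_loss(pred_coords):
            # Push the batch of predicted morphable-model coordinates toward the
            # model's prior (assumed here: zero mean, unit variance per coefficient).
            mean_term = pred_coords.mean(dim=0).pow(2).mean()
            var_term = (pred_coords.var(dim=0, unbiased=False) - 1.0).pow(2).mean()
            return mean_term + var_term

        def loopback_loss(regressor, rendered_front, pred_coords):
            # The network should recover its own coordinates from a rendering of them.
            reestimated = regressor(rendered_front)
            return F.mse_loss(reestimated, pred_coords.detach())

        def multiview_identity_loss(feat_net, photos, rendered_views):
            # Compare recognition features of the input photos against renderings of
            # the predicted 3D faces from several viewpoints.
            photo_feat = feat_net(photos)
            losses = [1.0 - F.cosine_similarity(photo_feat, feat_net(v), dim=-1).mean()
                      for v in rendered_views]
            return torch.stack(losses).mean()

        def total_loss(regressor, feat_net, render_views, photos, w=(1.0, 1.0, 1.0)):
            coords = regressor(photos)            # image -> 3DMM coordinates
            views = render_views(coords)          # list of differentiable renderings
            return (w[0] * batch_distribution_loss(coords)
                    + w[1] * loopback_loss(regressor, views[0], coords)
                    + w[2] * multiview_identity_loss(feat_net, photos, views))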

    Authoring virtual crowds: a survey

    Recent advancements in crowd simulation unravel a wide range of functionalities for virtual agents, delivering highly-realistic, natural virtual crowds. Such systems are of particular importance to a variety of applications in fields such as: entertainment (e.g., movies, computer games); architectural and urban planning; and simulations for sports and training. However, providing their capabilities to untrained users necessitates the development of authoring frameworks. Authoring virtual crowds is a complex and multi-level task, varying from assuming control and assisting users to realise their creative intents, to delivering intuitive and easy-to-use interfaces, facilitating such control. In this paper, we present a categorisation of the authorable crowd simulation components, ranging from high-level behaviours and path-planning to local movements, as well as animation and visualisation. We provide a review of the most relevant methods in each area, emphasising the amount and nature of influence that the users have over the final result. Moreover, we discuss the currently available authoring tools (e.g., graphical user interfaces, drag-and-drop), identifying the trends of early and recent work. Finally, we suggest promising directions for future research that mainly stem from the rise of learning-based methods, and the need for a unified authoring framework. This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska Curie grant agreement No 860768 (CLIPE project). This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 739578 and the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy. Peer Reviewed. Postprint (author's final draft)
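
    To make the categorisation concrete, the sketch below models an authoring pipeline as four independently replaceable stages (behaviour, path planning, local movement, and animation/visualisation), mirroring the levels surveyed in the paper. The class and function signatures are illustrative assumptions, not an interface proposed by the authors.

        from dataclasses import dataclass, field
        from typing import Callable, List, Tuple

        Vec2 = Tuple[float, float]

        @dataclass
        class Agent:
            position: Vec2
            goal: Vec2
            path: List[Vec2] = field(default_factory=list)

        @dataclass
        class CrowdAuthoringPipeline:
            # Each stage is a user-replaceable component, so an authoring tool can
            # expose control at whichever level the user cares about.
            behaviour: Callable[[Agent], Vec2]                 # picks/updates an agent's goal
            path_planner: Callable[[Vec2, Vec2], List[Vec2]]   # global route to that goal
            local_model: Callable[[Agent, List[Agent]], Vec2]  # collision-avoiding velocity
            animator: Callable[[Agent, Vec2], None]            # skeletal animation / rendering

            def step(self, agents: List[Agent]) -> None:
                for agent in agents:
                    agent.goal = self.behaviour(agent)
                    agent.path = self.path_planner(agent.position, agent.goal)
                    velocity = self.local_model(agent, agents)
                    self.animator(agent, velocity)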

    Real time physics-based augmented fitting room using time-of-flight cameras

    Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references, leaves 63-72. This thesis proposes a framework for a real-time physically-based augmented cloth fitting environment. The required 3D meshes for the human avatar and apparels are modeled with specific constraints. The models are then animated in real-time using input from a user tracked by a depth sensor. A set of motion filters are introduced in order to improve the quality of the simulation. Physical effects such as inertia, external forces, and collision are imposed on the apparel meshes. The avatar and the apparels can be customized according to the user. The system runs in real-time on a high-end consumer PC with realistic rendering results. Gültepe, Umut. M.S.
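
    A minimal sketch of the kind of motion filtering described above: smoothing noisy joint positions streamed from the time-of-flight camera before they drive the avatar and the cloth simulation. An exponential moving-average filter is assumed here purely for illustration; it is not taken from the thesis.

        import numpy as np

        class JointSmoother:
            def __init__(self, num_joints: int, alpha: float = 0.3):
                # alpha in (0, 1]: higher values track the sensor more closely,
                # lower values suppress more jitter at the cost of added latency.
                self.alpha = alpha
                self.state = np.zeros((num_joints, 3))
                self.initialized = False

            def update(self, raw_joints: np.ndarray) -> np.ndarray:
                # raw_joints: (num_joints, 3) positions for one tracked skeleton frame.
                if not self.initialized:
                    self.state[:] = raw_joints
                    self.initialized = True
                else:
                    self.state = self.alpha * raw_joints + (1.0 - self.alpha) * self.state
                return self.state

        # Usage: run every depth-sensor frame through the filter before skinning the avatar.
        smoother = JointSmoother(num_joints=20)
        frame = np.random.rand(20, 3)   # placeholder for one tracked skeleton frame
        smoothed = smoother.update(frame)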