32 research outputs found

    EFEKTIVITAS RIGGING PADA ASET KARAKTER ANIMASI 3D (The Effectiveness of Rigging on 3D Animated Character Assets)

    Animated films and games currently attract audiences of all ages, from young children to adults, spanning console games, games of every genre, and PC or mobile games. The same holds for animated films, from series to feature-length productions. Animation and game assets are essential to game development, because without characters or an environment the storyline of a game is hard to follow. A game developer creates a main character so that the user or player can feel present inside the game and play it with an imagination that matches the creator's vision. One key element is the game character that performs the actions or missions played by the user, and the same applies to character assets in animation. It is rigging that makes such a character move fluidly. Rigging builds the main skeleton of the character, which is then bound to its clothing or skin in the skinning stage; this stage lets the character perform any movement and look realistic. Rigging is an important process because it makes animating the character or asset faster and saves time. Rigging matters not only for games but also for animation, because with rigging the principles of animation can be applied properly
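    The abstract does not name a specific tool or algorithm, but the rigging-then-skinning pipeline it describes is commonly realized with linear blend skinning, where each mesh vertex follows a weighted mix of bone transforms. Below is a minimal, hypothetical sketch of that idea in Python with NumPy; the two-bone arm, weights, and rotation are invented for illustration and are not taken from the paper.

```python
import numpy as np

def blend_skinning(rest_verts, weights, bone_transforms):
    """Linear blend skinning: each vertex follows a weighted mix of bone transforms.

    rest_verts:      (V, 3) vertex positions in the rest (bind) pose
    weights:         (V, B) skinning weights, each row sums to 1
    bone_transforms: (B, 4, 4) homogeneous transforms from rest pose to current pose
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])                   # (V, 4)
    # Blend the bone matrices per vertex, then apply the blended matrix.
    blended = np.einsum('vb,bij->vij', weights, bone_transforms)      # (V, 4, 4)
    posed = np.einsum('vij,vj->vi', blended, homo)                    # (V, 4)
    return posed[:, :3]

def rot_z(deg, pivot):
    """Rotation about the z-axis around a pivot point, as a 4x4 matrix."""
    t = np.radians(deg)
    R = np.eye(4)
    R[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T1, T2 = np.eye(4), np.eye(4)
    T1[:3, 3], T2[:3, 3] = -np.asarray(pivot), pivot
    return T2 @ R @ T1

# Toy example: a two-bone arm; vertices near the elbow share influence.
rest = np.array([[0.0, 0, 0], [0.9, 0, 0], [1.1, 0, 0], [2.0, 0, 0]])
w = np.array([[1.0, 0.0], [0.7, 0.3], [0.3, 0.7], [0.0, 1.0]])
bones = np.stack([np.eye(4), rot_z(45, [1.0, 0, 0])])  # upper arm fixed, forearm bent 45 degrees
print(blend_skinning(rest, w, bones))
```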

    AFFECT-PRESERVING VISUAL PRIVACY PROTECTION

    The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observation in various educational and medical settings. Videos collected for such applications are considered protected health information under privacy laws in many countries. Visual privacy protection techniques, such as blurring or object removal, can be used to mitigate privacy concerns, but they also obliterate important visual cues of affect and social behavior that are crucial for the target applications. In this dissertation, we propose to balance privacy protection and the utility of the data by preserving privacy-insensitive information, such as pose and expression, which is useful in many applications involving visual understanding. The Intellectual Merits of the dissertation include a novel framework for visual privacy protection by manipulating the facial image and body shape of individuals, which: (1) is able to conceal the identity of individuals; (2) provides a way to preserve the utility of the data, such as expression and pose information; and (3) balances the utility of the data against the capacity of the privacy protection. The Broader Impacts of the dissertation focus on the significance of privacy protection for visual data, and the inadequacy of current privacy-enhancing technologies in preserving affect and behavioral attributes of the visual content, which are highly useful for behavior observation in educational and medical settings. The work in this dissertation represents one of the first attempts to achieve both goals simultaneously
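    The abstract describes the framework only at a high level (manipulating facial image and body shape). As a toy illustration of the trade-off it discusses, the sketch below blurs detected face regions with OpenCV to strip identity-bearing texture and then draws coarse edge contours of the face back onto the blurred area, so some expression information survives. The input file name is hypothetical, and this is not the dissertation's method.

```python
import cv2

# Hypothetical input frame; any image with a frontal face will do.
frame = cv2.imread("frame.jpg")
assert frame is not None, "expected an input image at frame.jpg"

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, 1.1, 5)

out = frame.copy()
for (x, y, w, h) in faces:
    roi = frame[y:y + h, x:x + w]
    # Strong blur removes identity-bearing detail...
    anonymized = cv2.GaussianBlur(roi, (51, 51), 0)
    # ...then coarse contours (eye/mouth outlines) are drawn back on top,
    # so some affect cues remain visible after de-identification.
    edges = cv2.Canny(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), 60, 150)
    anonymized[edges > 0] = (0, 0, 0)
    out[y:y + h, x:x + w] = anonymized

cv2.imwrite("frame_protected.jpg", out)
```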

    Real-Time Face Feature Reshaping Without Cosmetic Surgery

    In the contemporary world, computer vision applications make use of 4G technology and high-definition (HD) video calling on mobile phones. People frequently use 4G video calling to communicate with friends and family. The technology is capable of projecting minute elements of the real world, such as the background, facial features, and behavior, among other things. We developed a video processing system that lets users alter the shape and look of facial features such as the brows, eyes, nose, lips, jaw, and chin
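    The abstract gives no implementation details, but feature reshaping of this kind is often done by locally warping the image around a chosen facial point. The sketch below is a generic, hypothetical pinch/bulge warp using OpenCV's remap; the file name, coordinates, and parameters are made up, and it is not the system described in the paper.

```python
import cv2
import numpy as np

def pinch_feature(img, center, radius, strength):
    """Locally reshape the region around `center` (e.g. a nose tip or jaw point)
    by radially remapping pixels: strength > 0 shrinks (pinch), strength < 0
    enlarges (bulge)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - center[0], ys - center[1]
    dist = np.sqrt(dx * dx + dy * dy)
    # Falloff is 1 at the center and fades to 0 at the edge of the radius.
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)
    scale = 1.0 + strength * falloff          # how far out to sample source pixels
    map_x = center[0] + dx * scale
    map_y = center[1] + dy * scale
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REFLECT)

# Hypothetical usage: slim a jaw point at (320, 400) within a 90-pixel radius.
frame = cv2.imread("face.jpg")
reshaped = pinch_feature(frame, center=(320, 400), radius=90, strength=0.25)
cv2.imwrite("face_reshaped.jpg", reshaped)
```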

    HIGH QUALITY HUMAN 3D BODY MODELING, TRACKING AND APPLICATION

    Geometric reconstruction of dynamic objects is a fundamental task in computer vision and graphics, and modeling the human body with high fidelity is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, which limits working space and portability. In this dissertation, a complete pipeline is designed, from geometrically modeling a detailed 3D human full body and capturing its shape dynamics over time with a flexible setup, to guiding clothes/person re-targeting with such data-driven models. The mechanical movement of the human body can be treated as articulated motion, which makes it easy to drive skin animation but difficult to invert, i.e., to recover parameters from images without manual intervention. We therefore present a novel parametric model, GMM-BlendSCAPE, which jointly takes a linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show increased accuracy of joint and skin surface estimation using our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans with a single-view commercial depth camera. Based on GMM-BlendSCAPE, we can then reconstruct multiple complete static models across large pose differences via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Last but not least, we demonstrate a novel virtual clothes try-on application based on our personalized model, utilizing both image and depth cues to synthesize and re-target clothes for single-view videos of different people
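    The abstract states that a Gaussian Mixture Model is used to infer shape and pose from incomplete observations, without giving details. As a rough, generic sketch of that inference pattern (not GMM-BlendSCAPE itself), the code below fits a GMM over toy parameter vectors with scikit-learn and fills in unobserved dimensions from each component's conditional Gaussian mean, weighted by that component's responsibility for the observed part; the data and dimension split are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for a database of body parameter vectors (e.g. shape and pose
# coefficients); in practice these would come from registered scans.
rng = np.random.default_rng(0)
params = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))  # correlated dimensions

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(params)

def infer_missing(x_obs, obs_idx, mis_idx):
    """Fill unobserved parameter dimensions given observed ones: mix each
    component's conditional Gaussian mean by its responsibility for the
    observed part of the vector."""
    log_resp, cond_means = [], []
    for k in range(gmm.n_components):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        S_oo = S[np.ix_(obs_idx, obs_idx)]
        S_mo = S[np.ix_(mis_idx, obs_idx)]
        diff = x_obs - mu[obs_idx]
        # Conditional mean of the missing block given the observed block.
        cond_means.append(mu[mis_idx] + S_mo @ np.linalg.solve(S_oo, diff))
        # Responsibility is proportional to weight_k * N(x_obs | mu_o, S_oo);
        # the constant term cancels in the normalization below.
        _, logdet = np.linalg.slogdet(S_oo)
        log_resp.append(np.log(gmm.weights_[k])
                        - 0.5 * (logdet + diff @ np.linalg.solve(S_oo, diff)))
    resp = np.exp(np.asarray(log_resp) - max(log_resp))
    resp /= resp.sum()
    filled = np.empty(len(obs_idx) + len(mis_idx))
    filled[obs_idx] = x_obs
    filled[mis_idx] = resp @ np.asarray(cond_means)
    return filled

# Hypothetical partial observation: dimensions 0-2 observed, 3-5 missing.
print(infer_missing(params[0, :3], obs_idx=[0, 1, 2], mis_idx=[3, 4, 5]))
```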

    The Rocketbox Library and the Utility of Freely Available Rigged Avatars

    As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, we discuss here the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, and sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars. We cover the current main alternatives for face and body animation as well as introduce upcoming capture methods. The second part presents the scientific evidence for the utility of rigged avatars for embodiment, as well as for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption

    Realtime reconstruction of an animating human body from a single depth camera

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity
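    The abstract reduces online reconstruction to finding the LBS transformations once the skinning weights are fixed, posed as a linear problem. The sketch below shows that reduction in its simplest unconstrained form (without the paper's temporal-coherence constraints), solving for per-bone affine transforms by linear least squares in NumPy; the toy skeleton, weights, and vertices are invented for illustration.

```python
import numpy as np

def solve_lbs_transforms(rest_verts, weights, observed_verts):
    """Recover per-bone 3x4 affine transforms that best reproduce the observed
    posed vertices under linear blend skinning with fixed weights, via one
    linear least-squares solve per output coordinate."""
    V, B = weights.shape
    homo = np.hstack([rest_verts, np.ones((V, 1))])                  # (V, 4)
    # Design matrix: block b of row v holds w_vb * [r_v, 1].
    X = (weights[:, :, None] * homo[:, None, :]).reshape(V, 4 * B)   # (V, 4B)
    A = np.zeros((B, 3, 4))
    for c in range(3):                                               # x, y, z rows
        a_c, *_ = np.linalg.lstsq(X, observed_verts[:, c], rcond=None)
        A[:, c, :] = a_c.reshape(B, 4)
    return A

# Toy data: a two-bone chain with the second bone rotated 30 degrees about z
# around the pivot (1, 0, 0). With this few vertices the system is
# underdetermined, so we only check that the recovered transforms explain the
# observations, not that they match the generating ones exactly.
rest = np.array([[0.0, 0.0, 0], [0.8, 0.1, 0], [1.2, 0.1, 0], [2.0, 0.0, 0]])
w = np.array([[1.0, 0.0], [0.7, 0.3], [0.3, 0.7], [0.0, 1.0]])
t = np.radians(30)
bend = np.array([[np.cos(t), -np.sin(t), 0, 1 - np.cos(t)],
                 [np.sin(t),  np.cos(t), 0, -np.sin(t)],
                 [0.0,        0.0,       1, 0.0]])
true_A = np.stack([np.eye(3, 4), bend])
homo = np.hstack([rest, np.ones((4, 1))])
observed = np.einsum('vb,bij,vj->vi', w, true_A, homo)

A_est = solve_lbs_transforms(rest, w, observed)
recon = np.einsum('vb,bij,vj->vi', w, A_est, homo)
print(np.allclose(recon, observed, atol=1e-6))                       # True
```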