
    Image-Based Virtual Clothing

    Online shopping has grown rapidly in today's fast-paced world, and garment shopping is one of its most engaging categories, especially for women. Continuously changing fashion and newly designed outfits motivate customers to shop, and online stores make it easy to buy desired products by removing the constraints of place and time. For garments, however, predicting the appropriate size and imagining the real-life look of an item from its image alone is challenging. This project introduces a feasible solution for the online-shopping try-on scenario: an app with a digital try-on feature that can enhance the shopping experience. We propose an approach for fitting a given 3D garment model onto a person. The 3D models of the clothes are stored in the system and fitted onto the image of the user, enabling the user to see himself or herself wearing virtual clothes. On opening the application, the user can browse the available clothes and, using the mobile phone's camera, get a fair idea of how a garment will look and fit on him or her.
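    The abstract does not specify how the garment is anchored to the camera view. A common approach is to estimate 2D body landmarks from each camera frame (with any off-the-shelf pose estimator) and derive a scale, rotation, and anchor point that place the garment over the torso. The Python/NumPy sketch below is a minimal, hypothetical version of that step; all names and shapes are illustrative assumptions, not the project's actual implementation.

    ```python
    import numpy as np

    def garment_placement(shoulder_l, shoulder_r, hip_l, hip_r,
                          garment_width, garment_height):
        """Compute scale, rotation, and anchor that place a garment
        overlay onto 2D body landmarks (pixel coordinates).

        garment_width/garment_height are the garment model's
        dimensions in its own local units (illustrative).
        """
        shoulder_l = np.asarray(shoulder_l, float)
        shoulder_r = np.asarray(shoulder_r, float)
        hip_l, hip_r = np.asarray(hip_l, float), np.asarray(hip_r, float)

        # Torso extents in pixels drive the garment scale.
        torso_w = np.linalg.norm(shoulder_r - shoulder_l)
        torso_h = np.linalg.norm((hip_l + hip_r) / 2
                                 - (shoulder_l + shoulder_r) / 2)
        scale = np.array([torso_w / garment_width,
                          torso_h / garment_height])

        # Rotation from the shoulder line keeps the garment aligned
        # when the user leans sideways.
        d = shoulder_r - shoulder_l
        angle = np.arctan2(d[1], d[0])

        # Mid-shoulder point serves as the neckline anchor.
        anchor = (shoulder_l + shoulder_r) / 2
        return scale, angle, anchor
    ```

    In an AR loop, these values would be recomputed per frame and used to pose the stored 3D garment model (or warp a garment image) before compositing it over the live camera feed.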

    Fully Automatic Multi-Object Articulated Motion Tracking

    Fully automatic tracking of articulated motion in real time with a monocular RGB camera is a challenging problem that is essential for many virtual reality (VR) and human-computer interaction applications. In this paper, we present an algorithm for tracking multiple articulated objects in a monocular RGB image sequence. Our algorithm can be employed directly in practical applications, as it is fully automatic, real-time, and temporally stable. It consists of the following stages: dynamic object counting, object-specific 3D skeleton generation, initial 3D pose estimation, and 3D skeleton fitting, which fits each 3D skeleton to the corresponding 2D body-part locations. In the skeleton-fitting stage, the 3D pose of every object is estimated by maximizing an objective function that combines a skeleton fitting term with motion and pose priors. To illustrate the importance of our algorithm for practical applications, we present competitive results for real-time tracking of multiple humans. Our algorithm detects objects that enter or leave the scene and dynamically generates or deletes their 3D skeletons, making our monocular RGB method well suited for real-time applications. We show that our algorithm is applicable to tracking multiple objects in outdoor scenes, community videos, and low-quality videos captured with mobile-phone cameras.
    Keywords: Multi-object motion tracking, Articulated motion capture, Deep learning, Anthropometric data, 3D pose estimation. DOI: 10.7176/CEIS/12-1-01. Publication date: March 31st 2021.
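    The abstract names the ingredients of the objective (a skeleton fitting term plus motion and pose priors) but not its exact form. The sketch below is a hedged NumPy illustration of what such an energy commonly looks like: a confidence-weighted 2D reprojection term, a temporal smoothness prior toward the previous frame, and a Gaussian pose prior. Every parameter name and weight is an assumption, not the paper's formulation.

    ```python
    import numpy as np

    def fitting_energy(theta, project, joints_2d, conf,
                       theta_prev, theta_mean, prec,
                       w_motion=1.0, w_pose=0.1):
        """Energy to minimize (negative of the maximized objective)
        for one object's 3D skeleton fit.

        theta      : 3D pose parameters (e.g. joint angles + root)
        project    : function mapping theta -> predicted 2D joints (J, 2)
        joints_2d  : detected 2D body-part locations (J, 2)
        conf       : per-joint detection confidences (J,)
        theta_prev : previous frame's pose (motion prior)
        theta_mean, prec : mean and precision matrix of a pose prior
        """
        # Skeleton fitting term: confidence-weighted reprojection error.
        r = project(theta) - joints_2d
        e_fit = np.sum(conf * np.sum(r**2, axis=1))

        # Motion prior: penalize deviation from the previous frame.
        e_motion = np.sum((theta - theta_prev)**2)

        # Pose prior: Mahalanobis distance to a mean pose.
        d = theta - theta_mean
        e_pose = d @ prec @ d

        return e_fit + w_motion * e_motion + w_pose * e_pose
    ```

    Per frame, this energy could be minimized independently for each detected object (e.g. with scipy.optimize.minimize), warm-started from that object's pose in the previous frame, which is what makes the tracking temporally stable.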

    Recreating Daily life in Pompeii

    We propose an integrated Mixed Reality methodology for recreating ancient daily life, featuring realistic simulations of animated virtual human actors (clothes, body, skin, face) who augment real environments and re-enact staged storytelling dramas. We aim to go beyond traditional concepts of static cultural artifacts or rigid geometric and 2D textual augmentations, and to allow for 3D, interactive, augmented, historical character-based event representations in a mobile and wearable setup. This is the main contribution of the described work, together with the proposed extensions to AR enabling technologies: a VR/AR character simulation kernel framework with real-time, clothed virtual humans that are dynamically superimposed on live camera input, animated and acting according to a predefined, historically correct scenario. We demonstrate such a real-time case study on the actual site of ancient Pompeii. The work presented has been supported by the Swiss Federal Office for Education and Science and the EU IST programme, in the frame of the EU IST LIFEPLUS 34545 and EU ICT INTERMEDIA 38417 projects.
    Magnenat-Thalmann, N.; Papagiannakis, G. (2010). Recreating Daily Life in Pompeii. Virtual Archaeology Review, 1(2), 19-23. https://doi.org/10.4995/var.2010.4679

    Deep Person Generation: A Survey from the Perspective of Face, Pose and Cloth Synthesis

    Deep person generation has attracted extensive research attention due to its wide applications in virtual agents, video conferencing, online shopping, and art/movie production. With the advancement of deep learning, the visual appearance (face, pose, cloth) of a person image can be easily generated or manipulated on demand. In this survey, we first summarize the scope of person generation, and then systematically review recent progress and technical trends in deep person generation, covering three major tasks: talking-head generation (face), pose-guided person generation (pose), and garment-oriented person generation (cloth). More than two hundred papers are covered for a thorough overview, and milestone works are highlighted to mark the major technical breakthroughs. Based on these fundamental tasks, a number of applications are investigated, e.g., virtual fitting, digital humans, and generative data augmentation. We hope this survey sheds some light on the future prospects of deep person generation and provides a helpful foundation for applications toward digital humans.

    Animating Virtual Human for Virtual Batik Modeling

    This research paper describes the development of an animated virtual human for a virtual batik modeling project. The objectives are to animate the virtual human, to map the cloth onto the virtual human body, to present the batik cloth, and to evaluate the application in terms of realism of the virtual human's look, realism of the virtual human's movement, realism of the 3D scene, application suitability, application usability, fashion suitability, and user acceptance. The final goal is an animated virtual human for virtual batik modeling. The project comprises three essential phases: research and analysis (data collection on modeling and animation techniques), development (modeling and animating the virtual human, mapping cloth to the body, and adding music), and evaluation (against the criteria listed above). Application usability received the highest score, 90%, indicating that the application is useful to people. In conclusion, the project met its objectives: realism was achieved by using suitable modeling and animation techniques.
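    The paper does not state which technique maps the batik cloth to the animated body, but the standard way to drive both a character mesh and attached cloth from one skeleton is linear blend skinning. Below is a minimal NumPy sketch of that technique; array shapes and names are illustrative assumptions.

    ```python
    import numpy as np

    def linear_blend_skinning(rest_verts, weights, bone_mats):
        """Deform mesh vertices by blending bone transforms (LBS).

        rest_verts : (V, 3) vertices in the rest pose
        weights    : (V, B) skinning weights, each row sums to 1
        bone_mats  : (B, 4, 4) current bone transforms relative to rest
        """
        V = rest_verts.shape[0]
        homo = np.hstack([rest_verts, np.ones((V, 1))])  # (V, 4)

        # Per-vertex blended transform: sum_b w[v, b] * M[b].
        blended = np.einsum('vb,bij->vij', weights, bone_mats)

        # Apply each vertex's blended transform to its rest position.
        out = np.einsum('vij,vj->vi', blended, homo)
        return out[:, :3]
    ```

    Cloth vertices can reuse the skinning weights of their nearest body vertices, so the batik fabric follows the animated body without a separate cloth simulation.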

    High Quality Human 3D Body Modeling, Tracking and Application

    Geometric reconstruction of dynamic objects is a fundamental task in computer vision and graphics, and high-fidelity modeling of the human body is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, limiting working space and portability. In this dissertation, a complete pipeline is designed, from geometric modeling of the detailed 3D human full body, to capturing shape dynamics over time with a flexible setup, to guiding clothes/person re-targeting with such data-driven models. The mechanical movement of the human body can be treated as articulated motion, which readily drives skin animation but makes the inverse problem, recovering parameters from images without manual intervention, difficult. We therefore present a novel parametric model, GMM-BlendSCAPE, which jointly takes the linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show increased accuracy of joint and skin-surface estimation with our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans with a single-view commercial depth camera. Based on GMM-BlendSCAPE, we then reconstruct multiple complete static models across large pose differences via a novel non-rigid registration algorithm. With vertex correspondences established, these models can be converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Finally, we demonstrate a novel virtual clothes try-on application based on our personalized model, utilizing both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
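    The abstract describes inferring shape and pose from incomplete scans within a GMM framework. The core of such approaches (e.g. CPD-style registration, which GMM-BlendSCAPE may resemble but need not match exactly) is an E-step that computes soft correspondences between scan points and model vertices, with a uniform outlier component to tolerate holes in partial scans. The NumPy sketch below is a hedged illustration; the normalizing constant follows the CPD convention, and every name is an assumption.

    ```python
    import numpy as np

    def gmm_responsibilities(scan_pts, model_pts, sigma2, w_outlier=0.1):
        """Soft correspondences between scan points and model vertices.

        Treats model vertices as GMM centroids with shared isotropic
        variance sigma2 plus a uniform outlier component. Returns an
        (N, M) responsibility matrix for N scan points, M vertices.
        """
        N, M = scan_pts.shape[0], model_pts.shape[0]
        D = scan_pts.shape[1]

        # Squared distances between every scan point and model vertex.
        d2 = ((scan_pts[:, None, :] - model_pts[None, :, :])**2).sum(-1)
        g = np.exp(-d2 / (2.0 * sigma2))

        # Uniform outlier term keeps occluded/missing regions from
        # dragging the model toward holes in the partial scan.
        c = ((2.0 * np.pi * sigma2) ** (D / 2.0)
             * (w_outlier / (1.0 - w_outlier)) * M / N)

        return g / (g.sum(axis=1, keepdims=True) + c)
    ```

    In a full pipeline, the M-step would update the pose/shape parameters (or a non-rigid deformation field) to the responsibility-weighted least-squares optimum, annealing sigma2 as the registration converges.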