
    Recreating Daily life in Pompeii

    We propose an integrated Mixed Reality methodology for recreating ancient daily life that features realistic simulations of animated virtual human actors (clothes, body, skin, face) who augment real environments and re-enact staged storytelling dramas. We aim to go beyond traditional concepts of static cultural artifacts or rigid geometric and 2D textual augmentations, and to allow for 3D, interactive, augmented, historical character-based event representations in a mobile and wearable setup. This, together with the proposed extensions to AR enabling technologies, is the main contribution of the described work: a VR/AR character simulation kernel framework with real-time, clothed virtual humans that are dynamically superimposed on live camera input, animated and acting according to a predefined, historically correct scenario. We demonstrate a real-time case study on the actual site of ancient Pompeii. The work presented was supported by the Swiss Federal Office for Education and Science and the EU IST programme, within the EU IST LIFEPLUS 34545 and EU ICT INTERMEDIA 38417 projects.
    Magnenat-Thalmann, N.; Papagiannakis, G. (2010). Recreating Daily life in Pompeii. Virtual Archaeology Review, 1(2):19-23. https://doi.org/10.4995/var.2010.4679
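The core AR step the abstract describes, superimposing a rendered virtual-character layer on a live camera frame, amounts to per-pixel alpha blending. The sketch below is illustrative only (the function names and pixel representation are assumptions, not the authors' framework API):

```python
# Hypothetical sketch of compositing a rendered character pixel (with an
# alpha coverage value) over a camera-frame pixel, as in "clothed virtual
# humans dynamically superimposed on live camera input".

def composite(camera_px, character_px, alpha):
    """Blend one character pixel over one camera pixel.

    camera_px, character_px: (r, g, b) tuples with channels in [0, 255]
    alpha: character coverage in [0.0, 1.0] (0 = fully transparent)
    """
    return tuple(round(alpha * c + (1.0 - alpha) * b)
                 for c, b in zip(character_px, camera_px))

# A half-transparent character pixel mixes evenly with the background:
print(composite((100, 100, 100), (200, 200, 200), 0.5))  # (150, 150, 150)
```

In a real pipeline this blend runs on the GPU over whole frames, but the arithmetic per pixel is the same.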

    Inviwo -- A Visualization System with Usage Abstraction Levels

    The complexity of today's visualization applications demands visualization systems tailored specifically for the development of such applications. Frequently, these systems employ levels of abstraction to improve the development process, for instance by providing a data flow network editor. Unfortunately, these abstractions introduce several issues that must be circumvented through an abstraction-centered system design. Often a high level of abstraction hides low-level details, making it difficult to access the underlying computing platform directly, which is important for achieving optimal performance. We therefore propose a layer structure for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with varying levels of experience. We formulate the requirements for such a system, derive the desired architecture, and show how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development supported by cross-layer documentation and debugging capabilities.
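The data flow network abstraction mentioned above can be pictured as processors connected into a dependency graph and evaluated on demand. The following is a minimal sketch of that idea, not Inviwo's actual API (the `Processor` class and its methods are invented for illustration):

```python
# Toy data-flow network: each processor pulls results from its input
# processors, applies its function, and memoizes the result in a cache.

class Processor:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def evaluate(self, cache):
        if self.name not in cache:                       # compute once per run
            args = [p.evaluate(cache) for p in self.inputs]
            cache[self.name] = self.fn(*args)
        return cache[self.name]

# Example network: data source -> filter -> renderer
source   = Processor("source", lambda: [3, 1, 2])
filt     = Processor("filter", lambda d: sorted(d), [source])
renderer = Processor("render", lambda d: ",".join(map(str, d)), [filt])

print(renderer.evaluate({}))  # prints 1,2,3
```

A network editor exposes exactly this structure visually; the lower layers of a system like the one described would replace the Python lambdas with platform-specific (e.g. GPU) implementations.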

    Learning to Reconstruct People in Clothing from a Single RGB Camera

    We present a learning-based model that infers the personalized 3D shape of a person from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5 mm. Our model learns to predict the parameters of a statistical body model together with instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions through two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once trained, the model can take a variable number of frames as input and can reconstruct shapes even from a single image, with an accuracy of 6 mm. Results on three different datasets demonstrate the efficacy and accuracy of our approach.
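The decomposition described above, a statistical body model plus per-vertex instance displacements for clothing and hair, can be sketched with a toy linear shape model. This is an assumed, simplified stand-in (the real statistical model and mesh resolution are far richer):

```python
# Sketch: base T-pose mesh = mean shape + sum of shape-parameter-weighted
# basis offsets; clothing/hair detail = per-vertex displacements on top.
# All names and the linear form are illustrative assumptions.

def body_model(mean_shape, shape_basis, betas):
    """Base mesh from shape parameters (betas weight the basis offsets)."""
    verts = [list(v) for v in mean_shape]
    for beta, basis in zip(betas, shape_basis):
        for v, offset in zip(verts, basis):
            for i in range(3):
                v[i] += beta * offset[i]
    return verts

def add_displacements(verts, displacements):
    """Instance displacements add clothing and hair geometry per vertex."""
    return [[c + d for c, d in zip(v, disp)]
            for v, disp in zip(verts, displacements)]

# One-vertex toy example: shape parameter stretches x, displacement adds y.
base = body_model([(0.0, 0.0, 0.0)], [[(1.0, 0.0, 0.0)]], [2.0])
dressed = add_displacements(base, [(0.0, 1.0, 0.0)])
print(dressed)  # [[2.0, 1.0, 0.0]]
```

Predicting in the canonical T-pose space means `betas` and the displacements are estimated once per person, independent of the pose in each input frame.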

    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. To obtain this abstraction, we first examined six classes of workflows in data analysis and visualization and identified four levels of typical visualization components, namely disseminative, observational, analytical, and model-developmental visualization. We noticed a common phenomenon at the different levels of visualization: the transformation of data spaces (referred to as alphabets) usually corresponds to a reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic cost-benefit measure that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature and showed that the measure can mathematically explain their advantages over possible alternatives.
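The entropy reduction described above can be illustrated with a toy transformation from a large data alphabet to a smaller visual alphabet. The sketch below computes only the maximal-entropy drop; the paper's full cost-benefit measure also accounts for distortion and cost, which are omitted here as simplifying assumptions:

```python
# Maximal entropy of an alphabet is log2 of its size (uniform distribution);
# mapping a data alphabet to a smaller visual alphabet reduces it.

import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def max_entropy(alphabet_size):
    """Maximal entropy: achieved by the uniform distribution."""
    return math.log2(alphabet_size)

# Example: 256 raw data values binned into 8 colours by a visual mapping.
reduction = max_entropy(256) - max_entropy(8)
print(reduction)  # 5.0 bits of maximal entropy removed along the workflow
```

Along a multi-step workflow, each transformation contributes such a drop, which is what makes an entropy-based cost function a plausible quantity to optimize.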