
    TADA! Text to Animatable Digital Avatars

    Full text link
    We introduce TADA, a simple-yet-effective approach that takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures that can be animated and rendered with traditional graphics pipelines. Existing text-based character generation methods are limited in terms of geometry and texture quality and cannot be realistically animated due to inconsistent alignment between the geometry and the texture, particularly in the face region. To overcome these limitations, TADA leverages the synergy of a 2D diffusion model and an animatable parametric body model. Specifically, we derive an optimizable high-resolution body model from SMPL-X with 3D displacements and a texture map, and use hierarchical rendering with score distillation sampling (SDS) to create high-quality, detailed, holistic 3D avatars from text. To ensure alignment between the geometry and the texture, we render normals and RGB images of the generated character and exploit their latent embeddings in the SDS training process. We further introduce various expression parameters to deform the generated character during training, ensuring that the semantics of our generated character remain consistent with the original SMPL-X model, resulting in an animatable character. Comprehensive evaluations demonstrate that TADA significantly surpasses existing approaches on both qualitative and quantitative measures. TADA enables the creation of large-scale digital character assets that are ready for animation and rendering, while also being easily editable through natural language. The code will be publicly available for research purposes.
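    A minimal sketch of the score distillation sampling (SDS) step described above, written in PyTorch-style Python. The renderer (render_views), latent encoder (encode_latent) and the diffusion-model methods (add_noise, predict_noise) are hypothetical placeholders for illustration, not the authors' released code.

        import torch

        def sds_step(avatar_params, diffusion, text_embedding, optimizer):
            # Render RGB and normal images of the current avatar (hierarchical rendering).
            rgb, normals = render_views(avatar_params)          # hypothetical differentiable renderer
            latents = encode_latent(torch.cat([rgb, normals]))  # shared latent space ties texture to geometry

            # Perturb the latents at a random diffusion timestep and predict the added noise.
            t = torch.randint(20, 980, (latents.shape[0],), device=latents.device)
            noise = torch.randn_like(latents)
            noisy = diffusion.add_noise(latents, noise, t)      # hypothetical diffusion API
            pred = diffusion.predict_noise(noisy, t, text_embedding)

            # SDS update: the residual (pred - noise) acts as a gradient that is pushed
            # back into the SMPL-X displacements, expression parameters and texture map.
            grad = (pred - noise).detach()
            loss = (latents * grad).sum()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    In practice a timestep-dependent weight is usually applied to the residual before back-propagation; the sketch omits it for brevity.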

    Fourteenth Biennial Status Report: March 2017 - February 2019

    No full text

    Virtual Reality

    Get PDF
    At present, virtual reality influences information organization and management and is even changing the design principles of information systems so that they adapt to application requirements. The book aims to provide a broad perspective on the development and application of virtual reality. The first part of the book, named "virtual reality visualization and vision", includes new developments in virtual reality visualization of 3D scenarios, virtual reality and vision, and high-fidelity immersive virtual reality, including tracking, rendering and display subsystems. The second part, named "virtual reality in robot technology", presents applications of virtual reality in a remote rehabilitation robot-based evaluation method and in adaptive walking of multi-legged robots in unstructured terrains. The third part, named "industrial and construction applications", covers product design, the space industry, building information modeling, and construction and maintenance supported by virtual reality. The last part, named "culture and life of human", describes applications in cultural life and multimedia technology.

    Designing, testing and adapting navigation techniques for the immersive web

    Get PDF
    One of the most essential interactions in Virtual Reality (VR) is the user’s ability to move around and explore the virtual environment. The design of the navigation technique plays a crucial role in the user experience since it determines key usability aspects. VR devices allow for an immersive exploration of 3D worlds, but navigation in VR is challenging for many users, due to potential usability issues related to specific VR controllers, user skills, and motion sickness. Although hundreds of interaction techniques have been proposed for this task, VR navigation still poses a high entry barrier for many users. In this paper we argue that adapting the navigation technique to its context of use can lead to substantial improvements in navigation usability and accessibility. The context of use includes the type of scene, the available physical space, as well as the profile of the user. We present a test platform to facilitate the design and fine-tuning of interaction techniques for 3D navigation. We focus on mainstream VR devices (headsets and controllers) and support the most common navigation metaphors (walking, flying, teleportation). The key idea is to let developers specify, at runtime, the exact mapping between user actions and locomotion changes, for any of the supported metaphors. Such mappings are described by a collection of parameters (e.g. maximum speed) whose values can be adjusted interactively through a GUI, or be provided by user-defined code which can be edited at runtime. Feedback obtained from developers suggests that this approach can be used to quickly adapt the navigation techniques to various people, including persons with no previous 3D navigation skills, elderly people, and people with disabilities, as well as to the type, size and semantics of the virtual environment. This work has been funded by MCIN/AEI/10.13039/501100011033/FEDER "A way to make Europe". The Pedret model was partially funded by EU Horizon 2020, the JPICH Conservation, Protection and Use initiative (JPICH-0127) and the Spanish Agencia Estatal de Investigación, grant PCI2020-111979, Enhancement of Heritage Experiences: the Middle Ages; Digital Layered Models of Architecture and Mural Paintings over Time (EHEM). Peer reviewed. Postprint (published version).
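    To make the parameter-driven mapping concrete, the following is a small hypothetical sketch (not the authors' API) of how a runtime-adjustable configuration could translate a thumbstick deflection into a locomotion speed change for the walking or flying metaphors:

        from dataclasses import dataclass

        @dataclass
        class NavigationParams:
            metaphor: str = "walking"     # "walking", "flying" or "teleportation"
            max_speed: float = 2.0        # m/s, adjustable at runtime through a GUI
            acceleration: float = 1.5     # m/s^2, how quickly the target speed is reached
            snap_turn_deg: float = 30.0   # degrees rotated per snap turn

        def map_input(params: NavigationParams, thumbstick_y: float,
                      current_speed: float, dt: float) -> float:
            """Map a thumbstick deflection in [-1, 1] to an updated locomotion speed."""
            target = max(-1.0, min(1.0, thumbstick_y)) * params.max_speed
            # Ease toward the target speed; in the platform this mapping could instead be
            # supplied by user-defined code edited at runtime.
            if current_speed < target:
                return min(target, current_speed + params.acceleration * dt)
            return max(target, current_speed - params.acceleration * dt)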

    Survey on Controllable Image Synthesis with Deep Learning

    Full text link
    Image synthesis has attracted growing research interest in academic and industrial communities. Deep learning technologies, especially generative models, have greatly inspired controllable image synthesis approaches and applications, which aim to generate particular visual contents with latent prompts. In order to further investigate the low-level controllable image synthesis problem, which is crucial for fine image rendering and editing tasks, we present a survey of recent works on 3D controllable image synthesis using deep learning. We first introduce the datasets and evaluation indicators for 3D controllable image synthesis. Then, we review the state-of-the-art research for geometrically controllable image synthesis in two aspects: 1) viewpoint/pose-controllable image synthesis; 2) structure/shape-controllable image synthesis. Furthermore, photometrically controllable image synthesis approaches are also reviewed for 3D relighting research. While the emphasis is on 3D controllable image synthesis algorithms, the related applications, products and resources are also briefly summarized for practitioners. Comment: 19 pages, 17 figures

    Analysis of Visualisation and Interaction Tools

    Get PDF
    This document provides an in-depth analysis of the visualization and interaction tools employed in the context of Virtual Museums. This analysis is required to identify and design the tools and the different components that will be part of the Common Implementation Framework (CIF). The CIF will be the base of the web-based services and tools that support the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform to support and help them in the development of their projects, regardless of the nature of the project itself. The design of the Common Implementation Framework (CIF) is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies. This document is also based on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualization tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository/repositories and the additional services defined in WP4. Two state-of-the-art reports, one on user interface design and another on visualization technologies, are also provided in this document.

    Visual Techniques for Geological Fieldwork Using Mobile Devices

    Get PDF
    Visual techniques in general and 3D visualisation in particular have seen considerable adoption within the last 30 years in the geosciences and geology. Techniques such as volume visualisation, for analysing subsurface processes, and photo-coloured LiDAR point-based rendering, to digitally explore rock exposures at the earth’s surface, were applied within geology as one of the first adopting branches of science. A large amount of digital geological surface and volume data is nowadays available to desktop-based workflows for geological applications such as hydrocarbon reservoir exploration, groundwater modelling, CO2 sequestration and, in the future, geothermal energy planning. On the other hand, analysis and data collection during fieldwork have yet to embrace this "digital revolution": sedimentary logs, geological maps and stratigraphic sketches are still captured in each geologist’s individual fieldbook, and physical rock samples are still transported to the lab for subsequent analysis. Is this still necessary, or are there extended digital means of data collection and exploration in the field? Are modern digital interpretation techniques accurate and intuitive enough to meaningfully support fieldwork in geology and other geoscience disciplines? This dissertation aims to address these questions and, by doing so, close the technological gap between geological fieldwork and office workflows in geology. The emergence of mobile devices and their vast array of physical sensors, combined with touch-based user interfaces, high-resolution screens and digital cameras, provides a possible digital platform that can be used by field geologists. Their ubiquitous availability increases the chances of adopting digital workflows in the field without additional, expensive equipment. The use of 3D data on mobile devices in the field is furthered by the availability of 3D digital outcrop models and the increasing ease of their acquisition. This dissertation assesses the prospects of adopting 3D visual techniques and mobile devices within field geology. The research in this dissertation uses previously acquired and processed digital outcrop models in the form of textured surfaces from optical remote sensing and photogrammetry. The scientific papers in this thesis present visual techniques and algorithms to map outcrop photographs taken in the field directly onto the surface models. Automatic mapping allows the projection of photo interpretations of stratigraphy and sedimentary facies onto the 3D textured surface, while providing the domain expert with simple-to-use, intuitive tools for the photo interpretation itself. The developed visual approach, combining insights from across the computer sciences dealing with visual information, culminates in the mobile-device Geological Registration and Interpretation Toolset (GRIT) app, which is assessed in an outcrop analogue study of the Saltwick Formation exposed at Whitby, North Yorkshire, UK. Although applicable to a diversity of study scenarios within petroleum geology and the geosciences, the particular target application of the visual techniques is to easily provide field-based outcrop interpretations for the subsequent construction of training images for multiple-point statistics reservoir modelling, as envisaged within the VOM2MPS project. Despite the success and applicability of the visual approach, numerous drawbacks and probable future extensions are discussed in the thesis based on the conducted studies. Apart from elaborating on the more obvious limitations originating from the use of mobile devices and their limited computing capabilities and sensor accuracies, a major contribution of this thesis is the careful analysis of conceptual drawbacks of established procedures in modelling, representing, constructing and disseminating the available surface geometry. A more mathematically accurate geometric description of the underlying algebraic surfaces yields improvements and future applications unaddressed within the literature of geology and the computational geosciences to date. Also, future extensions to the visual techniques proposed in this thesis allow for expanded analysis, 3D exploration and improved geological subsurface modelling in general. Published version.
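    The automatic photo-to-surface mapping summarised above can be illustrated with a minimal projective texture mapping sketch, assuming a pinhole camera whose intrinsics K and pose (R, t) come from the device sensors or a registration step; the function name and array layout are illustrative assumptions, not the GRIT implementation.

        import numpy as np

        def project_photo_to_surface(vertices, K, R, t, image_size):
            """Per-vertex (u, v) photo coordinates in [0, 1]; NaN where the photo does not cover the surface.

            vertices: (N, 3) outcrop surface vertices in world coordinates.
            K: (3, 3) intrinsics; R: (3, 3) rotation; t: (3,) translation; image_size: (width, height).
            """
            cam = R @ vertices.T + t.reshape(3, 1)   # world -> camera coordinates
            pix = K @ cam                            # camera -> pixel coordinates
            uv = pix[:2] / pix[2]                    # perspective divide
            w, h = image_size
            uv_norm = np.stack([uv[0] / w, uv[1] / h], axis=1)
            # Vertices behind the camera or outside the photograph stay unmapped.
            valid = (cam[2] > 0) & (uv_norm[:, 0] >= 0) & (uv_norm[:, 0] <= 1) \
                    & (uv_norm[:, 1] >= 0) & (uv_norm[:, 1] <= 1)
            uv_norm[~valid] = np.nan
            return uv_norm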

    Enabling the Development and Implementation of Digital Twins: Proceedings of the 20th International Conference on Construction Applications of Virtual Reality

    Get PDF
    Welcome to the 20th International Conference on Construction Applications of Virtual Reality (CONVR 2020). This year we are meeting online due to the current Coronavirus pandemic. The overarching theme for CONVR 2020 is "Enabling the development and implementation of Digital Twins". CONVR is one of the world-leading conferences in the areas of virtual reality, augmented reality and building information modelling. Each year, more than 100 participants from all around the globe meet to discuss and exchange the latest developments and applications of virtual technologies in the architectural, engineering, construction and operation (AECO) industry. The conference is also known for having a unique blend of participants from both academia and industry. This year, with all the difficulties of replicating a real face-to-face meeting, we are carefully planning the conference to ensure that all participants have a rewarding experience. We have a group of leading keynote speakers from industry and academia who are covering up-to-date hot topics and are enthusiastic and keen to share their knowledge with you. CONVR participants are very loyal to the conference and have attended most of the last eighteen editions. This year we are welcoming numerous first-timers, and we aim to help them make the most of the conference by introducing them to other participants.