
    A rapid prototyping tool to produce 360° video-based immersive experiences enhanced with virtual/multimedia elements

    While the popularity of virtual reality (VR) grows in a wide range of application contexts – e.g. entertainment, training, cultural heritage and medicine – its economic impact is expected to reach around 15 billion USD by 2020. Within the VR field, 360° video has been sparking the interest of development and research communities. However, editing tools supporting 360° panoramas are usually expensive, demand programming skills, or require advanced user knowledge. Besides, approaches for quickly and intuitively setting up such 360° video-based VR environments, complemented with diverse types of parameterizable virtual assets and multimedia elements, are still hard to find. This paper therefore proposes a system specification to simply and rapidly configure immersive VR environments composed of surrounding 360° video spheres that can be complemented with parameterizable multimedia contents – namely 3D models, text and spatial sound – whose behavior can be either time-range or user-interaction dependent. Moreover, a preliminary prototype is presented that follows a substantial part of this specification and implements the enhancement of 360° videos with time-range-dependent virtual assets. Preliminary tests evaluating usability and user satisfaction were carried out with 30 participants, with encouraging results. This work was financed by project “CHIC – Cooperative Holistic View on Internet and Content” (N° 24498), funded by the European Regional Development Fund (ERDF) through COMPETE2020 – the Operational Programme for Competitiveness and Internationalisation (OPCI)
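
    To make the time-range-dependent behaviour concrete, the following minimal Python sketch (all names hypothetical; this is not the paper's actual specification) models a 360° video sphere whose virtual assets are active only within a given playback interval:

    # Minimal sketch, assuming a simple asset model; names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        """A parameterizable element: a 3D model, text label, or spatial sound."""
        name: str
        kind: str      # "model" | "text" | "sound"
        start: float   # activation time, seconds into the 360° video
        end: float     # deactivation time

        def active_at(self, t: float) -> bool:
            return self.start <= t <= self.end

    @dataclass
    class VideoSphere:
        """A surrounding 360° video enhanced with time-range-dependent assets."""
        video_uri: str
        assets: list = field(default_factory=list)

        def visible_assets(self, t: float) -> list:
            """Assets the player should render or play at playback time t."""
            return [a for a in self.assets if a.active_at(t)]

    scene = VideoSphere("museum_tour.mp4",
                        [Asset("caption", "text", 5.0, 12.0),
                         Asset("statue", "model", 8.0, 30.0)])
    print([a.name for a in scene.visible_assets(10.0)])  # ['caption', 'statue']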

    Foundry: Hierarchical Material Design for Multi-Material Fabrication

    We demonstrate a new approach for designing functional material definitions for multi-material fabrication using our system, Foundry. Foundry provides an interactive and visual process for hierarchically designing spatially-varying material properties (e.g., appearance, mechanical, optical). The resulting meta-materials exhibit structure at the micro and macro level and can surpass the qualities of traditional composites. The material definitions are created by composing a set of operators into an operator graph. Each operator performs a volume decomposition operation, remaps space, or constructs and assigns a material composition. The operators are implemented using a domain-specific language for multi-material fabrication; users can easily extend the library by writing their own operators. Foundry can be used to build operator graphs that describe complex, parameterized, resolution-independent, and reusable material definitions. We also describe how to stage the evaluation of the final material definition, which, in conjunction with progressive refinement, allows for interactive material evaluation even for complex designs. We show sophisticated and functional parts designed with our system. Funding: National Science Foundation (U.S.) (1138967); National Science Foundation (U.S.) (1409310); National Science Foundation (U.S.) (1547088); National Science Foundation (U.S.) Graduate Research Fellowship Program; Massachusetts Institute of Technology Undergraduate Research Opportunities Program
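
    As an illustration of the operator-graph idea, here is a minimal Python sketch (hypothetical operators, not Foundry's actual domain-specific language) in which each node of the graph either decomposes space or constructs a material composition at a query point:

    # Minimal sketch of composable material operators; all names hypothetical.
    from typing import Callable, Dict, Tuple

    Point = Tuple[float, float, float]
    Material = Dict[str, float]  # e.g. {"rigid": 0.7, "elastic": 0.3}

    def assign(material: Material) -> Callable[[Point], Material]:
        """Leaf operator: a constant material composition."""
        return lambda p: material

    def split_z(threshold: float, below, above) -> Callable[[Point], Material]:
        """Volume-decomposition operator: route the query by z coordinate."""
        return lambda p: below(p) if p[2] < threshold else above(p)

    def gradient_x(mat_a: Material, mat_b: Material, width: float):
        """Operator producing a spatially varying blend along x."""
        def op(p: Point) -> Material:
            w = min(max(p[0] / width, 0.0), 1.0)
            return {k: (1 - w) * mat_a.get(k, 0) + w * mat_b.get(k, 0)
                    for k in set(mat_a) | set(mat_b)}
        return op

    # Operator graph: an x gradient in the lower half, a stiff upper half.
    graph = split_z(0.5,
                    gradient_x({"rigid": 1.0}, {"elastic": 1.0}, width=2.0),
                    assign({"rigid": 1.0}))
    print(graph((1.0, 0.0, 0.2)))  # blended composition at this query point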

    Adaptive Streaming of Massive 3D Models (Transmission adaptative de modèles 3D massifs)

    With progress in 3D model editing and 3D reconstruction techniques, more and more 3D models are becoming available and their quality is increasing. Moreover, support for 3D visualisation on the web has become standardised in recent years. A major challenge is therefore to stream massive models remotely and to let users visualise and navigate these virtual environments. This thesis focuses on the streaming of, and interaction with, 3D content, and makes three major contributions. First, we develop a navigation interface for 3D scenes with bookmarks: small virtual objects added to the scene that the user can click on to easily reach a recommended location. We describe a user study in which participants navigate 3D scenes with or without bookmarks, and show that users navigate (and complete a given task) faster when using bookmarks. However, this faster navigation has a downside for streaming performance: a user who moves faster through a scene needs higher transmission capacity in order to enjoy the same quality of service. This downside can be mitigated by the fact that bookmark positions are known in advance: by ordering the faces of the 3D model according to their visibility from a bookmark, we optimise transmission and thus reduce latency when users click on bookmarks. Second, we propose an adaptation of the DASH standard (Dynamic Adaptive Streaming over HTTP), widely used for video, to the streaming of textured 3D meshes. To do so, we partition the scene into a k-d tree where each cell corresponds to a DASH adaptation set. Each cell is further divided into DASH segments of a fixed number of faces, grouping faces of comparable surface area. Each texture is indexed in its own adaptation set at different resolutions. All metadata (the k-d tree cells, texture resolutions, etc.) is referenced in an XML file used by DASH to index the content: the MPD (Media Presentation Description). Our framework thus inherits the scalability offered by DASH. We then propose algorithms that evaluate the utility of each data segment given the client's viewpoint, and streaming policies that decide which segments to download. Finally, we study the deployment of 3D streaming and navigation on mobile devices. We integrate bookmarks into our 3D version of DASH and propose an improved version of our DASH client that benefits from bookmarks. A user study shows that with our bookmark-aware loading policy, bookmarks are more likely to be clicked, improving both the quality of service and the users' quality of experience
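
    To illustrate the viewpoint-dependent streaming policy, the following Python sketch (a hypothetical heuristic, not the exact utility function of the thesis) ranks segments by favouring large, nearby k-d tree cells and greedily fills a bandwidth budget:

    # Minimal sketch of segment utility and a greedy download policy.
    import math
    from dataclasses import dataclass

    @dataclass
    class Segment:
        cell_center: tuple   # center of the k-d tree cell, (x, y, z)
        surface_area: float  # total area of the faces in the segment
        size_bytes: int      # download cost of the segment

    def utility(seg: Segment, viewpoint: tuple) -> float:
        """Large surfaces close to the camera are the most useful per byte."""
        d = math.dist(seg.cell_center, viewpoint)
        return seg.surface_area / ((1.0 + d * d) * seg.size_bytes)

    def next_downloads(segments, viewpoint, budget_bytes):
        """Greedy policy: spend the bandwidth budget on the best segments."""
        chosen = []
        for seg in sorted(segments, key=lambda s: utility(s, viewpoint),
                          reverse=True):
            if seg.size_bytes <= budget_bytes:
                chosen.append(seg)
                budget_bytes -= seg.size_bytes
        return chosen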

    Doctor of Philosophy

    Interactive editing and manipulation of digital media is a fundamental component of digital content creation. One medium in particular, digital imagery, has recently seen a surge in the popularity of large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability for these large images. Moreover, processing massive (or even large) imagery is assumed to be an off-line, automatic process, although many problems associated with these datasets require human intervention for high-quality results. This dissertation details how to design interactive image techniques that scale. In particular, massive imagery is typically constructed as a seamless mosaic of many smaller images. The focus of this work is the creation of new technologies to enable user interaction in the formation of these large mosaics. While an interactive system for all stages of the mosaic creation pipeline is a long-term research goal, this dissertation concentrates on the last phase of the pipeline: the composition of registered images into a seamless composite. The work detailed in this dissertation provides the technologies to fully realize interactive editing in mosaic composition on image collections ranging from the very small to the massive in scale
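
    As background for the compositing phase, the sketch below shows distance-weighted feathering, a textbook baseline for hiding seams between registered, overlapping images; it is illustrative only and is not the dissertation's interactive technique:

    # Minimal sketch: feathered blend of two registered, overlapping 1D rows.
    def feather_weights(width):
        """Triangle weights: largest at the image centre, small at its edges."""
        half = (width - 1) / 2.0
        return [1.0 - abs(i - half) / (half + 1.0) for i in range(width)]

    def blend_rows(row_a, row_b, offset):
        """Composite row_b (starting at `offset`) with row_a, weight-averaged."""
        wa, wb = feather_weights(len(row_a)), feather_weights(len(row_b))
        out = list(row_a) + [0.0] * (offset + len(row_b) - len(row_a))
        weights = wa + [0.0] * (len(out) - len(row_a))
        for i, (v, w) in enumerate(zip(row_b, wb)):
            j = offset + i
            total = weights[j] + w
            out[j] = (out[j] * weights[j] + v * w) / total
            weights[j] = total
        return out

    # Pixel values transition smoothly across the two-pixel overlap.
    print(blend_rows([10, 10, 10, 10], [20, 20, 20, 20], offset=2))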

    Analysis of Visualisation and Interaction Tools

    This document provides an in-depth analysis of visualisation and interaction tools employed in the context of Virtual Museums. This analysis is required to identify and design the tools and the different components that will form part of the Common Implementation Framework (CIF). The CIF will be the basis of the web-based services and tools that support the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform to support and help them in the development of their projects, regardless of the nature of the project itself. The design of the CIF is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies. This document also draws on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualisation tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository/repositories and the additional services defined in WP4. Two state-of-the-art reports, one on user interface design and another on visualisation technologies, are also provided in this document

    Capture4VR: From VR Photography to VR Video

    Virtual reality (VR) enables the display of dynamic visual content with unparalleled realism and immersion. However, VR is also still a relatively young medium that requires new ways to author content, particularly for visual content that is captured from the real world. This course, therefore, provides a comprehensive overview of the latest progress in bringing photographs and video into VR. Ultimately, the techniques, approaches and systems we discuss aim to faithfully capture the visual appearance and dynamics of the real world, and to bring it into virtual reality to create unparalleled realism and immersion by providing freedom of head motion and motion parallax, which is a vital depth cue for the human visual system. In this half-day course, we take the audience on a journey from VR photography to VR video that began more than a century ago but which has accelerated tremendously in the last five years. We discuss both commercial state-of-the-art systems by Facebook, Google and Microsoft, as well as the latest research techniques and prototypes

    OpenFab: A programmable pipeline for multimaterial fabrication

    [Figure 1: Three rhinos, defined and printed using OpenFab. For each print, the same geometry was paired with a different fablet – a shader-like program which procedurally defines surface detail and material composition throughout the object volume. This produces three unique prints using displacements, texture mapping, and continuous volumetric material variation as a function of distance from the surface.]
    3D printing hardware is rapidly scaling up to output continuous mixtures of multiple materials at increasing resolution over ever larger print volumes. This poses an enormous computational challenge: large high-resolution prints comprise trillions of voxels and petabytes of data, and simply modeling and describing the input with spatially varying material mixtures at this scale is challenging. Existing 3D printing software is insufficient; in particular, most software is designed to support only a few million primitives, with discrete material choices per object. We present OpenFab, a programmable pipeline for synthesis of multi-material 3D printed objects that is inspired by RenderMan and modern GPU pipelines. The pipeline supports procedural evaluation of geometric detail and material composition, using shader-like fablets, allowing models to be specified easily and efficiently. We describe a streaming architecture for OpenFab; only a small fraction of the final volume is stored in memory, and output is fed to the printer with little startup delay. We demonstrate it on a variety of multi-material objects
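
    To give a flavour of the fablet concept, here is a minimal Python sketch (not the OpenFab language itself; the helper names are hypothetical) of a procedure that maps a voxel's distance from the object surface to a material mixture, evaluated one slab at a time so only a small fraction of the volume is ever in memory:

    # Minimal sketch of a fablet-style procedure plus a streaming evaluator.
    def fablet(distance_to_surface: float) -> dict:
        """A soft shell blending into a rigid core over a 2 mm band."""
        shell = 2.0  # hypothetical shell thickness, millimetres
        w = min(distance_to_surface / shell, 1.0)
        return {"elastic": 1.0 - w, "rigid": w}

    def stream_slabs(distance_field, nx, ny, nz, emit):
        """Evaluate one z-slab at a time and hand it straight to the printer."""
        for z in range(nz):
            slab = [[fablet(distance_field(x, y, z)) for x in range(nx)]
                    for y in range(ny)]
            emit(z, slab)  # e.g. the printer driver; the slab is then discarded

    # Toy distance field and a sink that just reports slab sizes.
    stream_slabs(lambda x, y, z: float(min(x, y, z)), 4, 4, 4,
                 lambda z, slab: print(f"slab {z}: {len(slab) * len(slab[0])} voxels"))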

    PRODUCT LIFECYCLE DATA SHARING AND VISUALISATION: WEB-BASED APPROACHES

    Both product design and manufacturing are intrinsically collaborative processes. From conception and design to project completion and ongoing maintenance, every point in the lifecycle of any product involves the work of fluctuating teams of designers, suppliers and customers. That is why companies are engaged in creating a distributed design and manufacturing environment that can provide an effective way to communicate and share information throughout the entire enterprise and the supply chain. At present, the technologies that support such a strategy are based on World Wide Web platforms and follow two different paths. The first focuses on improving 2D documentation, introducing interactive 3D information in order to add knowledge to drawings. The second works directly on 3D models and tries to extend the life of 3D data by moving this design information downstream through the entire product lifecycle. Unfortunately, the current lack of a single 3D Web-based standard has encouraged the growth of many different proprietary and open-source formats and, as a consequence, incompatible information exchange over the Web. This paper proposes a structured analysis of Web-based solutions, trying to identify the most critical aspects in order to promote a unique 3D digital standard model capable of sharing product and manufacturing data more effectively – regardless of geographic boundaries, data structures, processes or computing environments

    Visibility computation through image generalization

    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is a direct line of sight from its center of projection (i.e. the eye); therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible; it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.

    The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project to a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.

    The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, visibility parameter domains, and performance-accuracy tradeoff requirements. These include: an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulation of the sampling rate over the image plane according to an input importance map; an animated depth image that stores not only color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint-transition distortion artifacts of prior methods
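
    As an illustration of sampling-pattern generalization, the following Python sketch (a hypothetical setup, not the dissertation's implementation) allocates extra jittered samples to the pixels that an importance map marks as likely to contain small visible primitives:

    # Minimal sketch: importance-driven, non-uniform image-plane sampling.
    import random

    def sample_locations(width, height, importance, base=1, max_extra=8):
        """Yield sub-pixel sample positions; importance(x, y) is in [0, 1]."""
        for y in range(height):
            for x in range(width):
                n = base + round(importance(x, y) * max_extra)
                for _ in range(n):
                    # Jittered samples inside the pixel footprint.
                    yield (x + random.random(), y + random.random())

    def center_importance(x, y, w=64):
        """Toy importance map that boosts sampling near the image centre."""
        dx, dy = x - w / 2, y - w / 2
        return max(0.0, 1.0 - (dx * dx + dy * dy) / (w * w / 4))

    samples = list(sample_locations(64, 64, center_importance))
    print(len(samples), "rays for a 64x64 image")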