2 research outputs found

    Web-based Multi-layered Exploration of Annotated Image-based Shape and Material Models

    We introduce a novel, versatile approach for letting users explore detailed image-based shape and material models integrated with structured, spatially associated descriptive information. We represent the objects of interest as a series of registered layers of image-based shape and material information. These layers are represented at multiple scales, can be produced by a variety of pipelines, and include both RTI representations and spatially-varying normal and BRDF fields, possibly obtained by fusing multi-spectral data. An overlay image pyramid associates visual annotations with the various scales. The overlay pyramid of each layer can be easily authored at data-preparation time using widely available image-editing tools. At run time, an annotated multi-layered dataset is made available to clients by a standard web server. Users can explore these datasets on a variety of devices, from mobile phones to large-scale displays in museum installations, using JavaScript/WebGL2 clients capable of performing layer selection, interactive relighting, enhanced visualization, annotation display, and focus-and-context multi-layer exploration using a lens metaphor. The capabilities of our approach are demonstrated on a variety of cultural heritage use cases involving different kinds of annotated surface and material models.
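
    The abstract includes no code; as a rough, hypothetical illustration of the kind of per-pixel computation such a relighting client performs, the sketch below evaluates a second-order polynomial texture map (PTM), one common RTI representation, for a given light direction. The function names, coefficient values, and albedo handling are invented for the example.

        // Minimal sketch (not the authors' code): relighting one pixel of an
        // LRGB PTM, a common RTI representation. A PTM stores six luminance
        // coefficients a0..a5 per pixel; luminance under a light direction
        // (lu, lv), the projection of the unit light vector onto the image
        // plane, is a biquadratic polynomial in lu and lv.
        function ptmLuminance(coeffs, lu, lv) {
          const [a0, a1, a2, a3, a4, a5] = coeffs;
          return a0 * lu * lu + a1 * lv * lv + a2 * lu * lv + a3 * lu + a4 * lv + a5;
        }

        // Relight an RGB pixel by modulating its stored albedo with the
        // PTM luminance, clamped to a displayable range.
        function relightPixel(albedo, coeffs, lightDir) {
          const [lu, lv] = lightDir; // unit light vector projected on the image plane
          const L = Math.max(0, ptmLuminance(coeffs, lu, lv));
          return albedo.map(c => Math.min(255, Math.round(c * L)));
        }

        // Example: one pixel lit by a raking light from the upper left.
        console.log(relightPixel([180, 150, 120], [0.1, 0.1, 0.2, 0.3, 0.3, 0.6], [-0.5, 0.5]));

    In the system described above, this evaluation would presumably run per fragment in a WebGL2 shader, with the coefficient planes stored as textures in the layer pyramid rather than as JavaScript arrays.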

    Scalable Exploration of Complex Objects and Environments Beyond Plain Visual Replication

    Digital multimedia content and presentation means are rapidly growing in sophistication and are now capable of describing detailed representations of the physical world. 3D exploration experiences allow people to appreciate, understand, and interact with intrinsically virtual objects. Communicating information about objects requires the ability to explore them from different angles, as well as to mix highly photorealistic or illustrative presentations of the objects themselves with additional data, typically represented in the form of annotations, that provide further insight. Effectively providing these capabilities requires solving important problems in visualization and user interaction. In this thesis, I studied these problems in the cultural heritage computing domain, focusing on the very common and important special case of mostly planar but visually, geometrically, and semantically rich objects. These include generally flat objects with a standard frontal viewing direction (e.g., paintings, inscriptions, bas-reliefs), as well as visualizations of fully 3D objects from a particular point of view (e.g., canonical views of buildings or statues). Selecting a precise application domain and a specific presentation mode allowed me to concentrate on the well-defined use case of exploring annotated relightable stratigraphic models (in particular, for local and remote museum presentation). My main results and contributions to the state of the art are a novel technique for interactively controlling visualization lenses while automatically maintaining good focus-and-context parameters, a novel approach for avoiding clutter in an annotated model and for guiding users towards interesting areas, and a method for structuring audio-visual object annotations into a graph and using that graph to improve guidance and support storytelling and automated tours. We demonstrated the effectiveness and potential of our techniques in interactive exploration sessions on various screen sizes and types, ranging from desktop devices to large-screen displays in a walk-up-and-use museum installation.
    Keywords: Computer Graphics, Human-Computer Interaction, Interactive Lenses, Focus-and-Context, Annotated Models, Cultural Heritage Computing
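
    As a rough illustration of the annotation-graph contribution mentioned above, the sketch below (in JavaScript, the language of the web clients described in the first output) structures annotations as nodes of a directed graph and derives an automated tour by a simple traversal. The node fields, edge semantics, and traversal strategy are assumptions made for the example, not details taken from the thesis.

        // Minimal sketch (hypothetical structure, not the thesis code):
        // audio-visual annotations as nodes of a directed graph, with edges
        // expressing a suggested narrative order between annotations.
        const annotations = {
          overview:   { title: "The inscription", audio: "overview.ogg",   next: ["dedication", "damage"] },
          dedication: { title: "Dedication line", audio: "dedication.ogg", next: ["carving"] },
          damage:     { title: "Damaged corner",  audio: "damage.ogg",     next: ["carving"] },
          carving:    { title: "Tool marks",      audio: "carving.ogg",    next: [] },
        };

        // An automated tour as a depth-first walk from a starting annotation,
        // visiting each node at most once.
        function buildTour(graph, startId) {
          const tour = [];
          const seen = new Set();
          (function visit(id) {
            if (seen.has(id) || !(id in graph)) return;
            seen.add(id);
            tour.push(id);
            for (const nextId of graph[id].next) visit(nextId);
          })(startId);
          return tour;
        }

        console.log(buildTour(annotations, "overview"));
        // -> [ 'overview', 'dedication', 'carving', 'damage' ]

    The same graph can also drive guidance in a walk-up-and-use setting, for example by highlighting the annotations reachable from the one currently in focus.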