
    Design and prototyping of an interactive virtual environment to foster citizen participation and creativity in urban design

    Public participation encounters great challenges in the domain of urban design, concerning decision making and citizens' appropriation of a future place. Many tools and methods have been proposed to ease the participation process. In this paper we target artefacts used in face-to-face workshops in which citizens are asked to make design proposals for a public space. We claim that the current state of the art can be improved (i) by better articulating digital artefacts with participatory processes and (ii) by providing interfaces that enhance citizens' spatial awareness and comprehension as well as collective creativity in urban design projects. We present the design and prototyping of an interactive virtual environment that follows the design-science research guidelines. U_CODE project (H2020 No 688873).

    Effective Multi-resolution Rendering and Texture Compression for Captured Volumetric Trees

    Trees can be realistically rendered in synthetic environments by creating volumetric representations from photographs. Volumetric trees created with previous methods are expensive to render due to the high number of primitives, and have very high texture memory requirements. We present an efficient multi-resolution rendering method and an effective texture compression solution, addressing both shortcomings. Our method uses an octree with appropriate textures at intermediate hierarchy levels and applies an effective pruning strategy. For texture compression, we adapt a vector quantization approach to use a perceptually accurate colour space, and modify the codebook generation of the Generalized Lloyd Algorithm to further improve texture quality. Combined with several hardware accelerations, our approach reduces texture memory requirements by two orders of magnitude; in addition, it is now possible to render tens or even hundreds of captured trees at interactive rates.
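
    A minimal sketch of the general idea of vector-quantized texture compression in a perceptual colour space: texel blocks are converted to CIELAB (the specific colour space used in the paper is an assumption here), a codebook is refined with Lloyd (k-means style) iterations, and each block is replaced by the index of its nearest codeword. The paper's specific modifications to the Generalized Lloyd Algorithm are not reproduced.

```python
# Hypothetical sketch, not the authors' implementation.
import numpy as np

def srgb_to_lab(rgb):
    """Convert an (..., 3) array of sRGB values in [0, 1] to CIELAB (D65 white)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])            # normalise by white point
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lloyd_codebook(vectors, k=256, iters=20, rng=None):
    """Lloyd (k-means) iterations on block vectors in Lab space."""
    rng = np.random.default_rng(rng)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = vectors[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)       # centroid update
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook, d.argmin(axis=1)

def compress_texture(texels_rgb, block=4, k=256):
    """Quantize an (H, W, 3) texture (H, W multiples of block) into indices over
    a k-entry codebook of block x block Lab blocks."""
    h, w, _ = texels_rgb.shape
    lab = srgb_to_lab(texels_rgb)
    blocks = (lab.reshape(h // block, block, w // block, block, 3)
                 .swapaxes(1, 2)
                 .reshape(-1, block * block * 3))
    codebook, assign = lloyd_codebook(blocks, k=k)
    return assign.reshape(h // block, w // block), codebook
```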

    Vectorising Bitmaps into Semi‐Transparent Gradient Layers

    We present an interactive approach for decomposing bitmap drawings and studio photographs into opaque and semi-transparent vector layers. Semi-transparent layers are especially challenging to extract, since they require the inversion of the non-linear compositing equation. We make this problem tractable by exploiting the parametric nature of vector gradients, jointly separating and vectorising semi-transparent regions. Specifically, we constrain the foreground colours to vary according to linear or radial parametric gradients, restricting the number of unknowns and allowing our system to efficiently solve for an editable semi-transparent foreground. We propose a progressive workflow, where the user successively selects a semi-transparent or opaque region in the bitmap, which our algorithm separates into a foreground vector gradient and a background bitmap layer. The user can choose to decompose the background further or vectorise it as an opaque layer. The resulting layered vector representation allows a variety of edits, such as modifying the shape of highlights, adding texture to an object or changing its diffuse colour.
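
    A hypothetical illustration (under simplifying assumptions, not the published method) of why the parametric-gradient constraint helps: if the background colour B is already known inside the selected region and the semi-transparent foreground is a linear gradient in premultiplied RGBA, the compositing equation C = alpha*F + (1 - alpha)*B becomes linear in the gradient coefficients and can be recovered by least squares. The real system also estimates the background jointly and handles radial gradients.

```python
# Hypothetical sketch assuming a known background and a linear gradient.
import numpy as np

def fit_linear_rgba_gradient(composite, background, mask):
    """composite, background: (H, W, 3) floats in [0, 1]; mask: (H, W) bool.
    Models premultiplied foreground P(x, y) and opacity alpha(x, y) as affine
    functions of pixel position and solves  C - B = P - alpha * B  for the
    9 + 3 coefficients by linear least squares."""
    ys, xs = np.nonzero(mask)
    C = composite[ys, xs]                              # (N, 3)
    B = background[ys, xs]                             # (N, 3)
    ones = np.ones_like(xs, dtype=np.float64)
    basis = np.stack([ones, xs, ys], axis=1)           # (N, 3) -> [1, x, y]

    # Unknowns: 3 affine coefficients per RGB channel of P, plus 3 for alpha.
    N = len(xs)
    A = np.zeros((3 * N, 12))
    rhs = np.zeros(3 * N)
    for c in range(3):
        rows = slice(c * N, (c + 1) * N)
        A[rows, 3 * c:3 * c + 3] = basis               # P_c(x, y) coefficients
        A[rows, 9:12] = -basis * B[:, c:c + 1]         # -alpha(x, y) * B_c
        rhs[rows] = C[:, c] - B[:, c]
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    P_coeffs = coeffs[:9].reshape(3, 3)                # per-channel affine premultiplied colour
    alpha_coeffs = coeffs[9:12]                        # affine opacity
    return P_coeffs, alpha_coeffs
```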

    High-Quality Adaptive Soft Shadow Mapping

    The recent soft shadow mapping technique allows real-time rendering of convincing soft shadows on complex and dynamic scenes using a single shadow map. While attractive, this method suffers from shadow overestimation and becomes both expensive and approximate when dealing with large penumbrae. This paper proposes new solutions removing these limitations, hence providing an efficient and practical technique for soft shadow generation. First, we propose a new visibility computation procedure based on the detection of occluder contours, which is more accurate and faster while reducing aliasing. Second, we present a shadow map multi-resolution strategy that keeps the computation complexity almost independent of the light size while maintaining high-quality rendering. Finally, we propose a view-dependent adaptive strategy that automatically reduces the screen resolution in regions of large penumbrae, allowing us to keep very high frame rates in any situation.
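
    For context, a minimal numpy sketch of the classical penumbra-width estimate used in soft shadow mapping (PCSS-style blocker search followed by variable-size filtering). This only illustrates the general principle; the paper's contributions are a more accurate contour-based visibility computation, a multi-resolution shadow map and an adaptive screen-resolution strategy, none of which are reproduced here.

```python
# Illustrative sketch of the standard blocker-search / penumbra-width estimate.
import numpy as np

def soft_shadow_factor(shadow_map, u, v, receiver_depth, light_size, search_px=8):
    """Return a visibility factor in [0, 1] for one shaded point.
    shadow_map: (H, W) depths as seen from the light; (u, v): pixel coordinates
    of the point projected into the shadow map; receiver_depth: its depth from
    the light."""
    h, w = shadow_map.shape
    y0, y1 = max(0, v - search_px), min(h, v + search_px + 1)
    x0, x1 = max(0, u - search_px), min(w, u + search_px + 1)
    window = shadow_map[y0:y1, x0:x1]

    # 1. Blocker search: average depth of samples closer to the light.
    blockers = window[window < receiver_depth]
    if blockers.size == 0:
        return 1.0                                     # fully lit
    d_blocker = blockers.mean()

    # 2. Penumbra width from similar triangles (parallel-planes assumption).
    penumbra = light_size * (receiver_depth - d_blocker) / d_blocker

    # 3. Percentage-closer filtering over a window scaled by the penumbra width.
    r = max(1, int(round(penumbra)))
    y0, y1 = max(0, v - r), min(h, v + r + 1)
    x0, x1 = max(0, u - r), min(w, u + r + 1)
    window = shadow_map[y0:y1, x0:x1]
    return float((window >= receiver_depth).mean())    # fraction not occluded
```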

    Evaluation of direct manipulation using finger tracking for complex tasks in an immersive cube

    A solution for interaction using finger tracking in a cubic immersive virtual reality system (or immersive cube) is presented. Rather than using a traditional wand device, users can manipulate objects with fingers of both hands in a close-to-natural manner for moderately complex, general purpose tasks. Our solution couples finger tracking with a real-time physics engine, combined with a heuristic approach for hand manipulation, which is robust to tracker noise and simulation instabilities. A first study has been performed to evaluate our interface, with tasks involving complex manipulations, such as balancing objects while walking in the cube. The user's finger-tracked manipulation was compared to manipulation with a 6 degree-of-freedom wand (or flystick), as well as with carrying out the same task in the real world. Users were also asked to perform a free task, allowing us to observe their perceived level of presence in the scene. Our results show that our approach provides a feasible interface for immersive cube environments and is perceived by users as being closer to the real experience compared to the wand. However, the wand outperforms direct manipulation in terms of speed and precision. We conclude with a discussion of the results and implications for further research. © 2014 Springer-Verlag London
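
    As a hypothetical illustration (not the heuristic described in the paper), two common ingredients of robust finger-based manipulation are low-pass filtering of noisy tracker samples and a pinch-grasp detector with hysteresis, so that tracker jitter does not cause rapid grasp/release flicker. All thresholds below are illustrative.

```python
# Hypothetical sketch of tracker smoothing and a pinch-grasp heuristic.
import numpy as np

class FingerFilter:
    """Exponential smoothing of a tracked 3D finger-tip position."""
    def __init__(self, alpha=0.4):
        self.alpha = alpha
        self.state = None

    def update(self, sample):
        sample = np.asarray(sample, dtype=float)
        self.state = sample if self.state is None else \
            self.alpha * sample + (1.0 - self.alpha) * self.state
        return self.state

class PinchDetector:
    """Grasp when thumb and index tips come closer than grab_dist (metres);
    release only when they separate beyond release_dist (> grab_dist),
    giving hysteresis against jitter."""
    def __init__(self, grab_dist=0.03, release_dist=0.05):
        self.grab_dist, self.release_dist = grab_dist, release_dist
        self.grasping = False

    def update(self, thumb_tip, index_tip):
        d = float(np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip)))
        if self.grasping:
            self.grasping = d < self.release_dist
        else:
            self.grasping = d < self.grab_dist
        return self.grasping
```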

    Video‐Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs

    Image-Based Rendering allows users to easily capture a scene using a single camera and then navigate freely with realistic results. However, the resulting renderings are completely static, and dynamic effects – such as fire, waterfalls or small waves – cannot be reproduced. We tackle the challenging problem of enabling free-viewpoint navigation including such stationary dynamic effects, while still maintaining the simplicity of casual capture. Using a single camera – instead of previous complex synchronized multi-camera setups – means that we have unsynchronized videos of the dynamic effect from multiple views, making it hard to blend them when synthesizing novel views. We present a solution that allows smooth free-viewpoint video-based rendering (VBR) of such scenes using temporal Laplacian pyramid decomposition of the videos, enabling spatio-temporal blending. For effects such as fire and waterfalls, which are semi-transparent and occupy 3D space, we first estimate their spatial volume. This allows us to create per-video geometries and alpha-matte videos that we can blend using our frequency-dependent method. We also extend Laplacian blending to the temporal dimension to remove additional temporal seams. We show results on scenes containing fire, waterfalls or rippling waves at the seaside, bringing these scenes to life.
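
    For reference, a minimal sketch of classical Laplacian-pyramid blending of two frames with a per-pixel mask; the paper builds on this idea, blending per frequency band and additionally extending the blend along the temporal dimension to hide seams between unsynchronized videos. Pyramid depth and blur widths below are illustrative.

```python
# Illustrative sketch of spatial Laplacian-pyramid blending (not the VBR pipeline).
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(img, levels, sigma=2.0):
    pyr = [img]
    for _ in range(levels - 1):
        img = gaussian_filter(img, sigma=(sigma, sigma, 0))[::2, ::2]
        pyr.append(img)
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        up = np.repeat(np.repeat(gp[i + 1], 2, axis=0), 2, axis=1)
        up = up[:gp[i].shape[0], :gp[i].shape[1]]      # crop to handle odd sizes
        lp.append(gp[i] - up)
    lp.append(gp[-1])
    return lp

def blend(frame_a, frame_b, mask, levels=4):
    """frame_a, frame_b: (H, W, 3) floats in [0, 1]; mask: (H, W) in [0, 1] selecting A."""
    mask3 = np.repeat(mask[..., None], 3, axis=2)
    la = laplacian_pyramid(frame_a, levels)
    lb = laplacian_pyramid(frame_b, levels)
    mp = gaussian_pyramid(mask3, levels)
    out_pyr = [m * a + (1 - m) * b for a, b, m in zip(la, lb, mp)]
    # Collapse the pyramid from coarse to fine.
    out = out_pyr[-1]
    for lap in reversed(out_pyr[:-1]):
        up = np.repeat(np.repeat(out, 2, axis=0), 2, axis=1)
        out = up[:lap.shape[0], :lap.shape[1]] + lap
    return np.clip(out, 0.0, 1.0)
```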