
    Spatial Sound Rendering – A Survey

    Simulating sound propagation and audio rendering can improve the sense of realism and immersion in both complex acoustic environments and dynamic virtual scenes. In studies of sound auralization, the focus has traditionally been on room acoustics modeling, but most of the same methods also apply to the construction of virtual environments such as those developed for computer gaming, cognitive research, and simulated training scenarios. This paper reviews state-of-the-art techniques based on acoustic principles that apply not only to real rooms but also to 3D virtual environments. The paper also highlights the need to expand the field of immersive sound to web-based browsing environments, because, despite the interest and many benefits, few developments seem to have taken place in this context. Moreover, the paper includes a list of the most effective algorithms used for modelling spatial sound propagation and reports their advantages and disadvantages. Finally, the paper emphasizes the evaluation of the proposed works.
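Two building blocks that recur across the spatial-sound techniques the survey covers are distance attenuation and amplitude panning. The sketch below is illustrative only (function names and the constant-power panning law are our choice, not taken from the survey):

```python
import math

def attenuate(amplitude, distance, ref_distance=1.0):
    """Inverse-distance law: amplitude halves with each doubling of distance
    beyond the reference distance."""
    return amplitude * ref_distance / max(distance, ref_distance)

def stereo_pan(sample, azimuth_rad):
    """Constant-power stereo panning: azimuth 0 is straight ahead,
    +pi/2 is hard right, -pi/2 is hard left."""
    # Map azimuth in [-pi/2, pi/2] to a pan angle in [0, pi/2].
    theta = (azimuth_rad + math.pi / 2) / 2
    left = sample * math.cos(theta)
    right = sample * math.sin(theta)
    return left, right
```

A browser implementation would typically delegate both steps to the Web Audio API's `PannerNode` rather than computing them by hand.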

    Investigating user preferences in utilizing a 2D paper or 3D sketch based interface for creating 3D virtual models

    Computer modelling of 2D drawings is becoming increasingly popular in modern design, as witnessed by the shift of modern computer modelling applications from software requiring specialised training to software targeted at the general consumer market. Despite this, traditional sketching is still prevalent in design, particularly in the early design stages. Thus, research trends in computer-aided modelling focus on the development of sketch-based interfaces that are as natural as possible. In this report, we present a hybrid sketch-based interface which allows the user to draw sketches using offline as well as online sketching modalities, displaying the 3D models in an immersive setup, thus linking the object interaction possible through immersive modelling to the flexibility allowed by paper-based sketching. The interface was evaluated in a user study which shows that such a hybrid system can be considered as having pragmatic and hedonic value.

    Grimage: markerless 3D interactions

    Grimage glues together multi-camera 3D modeling, physical simulation and parallel execution for a new immersive experience. Put your hands or any object into the interaction space: it is instantaneously modeled in 3D and injected into a virtual world populated with solid and soft objects. Push them, catch them and squeeze them.

    Shape Animation with Combined Captured and Simulated Dynamics

    We present a novel volumetric animation generation framework to create new types of animations from raw 3D surface or point cloud sequences of captured real performances. The framework takes as input time-incoherent 3D observations of a moving shape, and is thus particularly suitable for the output of performance capture platforms. In our system, a suitable virtual representation of the actor is built from real captures, allowing seamless combination and simulation with virtual external forces and objects, in which the original captured actor can be reshaped, disassembled or reassembled under user-specified virtual physics. Instead of the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits Centroidal Voronoi tessellation decompositions as a unified volumetric representation of the real captured actor, which we show can be used seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumptions and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, or collisions.
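A Centroidal Voronoi tessellation is one in which each site coincides with the centroid of its own Voronoi cell; it is classically approximated by Lloyd's algorithm. The 2D sketch below illustrates the idea on a discrete point set (a simplification of the volumetric 3D decomposition the paper uses; names and the brute-force nearest-site search are ours):

```python
def lloyd_cvt(points, sites, iterations=20):
    """Approximate a Centroidal Voronoi tessellation of a 2D point set:
    assign each point to its nearest site, move each site to the centroid
    of its cell, and repeat until the sites settle."""
    for _ in range(iterations):
        cells = {i: [] for i in range(len(sites))}
        for p in points:
            nearest = min(range(len(sites)),
                          key=lambda i: (p[0] - sites[i][0]) ** 2
                                      + (p[1] - sites[i][1]) ** 2)
            cells[nearest].append(p)
        for i, cell in cells.items():
            if cell:  # leave empty cells' sites where they are
                sites[i] = (sum(p[0] for p in cell) / len(cell),
                            sum(p[1] for p in cell) / len(cell))
    return sites
```

In a volumetric setting the same fixed-point iteration runs over cells of a 3D shape, yielding the roughly uniform decomposition that makes the representation convenient for physics simulation.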

    Semantics of immersive web through its architectural structure and graphic primitives

    Current practices and tools for computer-aided three-dimensional design do not allow the semantic description of the objects constructed; in some cases they rely on ad hoc conventions such as layer handling or the labelling of each element. The lack of a standard for describing these elements is a major drawback for advanced uses of three-dimensional environments, such as the automation of search and construction processes that require semantic knowledge of their elements. This project proposes developing a semantic composition from the hierarchy of the three-dimensional graphic primitives used to construct three-dimensional objects, taking into account the geometric composition architecture of the ISO/IEC 19775-1 standard. The semantic composition is developed using the METHONTOLOGY methodology proposed by the Universidad Politécnica de Madrid, because it allows the construction of ontologies for specific domains, delimiting the domain by defining classes and subclasses, relationships, and the generation of instances, within a framework for resource description based on the Web Ontology Language.
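The class/subclass/instance structure such an ontology rests on can be sketched with a toy triple store. The node names below are illustrative stand-ins for ISO/IEC 19775-1 geometry nodes, and the three predicates are our simplification of what an OWL ontology would express:

```python
# Minimal triple store: (subject, predicate, object) facts.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

# Class hierarchy for some illustrative graphic primitives.
add("Geometry3D", "subClassOf", "X3DNode")
add("Box", "subClassOf", "Geometry3D")
add("Sphere", "subClassOf", "Geometry3D")

# An instance with one property.
add("box_01", "instanceOf", "Box")
add("box_01", "hasSize", "2 2 2")

def is_a(individual, cls):
    """Instance check that follows subClassOf links transitively,
    the inference a semantic search over a 3D scene would need."""
    frontier = {o for (s, p, o) in triples
                if s == individual and p == "instanceOf"}
    while frontier:
        c = frontier.pop()
        if c == cls:
            return True
        frontier |= {o for (s, p, o) in triples
                     if s == c and p == "subClassOf"}
    return False
```

A real implementation would encode the same hierarchy in OWL and query it with a reasoner rather than hand-rolled traversal.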

    A flexible and versatile studio for synchronized multi-view video recording

    In recent years, the convergence of Computer Vision and Computer Graphics has put forth new research areas that work on scene reconstruction from, and analysis of, multi-view video footage. In free-viewpoint video, for example, new views of a scene are generated in real time from an arbitrary viewpoint using a set of real multi-view input video streams. The analysis of real-world scenes from multi-view video to extract motion information or reflection models is another field of research that greatly benefits from high-quality input data. Building a recording setup for multi-view video involves a great effort on the hardware as well as the software side: the amount of image data to be processed is huge, a decent lighting and camera setup is essential for a naturalistic scene appearance and robust background subtraction, and the computing infrastructure has to enable real-time processing of the recorded material. This paper describes a recording setup for multi-view video acquisition that enables the synchronized recording of dynamic scenes from multiple camera positions under controlled conditions. The requirements for the studio and their implementation in its separate components are described in detail. The efficiency and flexibility of the studio are demonstrated by the results we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical motion capture, and a model-based free-viewpoint video system for human actors.
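When hardware genlock is unavailable, synchronized recording is often approximated in software by grouping frames whose timestamps agree within a tolerance. A minimal sketch of that grouping step (our own names and data layout, not the studio's actual software):

```python
def synchronize(streams, tolerance):
    """Group frames from multiple camera streams into synchronized sets.
    Each stream is a list of (timestamp, frame_id) pairs sorted by time.
    A set is emitted only when every camera has a frame within `tolerance`
    of the reference camera's timestamp; unmatched frames are dropped."""
    groups = []
    for t_ref, f_ref in streams[0]:          # first camera is the reference
        group = [f_ref]
        for stream in streams[1:]:
            match = min(stream, key=lambda tf: abs(tf[0] - t_ref),
                        default=None)
            if match is None or abs(match[0] - t_ref) > tolerance:
                break                        # this camera missed the instant
            group.append(match[1])
        else:
            groups.append(group)
    return groups
```

For 25 fps footage a tolerance well under half the 40 ms frame period keeps groupings unambiguous.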

    Real-time dynamics for interactive environments

    This thesis examines the design and implementation of an extensible object-oriented physics engine framework. The design and implementation consolidate concepts from the wide literature in the field and clearly document the procedures and methods. Two primary dynamic behaviors are explored: rigid body dynamics and articulated dynamics. A generalized collision response model is built for rigid bodies and articulated structures which can be adapted to other types of behaviors. The framework is designed around the use of interfaces for modularity and easy extensibility. It can operate both as a standalone physics engine and as a supplement to a distributed immersive rendering environment. We present our results as a number of scenarios that demonstrate the viability of the framework. These scenarios include rigid bodies and articulated structures in free fall, collision with dynamic and static bodies, resting contact, and friction. We show that we can effectively combine different dynamics into one cohesive structure. We also explain how we can efficiently extend current behaviors to develop new ones, such as altering rigid bodies to produce different collision responses or flocking behavior. Additionally, we demonstrate these scenarios in both the standalone and the immersive environment.
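The free-fall and collision-response scenarios above rest on a standard integration loop. The sketch below shows a common formulation, semi-implicit Euler with a restitution-based impulse against a ground plane, reduced to one dimension for clarity; the class and parameter names are ours, not the thesis framework's:

```python
GRAVITY = -9.81  # m/s^2

class RigidBody:
    """Point-mass stand-in for a rigid body: enough state for free fall
    and a restitution-based collision response with the plane y = 0."""

    def __init__(self, y, vy, restitution=0.5):
        self.y = y
        self.vy = vy
        self.restitution = restitution

    def step(self, dt):
        # Semi-implicit Euler: update velocity first, then position with
        # the new velocity. This stays stable where explicit Euler drifts.
        self.vy += GRAVITY * dt
        self.y += self.vy * dt
        # Collision response: reflect the approaching normal velocity,
        # scaled by the coefficient of restitution (0 = dead, 1 = elastic).
        if self.y < 0.0 and self.vy < 0.0:
            self.y = 0.0
            self.vy = -self.restitution * self.vy
```

Articulated structures extend this pattern with joint constraints solved alongside the integration step, which is where an interface-based design pays off: each behavior supplies its own constraint and response code behind a common stepping interface.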