
    A conceptual framework for multi-modal interactive virtual workspaces

    Construction projects involve a large number of both direct stakeholders (clients, professional teams, contractors, etc.) and indirect stakeholders (local authorities, residents, workers, etc.). Current methods of communicating building design information can lead to several types of difficulties (e.g. incomplete understanding of the planned construction, functional inefficiencies, inaccurate initial work, or clashes between components). Integrated software solutions based on VR technologies can bring significant value improvement and cost reduction to the construction industry. The aim of this paper is to present research being carried out in the frame of the DIVERCITY project (Distributed Virtual Workspace for Enhancing Communication within the Construction Industry - IST project n°13365), funded under the European IST programme (Information Society Technologies). DIVERCITY's goal is to develop a Virtual Workspace that addresses three key building construction phases: (1) Client briefing (with detailed interaction between clients and architects); (2) Design review (which requires detailed input from multidisciplinary teams - architects, engineers, facility managers, etc.); (3) Construction (aiming to fabricate or refurbish the building). Using a distributed architecture, the DIVERCITY system aims to support and enhance concurrent engineering practices for these three phases, allowing teams based in different geographic locations to collaboratively design, test and validate shared virtual projects. The global DIVERCITY project will be presented in terms of objectives, and the software architecture will be detailed.

    Performance Enhancement of Multicore Architecture

    Multicore processors integrate several cores on a single chip. The fixed architecture of multicore platforms often fails to accommodate the inherently diverse requirements of different applications. The permanent need to enhance the performance of multicore architectures motivates the development of a dynamic architecture. To address this issue, this paper presents new algorithms for thread selection in the fetch stage. Moreover, this paper presents three new fetch stage policies, EACH_LOOP_FETCH, INC-FETCH, and WZ-FETCH, based on the Ordinary Least Squares (OLS) regression statistical method. These new fetch policies differ in thread selection time, which is represented by instruction count and window size. Furthermore, the multicore simulation tool Multi2Sim is adapted to cope with dynamic multicore processor design by adding a dynamic feature to the thread selection policy in the fetch stage. The SPLASH2 parallel scientific workloads have been used to validate the proposed adaptation of Multi2Sim. Intensive simulation experiments have been conducted, and the obtained results show that remarkable performance enhancements have been achieved in terms of execution time and number of instructions per second; the proposed approach also produces fewer broadcast operations than the typical algorithm.
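
    The abstract only names the OLS-based policies; as a rough illustration of the idea (the features, training data, and selection rule below are all assumptions for the sketch, not taken from the paper), an ordinary least squares model fitted to per-thread statistics could be used to rank candidate threads at fetch time:

```python
import numpy as np

# Hypothetical per-thread samples observed at fetch time:
# column 0 = instructions fetched so far, column 1 = window size.
X = np.array([
    [120.0, 16.0],
    [300.0, 32.0],
    [450.0, 32.0],
    [600.0, 64.0],
])
# Hypothetical observed throughput (IPC) for each sample.
y = np.array([0.8, 1.1, 1.3, 1.6])

# Ordinary least squares: minimize ||X_aug @ beta - y||^2.
X_aug = np.column_stack([np.ones(len(X)), X])  # add intercept column
beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

def predicted_ipc(instr_count, window_size):
    """Predict throughput for a candidate thread from the fitted model."""
    return beta[0] + beta[1] * instr_count + beta[2] * window_size

# Fetch-policy sketch: pick the thread whose state predicts the best IPC.
threads = {0: (200.0, 16.0), 1: (500.0, 64.0)}
best = max(threads, key=lambda t: predicted_ipc(*threads[t]))
```

    In a real fetch policy the model would be trained on profiled runs and the selection performed every cycle; here it simply prefers the thread with the higher predicted throughput.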

    Overlapping Multi-Processing and Graphics Hardware Acceleration: Performance Evaluation

    Recently, multi-processing has been shown to deliver good performance in rendering. However, in some applications, processors spend too much time executing tasks that could be done more efficiently through intensive use of new graphics hardware. We present in this paper a novel solution combining multi-processing and advanced graphics hardware, where graphics pipelines are used both for classical visualization tasks and to advantageously perform geometric calculations, while the remaining computations are handled by the multi-processors. The experiment is based on an implementation of a new parallel wavelet radiosity algorithm. The application is executed on the SGI Origin2000 connected to the SGI InfiniteReality2 rendering pipeline, and a performance evaluation is presented. Keeping in mind that the approach can benefit all available workstations and super-computers, from small scale (2 processors and 1 graphics pipeline) to large scale (p processors and n graphics pipelines), we highlight some important bottlenecks that impede performance. However, our results show that this approach could be a promising avenue for scientific and engineering simulation and visualization applications that need intensive geometric calculations.

    Computational tools for low energy building design : capabilities and requirements

    Integrated building performance simulation (IBPS) is an established technology, with the ability to model the heat, mass, light, electricity and control signal flows within complex building/plant systems. The technology is used in practice to support the design of low energy solutions and, in Europe at least, such use is set to expand with the advent of the Energy Performance of Buildings Directive, which mandates a modelling approach to legislation compliance. This paper summarises IBPS capabilities and identifies developments that aim to further improve its integrity vis-à-vis reality.

    Wavelets in computer graphics


    Hardware Acceleration of Progressive Refinement Radiosity using Nvidia RTX

    A vital component of photo-realistic image synthesis is the simulation of indirect diffuse reflections, which still remains a quintessential hurdle that modern rendering engines struggle to overcome. Real-time applications typically pre-generate diffuse lighting information offline using radiosity to avoid performing costly computations at run-time. In this thesis we present a variant of progressive refinement radiosity that utilizes Nvidia's novel RTX technology to accelerate the process of form-factor computation without compromising on visual fidelity. Through a modern implementation built on DirectX 12, we demonstrate that offloading radiosity's visibility component to RT cores significantly improves the lightmap generation process and potentially propels it into the domain of real-time.
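
    The thesis accelerates the visibility term of classic progressive refinement radiosity; the underlying shooting loop can be sketched as follows (a CPU-side toy with made-up patch data and a precomputed form-factor matrix, not the RTX implementation):

```python
import numpy as np

# Toy scene: 3 patches with areas, emission, diffuse reflectance, and a
# hypothetical precomputed form-factor matrix F[i][j].
area = np.array([1.0, 1.0, 2.0])
emission = np.array([1.0, 0.0, 0.0])   # patch 0 is the light source
rho = np.array([0.0, 0.5, 0.7])        # diffuse reflectance
F = np.array([[0.0, 0.3, 0.4],
              [0.3, 0.0, 0.5],
              [0.2, 0.25, 0.0]])       # row i: fraction of energy i sends to j

B = emission.copy()        # accumulated radiosity
unshot = emission.copy()   # radiosity not yet distributed

# Progressive refinement: repeatedly shoot from the patch with the
# most unshot energy (unshot radiosity weighted by area).
for _ in range(50):
    i = int(np.argmax(unshot * area))
    if unshot[i] * area[i] < 1e-6:
        break
    for j in range(len(B)):
        if j == i:
            continue
        # Radiosity gathered at j, via reciprocity F_ji = F_ij * A_i / A_j.
        dB = rho[j] * unshot[i] * F[i, j] * area[i] / area[j]
        B[j] += dB
        unshot[j] += dB
    unshot[i] = 0.0
```

    Form-factor computation (the F matrix) dominates the cost in practice; it is exactly the visibility part of that step that the thesis offloads to RT cores.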

    Modelling polarized light for computer graphics

    The quality of visual realism in computer generated images is largely determined by the accuracy of the reflection model. Advances in global illumination techniques have removed, to a large extent, some of the limitations on the physical correctness achievable by reflection models. While models currently used by researchers are physically based, most approaches have ignored the polarization of light. The few previous efforts addressing the polarization of light were hampered by inherently unphysical light transport algorithms. This paper, besides taking polarization of light into account in the reflection computation, also provides a basis for modelling polarization as an inherent attribute of light, using the Stokes parameters. A reflection model is developed within this framework and the implementation within a global illumination algorithm called Photon is presented.
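
    Stokes parameters describe light as a four-vector [I, Q, U, V] (intensity, two linear polarization components, circular polarization), and a polarizing interaction applies a 4x4 Mueller matrix to it. A minimal numeric sketch, using the standard textbook matrix for an ideal horizontal polarizer rather than the paper's reflection model:

```python
import numpy as np

# Stokes vector [I, Q, U, V] of fully unpolarized light of unit intensity.
unpolarized = np.array([1.0, 0.0, 0.0, 0.0])

# Mueller matrix of an ideal horizontal linear polarizer.
polarizer_h = 0.5 * np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
], dtype=float)

# A polarizing event transforms the Stokes vector: S' = M @ S.
out = polarizer_h @ unpolarized
# Half the intensity passes, and the result is fully H-polarized.

def degree_of_polarization(s):
    """DOP = sqrt(Q^2 + U^2 + V^2) / I."""
    return np.sqrt(s[1]**2 + s[2]**2 + s[3]**2) / s[0]
```

    Chaining Mueller matrices along a light path is what makes the formalism compatible with a global illumination algorithm: each bounce is one more matrix applied to the Stokes vector.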

    New acquisition techniques for real objects and light sources in computer graphics

    Accurate representations of objects and light sources in a scene model are a crucial prerequisite for realistic image synthesis using computer graphics techniques. This thesis presents techniques for the efficient acquisition of real world objects and real world light sources, as well as an assessment of the quality of the acquired models. Making use of color management techniques, we set up an appearance reproduction pipeline that ensures best-possible reproduction of local light reflection with the available input and output devices. We introduce a hierarchical model for the subsurface light transport in translucent objects, derive an acquisition methodology, and acquire models of several translucent objects that can be rendered interactively. Since geometry models of real world objects are often acquired using 3D range scanners, we also present a method based on the concept of modulation transfer functions to evaluate their accuracy. In order to illuminate a scene with realistic light sources, we propose a method to acquire a model of the near-field emission pattern of a light source with optical prefiltering. We apply this method to several light sources with different emission characteristics and demonstrate the integration of the acquired models into both global illumination and hardware-accelerated rendering systems.

    Towards Predictive Rendering in Virtual Reality

    The strive for generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generation of predictive imagery is still an unsolved problem due to manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational efforts existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research. This thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials.
    The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF covered objects in real-time. Further approaches proposed in this thesis target inclusion of real-time global illumination effects or more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
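
    The thesis's own compression scheme is not detailed in this abstract; a common family of BTF compression approaches factorizes the texel-by-direction data matrix, e.g. via truncated SVD. A toy sketch on synthetic low-rank data (the sizes and the construction are illustrative assumptions):

```python
import numpy as np

# Toy stand-in for BTF data: rows = texels, columns = sampled
# (view, light) direction pairs. Real BTFs are far larger.
rng = np.random.default_rng(0)
texel_basis = rng.standard_normal((256, 8))
abrdf_basis = rng.standard_normal((8, 81))
btf = texel_basis @ abrdf_basis  # rank-8 by construction

# Truncated SVD keeps only the k strongest components, storing
# 256*k + k + k*81 numbers instead of the dense 256*81.
U, s, Vt = np.linalg.svd(btf, full_matrices=False)
k = 8
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]
reconstructed = (U_k * s_k) @ Vt_k
```

    At render time only the per-texel weights and the shared basis need to be fetched, which is what makes such factorizations attractive for real-time BTF shading.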

    Fast and interactive ray-based rendering

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Despite their age, ray-based rendering methods are still a very active field of research with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head Mounted Displays and Hash-based Hierarchical Caching and Layered Filtering. Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process. When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user’s gaze, measured with an eye tracker integrated into the HMD.
    In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights. For previewing global illumination of a scene interactively by allowing for free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach that allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid method between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hitpoints.
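
    The gaze-contingent sample density is described only qualitatively above; one plausible shape (purely illustrative, with made-up parameters, not the thesis's actual function) is full sampling inside the fovea with a Gaussian falloff toward a peripheral floor:

```python
import numpy as np

def samples_per_pixel(dist_px, fovea_px=60.0, sigma_px=120.0,
                      max_spp=8.0, min_spp=0.25):
    """Sample budget for a pixel at distance dist_px from the gaze point.

    Full rate inside the foveal radius, then a smooth Gaussian decay,
    clamped to a floor rate so the periphery is never left unsampled.
    All parameters are hypothetical placeholders.
    """
    d = np.maximum(0.0, dist_px - fovea_px)
    spp = max_spp * np.exp(-0.5 * (d / sigma_px) ** 2)
    return np.maximum(spp, min_spp)
```

    Sparse peripheral samples produced under such a budget are exactly what the reprojection and temporal accumulation pipeline described above must fill in between frames.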