5 research outputs found

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. The first is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most significant burdens for any rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
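
    The traditional ray marching algorithm that this thesis takes as its starting point steps along each view ray through the medium, accumulating in-scattered light while attenuating it by the transmittance of the medium traversed so far. The Python sketch below illustrates only that baseline, not the optimized, interactive variants the thesis proposes; the density and light_radiance callbacks, the scattering coefficients and the fixed step size are illustrative assumptions rather than values from the work.

        # Minimal sketch of a baseline single-scattering ray marcher through a
        # participating medium. All names and parameter values are illustrative.
        import math

        def ray_march(origin, direction, density, light_radiance,
                      sigma_a=0.5, sigma_s=1.0, step=0.1, t_max=10.0):
            """Accumulate in-scattered radiance along one ray through the medium."""
            transmittance = 1.0
            radiance = 0.0
            t = 0.5 * step  # midpoint start; production renderers usually jitter this
            while t < t_max and transmittance > 1e-4:
                p = tuple(o + t * d for o, d in zip(origin, direction))
                rho = density(p)                      # local medium density
                sigma_t = (sigma_a + sigma_s) * rho   # extinction coefficient
                # Single-scattering contribution at this sample, attenuated by the
                # transmittance accumulated so far; light_radiance(p) is assumed to
                # already include any attenuation toward the light source.
                radiance += transmittance * sigma_s * rho * light_radiance(p) * step
                # Beer-Lambert attenuation across this step.
                transmittance *= math.exp(-sigma_t * step)
                t += step
            return radiance, transmittance

        # Example use with a spherical smoke puff and a constant light (illustrative):
        # rad, tr = ray_march((0.0, 0.0, -3.0), (0.0, 0.0, 1.0),
        #                     density=lambda p: max(0.0, 1.0 - math.hypot(*p)),
        #                     light_radiance=lambda p: 1.0)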

    Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality

    3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be able to be controlled by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its corresponding texture map. The whole modeling process takes only a few seconds, and the resulting human model looks like the real person. The geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
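
    For readers unfamiliar with ShortStraw, its core idea is that a chord (a "straw") spanning a small window of evenly resampled points becomes short where the polyline turns sharply, so corners show up as local minima of straw length. The Python sketch below shows only that core, not the IStraw refinements (curve and end-point handling) or the corner-based contour segmentation built on top of it; the window size and threshold factor are the commonly cited ShortStraw defaults rather than values taken from this dissertation.

        # Simplified sketch of the ShortStraw corner finder that IStraw extends.
        # Assumes the contour is an evenly resampled list of (x, y) points.
        import math
        import statistics

        def distance(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def shortstraw_corners(points, window=3, threshold_factor=0.95):
            """Return indices of corner candidates in an evenly resampled 2D polyline."""
            n = len(points)
            if n <= 2 * window:
                return []
            # A "straw" is the chord length across a fixed window of points;
            # it shortens where the polyline bends sharply.
            straws = {i: distance(points[i - window], points[i + window])
                      for i in range(window, n - window)}
            threshold = statistics.median(straws.values()) * threshold_factor
            corners = []
            for i in range(window, n - window):
                if straws[i] < threshold:
                    # Keep only local minima of the straw length below the threshold.
                    left = straws.get(i - 1, float("inf"))
                    right = straws.get(i + 1, float("inf"))
                    if straws[i] <= left and straws[i] <= right:
                        corners.append(i)
            return corners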

    ACMS 18th Biennial Conference Proceedings

    Association of Christians in the Mathematical Sciences 18th Biennial Conference Proceedings, June 1-4, 2011, Westmont College, Santa Barbara, CA

    Interactive visualization of computational fluid dynamics data.

    This thesis describes a literature study and practical research in the area of flow visualization, with special emphasis on the interactive visualization of Computational Fluid Dynamics (CFD) datasets. Given the four main categories of flow visualization methodology (direct, geometric, texture-based and feature-based), the research focus of our thesis is on the direct, geometric and feature-based techniques, with feature-based flow visualization receiving particular emphasis. After presenting an overview of the state of the art in flow visualization in higher spatial dimensions (2.5D, 3D and 4D), we propose a fast, simple and interactive glyph placement algorithm for investigating and visualizing boundary flow data based on unstructured, adaptive-resolution boundary meshes from CFD datasets. We then propose a novel, automatic mesh-driven vector field clustering algorithm which couples the properties of the vector field and the resolution of the underlying mesh into a unified distance measure, producing high-level, intuitive and suggestive visualizations of vector fields defined on large, unstructured, adaptive-resolution boundary CFD meshes. Next, we present a novel application with multiple coordinated views for interactive, information-assisted visualization of multidimensional marine turbine CFD data. Information visualization techniques are combined with user interaction to exploit our cognitive ability for intuitive extraction of flow features from CFD datasets. Later, we discuss the design and implementation of each visualization technique used in our interactive flow visualization framework, such as glyphs, streamlines and parallel coordinate plots. In this thesis, we focus on the interactive visualization of real-world CFD datasets and present a number of new methods and algorithms to address several related challenges in flow visualization. We strongly believe that user interaction is a crucial part of effective data analysis and visualization of large and complex datasets such as the CFD datasets used in this thesis. To demonstrate the use of the proposed techniques, reviews from CFD domain experts are also provided.
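
    To make the idea of a unified, mesh-driven distance measure concrete, the Python sketch below combines differences in vector direction, vector magnitude and local cell size into a single score and uses it in a simple greedy clustering loop. This is an illustrative reconstruction of the general idea only: the weights, the normalizations and the greedy assignment strategy are assumptions for illustration, not the formulation developed in the thesis.

        # Illustrative unified distance for mesh-driven vector field clustering:
        # couples vector direction, vector magnitude and mesh resolution (cell size).
        import math

        def unified_distance(v_a, v_b, size_a, size_b,
                             w_dir=1.0, w_mag=0.5, w_res=0.5):
            """Combine direction, magnitude and mesh-resolution differences of two cells."""
            mag_a = math.hypot(*v_a)
            mag_b = math.hypot(*v_b)
            if mag_a == 0.0 or mag_b == 0.0:
                dir_term = 1.0
            else:
                cos_angle = (v_a[0] * v_b[0] + v_a[1] * v_b[1]) / (mag_a * mag_b)
                dir_term = 0.5 * (1.0 - max(-1.0, min(1.0, cos_angle)))  # 0 aligned, 1 opposite
            mag_term = abs(mag_a - mag_b) / (mag_a + mag_b + 1e-12)
            res_term = abs(size_a - size_b) / (size_a + size_b + 1e-12)
            return w_dir * dir_term + w_mag * mag_term + w_res * res_term

        def greedy_cluster(cells, threshold=0.2):
            """cells: list of (vector, cell_size) pairs. Assign each cell to the first
            cluster representative within the threshold, otherwise start a new cluster."""
            reps, labels = [], []
            for v, size in cells:
                for idx, (rv, rsize) in enumerate(reps):
                    if unified_distance(v, rv, size, rsize) < threshold:
                        labels.append(idx)
                        break
                else:
                    reps.append((v, size))
                    labels.append(len(reps) - 1)
            return labels

    Coupling cell size into the measure is what makes the clustering mesh-driven: on adaptive-resolution boundary meshes it keeps finely resolved regions from being merged with coarse ones even when their vectors happen to agree.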

    Spatiotemporal enabled Content-based Image Retrieval
