842 research outputs found

    Optimization techniques for computationally expensive rendering algorithms

    Get PDF
    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, covering both solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance, but they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require high amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only a limited impact on the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. The first targets the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the heaviest burdens on any rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
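    As a hedged illustration of the starting point this abstract names, the sketch below marches a single ray through a participating medium, accumulating transmittance and single-scattered radiance. The density_at and light_at functions are invented stand-ins for the thesis's smoke volumes and image-based lighting, not its actual code.

```python
# Minimal ray-marching sketch for one ray through participating media.
# Illustrative only: density_at() and light_at() are hypothetical stand-ins.
import math

def density_at(p):
    """Hypothetical medium density lookup (e.g., a sample of a smoke volume)."""
    x, y, z = p
    return max(0.0, 1.0 - math.sqrt(x*x + y*y + z*z))  # toy spherical blob

def light_at(p):
    """Hypothetical incident radiance at p (the thesis uses real-world images)."""
    return 1.0

def ray_march(origin, direction, t_max, step, sigma_a=0.5, sigma_s=0.5):
    """Accumulate radiance along a ray with absorption + single scattering."""
    radiance = 0.0
    transmittance = 1.0
    t = 0.5 * step  # offset start reduces banding in practice
    while t < t_max:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        rho = density_at(p)
        sigma_t = (sigma_a + sigma_s) * rho          # extinction coefficient
        radiance += transmittance * sigma_s * rho * light_at(p) * step
        transmittance *= math.exp(-sigma_t * step)   # Beer-Lambert attenuation
        if transmittance < 1e-3:                     # early ray termination,
            break                                    # one common optimization
        t += step
    return radiance

print(ray_march((0.0, 0.0, -2.0), (0.0, 0.0, 1.0), t_max=4.0, step=0.05))
```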

    Doctor of Philosophy

    Get PDF
    I present a new migration algorithm denoted as generalized diffraction-stack migration (GDM). Unlike traditional diffraction-stack migration, it accounts for all arrivals in the wavefield, including two-way primaries and multiple arrivals, and it is not subject to the high-frequency approximation of ray tracing. It is as accurate as reverse-time migration (RTM), but, unlike RTM, filtering and muting can be easily applied to the migration operator to reduce artifacts due to aliasing and unwanted events such as multiples. Unlike RTM, GDM can be applied to common offset gathers. The main drawback of GDM is that it can be more than an order of magnitude more computationally expensive than RTM and requires much more memory for efficient use. To mitigate some of these disadvantages, I present a multisource least-squares GDM method with phase encoding. There are six chapters presented after the introduction. Chapter 2 derives the GDM equation by reformulating the standard RTM equation and shows how GDM is related to traditional diffraction-stack migration. Chapter 3 shows how the GDM kernel can be filtered to eliminate coherent noise in the migration image. This precise filtering of the migration operator cannot be done with the standard RTM approach, but it can now be performed with the GDM method. In Chapter 4, I develop an antialiasing filter for GDM. This idea is adapted from the traditional antialiasing strategy for Kirchhoff migration, except GDM antialiasing accounts for both primary and multiple reflection events. This is a novel antialiasing filter that can be used for filtering the RTM-like imaging operator. In Chapter 5, I show how to mute or filter the GDM operator to emphasize multiple reflection events. I split the GDM operator into two separate parts, the primary migration operator and the multiple migration operator. By computing the dot product of the migration operators with the data, followed by an optimal stack of the primary-only image and the multiple-only image, a higher resolution in the migration image can be achieved. An additional benefit is that cross-talk between primary and multiple scattered arrivals, often seen in conventional RTM images, is greatly attenuated. Finally, Chapter 6 presents an efficient implementation of least-squares GDM with supergathers. The supergather consists of a blend of many encoded shot gathers, each one with a unique encoding function that mitigates crosstalk in the migration image. A unique feature of GDM is that the Green's functions (computed by a finite-difference solution to the wave equation) can be reused at each iteration. Unlike conventional least-squares RTM, no new finite-difference simulations are needed to get the updated migration image. This can result in almost two orders of magnitude reduction in cost for iterative least-squares migration. Furthermore, when least-squares GDM is combined with phase-encoded multisource technology, the cost savings are even greater. This subject is discussed in Chapter 7. The main challenge with GDM is that it demands much more memory and I/O cost than the standard RTM algorithm. As a partial remedy, Appendix A describes how to efficiently compute the migration operators either in a target-oriented mode or by using wave-equation wavefront modeling. In addition, the intensive I/O and storage costs can be partly, though not fully, mitigated by applying a wavelet transform with compression, where a compression ratio of at least an order of magnitude can be achieved with a small loss of accuracy. This topic is addressed in Appendix B.
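    For orientation, here is a minimal sketch of the traditional diffraction-stack (Kirchhoff) migration that GDM generalizes: each image point sums the data along its two-way diffraction traveltime curve. Constant velocity and zero offset are assumed, and all names are illustrative rather than taken from the dissertation.

```python
# Minimal diffraction-stack (Kirchhoff) migration sketch, 2-D, zero-offset,
# constant velocity. Illustrative of the classical method GDM generalizes.
import numpy as np

def diffraction_stack(data, dt, rec_x, img_x, img_z, v):
    """data[i_rec, i_t]: zero-offset traces; sum each trace along the
    two-way diffraction hyperbola for every image point."""
    image = np.zeros((len(img_x), len(img_z)))
    nt = data.shape[1]
    for ix, x in enumerate(img_x):
        for iz, z in enumerate(img_z):
            for ir, xr in enumerate(rec_x):
                t = 2.0 * np.hypot(x - xr, z) / v   # two-way traveltime
                it = int(round(t / dt))
                if it < nt:
                    image[ix, iz] += data[ir, it]
    return image

# Toy usage: one point diffractor at (500 m, 300 m) focuses to a spot there.
v, dt = 2000.0, 0.001
rec_x = np.linspace(0.0, 1000.0, 51)
data = np.zeros((51, 1500))
for ir, xr in enumerate(rec_x):                      # synthesize its hyperbola
    data[ir, int(round(2.0 * np.hypot(500.0 - xr, 300.0) / v / dt))] = 1.0
img = diffraction_stack(data, dt, rec_x,
                        np.linspace(400, 600, 21), np.linspace(250, 350, 11), v)
print(np.unravel_index(img.argmax(), img.shape))     # peaks near the diffractor
```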

    Access to Z-buffer information for the development of a video game with a depth effect as a mechanic

    Get PDF
    Final degree project in Video Game Design and Development. Code: VJ1241. Academic year: 2018/2019. This document presents the whole of the work done for the final project of the Video Game Design and Development degree, based on the creation of an experience made in Unity3D and built for the PC platform. The main mechanic of the game is a temporal jump between two different dimensions, achieved by performing a depth sweep using the Z-buffer information. More specifically, in order to achieve better immersion in the game environment and make the effect more impressive, the camera will use a first-person perspective. Furthermore, there will be a small story behind the gameplay to add a touch of mystery to the experience.
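    A rough sketch of how such a depth sweep could work per pixel, under the assumption that the game blends two rendered "dimensions" wherever a growing sweep front passes the linearized Z-buffer depth. The actual project does this in a Unity3D shader, so everything below (the NumPy arrays, the near/far constants) is illustrative.

```python
# Hypothetical per-pixel depth-sweep blend driven by Z-buffer values.
import numpy as np

def linearize_depth(z_buffer, near=0.1, far=100.0):
    """Convert nonlinear Z-buffer values (0..1) to eye-space distance."""
    return (near * far) / (far - z_buffer * (far - near))

def depth_sweep(frame_a, frame_b, z_buffer, sweep_distance, width=2.0):
    """Blend two rendered 'dimensions': frame_a/frame_b are HxWx3 images,
    z_buffer is HxW in [0, 1]; the sweep front sits at sweep_distance."""
    depth = linearize_depth(z_buffer)
    # Smooth 0..1 mask: 1 where the sweep front has already passed the pixel.
    mask = np.clip((sweep_distance - depth) / width + 0.5, 0.0, 1.0)
    return frame_a * (1.0 - mask[..., None]) + frame_b * mask[..., None]

# Toy usage: as sweep_distance grows each frame, dimension B washes over A.
h, w = 4, 4
a, b = np.zeros((h, w, 3)), np.ones((h, w, 3))
z = np.full((h, w), 0.5)
print(depth_sweep(a, b, z, sweep_distance=5.0)[0, 0])
```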

    Computer Control: An Overview

    Get PDF
    Computer control is entering all facets of life, from home electronics to the production of different products and materials. Many of the computers are embedded and thus "hidden" from the user. In many situations it is not necessary to know anything about computer control or real-time systems to implement a simple controller. There are, however, many situations where the result will be much better when the sampled-data aspects of the system are taken into consideration when the controller is designed. It is also very important that the real-time aspects are considered: the real-time system influences the timing in the computer and can thus minimize latency and delays in the feedback controller. The paper introduces different aspects of computer-controlled systems, from simple approximations of continuous-time controllers to design aspects of optimal sampled-data controllers. We also point out some of the pitfalls of computer control and discuss the practical aspects as well as the implementation issues of computer control. Published as a Professional Brief by IFAC.
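    As a small illustration of the simplest approach the paper mentions, the sketch below discretizes a continuous-time PI controller using the Tustin (trapezoidal) approximation of the integral term. The gains, sampling period, and toy plant are assumed values, not taken from the paper.

```python
# Hedged sketch: discretizing a continuous-time PI controller
# u = K*(e + (1/Ti)*integral(e)) into a sampled-data difference equation.

class DiscretePI:
    def __init__(self, K=1.0, Ti=2.0, h=0.05):
        self.K, self.Ti, self.h = K, Ti, h
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """One sample, called every h seconds by the real-time loop.
        Tustin rule: trapezoidal approximation of the integral term."""
        self.integral += self.h * 0.5 * (error + self.prev_error)
        self.prev_error = error
        return self.K * (error + self.integral / self.Ti)

# Toy usage: regulate a first-order plant dx/dt = -x + u toward setpoint 1.
ctrl = DiscretePI()
x, h = 0.0, ctrl.h
for k in range(200):
    u = ctrl.update(1.0 - x)      # sample, compute, actuate
    x += h * (-x + u)             # Euler step of the plant between samples
print(round(x, 3))                # approaches the setpoint 1.0
```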

    Version-Centric Visualization of Code Evolution

    Get PDF

    Source processes of three aftershocks of the 1983 Goodnow, New York, earthquake: High-resolution images of small, symmetric ruptures

    Get PDF
    Broadband, large dynamic range GEOS data from four aftershocks (M_L ∼ 2 to 3) of the 1983 Goodnow, New York, earthquake, recorded at hard-rock sites 2 to 7 km away from the epicentral area, are used to study the rupture processes of the three larger events, with an M_L = 1.6 event as the Green's function event. We analyze the spectra and spectral ratios of ground velocity at frequencies up to 100 Hz, and conclude that (1) there are resolvable P-wave f_(max) values (51 and 57 Hz) at two sites; (2) there is abundant information on the sources of the larger events up to frequencies of 50 to 60 Hz; and (3) there is an unstable, nonlinear instrument resonance at about 90 Hz. We analyze the artifacts of low-pass filters in the deconvolved rupture process, including the limited time resolution and biases in the rise-time measurement. Some extensions of the empirical Green's function (EGF) method are proposed to reduce these artifacts and to precisely estimate the relative locations of events that are close both in time and in space. Applying the EGF method to the three aftershocks, we find that these events have simple, crack-like ruptures characterized by small fault radii (∼ 70 to 120 m) and static stress drops that vary, depending on the size of the events, between about 5 and 16 bars. We also find that two of the events, with origin times differing by 0.60 sec, are separated by 190 ± 110 meters. Assuming a causal relationship between the two would imply a slow propagation velocity (0.05 ± 0.03 times the local shear-wave velocity). We therefore interpret the corresponding ruptures as being distinct, the area between the rupturing patches having a characteristic length much smaller than those of the rupturing patches. Comparison of the results of this study with those obtained for the Goodnow main shock and for microearthquakes in California and Hawaii suggests that the stress drops of the Goodnow aftershocks decrease considerably (by up to a factor of 25 or more) from those estimated for the main shock, even after the large uncertainty in the latter estimate is considered. This decrease is similar to those reported for the Nahanni earthquakes (m_b = 5.0 to 6.5) in the Northwest Territories and for the North Palm Springs earthquake (M_L = 5.9) sequence in California. The stress drops of the simple, crack-like microearthquakes are significantly variable (by factors between 3 and 27) within single source areas. The median of the stress drops of the intraplate Goodnow aftershocks is lower (by factors of 2 to 4) than the medians calculated for the other, interplate microearthquake sequences.
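    The core of the EGF method this abstract applies is a spectral ratio: dividing the larger event's spectrum by the small co-located event's cancels the shared path and site effects, leaving the relative source spectrum. Below is a hedged NumPy sketch with a water-level stabilizer; the synthetic traces and all constants are invented for illustration.

```python
# Hedged sketch of the EGF spectral-ratio step with water-level stabilization.
import numpy as np

def egf_spectral_ratio(big, small, dt, water_level=0.01):
    """Return frequencies and |Big(f)| / |Small(f)|, with a water level to
    keep spectral holes in the small event from blowing up the ratio."""
    n = max(len(big), len(small))
    B = np.abs(np.fft.rfft(big, n))
    S = np.abs(np.fft.rfft(small, n))
    S = np.maximum(S, water_level * S.max())   # water-level stabilization
    return np.fft.rfftfreq(n, dt), B / S

# Toy usage: two events sharing one path response but different source
# durations; the path cancels, and the ratio is largest at low frequencies.
dt = 0.002                                     # 500 Hz sampling
t = np.arange(0, 2, dt)
path = np.exp(-t / 0.3) * np.sin(2 * np.pi * 20 * t)
big = np.convolve(path, np.exp(-t / 0.10), mode="full")[:len(t)]
small = np.convolve(path, np.exp(-t / 0.02), mode="full")[:len(t)]
f, ratio = egf_spectral_ratio(big, small, dt)
print(f[np.argmax(ratio)], round(ratio.max(), 2))
```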

    A Line Based Visualization of Code Evolution

    Get PDF

    Simulators, graphic

    Get PDF
    Includes bibliographical references (pages 1607-1608). There are many situations in which a computer simulation with a graphic display can be very useful in the design of a robotic system. First of all, when a robot is planned for an industrial application, there are many commercially available arms that can be selected. A graphics-based simulation would allow the manufacturing engineer to evaluate alternative choices quickly and easily. The engineer can also use such a simulation tool to interactively design the workcell in which the robot operates and to integrate the robot with other systems, such as part feeders and conveyors, with which it must work closely. Even before the workcell is assembled or the arm first arrives, the engineer can optimize the placement of the robot with respect to the fixtures it must reach and ensure that the arm is not blocked by supports. Because workcell designs can be evaluated off-line and away from the factory floor, changes can be made without hindering factory production, and thus the net productivity of the design effort can be increased.
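    As a toy illustration of the placement checks such a simulator supports, the sketch below tests whether a 2-link planar arm placed at a candidate base location can reach a set of fixtures. The link lengths and coordinates are assumed; a real simulator would use full 3-D kinematics and collision checks.

```python
# Toy reachability check for candidate robot placements in a planar workcell.
import math

def reachable(base_xy, fixture_xy, l1=0.6, l2=0.4):
    """Closed-form reachability test for a 2-link planar arm:
    the target must lie in the annulus |l1 - l2| <= r <= l1 + l2."""
    r = math.dist(base_xy, fixture_xy)
    return abs(l1 - l2) <= r <= l1 + l2

def placement_ok(base_xy, fixtures):
    """A placement is acceptable only if every fixture is reachable."""
    return all(reachable(base_xy, f) for f in fixtures)

# Toy usage: evaluate candidate robot placements off-line, as the abstract
# suggests, before the workcell is assembled.
fixtures = [(0.5, 0.3), (-0.4, 0.6), (0.2, -0.7)]
for base in [(0.0, 0.0), (1.0, 1.0)]:
    print(base, placement_ok(base, fixtures))
```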