    High Quality Alias Free Image Rotation

    This paper presents new algorithms for the rotation of images. The primary design criterion for these algorithms is very high quality. Common methods for image rotation, including convolutional and separable approaches, are examined and shown to exhibit significant high-frequency aliasing problems. A new resampling filter design methodology is presented which minimizes the problem for conventional convolution-based image rotation. The paper also presents a new separable image rotation algorithm which exhibits improved performance in terms of artifact reduction and an efficient $O(N^{2} \log N)$ running time.
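    The separable decomposition such algorithms rely on can be illustrated with the classic three-shear rotation: a 2-D rotation factors into three 1-D shear passes, each of which resamples only rows or only columns, and performing each 1-D pass with FFT-based resampling is what yields the $O(N^{2} \log N)$ cost. Below is a minimal sketch using plain linear interpolation rather than the paper's high-quality resampling filters; the function names and interpolation choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shear_x(img, s):
    """Shear rows: x' = x + s * (y - cy), via 1-D linear resampling per row."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    cy = (h - 1) / 2.0
    x = np.arange(w)
    for y in range(h):
        # inverse mapping: sample the source row at x - shift
        out[y] = np.interp(x - s * (y - cy), x, img[y], left=0.0, right=0.0)
    return out

def shear_y(img, s):
    """Shear columns by transposing and reusing the row shear."""
    return shear_x(img.T, s).T

def rotate_three_shears(img, theta):
    """Rotate a grayscale image by theta radians (|theta| < pi/2 works best)
    using the factorization shear_x(a) . shear_y(b) . shear_x(a),
    with a = -tan(theta/2) and b = sin(theta)."""
    a = -np.tan(theta / 2.0)
    b = np.sin(theta)
    return shear_x(shear_y(shear_x(img, a), b), a)
```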

    The Bolocam Galactic Plane Survey: Survey Description and Data Reduction

    We present the Bolocam Galactic Plane Survey (BGPS), a 1.1 mm continuum survey at 33" effective resolution of 170 square degrees of the Galactic Plane visible from the northern hemisphere. The survey is contiguous over the range -10.5 < l < 90.5, |b| < 0.5 and encompasses 133 square degrees, including some extended regions |b| < 1.5. In addition to the contiguous region, four targeted regions in the outer Galaxy were observed: IC1396, a region towards the Perseus Arm, W3/4/5, and Gem OB1. The BGPS has detected approximately 8400 clumps over the entire area to a limiting non-uniform 1-sigma noise level in the range 11 to 53 mJy/beam in the inner Galaxy. The BGPS source catalog is presented in a companion paper (Rosolowsky et al. 2010). This paper details the survey observations and data reduction methods for the images. We discuss in detail the determination of astrometric and flux density calibration uncertainties and compare our results to the literature. Data processing algorithms that separate astronomical signals from time-variable atmospheric fluctuations in the data time-stream are presented. These algorithms reproduce the structure of the astronomical sky over a limited range of angular scales and produce artifacts in the vicinity of bright sources. Based on simulations, we find that extended emission on scales larger than about 5.9' is nearly completely attenuated (> 90%) and the linear scale at which the attenuation reaches 50% is 3.8'. Comparison with other millimeter-wave data sets implies a possible systematic offset in flux calibration, for which no cause has been discovered. This presentation serves as a companion and guide to the public data release through NASA's Infrared Processing and Analysis Center (IPAC) Infrared Science Archive (IRSA). New data releases will be provided through IPAC IRSA with any future improvements in the reduction. Comment: Accepted for publication in the Astrophysical Journal Supplement.
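    As a rough illustration of the time-stream cleaning described above: atmospheric emission is strongly correlated across bolometers, so subtracting the dominant common modes across detectors removes much of it, and the same operation attenuates astronomical emission on large angular scales, consistent with the attenuation figures quoted. This is a hedged sketch of a generic PCA-style cleaner, not the actual BGPS pipeline; the function name and the `n_modes` parameter are assumptions.

```python
import numpy as np

def remove_common_mode(timestreams, n_modes=1):
    """Subtract the detector-correlated modes that dominate the time-streams,
    a stand-in for time-variable atmospheric emission.
    timestreams: (n_detectors, n_samples) array."""
    d = timestreams - timestreams.mean(axis=1, keepdims=True)
    # Atmospheric emission is seen by many detectors at once, so it lands
    # in the leading singular vectors of the detector x time matrix.
    u, s, vt = np.linalg.svd(d, full_matrices=False)
    cleaned = d.copy()
    for k in range(n_modes):
        cleaned -= s[k] * np.outer(u[:, k], vt[k])
    # Note: this also attenuates real sky emission on large angular scales.
    return cleaned
```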

    New techniques for the scientific visualization of three-dimensional multi-variate and vector fields

    Volume rendering allows us to represent a density cloud with ideal properties (single scattering, no self-shadowing, etc.). Scientific visualization utilizes this technique by mapping an abstract variable or property in a computer simulation to a synthetic density cloud. This thesis extends volume rendering beyond its limitation to isotropic density clouds, to anisotropic and/or noisy density clouds. Design aspects of these techniques are discussed that aid in the comprehension of scientific information. Anisotropic volume rendering is used to represent vector-based quantities in scientific visualization. Velocity and vorticity in a fluid flow, electric and magnetic waves in an electromagnetic simulation, and blood flow within the body are examples of vector-based information within a computer simulation or gathered from instrumentation. Understanding these fields can be crucial to understanding the overall physics or physiology. Three techniques for representing three-dimensional vector fields are presented: Line Bundles, Textured Splats and Hair Splats. These techniques are aimed at providing a high-level (qualitative) overview of the flows, offering the user a substantial amount of information with a single image or animation. Non-homogeneous volume rendering is used to represent multiple variables. Computer simulations can typically have over thirty variables, which describe properties whose understanding is useful to the scientist. Trying to understand each of these separately can be time consuming. Trying to understand any cause-and-effect relationships between different variables can be impossible. NoiseSplats is introduced to represent two or more properties in a single volume rendering of the data. This technique is also aimed at providing a qualitative overview of the flows.
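    To make the splatting idea concrete, the sketch below accumulates Gaussian footprints that are elongated along a per-sample vector, so local direction is conveyed by the shape of each splat, broadly in the spirit of the hair/textured splat techniques named above. It is a simplified illustration with no compositing order, opacity, or textures; all names and parameters are assumptions, not the thesis implementation.

```python
import numpy as np

def splat_field(points, values, vecs, size=256, sigma=1.5, stretch=4.0):
    """Accumulate anisotropic Gaussian splats into a 2-D image.
    points: (n, 2) positions in [0, 1); values: (n,) intensities;
    vecs: (n, 2) unit vectors giving the local flow direction."""
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    for (px, py), val, (dx, dy) in zip(points, values, vecs):
        cx, cy = px * size, py * size
        # coordinates in the splat's local frame:
        # t runs along the vector, n runs across it
        t = (xs - cx) * dx + (ys - cy) * dy
        n = -(xs - cx) * dy + (ys - cy) * dx
        # anisotropic footprint: long axis 'stretch' times the short axis
        img += val * np.exp(-(t / (stretch * sigma)) ** 2 - (n / sigma) ** 2)
    return img
```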

    Separable Image Warping with Spatial Lookup Tables

    Image warping refers to the 2-D resampling of a source image onto a target image. In the general case, this requires costly 2-D filtering operations. Simplifications are possible when the warp can be expressed as a cascade of orthogonal 1-D transformations. In these cases, separable transformations have been introduced to realize large performance gains. The central ideas in this area were formulated in the 2-pass algorithm by Catmull and Smith. Although that method applies over an important class of transformations, there are intrinsic problems which limit its usefulness. The goal of this work is to extend the 2-pass approach to handle arbitrary spatial mapping functions. We address the difficulties intrinsic to 2-pass scanline algorithms: bottlenecking, foldovers, and the lack of closed-form inverse solutions. These problems are shown to be resolved in a general, efficient, separable technique, with graceful degradation for transformations of increasing complexity.
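    The 2-pass idea can be sketched as follows: given spatial lookup tables that record, for each output pixel, where it samples the source, the 2-D resampling is split into a horizontal pass of 1-D interpolations followed by a vertical pass. This naive split is exact only for separable mappings and ignores precisely the bottleneck and foldover cases the paper addresses; the names and the inverse-mapping convention are assumptions.

```python
import numpy as np

def two_pass_warp(src, xmap, ymap):
    """Minimal 2-pass separable warp driven by spatial lookup tables.
    xmap[v, u] and ymap[v, u] give the source coordinates sampled by
    output pixel (u, v); each pass is a row/column of 1-D interpolations."""
    h, w = src.shape
    cols = np.arange(w)
    rows = np.arange(h)
    # Pass 1: resample every row horizontally at the x lookup positions.
    tmp = np.empty((h, w), dtype=float)
    for v in range(h):
        tmp[v] = np.interp(xmap[v], cols, src[v], left=0.0, right=0.0)
    # Pass 2: resample every column vertically at the y lookup positions.
    out = np.empty((h, w), dtype=float)
    for u in range(w):
        out[:, u] = np.interp(ymap[:, u], rows, tmp[:, u], left=0.0, right=0.0)
    return out
```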

    A high performance vector rendering pipeline

    Vector images are images which encode the visible surfaces of a 3D scene in a resolution-independent format. Prior to this work, generation of such an image was not real time. As such, the benefits of using them in the graphics pipeline were not fully expressed. In this thesis we propose methods for addressing the following questions. How can we introduce vector images into the graphics pipeline, namely, how can we produce them in real time? How can we take advantage of resolution independence? And how can we render vector images to a pixel display as efficiently as possible and with the highest quality? There are three main contributions of this work. First, we have designed a real-time vector rendering system: a GPU-accelerated pipeline which takes as input a scene with 3D geometry and outputs a vector image. We call this system SVGPU: Scalable Vector Graphics on the GPU. Second, since vector images are resolution independent, we have designed a cloud pipeline for streaming them: a system design and optimizations for streaming vector images across interconnection networks, which reduces the bandwidth required for transporting real-time 3D content from server to client. Lastly, this thesis introduces another benefit of vector images: we have created a method for rendering them with the highest possible quality. That is, we have designed a new set of operations on vector images which allows us to anti-alias them during rendering to a canonical 2D image. Our contributions provide the system design, optimizations, and algorithms required to bring vector image utilization and its benefits much closer to the real-time graphics pipeline. Together they form an end-to-end pipeline for this purpose, i.e., "A High Performance Vector Rendering Pipeline."
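    As an illustration of the anti-aliasing benefit of keeping edges in vector form, the coverage of a pixel by an edge can be computed analytically from a signed distance instead of by point sampling. This is a generic sketch of analytic edge anti-aliasing, not the thesis's actual operator set; the linear one-pixel filter and all names are assumptions.

```python
import numpy as np

def edge_coverage(px, py, a, b, c):
    """Approximate pixel coverage for the half-plane a*x + b*y + c <= 0.
    (a, b) must be unit length so a*px + b*py + c is a signed distance
    in pixel units; coverage falls off linearly over one pixel."""
    d = a * px + b * py + c
    return np.clip(0.5 - d, 0.0, 1.0)
```

    The returned coverage would then be used as an alpha value when compositing the shape's color into the pixel, giving smooth edges at any display resolution.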

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance. However, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require high amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray-marching algorithm, we will suggest and implement different optimizations that will allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
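    For reference, a minimal version of the traditional ray-marching baseline mentioned above integrates emission and Beer-Lambert absorption along a ray through the medium; early ray termination is one simple optimization of the kind such pipelines build on. Single scattering and lighting are omitted for brevity, and all names and parameters are assumptions, not the thesis implementation.

```python
import numpy as np

def ray_march(density, origin, direction, step=0.5, n_steps=256,
              sigma_t=0.1, emission=1.0):
    """Integrate emission and absorption through a participating medium.
    density(p) -> scalar density at 3-D point p."""
    radiance = 0.0
    transmittance = 1.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        rho = density(p)
        # attenuation over one step (Beer-Lambert law)
        att = np.exp(-sigma_t * rho * step)
        # emission from this segment, weighted by transmittance so far
        radiance += transmittance * emission * rho * (1.0 - att)
        transmittance *= att
        if transmittance < 1e-3:  # early ray termination optimization
            break
        p = p + d * step
    return radiance
```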

    Neuronal encoding of natural imagery in dragonfly motion pathways

    Vision is the primary sense of humans and most other animals. While the act of seeing seems easy, the neuronal architectures that underlie this ability are some of the most complex of the brain. Insects represent an excellent model for investigating how vision operates as they often lead rich visual lives while possessing relatively simple brains. Among insects, aerial predators such as the dragonfly face additional survival tasks. Not only must aerial predators successfully navigate three-dimensional visual environments, they must also be able to identify and track their prey. This task is made even more difficult due to the complexity of visual scenes that contain detail on all scales of magnification, making the job of the predator particularly challenging. Here I investigate the physiology of neurons accessible through tracts in the third neuropil of the optic lobe of the dragonfly. It is at this stage of processing that the first evidence of both wide-field motion and object detection emerges. My research extends the current understanding of two main pathways in the dragonfly visual system, the wide-field motion pathway and the target-tracking pathway. While wide-field motion pathways have been studied in numerous insects, the dragonfly wide-field motion pathway had until now remained unstudied. Investigation of this pathway has revealed properties novel among insects, specifically purely optical adaptation to motion at both high and low velocities. Here I characterise these newly described neurons and investigate their adaptation properties. The dragonfly target-tracking pathway has been studied extensively, but most research has focussed on classical stimuli such as gratings and small black objects moving on white monitors. Here I extend previous research, which characterised the behaviour of target-tracking neurons in cluttered environments, developing a paradigm that allows numerous properties of targets to be changed while still measuring tracking performance. I show that dragonfly neurons interact with clutter through the previously discovered selective attention system, treating cluttered scenes as collections of target-like features. I further show that this system uses the direction and speed of the target and background as key parameters for tracking success. I also elucidate some additional properties of selective attention, including the capacity to select for inhibitory targets or weakly salient features in preference to strongly excitatory ones. In collaboration with colleagues, I have also performed some limited modelling to demonstrate that a selective attention model which includes switching best explains the experimental data. Finally, I explore a mathematical model called divisive normalisation, which may partially explain how neurons with large receptive fields can be used to re-establish target position information (lost in a position-invariant system) through relatively simple integrations of multiple large receptive field neurons. In summary, my thesis provides a broad investigation into several questions about how dragonflies can function in natural environments. More broadly, my thesis addresses general questions about vision and how complicated visual tasks can be solved via clever strategies employed in neuronal systems and their modelled equivalents.
    Thesis (Ph.D.) -- University of Adelaide, Adelaide Medical School, 201
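    A minimal sketch of the divisive normalisation idea: each neuron's response is divided by the pooled activity of the population, and a target position estimate can then be read out as a response-weighted combination of receptive-field centres. The exponent, semi-saturation constant, and centroid readout below are generic textbook choices, not the model fitted in the thesis.

```python
import numpy as np

def divisive_normalization(responses, sigma=1.0, gamma=2.0):
    """Each neuron's rectified response is divided by the pooled activity
    of the whole population plus a semi-saturation constant sigma."""
    r = np.maximum(responses, 0.0) ** gamma
    return r / (sigma ** gamma + r.sum())

def estimate_position(responses, rf_centres):
    """Read out target position as a response-weighted combination of
    receptive-field centres; rf_centres: (n, 2) array of field centres.
    The relative pattern of normalized responses across large, overlapping
    receptive fields carries the position lost in any single response."""
    w = divisive_normalization(np.asarray(responses, dtype=float))
    return (w[:, None] * np.asarray(rf_centres)).sum(axis=0) / w.sum()
```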

    Reconstruction of undersampled signals and alignment in the frequency domain
