
    Analysing Astronomy Algorithms for GPUs and Beyond

    Astronomy depends on ever-increasing computing power. Processor clock rates have plateaued, and increased performance now comes in the form of additional processor cores on a single chip. This poses significant challenges to the astronomy software community. Graphics Processing Units (GPUs), now capable of general-purpose computation, exemplify both the difficult learning curve and the significant speedups exhibited by massively-parallel hardware architectures. We present a generalised approach to tackling this paradigm shift, based on the analysis of algorithms. We describe a small collection of foundation algorithms relevant to astronomy and explain how they may be used to ease the transition to massively-parallel computing architectures. We demonstrate the effectiveness of our approach by applying it to four well-known astronomy problems: Hogbom CLEAN, inverse ray-shooting for gravitational lensing, pulsar dedispersion and volume rendering. Algorithms with well-defined memory access patterns and high arithmetic intensity stand to receive the greatest performance boost from massively-parallel architectures, while those that involve a significant amount of decision-making may struggle to take advantage of the available processing power. Comment: 10 pages, 3 figures, accepted for publication in MNRAS.
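
    To illustrate the "regular memory access, high arithmetic intensity" category the abstract describes, here is a minimal NumPy sketch of incoherent pulsar dedispersion, one of the four case studies. The function and array names are illustrative, not taken from the paper; only the standard cold-plasma dispersion-delay formula is assumed.

        import numpy as np

        def dedisperse(dynspec, freqs_mhz, dm, dt):
            """Shift each frequency channel by its dispersion delay and sum.

            dynspec   : (n_chan, n_samp) float array, power vs. frequency and time
            freqs_mhz : per-channel sky frequencies in MHz
            dm        : dispersion measure in pc cm^-3
            dt        : sampling interval in seconds
            """
            f_ref = freqs_mhz.max()
            # Cold-plasma delay of each channel relative to the highest frequency.
            delays = 4.149e3 * dm * (freqs_mhz**-2.0 - f_ref**-2.0)   # seconds
            shifts = np.round(delays / dt).astype(int)                # samples
            out = np.zeros(dynspec.shape[1])
            for chan, s in enumerate(shifts):
                out += np.roll(dynspec[chan], -s)   # fixed, predictable offsets
            return out

    Every output sample touches each channel exactly once at a precomputed offset, with no data-dependent branching, which is why this kind of algorithm maps well onto massively-parallel hardware.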

    Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy

    3D functional imaging of neuronal activity in entire organisms at single-cell level and physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volume and spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single-neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and the possibility of integrating it into epi-fluorescence microscopes make it an attractive tool for high-speed volumetric calcium imaging. Comment: 25 pages, 7 figures, incl. supplementary information.
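
    The 3D deconvolution step can be illustrated with a Richardson-Lucy update, a common choice for this kind of reconstruction. The sketch below assumes a known, shift-invariant 3D PSF, which is a simplification of a full light-field forward model; function and variable names are illustrative.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy_3d(measured, psf, n_iter=30, eps=1e-12):
            """Iteratively estimate a volume whose blur by `psf` matches `measured`."""
            estimate = np.full_like(measured, measured.mean())
            psf_flip = psf[::-1, ::-1, ::-1]            # adjoint of the blur operator
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = measured / (blurred + eps)      # multiplicative correction
                estimate *= fftconvolve(ratio, psf_flip, mode="same")
            return estimate

    The multiplicative form keeps the estimate non-negative, which suits fluorescence intensity data.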

    Real-time Image Generation for Compressive Light Field Displays

    With the invention of integral imaging and parallax barriers at the beginning of the 20th century, glasses-free 3D displays became feasible. Only today, more than a century later, are glasses-free 3D displays finally emerging in the consumer market. The technologies employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays, which explore the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked light-attenuating or polarization-rotating layers, such as LCDs. The required image generation involves iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time frame rates. Funding: United States. Defense Advanced Research Projects Agency, Soldier Centric Imaging via Computational Cameras; National Science Foundation (U.S.) (Grant IIS-1116452); United States. Defense Advanced Research Projects Agency, Maximally scalable Optical Sensor Array Imaging with Computation Program; Alfred P. Sloan Foundation (Research Fellowship); United States. Defense Advanced Research Projects Agency (Young Faculty Award).
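
    The tomographic synthesis idea can be sketched in flatland: in the log-attenuation (optical density) domain, a two-layer display reproduces each target ray as the sum of the densities it crosses, and the layers can be solved by iterative tomography-style updates. The geometry, names, and SART-like update below are illustrative assumptions, not the paper's GPU implementation.

        import numpy as np

        n_x, n_u, n_iter = 64, 8, 50
        rng = np.random.default_rng(0)
        target = rng.random((n_x, n_u))     # target per-ray optical densities

        a = np.zeros(n_x)                   # density of front layer
        b = np.zeros(n_x + n_u)             # density of rear layer

        xs = np.arange(n_x)[:, None]        # ray position on front layer
        us = np.arange(n_u)[None, :]        # ray angle -> offset on rear layer

        for _ in range(n_iter):
            resid = target - (a[:, None] + b[xs + us])   # per-ray error
            # Distribute half the average residual to each layer (SART-like).
            a += 0.5 * resid.mean(axis=1)
            num = np.zeros_like(b)
            cnt = np.zeros_like(b)
            np.add.at(num, (xs + us).ravel(), resid.ravel())
            np.add.at(cnt, (xs + us).ravel(), 1.0)
            b += 0.5 * num / np.maximum(cnt, 1.0)
            np.clip(a, 0.0, None, out=a)    # layers can only absorb light
            np.clip(b, 0.0, None, out=b)

    Each sweep is a gather over the rays through a pixel, which is the kind of operation that maps naturally onto texture lookups in the standard graphics pipeline.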

    A Distributed GPU-based Framework for real-time 3D Volume Rendering of Large Astronomical Data Cubes

    We present a framework to interactively volume-render three-dimensional data cubes using distributed ray-casting and volume bricking over a cluster of workstations, each powered by one or more graphics processing units (GPUs) and a multi-core CPU. The main design target for this framework is an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the framework on a computing cluster comprising 64 nodes with a total of 128 GPUs, where it proved scalable enough to render a 204 GB data cube at an average of 30 frames per second. Our performance analyses also compare the NVIDIA Tesla 1060 and 2050 GPU architectures and examine the effect of increasing the visualization output resolution on rendering performance. Although our initial focus, and the examples presented in this work, concern volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order 3D data sets is a requirement. Comment: 13 pages, 7 figures, accepted for publication in Publications of the Astronomical Society of Australia.
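
    The bricking-plus-compositing structure can be sketched in a few lines: the cube is split into bricks, each node renders the bricks it owns into a partial RGBA image, and the partial images are blended front to back in depth order. The brick layout and helper names below are illustrative placeholders, not the paper's CUDA ray-caster.

        import numpy as np

        def split_into_bricks(volume, brick=64):
            """Yield (z_offset, sub-volume) pairs; each node would own a subset.
            Splits along z only, for simplicity of the sketch."""
            for z in range(0, volume.shape[0], brick):
                yield z, volume[z:z + brick]

        def composite_front_to_back(partials):
            """Blend per-brick float RGBA images with the 'over' operator,
            nearest brick first. `partials` is a list of (z_offset, image)."""
            out_rgb = np.zeros_like(partials[0][1][..., :3])
            out_a = np.zeros(partials[0][1].shape[:2])
            for _, img in sorted(partials, key=lambda p: p[0]):  # depth order
                rgb, a = img[..., :3], img[..., 3]
                out_rgb += (1.0 - out_a)[..., None] * a[..., None] * rgb
                out_a += (1.0 - out_a) * a
            return out_rgb

    Because the over operator is associative, the per-brick images can be rendered independently on different nodes and merged in a final compositing pass.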

    CAVE Size Matters: Effects of Screen Distance and Parallax on Distance Estimation in Large Immersive Display Setups

    When walking within a CAVE-like system, accommodation distance, parallax, and angular resolution vary with the distance between the user and the projection walls, which can alter spatial perception. As these systems get bigger, there is a need to assess the main factors influencing spatial perception in order to better design immersive projection systems and virtual reality applications. Such analysis is key for application domains that require the user to explore virtual environments by moving through the physical interaction space. In this article we present two experiments that analyze distance perception with the distance to the projection screens and parallax as main factors. Both experiments were conducted in a large immersive projection system with an interaction space of up to ten meters. The first experiment showed that both screen distance and parallax have a strong asymmetric effect on distance judgments: we observed increased underestimation under positive parallax conditions and slight overestimation under negative and zero parallax conditions. The second experiment further analyzed the factors contributing to these effects and confirmed the findings of the first experiment with a high-resolution projection setup providing twice the angular resolution and improved accommodative stimuli. In conclusion, our results suggest that the available space around the user is the most important characteristic for distance perception, with about 6 to 7 meters of distance around the user being optimal, and that virtual objects with high demands on accurate spatial perception should be displayed at zero or negative parallax.

    Symmetric Photography: Exploiting Data-sparseness in Reflectance Fields

    Figure 1: The reflectance field of a glass full of gummy bears is captured using two coaxial projector/camera pairs placed 120° apart. (a) is the result of synthetically relighting the scene from the front projector, which is coaxial with the presented view, with a high-resolution “SIGGRAPH” matte. Note that due to their sub-surface scattering property, even a single beam of light that falls on a gummy bear illuminates it completely. In (b) we simulate homogeneous backlighting from the second projector. (c) combines (a) and (b). For validation, a ground-truth image (d) was captured by loading the same projector patterns into the real projectors. Our approach is able to faithfully capture and reconstruct the complex light transport in this scene.

    We present a novel technique called symmetric photography to capture real-world reflectance fields. The technique models the 8D reflectance field as a transport matrix between the 4D incident light field and the 4D exitant light field. Acquiring a full transport matrix is a challenging task due to its sheer size. We observe that the transport matrix is data-sparse and symmetric. This symmetry enables us to measure the light transport from two sides simultaneously, from the illumination directions and the view directions.
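
    A toy sketch of the transport-matrix view: relighting is a matrix-vector product, and symmetry (T equal to its transpose, by reciprocity) means the row of T seen by a camera sample equals the column driven by the corresponding projector sample, so the matrix can be probed from both sides at once. The dimensions and sparsity pattern below are toy values, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 256                                    # toy number of light-field samples
        half = rng.random((n, n)) * (rng.random((n, n)) < 0.05)
        T = half + half.T                          # sparse, symmetric transport matrix

        def relight(T, illumination):
            """Exitant light field = transport matrix times incident light field."""
            return T @ illumination

        # Reciprocity check: illuminating sample i and reading sample j gives the
        # same measurement as illuminating j and reading i.
        i, j = 3, 40
        e_i = np.zeros(n); e_i[i] = 1.0
        e_j = np.zeros(n); e_j[j] = 1.0
        assert np.isclose(relight(T, e_i)[j], relight(T, e_j)[i])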

    Direct Volume Rendering from Photographic Data

    Direct volume rendering from photographic volume data has the potential to create realistic images of internal volume structure, as well as of the structure of boundaries within the volume. While possession of the photographic volume simplifies color calculations in voxel illumination, it complicates opacity calculation. This paper describes a framework for addressing illumination challenges in photographic volume data and presents initial results.

    1 Introduction. In recent years, a few photographic volume data sets have become available. The most widely used of these are those of the Visible Human Project (VHP) at the National Library of Medicine [15], but other examples are being created as well. This type of data offers exciting possibilities for realistic volume visualization, since correct color values are known for each voxel. Applications include medical illustration, surgical simulation, and general scientific education. Photographic volume data also offers a challenge to traditional…
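
    One common way to fill the missing-opacity gap is to derive opacity from the local gradient magnitude of the photographic volume, so boundaries become opaque while homogeneous tissue stays translucent. The sketch below is a generic gradient-based scheme under that assumption, not necessarily the paper's framework; names and the luminance weights are illustrative.

        import numpy as np

        def gradient_opacity(volume_rgb, scale=4.0):
            """Opacity from luminance gradient magnitude: boundaries -> opaque.
            volume_rgb is a (Z, Y, X, 3) float array of photographic colors."""
            lum = volume_rgb @ np.array([0.299, 0.587, 0.114])  # per-voxel luminance
            gz, gy, gx = np.gradient(lum)
            gmag = np.sqrt(gx**2 + gy**2 + gz**2)
            return np.clip(scale * gmag / (gmag.max() + 1e-12), 0.0, 1.0)

        def render_ray(volume_rgb, alpha, x, y):
            """Front-to-back compositing along z using the photographic colors."""
            color, trans = np.zeros(3), 1.0
            for z in range(volume_rgb.shape[0]):
                a = alpha[z, y, x]
                color += trans * a * volume_rgb[z, y, x]
                trans *= (1.0 - a)
                if trans < 1e-3:                 # early ray termination
                    break
            return color

    Because the colors come directly from the photographs, only the opacity transfer function needs to be designed, which is exactly the asymmetry the abstract points out.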