
    A Framework for Megascale Agent Based Model Simulations on Graphics Processing Units

    Agent-based modeling is a technique for modeling dynamic systems from the bottom up. Individual elements of the system are represented computationally as agents, and system-level behaviors emerge from the micro-level interactions of those agents. Contemporary state-of-the-art agent-based modeling toolkits are essentially discrete-event simulators designed to execute serially on the Central Processing Unit (CPU). They simulate Agent-Based Models (ABMs) by executing agent actions one at a time. In addition to imposing an unnatural execution order, these toolkits have limited scalability. In this article, we investigate data-parallel computer architectures such as Graphics Processing Units (GPUs) to simulate large-scale ABMs. We have developed a series of efficient, data-parallel algorithms for handling environment updates, various agent interactions, agent death and replication, and gathering statistics. We present three fundamental innovations that provide unprecedented scalability. The first is a novel stochastic memory allocator which enables parallel agent replication in O(1) average time. The second is a technique for resolving precedence constraints for agent actions in parallel. The third is a method that uses specialized graphics hardware to gather and process statistical measures. These techniques have been implemented on a modern GPU, resulting in a substantial performance increase. We believe that our system is the first completely GPU-based agent simulation framework. Although GPUs are the focus of our current implementations, our techniques can easily be adapted to other data-parallel architectures. We have benchmarked our framework against contemporary toolkits using two popular ABMs, namely SugarScape and StupidModel.
    Keywords: GPGPU, Agent Based Modeling, Data Parallel Algorithms, Stochastic Simulations
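
    The abstract does not spell out how the allocator works, so the following is only a minimal sketch, under stated assumptions, of a stochastic allocation scheme in that spirit: each replicating agent probes random slots of a deliberately under-filled agent pool until it claims a free one, so the expected number of probes per allocation is O(1). NumPy on the CPU stands in for the data-parallel GPU version, where the claim would be an atomic compare-and-swap; the function name and pool size are illustrative.

        # Sketch only: stochastic slot allocation with expected O(1) probes per request.
        import numpy as np

        def stochastic_allocate(occupied: np.ndarray, n_new: int, rng=None):
            """Claim `n_new` free slots in the boolean `occupied` array; return their indices."""
            rng = rng or np.random.default_rng()
            claimed = []
            for _ in range(n_new):
                while True:
                    slot = int(rng.integers(occupied.size))  # random probe
                    if not occupied[slot]:                   # GPU version: atomicCAS on the slot
                        occupied[slot] = True
                        claimed.append(slot)
                        break
            return np.array(claimed)

        # Example: 1,000 parent agents each replicate into a pool kept at low occupancy.
        occupied = np.zeros(8192, dtype=bool)
        occupied[:2048] = True
        children = stochastic_allocate(occupied, n_new=1000)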

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
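
    As a small illustration of two of the surveyed ideas, the sketch below combines a crude view-frustum test with distance-based LoD selection for a crowd of characters. The thresholds, cone-based frustum approximation, and function name are assumptions for illustration, not taken from any particular surveyed system.

        # Sketch: cull crowd members outside a view cone and pick an LoD band by distance.
        import numpy as np

        def select_lod(positions, cam_pos, cam_dir, fov_cos=0.5, lod_bounds=(10.0, 30.0, 80.0)):
            """Return an LoD index per character: -1 = culled, 0 = full mesh ... 3 = impostor."""
            to_char = positions - cam_pos
            dist = np.linalg.norm(to_char, axis=1)
            visible = (to_char @ cam_dir) / np.maximum(dist, 1e-6) > fov_cos   # inside view cone?
            lod = np.digitize(dist, lod_bounds)                                # 0..3 by distance band
            return np.where(visible, lod, -1)

        positions = np.random.rand(10_000, 3) * 100.0                          # a 10k-character crowd
        lod = select_lod(positions, np.zeros(3), np.array([0.0, 0.0, 1.0]))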

    Using image morphing for memory-efficient impostor rendering on GPU

    Real-time rendering of large animated crowds consisting of thousands of virtual humans is important for several applications, including simulations, games and interactive walkthroughs, but cannot be performed with complex polygonal models at interactive frame rates. For that reason, several methods using large numbers of pre-computed image-based representations, called impostors, have been proposed. These methods take advantage of existing programmable graphics hardware to compensate for the computational expense while maintaining visual fidelity. As a result, the number of different virtual humans that can be rendered in real time is no longer restricted by the required computational power but by the texture memory consumed for the variety and discretization of their animations. In this work, we propose an alternative method that reduces memory consumption by generating compelling intermediate textures using image-morphing techniques. To demonstrate that the perceptual quality of the animations is preserved when half of the key-frames are rendered with the proposed methodology, we implemented the system on the graphics processing unit and obtained promising results at interactive frame rates.
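
    The memory trade-off can be caricatured in a few lines: only every other key-frame texture is stored, and the in-between frame is synthesized at run time. A plain cross-dissolve is shown below for brevity; the actual method uses image morphing (warping plus blending), so this is an assumption-laden stand-in rather than the authors' algorithm.

        # Sketch: synthesize an intermediate impostor texture from two stored key-frames.
        import numpy as np

        def intermediate_impostor(key_a: np.ndarray, key_b: np.ndarray, t: float) -> np.ndarray:
            """Blend two RGBA key-frame textures at parameter t in [0, 1]."""
            mix = (1.0 - t) * key_a.astype(np.float32) + t * key_b.astype(np.float32)
            return mix.astype(np.uint8)

        key_a = np.zeros((256, 256, 4), dtype=np.uint8)           # stored key-frame i
        key_b = np.full((256, 256, 4), 255, dtype=np.uint8)       # stored key-frame i+2
        frame_between = intermediate_impostor(key_a, key_b, 0.5)  # replaces the dropped key-frame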

    Hardware-accelerated interactive data visualization for neuroscience in Python.

    Large datasets are becoming more and more common in science, particularly in neuroscience, where experimental techniques are rapidly evolving. Obtaining interpretable results from raw data can sometimes be done automatically; however, there are numerous situations where there is a need, at all processing stages, to visualize the data in an interactive way. This enables the scientist to gain intuition, discover unexpected patterns, and find guidance about subsequent analysis steps. Existing visualization tools mostly focus on static publication-quality figures and do not support interactive visualization of large datasets. While working on Python software for visualization of neurophysiological data, we developed techniques to leverage the computational power of modern graphics cards for high-performance interactive data visualization. We were able to achieve very high performance, despite the interpreted and dynamic nature of Python, by using state-of-the-art, fast libraries such as NumPy, PyOpenGL, and PyTables. We present applications of these methods to the visualization of neurophysiological data. We believe our tools will be useful in a broad range of domains, in neuroscience and beyond, where there is an increasing need for scalable and fast interactive visualization.
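
    The general pattern the authors describe, keeping data in NumPy arrays and pushing the heavy per-point work onto the graphics card via PyOpenGL, can be sketched as below. This is illustrative rather than code from their library, and it assumes an OpenGL context has already been created (for example by a GUI toolkit).

        # Sketch: upload a large NumPy signal to the GPU once as a vertex buffer,
        # so panning/zooming can then be done in a vertex shader without re-uploading.
        import numpy as np
        from OpenGL.GL import (glGenBuffers, glBindBuffer, glBufferData,
                               GL_ARRAY_BUFFER, GL_STATIC_DRAW)

        signal = np.random.randn(1_000_000).astype(np.float32)      # e.g. one raw data channel
        t = np.linspace(-1.0, 1.0, signal.size, dtype=np.float32)
        vertices = np.column_stack([t, signal])                     # (N, 2) x/y positions

        vbo = glGenBuffers(1)                                        # requires a current GL context
        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_STATIC_DRAW)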

    Real-time Shadows for Gigapixel Displacement Maps

    Shadows convey helpful information in scenes. From a scientific visualization standpoint, they add information without unnecessary clutter; in video games they add realism and depth. In common graphics pipelines, shadows are difficult to achieve because geometric primitives are rendered independently and in parallel: objects require knowledge of each other, and therefore multiple render passes are needed to collect the necessary data. The collection of this data comes with its own set of trade-offs. Our research adds shadows to a lunar rendering framework developed by Dr. Robert Kooima. The NASA-collected data contains a multi-gigapixel displacement map describing the lunar topology. This map does not fit entirely into main memory, so out-of-core paging is used to achieve real-time speeds. Current shadow techniques do not attempt to generate occluder data at such a scale, and we have therefore developed a novel approach to fit this situation. Through a chain of pre-processing steps, we analyze the structure of the displacement map and calculate horizon lines at each vertex. This information is saved into several images and used to generate shadows in a single pass, maintaining real-time speeds. The algorithm is even capable of generating soft shadows without extra information or loss of speed. We compare our algorithm with common approaches in the field as well as with two forms of ground truth: one from ray tracing and the other from the gigapixel lunar texture data, which shows the real shadows present at the time the data was collected.
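
    A toy version of the horizon-line idea can be written down directly for a small height field: for a given azimuth, each sample stores the largest elevation angle to any occluder along that direction, and at render time a point is lit only if the sun's elevation exceeds its stored horizon angle. The brute-force scan below is a sketch only; it ignores the out-of-core paging, the pre-processing chain, and the soft-shadow handling described above, and all names and parameters are assumptions.

        # Sketch: per-texel horizon elevation angle along one scan direction of a height field.
        import numpy as np

        def horizon_angles(height, step=(0, 1), max_steps=64, texel_size=1.0):
            """Return the horizon elevation angle (radians) per texel, scanned along `step`."""
            h, w = height.shape
            angles = np.full((h, w), -np.pi / 2, dtype=np.float32)
            for y in range(h):
                for x in range(w):
                    for s in range(1, max_steps):
                        yy, xx = y + s * step[0], x + s * step[1]
                        if not (0 <= yy < h and 0 <= xx < w):
                            break
                        rise = height[yy, xx] - height[y, x]
                        angles[y, x] = max(angles[y, x], np.arctan2(rise, s * texel_size))
            return angles

        height = np.random.rand(64, 64).astype(np.float32) * 5.0
        sun_elevation = np.radians(20.0)
        lit = sun_elevation > horizon_angles(height)   # boolean shadow mask for this azimuth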

    A fast framework construction and visualization method for particle-based fluid

    Fast and vivid fluid simulation and visualization has been a challenging topic of study in recent years. Particle-based simulation methods have been widely used in art animation modeling and the multimedia field. However, the demands of heavy numerical computation and high-quality visualization usually result in poor computational efficiency. In this work, to address these issues, we present a fast framework for constructing and visualizing 3D fluids, which parallelizes the fluid algorithm on the GPU computing framework and provides a direct surface visualization method for particle-based fluid data such as WCSPH, IISPH, and PCISPH. Because conventional polygonization or adaptive mesh methods may incur high computational costs and loss of detail, an improved particle-based method is provided for real-time fluid surface rendering that uses screen-space techniques and the capabilities of modern graphics hardware to achieve high-performance rendering while effectively preserving fluid details. Furthermore, to enable fast scene construction, an optimized design of the parallel framework and its interface is also discussed in our paper. Our method is straightforward to implement, and comparisons with several examples demonstrate a significant improvement in performance and efficiency.
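
    The screen-space idea referenced above can be caricatured as follows: project each particle, splat its depth into a buffer keeping the nearest value per pixel, then smooth the buffer before shading so individual splats read as a continuous surface. Production versions do this in shaders with properly projected sphere footprints and edge-preserving filters; the resolution, splat radius, and filter below are assumed stand-ins, and SciPy is used only for convenience.

        # Sketch: screen-space depth splatting and smoothing for particle fluids.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def screen_space_depth(px, py, depth, res=(256, 256), radius=2):
            """Splat particle depths onto a pixel grid, keeping the nearest depth per pixel."""
            buf = np.full(res, np.inf, dtype=np.float32)
            for x, y, d in zip(px, py, depth):
                x0, x1 = max(x - radius, 0), min(x + radius + 1, res[1])
                y0, y1 = max(y - radius, 0), min(y + radius + 1, res[0])
                buf[y0:y1, x0:x1] = np.minimum(buf[y0:y1, x0:x1], d)
            return buf

        rng = np.random.default_rng(0)
        px = rng.integers(0, 256, 50_000)                 # projected particle positions
        py = rng.integers(0, 256, 50_000)
        depth = rng.random(50_000).astype(np.float32)
        buf = screen_space_depth(px, py, depth)
        buf[np.isinf(buf)] = 1.0                          # empty pixels -> far plane
        surface_depth = gaussian_filter(buf, sigma=2.0)   # smoothed depth, ready for shading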