EVALUATION OF PRESENTATION GRAPHICS FOR THE AGRICULTURAL SCIENCES
Professional-looking text and graphic slides enable an audience to comprehend the main ideas of a presentation more quickly. With the advent of easy-to-use graphics software packages and the affordability of the personal computer hardware needed to run them, researchers may now prepare their own slides or transparencies. This paper describes basic graphics software design and offers criteria for selecting an appropriate software package for scientific research presentations. Two prototype graphics packages, Harvard Graphics and SAS/Graph, are compared on the basis of the following selection criteria: (1) basic software design, (2) available hardware, (3) output device drivers, (4) available statistical graphics, and (5) data import/export facilities. Graphic style is also addressed, with sample graphs illustrating a currently popular theory of visual discrimination.
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are closely tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
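As a minimal illustration of one of the classic acceleration schemes mentioned in this abstract, frustum culling for a crowd can be sketched as a bounding-sphere-versus-plane test. The names, dictionary layout, and plane representation below are illustrative choices, not taken from the survey itself:

```python
# Sketch of frustum culling for crowd agents: each character is approximated
# by a bounding sphere, and a sphere is rejected as soon as it lies entirely
# behind any frustum plane. A plane is ((nx, ny, nz), d) with the normal
# pointing inward, so a point p is inside the half-space when dot(n, p) + d >= 0.

def sphere_in_frustum(center, radius, planes):
    """Return False as soon as the sphere is fully outside one plane."""
    for (nx, ny, nz), d in planes:
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:          # completely behind this plane
            return False
    return True                      # inside or intersecting the frustum

def cull_crowd(agents, planes):
    """Keep only the agents whose bounding spheres touch the frustum."""
    return [a for a in agents
            if sphere_in_frustum(a["pos"], a["radius"], planes)]
```

A real frustum supplies six planes (left, right, top, bottom, near, far); the test is conservative, which is acceptable because a falsely kept character is merely rendered, never wrongly dropped.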
Manual for the Fish Population Surveys (DOC9 Package) for the District Fisheries Analysis System (FAS)
Update of Aquatic Biology Technical Report 87/11; final report of Project F-69-R (1-3).
Data Base Management and Analysis of Fisheries in Impoundments. Report issued October 1990. INHS Technical Report prepared for the Illinois Department of Conservation.
Understanding Next-Generation VR: Classifying Commodity Clusters for Immersive Virtual Reality
Commodity clusters offer the ability to deliver higher-performance computer graphics at lower prices than traditional graphics supercomputers. Immersive virtual reality systems demand notoriously high computational resources to deliver adequate real-time graphics, which has led to the emergence of commodity clusters for immersive virtual reality. Such clusters deliver the graphics power needed by leveraging the combined power of several computers to meet the demands of real-time interactive immersive computer graphics. However, the field of commodity cluster-based virtual reality is still in the early stages of development; it is currently ad hoc in nature and lacks order. There is no accepted means for comparing approaches, and implementers are left with instinctual or trial-and-error means for selecting an approach. This paper provides a classification system that facilitates understanding not only of the nature of different clustering systems but also of the interrelations between them. The system is built from a new model for generalized computer graphics applications, based on the flow of data through a sequence of operations over the entire context of the application. Prior models and classification systems have been too narrowly focused in context and application, whereas the system described here provides a unified means for comparing works within the field.
Advanced Architectures for Astrophysical Supercomputing
Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). Hardware originally designed for speeding up graphics rendering in video games is now achieving substantial speed-ups in general-purpose computation -- performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.
Comment: 4 pages, 1 figure, to appear in the proceedings of ADASS XIX, Oct 4-8 2009, Sapporo, Japan (ASP Conf. Series)
First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC
Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy-efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation of multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi 7120 coprocessor. Preliminary timing performance will be presented.
Comment: 13 pages, 4 figures, Accepted to JINST
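To make the core idea of Hough-transform track finding concrete, here is a minimal straight-line Hough transform over 2D detector hits. This is an illustrative sketch, not the authors' algorithm (which operates on the accelerator architectures named above); the function names, binning, and parameter ranges are assumptions:

```python
import math

# Each hit (x, y) votes for every parameter pair (theta, rho) satisfying
# rho = x*cos(theta) + y*sin(theta). Hits lying on one straight line pile
# their votes into a single (theta, rho) accumulator cell, so a peak in the
# accumulator identifies a track candidate.

def hough_vote(hits, n_theta=180, rho_max=100.0, n_rho=200):
    """Fill a sparse accumulator over (theta_bin, rho_bin) cells."""
    acc = {}
    for x, y in hits:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / (2 * rho_max) * n_rho)
            if 0 <= r < n_rho:
                acc[(t, r)] = acc.get((t, r), 0) + 1
    return acc

def best_line(acc):
    """Return the accumulator cell with the most votes."""
    return max(acc, key=acc.get)
```

The voting loop is embarrassingly parallel over hits, which is exactly what makes the transform a natural candidate for GPUs and many-core coprocessors.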
GPU-based Streaming for Parallel Level of Detail on Massive Model Rendering
Rendering massive 3D models in real time has long been recognized as a very challenging problem because of the limited computational power and memory available in a workstation. Most existing rendering techniques, especially level-of-detail (LOD) processing, suffer from their sequential execution nature and do not scale well with the size of the models. We present a GPU-based progressive mesh simplification approach that enables interactive rendering of large 3D models with hundreds of millions of triangles. Our work contributes to massive-model rendering research in two ways. First, we develop a novel data structure to represent the progressive LOD mesh and design a parallel mesh simplification algorithm for the GPU architecture. Second, we propose a GPU-based streaming approach that adopts a frame-to-frame coherence scheme to minimize the high communication cost between the CPU and GPU. Our results show that the parallel mesh simplification algorithm and GPU-based streaming approach significantly improve overall rendering performance.
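The frame-to-frame coherence idea mentioned in this abstract can be sketched in a few lines: choose a discrete LOD level per object from its camera distance, and re-stream data to the GPU only for objects whose level changed since the last frame. The thresholds, names, and data layout below are illustrative assumptions, not the paper's data structure:

```python
import math

def select_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Map a camera distance to an LOD index (0 = full detail)."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)           # coarsest level beyond the last limit

def changed_lods(prev, objects, camera):
    """Return {obj_id: new_level} only for objects whose LOD changed.

    `prev` maps object ids to the level chosen last frame and is updated
    in place, so a stable camera yields an empty update set (no CPU-to-GPU
    traffic for unchanged objects).
    """
    updates = {}
    for obj_id, pos in objects.items():
        level = select_lod(math.dist(pos, camera))
        if prev.get(obj_id) != level:
            updates[obj_id] = level
            prev[obj_id] = level
    return updates
```

In a coherent walkthrough most objects keep their level between consecutive frames, so the update set, and hence the CPU-to-GPU transfer, stays small.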
The dynamics of computerization in a social science research team : a case study of infrastructure, strategies, and skills
This paper examines the dynamics of computerization in a PC-oriented research group through a case study. The time and skill required to integrate computing into the labor processes of research are often significant "hidden costs" of computerization. Computing infrastructure plays a key role in reducing these costs and may be enhanced by careful organization. We illustrate computerization strategies that we have found to be productive and unproductive. Appropriate computerization strategies depend as much on the structuring of resources and interests in the larger social setting as on a technical characterization of the tasks.