
    The first ICASE/LaRC industry roundtable: Session proceedings

    Get PDF
    The first ICASE/LaRC Industry Roundtable was held on October 3-4, 1994, in Williamsburg, Virginia. The main purpose of the roundtable was to draw the attention of ICASE/LaRC scientists to industrial research agendas. The roundtable was attended by about 200 scientists: 30% from NASA Langley, 20% from universities, 17% from NASA Langley contractors (including ICASE personnel), and the remainder from federal agencies other than NASA Langley. The technical areas covered reflected the major research programs in ICASE and closely associated NASA branches. About 80% of the speakers were from industry. This report is a compilation of the session summaries prepared by the session chairmen.

    Rapid-Response Urban CFD Simulations Using a GPU Computing Paradigm on Desktop Supercomputers

    Get PDF
    In the event of chemical or biological (CB) agent attacks or accidents, first responders need hazard prediction data to launch effective emergency response actions. Accurate and timely knowledge of the wind fields in urban areas is critically important for identifying and projecting the extent of CB agent dispersion to determine the hazard zone. In its 2008 report (GAO-08-180), the U.S. Government Accountability Office reported that first responders are limited in their ability to detect and model hazardous releases in urban environments. The current set of modeling tools for contaminant dispersion in urban environments relies on empirical assumptions with diagnostic equations (Wang et al. 2003, Williams et al. 2004). The main advantage of these models is their relatively fast turn-around times, although their predictive capabilities can be limited. As part of the Joint Effects Model (JEM), funded by the Department of Defense, urban transport and dispersion models have been evaluated for their rapid-response capabilities. As discussed in Heagy et al. (2007), the majority of the urban transport and dispersion models considered in the evaluation study fell short of satisfying the JEM key performance parameter of a maximum 10-minute run time on a desktop computer, and the models that were able to satisfy the performance parameter were employed at low resolutions.
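
    As context for the run-time requirement discussed above, the following is a minimal sketch of the kind of per-cell stencil update at the heart of grid-based dispersion solvers. It is not the paper's GPU solver: the grid size, uniform wind, diffusivity, and periodic boundaries are all illustrative assumptions, and a real urban code would compute the wind field with CFD and run exactly this kind of loop as a GPU kernel.

        # Minimal sketch (not the paper's solver): one explicit upwind
        # advection-diffusion step for a contaminant concentration field c
        # on a uniform 2D grid. The same per-cell stencil is what maps
        # naturally onto GPU threads in rapid-response dispersion codes.
        # Grid size, wind components, and coefficients are illustrative;
        # np.roll imposes periodic boundaries for simplicity.
        import numpy as np

        nx, ny = 256, 256
        dx = dy = 2.0          # m, assumed cell size
        dt = 0.1               # s, must satisfy the CFL condition for u, v, D below
        u, v = 3.0, 1.0        # m/s, assumed uniform wind
        D = 0.5                # m^2/s, assumed turbulent diffusivity

        c = np.zeros((nx, ny))
        c[nx // 2, ny // 2] = 1.0   # point release in the domain center

        def step(c):
            """Advance c by one time step: upwind advection + central diffusion."""
            adv_x = -u * (c - np.roll(c, 1, axis=0)) / dx   # upwind for u > 0
            adv_y = -v * (c - np.roll(c, 1, axis=1)) / dy   # upwind for v > 0
            lap = ((np.roll(c, 1, 0) - 2 * c + np.roll(c, -1, 0)) / dx**2 +
                   (np.roll(c, 1, 1) - 2 * c + np.roll(c, -1, 1)) / dy**2)
            return c + dt * (adv_x + adv_y + D * lap)

        for _ in range(100):
            c = step(c)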

    Addressing Geographical Challenges in the Big Data Era Utilizing Cloud Computing

    Get PDF
    Processing, mining and analyzing big data adds significant value towards answering previously unexplored research questions and improving our ability to understand problems in the geographical sciences. This dissertation contributes to developing a solution that supports researchers who may not otherwise have access to traditional high-performance computing resources, so that they can benefit from the “big data” era and carry out big geographical research in ways that were not previously possible. Using approaches from the fields of geographic information science, remote sensing and computer science, this dissertation addresses three major challenges in big geographical research: 1) how to exploit cloud computing to implement a universally scalable solution for classifying multi-sourced remotely sensed imagery datasets with high efficiency; 2) how to overcome the missing-data issue in land use/land cover studies with a high-performance framework on the cloud through the use of available auxiliary datasets; and 3) the design considerations underlying a universal massive-scale voxel geographical simulation model for simulating complex geographical systems from a three-dimensional spatial perspective. This dissertation implements an in-memory distributed remotely sensed imagery classification framework on the cloud using both unsupervised and supervised classifiers, and classifies remotely sensed imagery datasets of the Suez Canal area, Egypt, and Inner Mongolia, China, under different cloud environments. It also implements and tests a cloud-based gap-filling model with eleven auxiliary biophysical and socioeconomic datasets in Inner Mongolia, China. Finally, the research extends a voxel-based cellular automata model using graph theory and develops it into a massive-scale voxel geographical simulation framework to simulate dynamic processes, such as the dispersal of air pollution particles, on the cloud.
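
    The gap-filling idea in the second challenge lends itself to a compact illustration: train a regressor on observed pixels using auxiliary layers as predictors, then predict the missing pixels. The sketch below is a minimal single-machine stand-in, not the dissertation's cloud framework; the two synthetic auxiliary layers, the array shapes, and the choice of a random forest are all assumptions.

        # Minimal single-machine sketch of auxiliary-data gap filling
        # (the dissertation runs this at scale on the cloud with eleven
        # auxiliary datasets; two synthetic predictors stand in here).
        # All array names, shapes, and the regressor choice are assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n_pixels = 10_000
        elevation = rng.uniform(800, 1600, n_pixels)   # auxiliary layer 1
        precip = rng.uniform(100, 400, n_pixels)       # auxiliary layer 2
        ndvi = (0.3 + 0.0002 * precip - 0.0001 * elevation
                + rng.normal(0, 0.02, n_pixels))       # target with a known signal

        mask = rng.random(n_pixels) < 0.2              # 20% of pixels missing (e.g., clouds)
        X = np.column_stack([elevation, precip])

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[~mask], ndvi[~mask])               # train on observed pixels
        filled = ndvi.copy()
        filled[mask] = model.predict(X[mask])          # predict the gaps from auxiliary data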

    Management and display of four-dimensional environmental data sets using McIDAS

    Get PDF
    Over the past four years, great strides have been made in the areas of data management and display of 4-D meteorological data sets. A survey was conducted of available and planned 4-D meteorological data sources. The data types were evaluated for their impact on the data management and display system. The database management requirements generated by the 4-D data display system were analyzed. The suitability of the existing database management procedures and file structures was evaluated in light of the new requirements. Where needed, new database management tools and file procedures were designed and implemented. The quality of the basic 4-D data sets was assured. Interpolation and extrapolation techniques for the 4-D data were investigated. The 4-D data from various sources were combined to make a uniform and consistent data set for display purposes. Data display software was designed to create abstract line-graphic 3-D displays. Realistic shaded 3-D displays were created. Animation routines for these displays were developed in order to produce a dynamic 4-D presentation. A prototype dynamic color stereo workstation was implemented. A computer functional design specification was produced based on interactive studies and user feedback.
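
    The interpolation step described above, resampling heterogeneous 4-D sources onto a consistent grid, can be illustrated compactly. The sketch below is not McIDAS code; the grid shapes, axis units, and the use of SciPy's regular-grid interpolator are illustrative assumptions.

        # Minimal sketch of 4-D (time + 3 space) interpolation, the kind of
        # operation needed to resample meteorological sources onto a uniform
        # grid for display. Grid shapes and axis units are assumed.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        t = np.linspace(0, 6, 7)          # hours
        z = np.linspace(0, 10, 11)        # km
        y = np.linspace(30, 40, 21)       # deg latitude
        x = np.linspace(-100, -90, 21)    # deg longitude
        field = np.random.rand(7, 11, 21, 21)   # e.g., temperature on a 4-D grid

        interp = RegularGridInterpolator((t, z, y, x), field)   # linear by default
        sample = interp([[2.5, 3.2, 35.1, -96.4]])              # value at an arbitrary 4-D point
        print(sample)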

    Ray Tracing Structured AMR Data Using ExaBricks

    Full text link
    Structured Adaptive Mesh Refinement (Structured AMR) enables simulations to adapt the domain resolution to save computation and storage, and has become one of the dominant data representations used by scientific simulations; however, efficiently rendering such data remains a challenge. We present an efficient approach for volume- and iso-surface ray tracing of Structured AMR data on GPU-equipped workstations, using a combination of two different data structures. Together, these data structures allow a ray tracing based renderer to quickly determine which segments along the ray need to be integrated and at what frequency, while also providing quick access to all data values required for a smooth sample-reconstruction kernel. Our method makes use of the RTX ray tracing hardware for surface rendering, ray marching, space skipping, and adaptive sampling, and allows for interactive changes to the transfer function and implicit iso-surfacing thresholds. We demonstrate that our method achieves high performance with little memory overhead, enabling interactive, high-quality rendering of complex AMR data sets on individual GPU workstations.
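
    The core idea of space skipping during ray marching can be sketched compactly. The following CPU sketch illustrates the principle only; ExaBricks' GPU data structures, RTX traversal, and adaptive sampling are far more involved, and the segment layout and toy transfer function here are assumptions.

        # Minimal CPU sketch of space skipping during volume ray marching:
        # segments along the ray whose value range is fully transparent under
        # the transfer function are skipped; others are sampled and composited
        # front to back. All names and the transfer function are assumptions.
        def transfer(v):
            """Toy transfer function: intensity and opacity from scalar v in [0, 1]."""
            return v, 0.0 if v < 0.3 else 0.05 * v   # fully transparent below 0.3

        def march(segments, step=0.1):
            """segments: list of (t0, t1, vmin, vmax, sample_fn) along the ray."""
            color, alpha = 0.0, 0.0
            for t0, t1, vmin, vmax, sample in segments:
                if vmax < 0.3:                      # whole segment transparent: skip it
                    continue
                t = t0
                while t < t1 and alpha < 0.99:      # early ray termination
                    c, a = transfer(sample(t))
                    color += (1.0 - alpha) * a * c  # front-to-back compositing
                    alpha += (1.0 - alpha) * a
                    t += step
            return color, alpha

        segs = [(0.0, 1.0, 0.0, 0.2, lambda t: 0.1),            # skipped entirely
                (1.0, 2.0, 0.2, 0.9, lambda t: 0.2 + 0.35 * t)] # integrated
        print(march(segs))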

    The Second ICASE/LaRC Industry Roundtable: Session Proceedings

    Get PDF
    The second ICASE/LaRC Industry Roundtable was held October 7-9, 1996 at the Williamsburg Hospitality House, Williamsburg, Virginia. Like the first roundtable in 1994, this meeting had two objectives: (1) to expose ICASE and LaRC scientists to industrial research agendas; and (2) to acquaint industry with the capabilities and technology available at ICASE, LaRC and academic partners of ICASE. Nineteen sessions were held in three parallel tracks. Of the 170 participants, over one third were affiliated with various industries. Proceedings from the different sessions are summarized in this report

    Methods for Multilevel Parallelism on GPU Clusters: Application to a Multigrid Accelerated Navier-Stokes Solver

    Get PDF
    Computational Fluid Dynamics (CFD) is an important field in high performance computing with numerous applications. Solving problems in thermal and fluid sciences demands enormous computing resources and has been one of the primary applications used on supercomputers and large clusters. Modern graphics processing units (GPUs) with many-core architectures have emerged as general-purpose parallel computing platforms that can accelerate simulation science applications substantially. While significant speedups have been obtained with single and multiple GPUs on a single workstation, large problems require more resources. Conventional clusters of central processing units (CPUs) are now being augmented with GPUs in each compute node to tackle large problems. The present research investigates methods of taking advantage of the multilevel parallelism in multi-node, multi-GPU systems to develop scalable simulation science software. The primary application the research develops is a cluster-ready GPU-accelerated incompressible Navier-Stokes flow solver that includes advanced numerical methods, including a geometric multigrid pressure Poisson solver. The research investigates multiple implementations to explore computation/communication overlapping methods, and explores methods for coarse-grain parallelism, including POSIX threads, MPI, and a hybrid OpenMP-MPI model. The application includes a number of usability features, including periodic VTK (Visualization Toolkit) output, a run-time configuration file, and flexible setup of obstacles to represent urban areas and complex terrain. Numerical features include a variety of time-stepping methods, buoyancy-driven flow, adaptive time-stepping, various iterative pressure solvers, and a new parallel 3D geometric multigrid solver. At each step, the project examines performance and scalability measures using the Lincoln Tesla cluster at the National Center for Supercomputing Applications (NCSA) and the Longhorn cluster at the Texas Advanced Computing Center (TACC). The results demonstrate that multi-GPU clusters can substantially accelerate computational fluid dynamics simulations.
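
    To illustrate the structure of the geometric multigrid pressure solver mentioned above, the following is a minimal V-cycle for a 1-D Poisson problem. It is a sketch only: the actual solver is 3-D, parallel, and GPU-accelerated, and the smoother, grid sizes, and cycle counts here are assumptions.

        # Minimal sketch of a geometric multigrid V-cycle for -u'' = f on [0, 1]
        # with zero boundaries: smooth, restrict the residual, recurse on the
        # coarse grid, prolong the correction, smooth again.
        import numpy as np

        def smooth(u, f, h, iters=3):
            """Weighted Jacobi smoothing on interior points."""
            for _ in range(iters):
                u[1:-1] += 0.6 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
            return u

        def v_cycle(u, f, h):
            u = smooth(u, f, h)
            if len(u) <= 3:                      # coarsest grid: smoothing suffices
                return u
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)  # residual
            r_c = r[::2].copy()                  # restrict by injection
            e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h)                 # coarse solve
            e = np.interp(np.arange(len(u)), np.arange(len(u))[::2], e_c) # prolong
            return smooth(u + e, f, h)

        n = 129
        h = 1.0 / (n - 1)
        f = np.ones(n)
        u = np.zeros(n)
        for _ in range(10):
            u = v_cycle(u, f, h)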

    Physiological system modelling

    Get PDF
    Computer graphics has a major impact on our day-to-day life. It is used in diverse areas such as displaying the results of engineering and scientific computations, visualization, producing television commercials and feature films, simulation and analysis of real-world problems, computer-aided design, and graphical user interfaces that increase the communication bandwidth between humans and machines. Scientific visualization is a well-established method for the analysis of data originating from scientific computations, simulations or measurements. This report presents the design and implementation of the 3Dgen software, developed by the author in the C language using OpenGL. 3Dgen was used to visualize three-dimensional cylindrical models such as pipes, and also saw limited use in virtual endoscopy. Using the developed software, a model is created from centreline data entered by the user or read from the output of another program, stored in a plain text file. The model is constructed by drawing surface polygons between two adjacent centreline points. The software allows the user to view the internal and external surfaces of the model, and was designed to run on more than one operating system with minimal installation procedures. Since the software is very small, it can be stored on a 1.44-megabyte floppy diskette. Depending on the processing speed of the PC, the software can generate models of any length and size. Compared to other packages, 3Dgen has minimal input procedures and is able to generate models with smooth bends. It has both modelling and virtual exploration features. For models with sharp bends the software generates an overshoot.
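
    The surface-construction step described above, drawing polygons between adjacent centreline points, can be sketched as follows. This is not 3Dgen's algorithm: the frame construction, radius, and ring resolution are assumptions, and the twisting of local frames at sharp bends hints at why overshoot can occur there.

        # Minimal sketch of tube-from-centreline surface generation: place a
        # ring of vertices around each centreline point and join adjacent
        # rings with quadrilateral surface polygons. Radius, ring resolution,
        # and the frame construction are illustrative assumptions.
        import numpy as np

        def tube(centreline, radius=1.0, sides=12):
            """Return (vertices, quads) for a tube around a polyline centreline."""
            rings = []
            for i, p in enumerate(centreline):
                # tangent by forward/backward difference
                a = centreline[min(i + 1, len(centreline) - 1)]
                b = centreline[max(i - 1, 0)]
                t = a - b
                t = t / np.linalg.norm(t)
                # any vector not parallel to t yields a usable ring normal
                ref = (np.array([0.0, 0.0, 1.0]) if abs(t[2]) < 0.9
                       else np.array([1.0, 0.0, 0.0]))
                n = np.cross(t, ref); n /= np.linalg.norm(n)
                bvec = np.cross(t, n)
                ang = np.linspace(0, 2 * np.pi, sides, endpoint=False)
                rings.append(p + radius * (np.outer(np.cos(ang), n)
                                           + np.outer(np.sin(ang), bvec)))
            verts = np.vstack(rings)
            quads = [(i * sides + j, i * sides + (j + 1) % sides,
                      (i + 1) * sides + (j + 1) % sides, (i + 1) * sides + j)
                     for i in range(len(rings) - 1) for j in range(sides)]
            return verts, quads

        line = np.array([[0, 0, 0], [0, 0, 5], [2, 0, 10]], dtype=float)
        v, q = tube(line)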

    The ICARUS white paper: A scalable, energy-efficient, solar-powered HPC center based on low-power GPUs

    Get PDF
    We present a unique approach for integrating research in High Performance Computing (HPC) as well as photovoltaic (PV) solar farming and battery technologies into a container-based compute center designed for maximum energy efficiency, performance and extensibility/scalability. We use NVIDIA Jetson TK1 boards to build a considerable cluster of 60 low-power GPUs, attach a 7.5 kWp solar farm and an 8 kWh lithium-ion battery power supply, and integrate everything into a single-container, standalone housing. We demonstrate the success of our system by evaluating the performance and energy efficiency of common dense and sparse linear algebra kernels as well as a full CFD code. With this work we show that, with current technology, the energy-consumption-induced follow-up costs of HPC can be reduced to zero.
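
    A back-of-the-envelope energy budget shows why these numbers are plausible. The per-board draw and the solar yield below are assumed illustrative values, not measurements from the white paper.

        # Rough energy budget for a cluster like the one described above:
        # 60 low-power boards against a 7.5 kWp array and an 8 kWh battery.
        # Per-board draw and peak sun hours are assumptions, not measurements.
        boards = 60
        watts_per_board = 12.0                          # assumed average Jetson-class draw
        cluster_kw = boards * watts_per_board / 1000.0  # 0.72 kW continuous load

        daily_load_kwh = cluster_kw * 24                # ~17.3 kWh/day
        peak_sun_hours = 3.5                            # assumed site-dependent average
        daily_solar_kwh = 7.5 * peak_sun_hours          # ~26.3 kWh/day from the array

        battery_kwh = 8.0
        hours_on_battery = battery_kwh / cluster_kw     # ~11 h of autonomy at full load

        print(f"load {daily_load_kwh:.1f} kWh/day, solar {daily_solar_kwh:.1f} kWh/day, "
              f"battery bridge {hours_on_battery:.1f} h")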