219 research outputs found

    Performance Analysis And Optimal Utilization Of Inter-Process Communications On A Commodity Cluster

    Classical science is based on theory, observation, and physical experimentation; contemporary science is characterized by theory, observation, experimentation, and numerical simulation. With hardware and software we can simulate many phenomena, saving time, money, and physical resources. Simulating a given phenomenon, however, requires substantial computing power, and the answer to that need is the high-performance computer: numerous processors working on the same task in parallel. In the past, high-performance computers were very expensive and affordable by only a few institutions. Once the Message Passing Interface (MPI) library was ported to the PC platform, commodity clusters could be built from inexpensive PCs and afforded by any researcher. Many performance analyses have been conducted on high-end supercomputers; none had been done on commodity clusters. In this thesis, experiments for six major MPI communication functions were performed on eight different cluster configurations, and performance analyses were conducted on the results. Based on those results, methods for the optimal utilization of inter-process communications on commodity clusters are proposed.
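
    Only the abstract is reproduced here, but the flavor of such measurements is easy to show. Below is a minimal MPI ping-pong timing sketch in C, a common way to estimate point-to-point latency; the message size, repetition count, and output format are illustrative assumptions, not details taken from the thesis.

```c
/* Minimal MPI ping-pong sketch: times round trips of MPI_Send/MPI_Recv
 * between ranks 0 and 1. Run with at least two ranks (e.g. mpirun -np 2).
 * Illustrative only; the thesis benchmarks six MPI communication
 * functions across eight cluster configurations. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, reps = 1000, nbytes = 1024;   /* assumed parameters */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(nbytes);
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)   /* each iteration is two messages, hence 2 * reps */
        printf("avg one-way latency: %g us for %d bytes\n",
               (t1 - t0) / (2.0 * reps) * 1e6, nbytes);
    free(buf);
    MPI_Finalize();
    return 0;
}
```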

    Aviation Safety/Automation Program Conference

    The Aviation Safety/Automation Program Conference - 1989 was sponsored by the NASA Langley Research Center on 11-12 October 1989. The conference, held at the Sheraton Beach Inn and Conference Center, Virginia Beach, Virginia, was chaired by Samuel A. Morello. Its primary objective was to ensure effective communication and technology transfer by providing a forum for technical interchange on current operational problems and on program results to date. The primary goal of the Aviation Safety/Automation Program is to improve the safety of the national airspace system through the development and integration of human-centered automation technologies for aircraft crews and air traffic controllers.

    Advances in Time-Domain Electromagnetic Simulation Capabilities Through the Use of Overset Grids and Massively Parallel Computing

    A new methodology is presented for conducting numerical simulations of electromagnetic scattering and wave propagation phenomena. Technologies from several scientific disciplines, including computational fluid dynamics, computational electromagnetics, and parallel computing, are uniquely combined to form a simulation capability that is both versatile and practical. In the process of creating this capability, work is accomplished to conduct the first study designed to quantify the effects of domain decomposition on the performance of a class of explicit solvers for hyperbolic partial differential equations; to develop a new method of partitioning computational domains composed of overset grids; and to provide the first detailed assessment of the applicability of overset grids to the field of computational electromagnetics. Furthermore, the first Finite Volume Time Domain (FVTD) algorithm capable of utilizing overset grids on massively parallel computing platforms is developed and implemented. Results are presented for a number of scattering and wave propagation simulations conducted using this algorithm, including two spheres in close proximity and a finned missile.
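
    The dissertation itself is only abstracted here, but the core parallelization idea, decomposing the grid into per-processor subdomains that trade halo values each time step, can be sketched compactly. The C program below applies that pattern to a toy 1D upwind advection solver, an assumed stand-in for the FVTD scheme; the grid size, CFL number, and step count are likewise invented for illustration.

```c
/* Sketch of domain decomposition for an explicit hyperbolic solver:
 * each rank owns a slab of a 1D grid and exchanges one halo cell per
 * step. A toy first-order upwind advection update stands in for the
 * FVTD scheme; the dissertation itself partitions 3D overset grids. */
#include <mpi.h>
#include <stdlib.h>

#define NLOCAL 100   /* cells per rank (assumed) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* u[0] is the left halo cell; u[1..NLOCAL] are owned cells */
    double *u = calloc(NLOCAL + 1, sizeof *u);
    if (rank == 0) u[1] = 1.0;          /* initial pulse */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
    double c = 0.5;                      /* CFL number (assumed) */

    for (int step = 0; step < 200; step++) {
        /* send rightmost owned cell to the right neighbor,
           receive the left halo cell from the left neighbor */
        MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 0,
                     &u[0],      1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* upwind update, sweeping right to left so each cell still
           reads its left neighbor's previous-step value */
        for (int i = NLOCAL; i >= 1; i--)
            u[i] = u[i] - c * (u[i] - u[i - 1]);
    }
    free(u);
    MPI_Finalize();
    return 0;
}
```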

    Where have all the flowers gone?: a modular systems perspective of IT infrastructure design and productivity

    Assessing the value of IT infrastructure investments has been both difficult and ambiguous. This research develops and tests a conceptual framework for understanding the productivity process. A lagged and recursive framework is used to trace the relationship between IT infrastructure investments, infrastructure design, and organizational productivity, along with the contingencies of IT management and the environment. A major contribution of this study is the use of the systems perspective to disaggregate the concepts of IT infrastructure and productivity into collectively exhaustive types. Findings reveal that IT investments do not significantly affect productivity directly but do so when used to develop an IT infrastructure design. IT management is seen to strongly influence IT infrastructure design. Similarly, the organizational environment appears to significantly influence the type of productivity focus for a firm. The study adds to the existing body of knowledge through a holistic investigation of the multi-level relationship between IT infrastructure configurations, contingencies, and productivity.

    Cluster Computing Review

    In the past decade there has been a dramatic shift from mainframe or ‘host-centric’ computing to a distributed ‘client-server’ approach. In the next few years this trend is likely to continue, with further shifts towards ‘network-centric’ computing becoming apparent. All these trends were set in motion by the invention of the mass-reproducible microprocessor by Ted Hoff of Intel some twenty-odd years ago. The present generation of RISC microprocessors is now more than a match for mainframes in terms of cost and performance. The long-foreseen day when collections of RISC microprocessors assembled together as a parallel computer could outperform the vector supercomputers has finally arrived. Such high-performance parallel computers incorporate proprietary interconnection networks allowing low-latency, high-bandwidth inter-processor communications. However, for certain types of applications such interconnect optimization is unnecessary and conventional LAN technology is sufficient. This has led to the realization that clusters of high-performance workstations can realistically be used for a variety of applications, either to replace mainframes, vector supercomputers, and parallel computers or to better manage already installed collections of workstations. Whilst it is clear that ‘cluster computers’ have limitations, many institutions and companies are exploring this option. Software to manage such clusters is at an early stage of development, and this report reviews the current state of the art. Cluster computing is a rapidly maturing technology that seems certain to play an important part in the ‘network-centric’ computing future.

    Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment, held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia, on September 17-18, 1996. The presentations focused on computational tools and facilities for the analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools, and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

    Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    A computational structural/material analysis and design tool that would meet industry's future demand for expedience and reduced cost is presented. This unique software, GENOA, is dedicated to parallel, high-speed analysis performing probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models, combining their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of multiple processors to perform the computationally intense probabilistic analysis needed to assess uncertainties in structural reliability and composite micromechanics. The primary objectives achieved in the development were: (1) utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of the structure, material, and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics, and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and to increase convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation-type computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.
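
    GENOA's internals are not described beyond this abstract, but the processor-hungry probabilistic analysis it targets is naturally parallel. As a hedged, generic sketch (not GENOA's method), the C program below spreads Monte Carlo samples of an uncertain material strength across MPI ranks and reduces a failure-probability estimate; the limit-state function, distribution, and sample counts are invented for illustration.

```c
/* Generic parallel Monte Carlo sketch: each rank draws samples of an
 * uncertain input, evaluates a toy limit-state function, and a single
 * reduction yields the estimated failure probability. Illustrative
 * only; GENOA's probabilistic mechanics models are far richer. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy limit state: "failure" when sampled strength drops below load.
 * Uniform strength on [80, 120] and a fixed load of 90 are assumed. */
static int fails(double strength) { return strength < 90.0; }

int main(int argc, char **argv)
{
    int rank, size;
    long nlocal = 1000000, failures = 0, total_failures = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(12345 + rank);              /* independent stream per rank */
    for (long i = 0; i < nlocal; i++) {
        double strength = 80.0 + 40.0 * rand() / (double)RAND_MAX;
        failures += fails(strength);
    }
    /* sum per-rank failure counts on rank 0 */
    MPI_Reduce(&failures, &total_failures, 1, MPI_LONG, MPI_SUM,
               0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("P(failure) ~= %g\n",
               total_failures / (double)(nlocal * size));
    MPI_Finalize();
    return 0;
}
```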