4,248 research outputs found

    Spacelab system analysis Marshall Avionics System Testbed (MAST)

    A synopsis of the visits to avionics test facilities is presented. A list of recommendations for the MAST facility is also included.

    NASA's supercomputing experience

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

    Adaptation of a GPU simulator for modern architectures

    GPUs have evolved radically over the last ten years, with improvements in performance, power consumption, memory, and programmability driving growing interest in them. This interest, especially in academic research into GPU architecture, led to the creation of the widely used GPGPU-Sim, a GPU simulator for general-purpose computation workloads. The simulation models currently available are based on older architectures, and as new GPU architectures have been introduced, GPGPU-Sim has not been updated to model them. This project attempts to model a more modern GPU, the Maxwell-based GeForce GTX Titan X, by modifying the existing configuration files for one of the older simulation models. The changes made to the configuration files include changing the GPU's organization, updating the clock domains, and increasing cache and memory sizes. To test the accuracy of the model, eleven GPGPU programs, some having multiple kernels, were executed by both the model and the physical hardware and compared using IPC as the metric. While for some kernels the model performed within 16% of the GeForce GTX Titan X, an equal number of kernels ran either much faster or much slower on the model than on the hardware. The cases in which the model ran much faster are suspected to be ones where the hardware either executed single-precision instructions as double-precision instructions or ran entirely different machine code for the same kernel than the model. The cases in which the model ran much slower are suspected to stem from the fact that the Maxwell memory subsystem cannot currently be accurately modeled in GPGPU-Sim.
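    The per-kernel comparison described above can be sketched as follows. This is an illustrative reconstruction, not code from the project; the kernel names and IPC figures are invented, and the 16% threshold is taken from the abstract.

```python
# Hypothetical sketch: comparing model IPC against hardware IPC per kernel.
# All kernel names and IPC values below are made up for illustration.

def ipc_error(model_ipc: float, hw_ipc: float) -> float:
    """Relative IPC difference of the model vs. the hardware, in percent."""
    return (model_ipc - hw_ipc) / hw_ipc * 100.0

# (model IPC, hardware IPC) per kernel; positive error = model ran "faster".
kernels = {
    "vectorAdd": (1.90, 2.00),
    "matrixMul": (3.40, 3.00),
    "reduction": (0.80, 1.60),
}

for name, (model, hw) in kernels.items():
    err = ipc_error(model, hw)
    tag = "within 16%" if abs(err) <= 16.0 else "outside 16%"
    print(f"{name}: {err:+.1f}% ({tag})")
```

    A kernel like the hypothetical "reduction" above, at -50%, would fall into the "much slower" group that the abstract attributes to the unmodeled Maxwell memory subsystem.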

    Power Efficiency for Software Algorithms running on Graphics Processors

    Power efficiency has become the most important consideration for many modern computing devices. In this paper, we examine the power efficiency of a range of graphics algorithms on different GPUs. To measure power consumption, we built a power-measuring device that samples currents at a high frequency. We compare the power efficiency of different graphics algorithms by measuring the power and performance of three different primary rendering algorithms and three different shadow algorithms. We measure these algorithms’ power signatures on a mobile phone, on an integrated CPU and graphics processor, and on high-end discrete GPUs, and then compare power efficiency across both algorithms and GPUs. Our results show that power efficiency is not always proportional to rendering performance and that, for some algorithms, power efficiency varies across different platforms. We also show that for some algorithms, energy efficiency is similar on all platforms.
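    The measurement approach above, sampling current at a high frequency and deriving energy, can be sketched as follows. This is my own illustrative reconstruction under the assumption of a fixed supply voltage, not the paper's actual tooling; the sample values are invented.

```python
# Hypothetical sketch: integrate power (V * I) over current samples taken
# at a fixed rate, then derive an energy-efficiency figure for a workload.

def energy_joules(current_samples, voltage, sample_rate_hz):
    """Trapezoidal integration of instantaneous power over the samples."""
    dt = 1.0 / sample_rate_hz
    total = 0.0
    for i0, i1 in zip(current_samples, current_samples[1:]):
        total += voltage * (i0 + i1) / 2.0 * dt
    return total

# Invented run: constant 1 A draw at 12 V, sampled at 1 kHz; five samples
# span four 1 ms intervals, so the energy is 12 W * 0.004 s = 0.048 J.
e = energy_joules([1.0] * 5, voltage=12.0, sample_rate_hz=1000.0)
frames_rendered = 10
print(f"energy: {e:.3f} J, efficiency: {frames_rendered / e:.1f} frames/J")
```

    A frames-per-joule figure like this is one way to see the abstract's point that power efficiency need not track raw rendering performance: a faster algorithm that draws disproportionately more current can still render fewer frames per joule.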

    Many is beautiful : commoditization as a source of disruptive innovation

    Thesis (S.M.M.O.T.)--Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 2003. Includes bibliographical references (leaves 44-45). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    The expression "disruptive technology" is now firmly embedded in the modern business lexicon. The mental model summarized by this concise phrase has great explanatory power for ex-post analysis of many revolutionary changes in business. Unfortunately, this paradigm can rarely be applied prescriptively. The classic formulation of a "disruptive technology" sheds little light on potential sources of innovation. This thesis seeks to extend this analysis by suggesting that many important disruptive technologies arise from commodities. The sudden availability of a high-performance factor input at a low price often enables innovation in adjacent market segments. The thesis suggests five main reasons that commodities spur innovation:
    ** The emergence of a commodity collapses competition to the single dimension of price. Sudden changes in factor prices create new opportunities for supply-driven innovation. Low prices enable innovators to substitute quantity for quality.
    ** The price/performance curve of a commodity creates an attractor that promotes demand aggregation.
    ** Commodities emerge after the establishment of a dominant design. Commodities have defined and stable interfaces. Well-developed tool sets and experienced developer communities are available to work with commodities, decreasing the price of experimentation.
    ** Distributed architectures based on large numbers of simple, redundant components offer more predictable performance. Systems based on a small number of high-performance components will have a higher standard deviation for uptime than high-granularity systems based on large numbers of low-power components.
    ** Distributed architectures are much more flexible than low-granularity systems. Large integrated facilities often provide cost advantages when operating at the Minimum Efficient Scale of production. However, distributed architectures that can efficiently change production levels over time may be a superior solution based on the ability to adapt to changing market demand patterns.
    The evolution of third-generation bus architectures in personal computers provides a comprehensive example of commodity-based disruption, incorporating all five forces.
    by Richard Ellert Willey. S.M.M.O.T.
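    The claim that high-granularity systems have more predictable uptime can be illustrated with a standard statistical argument. This framing is mine, not the thesis's model: assume each of N identical components is up independently with probability p, so the fraction of total capacity available has standard deviation sqrt(p*(1-p)/N), which shrinks as N grows.

```python
# Illustrative sketch (my assumption, not the thesis's model): predictability
# of available capacity for N independent, identical redundant components.
import math

def capacity_stddev(n_components: int, p_up: float = 0.99) -> float:
    """Std. dev. of the available-capacity fraction for n redundant parts."""
    return math.sqrt(p_up * (1.0 - p_up) / n_components)

# Few large components vs. many small ones: same expected capacity (99%),
# but the high-granularity system's delivered capacity varies far less.
print(f"N=4:    stddev = {capacity_stddev(4):.4f}")
print(f"N=1000: stddev = {capacity_stddev(1000):.4f}")
```

    The expected capacity fraction is p in both cases; only the spread differs, which matches the abstract's point that low-granularity systems show a higher standard deviation for uptime.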