
    NASA's information technology activities for the 90's

    The Office of Aeronautics, Exploration and Technology (OAET) is completing an extensive assessment of its nearly five hundred million dollars of proposed space technology development work. The budget is divided into four segments: (1) the base research and technology program; (2) the Civil Space Technology Initiative (CSTI); (3) the Exploration Technology Program (ETP); and (4) the High Performance Computing Initiative (HPCI). The programs are briefly discussed in the context of Astrotech 21.

    EPCC's Exascale journey: a retrospective of the past 10 years and a vision of the future


    A Petaflops Era Computing Analysis

    This report covers a study of the potential for petaflops (10^15 floating point operations per second) computing. The study was performed in 1996 and should be considered the first step in an ongoing effort. The analysis concludes that a petaflops system is technically feasible, but not with today's state of the art. Since the computer arena is now a commodity business, most experts expect that a petaflops system will evolve from current technology in an evolutionary fashion. To meet the price expectations of users waiting for petaflops performance, great improvements in lowering component costs will be required; lower power consumption is also a must. The present rate of progress in performance places the date of introduction of petaflops systems at about 2010. Several years before that date, chip feature sizes are projected to reach the currently known resolution limit. Aside from the economic problems and constraints, software is identified as the major problem. The tone of this initial study is more pessimistic than most of the published material available on petaflops systems. Workers in the field are expected to generate more data which could serve as the basis for a more informed projection. This report includes an annotated bibliography.
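    The ~2010 estimate above is essentially a compound-growth extrapolation. As a rough illustration (the baseline and doubling period below are assumptions, not figures from the report), the following Python sketch projects when sustained performance crosses 10^15 FLOP/s if performance doubles roughly every 18 months from a ~1 teraflop/s system in 1996:

    # Rough extrapolation behind a "petaflops by ~2010" estimate.
    # Assumptions (not from the report): ~1 teraflop/s baseline in 1996,
    # performance doubling roughly every 18 months.
    import math

    baseline_year = 1996
    baseline_flops = 1e12          # ~1 teraflop/s (assumed baseline)
    target_flops = 1e15            # 1 petaflop/s
    doubling_period_years = 1.5    # assumed doubling time

    doublings = math.log2(target_flops / baseline_flops)  # ~10 doublings
    years_needed = doublings * doubling_period_years      # ~15 years
    print(f"Estimated petaflops year: ~{baseline_year + years_needed:.0f}")
    # -> ~2011, consistent with the report's ~2010 figure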

    Basic Issues and Current Status of Parallel Computing -- 1995

    The best enterprises have both a compelling need pulling them forward and an innovative technological solution pushing them on. In high-performance computing, we have the need for increased computational power in many applications and the inevitable long-term solution is massive parallelism. In the short term, the relation between pull and push may seem unclear as novel algorithms and software are needed to support parallel computing. However, eventually parallelism will be present in all computers -- including those in your children's video game, your personal computer or workstation, and the central supercomputer.

    Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

    A History of High-Performance Computing

    Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.
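    The speed comparison quoted above can be checked with simple arithmetic; the figures below (51.9 teraflop/s, 10,240 processors, a ~1 gigaflop/s predecessor) come straight from the passage, while the script itself is just an illustrative sanity check:

    # Sanity check of the speedup figures quoted in the passage.
    columbia_flops = 51.9e12   # 51.9 teraflop/s benchmark rating
    processors = 10_240
    old_ames_flops = 1e9       # ~1 gigaflop/s, the fastest Ames machine 20 years earlier

    speedup = columbia_flops / old_ames_flops
    per_processor = columbia_flops / processors

    print(f"Speedup over the 1-gigaflop machine: ~{speedup:,.0f}x")      # ~51,900x, the quoted "50,000 times"
    print(f"Per-processor throughput: ~{per_processor / 1e9:.1f} GF/s")  # ~5.1 gigaflop/s per processor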

    Analysis of the Basic Implementation Aspects of Hardware-Accelerated Density Functional Theory Calculations

    This paper presents a Field Programmable Gate Array (FPGA) implementation of a calculation module for the exponential part of a Gaussian Type Orbital (GTO). The module is composed of several specially crafted floating-point modules which are fully pipelined and optimized for high performance. The hardware implementation revealed a significant speed-up for the calculation of the finite sum of exponential products, ranging from 2.5x to 20x in comparison to a general-purpose Central Processing Unit (CPU) version. Calculating the values of GTOs is one of the computationally critical parts of the Kohn-Sham algorithm. The approach proposed in the paper aims to increase the performance of part of a quantum chemistry computational system by employing an FPGA-based accelerator. Several issues are addressed, such as identifying the code fragments that benefit most from hardware acceleration, porting part of the Kohn-Sham algorithm to the FPGA, adjusting data precision, and data transfer overhead. The authors' intention was also to make the hardware implementation of the orbital function calculation universal and easily attachable to different quantum chemistry software packages.
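    For readers unfamiliar with the kernel being accelerated: the exponential part of a contracted GTO is a finite sum of Gaussian primitives, f(r) = sum_i c_i * exp(-alpha_i * r^2). The paper does not include its reference code, so the minimal CPU sketch below is only a functional specification of that inner loop; the contraction coefficients and exponents are illustrative values, not taken from the paper:

    # Minimal CPU reference for the GTO exponential part accelerated on the FPGA:
    # a finite sum of Gaussian primitives  f(r^2) = sum_i c_i * exp(-alpha_i * r^2).
    # Coefficients/exponents below are illustrative, not from the paper.
    import math

    def gto_exponential_part(r2, coeffs, exponents):
        """Evaluate sum_i c_i * exp(-alpha_i * r2) for a squared distance r2."""
        return sum(c * math.exp(-a * r2) for c, a in zip(coeffs, exponents))

    # A hypothetical 3-term contraction evaluated at |r - R|^2 = 0.25 (atomic units):
    coeffs = [0.154, 0.535, 0.444]
    exponents = [3.425, 0.624, 0.169]
    print(gto_exponential_part(0.25, coeffs, exponents))

    The reported 2.5x to 20x speedup comes from evaluating such sums in fully pipelined hardware rather than serially on a CPU.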

    Infrastructure Plan for ASC Petascale Environments


    Investigation of reduced hypercube (RH) networks: embedding and routing capabilities

    The choice of a topology for the interconnection of resources in a distributed-memory parallel computing system is a major design decision. The direct binary hypercube has been widely used for this purpose due to its low diameter and its ability to efficiently emulate other important structures. These strong properties of the hypercube come at the cost of high VLSI complexity, because the number of communication ports and channels per node grows with the total number of nodes. The reduced hypercube (RH) topology, obtained by a uniform reduction in the number of links per hypercube node, yields lower-complexity interconnection networks than hypercubes with the same number of nodes, thus permitting the construction of larger parallel systems. Furthermore, it has been shown that the RH achieves, at lower cost, performance comparable to that of a regular hypercube with the same number of nodes. A very important issue for the viability of the RH is the efficiency with which frequently used topologies can be embedded into it. This thesis proposes embedding algorithms for three very important topologies: the ring, the torus, and the binary tree. The performance of the proposed algorithms is analyzed and compared to that of equivalent embedding algorithms for the regular hypercube, and it is shown that these topologies are emulated efficiently on the RH. Additionally, two previously proposed routing algorithms for the RH are evaluated through simulation results.
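    The abstract does not spell out the RH-specific constructions, but the baseline they are compared against is the classical hypercube ring embedding via a reflected Gray code: consecutive Gray-code values differ in exactly one bit, so they label adjacent hypercube nodes, giving a dilation-1 ring. A minimal sketch of that standard construction (the regular-hypercube case, not the thesis's RH algorithms):

    # Classical ring embedding in a binary hypercube via the reflected Gray code.
    # Consecutive codes differ in exactly one bit, so consecutive ring positions
    # map to hypercube neighbors (dilation 1). This is the regular-hypercube
    # baseline; the thesis's RH-specific embeddings are not reproduced here.

    def gray_code(i):
        """i-th reflected Gray code."""
        return i ^ (i >> 1)

    def ring_embedding(dim):
        """Map ring position k to a node label in a 2**dim-node hypercube."""
        return [gray_code(k) for k in range(2 ** dim)]

    nodes = ring_embedding(3)   # 8-node ring in a 3-cube
    print(nodes)                # [0, 1, 3, 2, 6, 7, 5, 4]
    # Verify every consecutive pair (including the wrap-around) differs in one bit:
    pairs = zip(nodes, nodes[1:] + nodes[:1])
    print(all(bin(a ^ b).count("1") == 1 for a, b in pairs))  # True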