
    Computations of unsteady multistage compressor flows in a workstation environment

    High-end graphics workstations are becoming a necessary tool in the computational fluid dynamics environment. In addition to their graphics capabilities, workstations of the latest generation have powerful floating-point-operation capabilities. As workstations become common, they could provide valuable computing time for such applications as turbomachinery flow calculations. This report discusses the issues involved in implementing an unsteady, viscous multistage-turbomachinery code (STAGE-2) on workstations. It then describes work in which the workstation version of STAGE-2 was used to study the effects of axial-gap spacing on the time-averaged and unsteady flow within a 2 1/2-stage compressor. The results included time-averaged surface pressures, time-averaged pressure contours, standard deviation of pressure contours, pressure amplitudes, and force polar plots.
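    The statistics listed in the abstract amount to simple reductions over the solver's unsteady time history. Below is a minimal post-processing sketch in Python; the array shapes and sample values are illustrative assumptions, not STAGE-2 output.

        import numpy as np

        def unsteady_pressure_stats(p_samples):
            # p_samples: (n_timesteps, n_points) array of surface-pressure
            # snapshots from an unsteady solver (hypothetical layout).
            p_mean = p_samples.mean(axis=0)   # time-averaged pressure
            p_std = p_samples.std(axis=0)     # standard deviation (unsteadiness)
            p_amp = p_samples.max(axis=0) - p_samples.min(axis=0)  # amplitude
            return p_mean, p_std, p_amp

        # Example: 200 snapshots of pressure at 50 blade-surface points.
        rng = np.random.default_rng(0)
        snapshots = 101325.0 + 500.0 * rng.standard_normal((200, 50))
        mean, std, amp = unsteady_pressure_stats(snapshots)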

    Status and projections of the NAS program

    NASA's Numerical Aerodynamic Simulation (NAS) Program has completed development of the initial operating configuration of the NAS Processing System Network (NPSN). This is the first milestone in the continuing and pathfinding effort to provide state-of-the-art supercomputing for aeronautics research and development. The NPSN, available to a nation-wide community of remote users, provides a uniform UNIX environment over a network of host computers ranging from the Cray-2 supercomputer to advanced scientific workstations. This system, coupled with a vendor-independent base of common user interface and network software, presents a new paradigm for supercomputing environments. Background leading to the NAS program, its programmatic goals and strategies, technical goals and objectives, and the development activities leading to the current NPSN configuration are presented. Program status, near-term plans, and plans for the next major milestone, the extended operating configuration, are also discussed.

    BOSS-LDG: A Novel Computational Framework that Brings Together Blue Waters, Open Science Grid, Shifter and the LIGO Data Grid to Accelerate Gravitational Wave Discovery

    We present a novel computational framework that connects Blue Waters, the NSF-supported, leadership-class supercomputer operated by NCSA, to the Laser Interferometer Gravitational-Wave Observatory (LIGO) Data Grid via Open Science Grid technology. To enable this computational infrastructure, we configured, for the first time, a LIGO Data Grid Tier-1 Center that can submit heterogeneous LIGO workflows using Open Science Grid facilities. In order to enable a seamless connection between the LIGO Data Grid and Blue Waters via Open Science Grid, we utilize Shifter to containerize LIGO's workflow software. This work represents the first time Open Science Grid, Shifter, and Blue Waters are unified to tackle a scientific problem and, in particular, it is the first time a framework of this nature is used in the context of large-scale gravitational wave data analysis. This new framework has been used in the last several weeks of LIGO's second discovery campaign to run the most computationally demanding gravitational wave search workflows on Blue Waters, and accelerate discovery in the emergent field of gravitational wave astrophysics. We discuss the implications of this novel framework for a wider ecosystem of High Performance Computing users.
    Comment: 10 pages, 10 figures. Accepted as a Full Research Paper to the 13th IEEE International Conference on eScience.
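    For readers unfamiliar with Shifter (the container runtime used above), the sketch below shows roughly how a containerized task can be launched through it. The image name and executable are placeholders; in this framework the workflows are actually dispatched via Open Science Grid and HTCondor rather than invoked directly like this.

        import shutil
        import subprocess

        def run_in_shifter(image, argv):
            # Launch a command inside a Shifter container; assumes the
            # 'shifter' runtime is installed on the host (e.g., a Cray
            # system such as Blue Waters).
            if shutil.which("shifter") is None:
                raise RuntimeError("Shifter runtime not found on this host")
            return subprocess.run(["shifter", f"--image={image}"] + argv, check=True)

        # Hypothetical image name for a containerized gravitational-wave search:
        # run_in_shifter("docker:ligo/search-env:latest", ["pycbc_inspiral", "--help"])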

    Big Data in Critical Infrastructures Security Monitoring: Challenges and Opportunities

    Critical Infrastructures (CIs), such as smart power grids, transport systems, and financial infrastructures, are increasingly vulnerable to cyber threats, due to the adoption of commodity computing facilities. Despite the use of several monitoring tools, recent attacks have proven that current defensive mechanisms for CIs are not effective enough against most advanced threats. In this paper we explore the idea of a framework leveraging multiple data sources to improve protection capabilities of CIs. Challenges and opportunities are discussed along three main research directions: i) use of distinct and heterogeneous data sources, ii) monitoring with adaptive granularity, and iii) attack modeling and runtime combination of multiple data analysis techniques.
    Comment: EDCC-2014, BIG4CIP-2014.
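    As a toy illustration of direction iii), combining multiple analysis techniques at runtime can be as simple as fusing per-detector suspicion scores. The weighted-average rule below is an assumption made for illustration, not a mechanism proposed in the paper.

        def combine_detectors(scores, weights, threshold=0.5):
            # scores: suspicion values in [0, 1] from heterogeneous sources,
            # e.g., a network IDS, a host log analyzer, grid telemetry.
            # Raise an alert when the weighted average crosses the threshold.
            fused = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
            return fused >= threshold

        alert = combine_detectors(scores=[0.9, 0.4, 0.7], weights=[2.0, 1.0, 1.0])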

    Measuring gravitational waves from binary black hole coalescences: II. the waves' information and its extraction, with and without templates

    We discuss the extraction of information from detected binary black hole (BBH) coalescence gravitational waves, focusing on the merger phase that occurs after the gradual inspiral and before the ringdown. Our results are: (1) If numerical relativity simulations have not produced template merger waveforms before BBH detections by LIGO/VIRGO, one can band-pass filter the merger waves. For BBHs smaller than about 40 solar masses detected via their inspiral waves, the band-pass filtering signal-to-noise ratio indicates that the merger waves should typically be just barely visible in the noise for initial and advanced LIGO interferometers. (2) We derive an optimized (maximum likelihood) method for extracting a best-fit merger waveform from the noisy detector output; one "perpendicularly projects" this output onto a function space (specified using wavelets) that incorporates our prior knowledge of the waveforms. An extension of the method allows one to extract the BBH's two independent waveforms from outputs of several interferometers. (3) If numerical relativists produce codes for generating merger templates but running the codes is too expensive to allow an extensive survey of the merger parameter space, then a coarse survey of this parameter space, to determine the ranges of the several key parameters and to explore several qualitative issues which we describe, would be useful for data analysis purposes. (4) A complete set of templates could be used to test the nonlinear dynamics of general relativity and to measure some of the binary parameters. We estimate the number of bits of information obtainable from the merger waves (about 10 to 60 for LIGO/VIRGO, up to 200 for LISA), estimate the information loss due to template numerical errors or sparseness in the template grid, and infer approximate requirements on template accuracy and spacing.
    Comment: 33 pages, RevTeX 3.1 macros, no figures, submitted to Phys. Rev. D.
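    The "perpendicular projection" of item (2) is, at its core, a least-squares projection of the noisy detector output onto a finite-dimensional function space. A toy sketch follows, assuming an unweighted noise model and a small sinusoidal basis in place of the paper's wavelet construction.

        import numpy as np

        def project_onto_basis(data, basis):
            # Least-squares ("perpendicular") projection of noisy detector
            # output onto the subspace spanned by the columns of `basis`.
            # The paper's version uses a wavelet-based function space and
            # noise weighting, both omitted in this toy sketch.
            coeffs, *_ = np.linalg.lstsq(basis, data, rcond=None)
            return basis @ coeffs  # best-fit waveform within the subspace

        # Toy example: a sinusoid buried in noise, projected onto 8 modes.
        t = np.linspace(0.0, 1.0, 512)
        basis = np.column_stack([np.sin(2 * np.pi * k * t) for k in range(1, 9)])
        rng = np.random.default_rng(1)
        data = np.sin(2 * np.pi * 3 * t) + 0.5 * rng.standard_normal(t.size)
        best_fit = project_onto_basis(data, basis)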

    Multigrid calculation of three-dimensional viscous cascade flows

    A 3-D code for viscous cascade flow prediction was developed. The space discretization uses a cell-centered scheme with eigenvalue scaling to weight the artificial dissipation terms. Computational efficiency of a four-stage Runge-Kutta scheme is enhanced by using variable coefficients, implicit residual smoothing, and a full multigrid method. The Baldwin-Lomax eddy viscosity model is used for turbulence closure. A zonal, nonperiodic grid is used to minimize mesh distortion in and downstream of the throat region. Applications are presented for an annular vane with and without end wall contouring, and for a large-scale linear cascade. The calculation is validated by comparing with experiments and by studying grid dependency.
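    The multigrid idea the abstract relies on can be illustrated on a much simpler model problem. Below is a minimal V-cycle for the 1-D Poisson equation with weighted-Jacobi smoothing; it stands in for, but does not reproduce, the paper's Runge-Kutta-smoothed 3-D scheme.

        import numpy as np

        def v_cycle(u, f, n_smooth=3, omega=2.0 / 3.0):
            # One V-cycle for -u'' = f on [0, 1] with zero boundary values,
            # discretized on n intervals (n a power of two), using
            # weighted-Jacobi smoothing.
            n = u.size - 1
            h2 = (1.0 / n) ** 2
            for _ in range(n_smooth):  # pre-smoothing sweeps
                u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h2 * f[1:-1])
            if n > 2:
                r = np.zeros_like(u)  # residual of the current iterate
                r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h2
                rc = np.zeros(n // 2 + 1)  # restrict by full weighting
                rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
                ec = v_cycle(np.zeros(n // 2 + 1), rc, n_smooth, omega)
                # prolong the coarse-grid correction by linear interpolation
                u += np.interp(np.linspace(0, 1, n + 1), np.linspace(0, 1, n // 2 + 1), ec)
            for _ in range(n_smooth):  # post-smoothing sweeps
                u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h2 * f[1:-1])
            return u

        # Model problem: f = pi^2 sin(pi x), exact solution u = sin(pi x).
        n = 64
        x = np.linspace(0.0, 1.0, n + 1)
        f = np.pi ** 2 * np.sin(np.pi * x)
        u = np.zeros(n + 1)
        for _ in range(8):
            u = v_cycle(u, f)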

    CLEX: Yet Another Supercomputer Architecture?

    We propose the CLEX supercomputer topology and routing scheme. We prove that CLEX can utilize a constant fraction of the total bandwidth for point-to-point communication, at delays proportional to the sum of the number of intermediate hops and the maximum physical distance between any two nodes. Moreover, applying an asymmetric bandwidth assignment to the links, all-to-all communication can be realized $(1+o(1))$-optimally both with regard to bandwidth and delays. This is achieved at node degrees of $n^{\varepsilon}$, for an arbitrarily small constant $\varepsilon \in (0,1]$. In contrast, these results are impossible in any network featuring constant or polylogarithmic node degrees. Through simulation, we assess the benefits of an implementation of the proposed communication strategy. Our results indicate that, for a million processors, CLEX can increase bandwidth utilization and reduce average routing path length by factors of at least 10 and 5, respectively, in comparison to a torus network. Furthermore, the CLEX communication scheme features several other properties, such as deadlock-freedom, inherent fault-tolerance, and canonical partition into smaller subsystems.
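    To make the torus comparison concrete, the brute-force sketch below computes the average shortest-path hop count in a small 3-D torus, the baseline quantity CLEX is compared against; CLEX's own topology and routing scheme are more involved and not reproduced here.

        import itertools

        def avg_torus_hops(k, dims=3):
            # Average shortest-path hop count between distinct nodes of a
            # k-ary torus with `dims` dimensions (brute force; fine for
            # small k).
            def ring_dist(a, b):
                d = abs(a - b)
                return min(d, k - d)
            nodes = list(itertools.product(range(k), repeat=dims))
            total = pairs = 0
            for u, v in itertools.combinations(nodes, 2):
                total += sum(ring_dist(a, b) for a, b in zip(u, v))
                pairs += 1
            return total / pairs

        # An 8x8x8 torus (512 nodes) averages about 6.0 hops per message.
        print(avg_torus_hops(8))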

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Military Procurement and Technology Development

    The purpose of this paper is to demonstrate that military and defense-related research and procurement have been a major source of commercial technology development across a broad spectrum of industries that account for an important share of United States industrial production. I discuss the development of five general purpose technologies: (1) military and commercial aircraft, (2) nuclear energy and electric power, (3) computers and semiconductors, (4) the Internet, and (5) the space industries. The defense industrial base has become a smaller share of the industrial sector, which is itself a declining sector in the U.S. economy. It is doubtful that military and defense-related procurement will again become an important source of new general purpose technologies. When the history of U.S. technology development for the next half century is eventually written, it will almost certainly be written within the context of slower productivity growth than the relatively high rates that prevailed in the U.S. through the 1960s and during the information technology bubble that began in the early 1990s.