
    Computers for Lattice Field Theories

    Parallel computers dedicated to lattice field theories are reviewed, with emphasis on three recent projects: the Teraflops project in the US, the CP-PACS project in Japan, and the 0.5-Teraflops project in the US. Some new commercial parallel computers are also discussed. Recent developments in semiconductor technology are briefly surveyed in relation to possible approaches toward Teraflops computers. Comment: 15 pages with 16 PS figures, review presented at Lattice 93, LaTeX (espcrc2.sty required).

    A comparison of airborne and ground-based radar observations with rain gages during the CaPE experiment

    The vicinity of KSC, where the primary ground truth site of the Tropical Rainfall Measuring Mission (TRMM) program is located, was the focal point of the Convection and Precipitation/Electrification (CaPE) experiment in Jul. and Aug. 1991. In addition to several specialized radars, local coverage was provided by the C-band (5 cm) radar at Patrick AFB. Point measurements of rain rate were provided by tipping bucket rain gage networks. Besides these ground-based activities, airborne radar measurements with X- and Ka-band nadir-looking radars on board an aircraft were also recorded. A unique combination data set of airborne radar observations with ground-based observations was obtained in the summer convective rain regime of central Florida. We present a comparison of these data as a preliminary validation. A convective rain event was observed simultaneously by all three instrument types on the evening of 27 Jul. 1991. The high resolution aircraft radar was flown over convective cells with tops exceeding 10 km and observed reflectivities of 40 to 50 dBZ at 4 to 5 km altitude, while the low resolution surface radar observed 35 to 55 dBZ echoes and a rain gage indicated maximum surface rain rates exceeding 100 mm/hr. The height profile of reflectivity measured with the airborne radar shows an attenuation of 6.5 dB/km (two-way) for X-band, corresponding to a rainfall rate of 95 mm/hr.
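
    The link between the reflectivities and rain rates quoted above can be sketched with a standard Z-R power law. The Marshall-Palmer coefficients used below are an illustrative assumption, not the relation used in the study:

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert a Z-R power law, Z = a * R**b, to estimate rain rate R
    (mm/hr) from radar reflectivity in dBZ. The default coefficients
    are the classic Marshall-Palmer values, which are only
    illustrative for convective rain."""
    z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity (mm^6/m^3)
    return (z / a) ** (1.0 / b)    # solve the power law for R

# Reflectivities in the range reported for the 27 Jul 1991 cells
for dbz in (40, 45, 50, 55):
    print(dbz, "dBZ ->", round(rain_rate_from_dbz(dbz), 1), "mm/hr")
```

    With these assumed coefficients, 55 dBZ corresponds to roughly 100 mm/hr, the same order as the gage maximum; coefficients tuned for convective rain would shift the numbers.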

    NASA high performance computing and communications program

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long-term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects as well as summaries of individual research and development programs within each project.

    Clinical characterization of 66 patients with congenital retinal disease due to the deep-intronic c.2991+1655A>G mutation in CEP290

    Purpose: To describe the phenotypic spectrum of retinal disease caused by the c.2991+1655A>G mutation in CEP290 and to compare disease severity between homozygous and compound heterozygous patients. Methods: Medical records were reviewed for best-corrected visual acuity (BCVA), age of onset, and fundoscopy descriptions. Foveal outer nuclear layer (ONL) and ellipsoid zone (EZ) presence was assessed using spectral-domain optical coherence tomography (SD-OCT). Differences between compound heterozygous and homozygous patients were analyzed based on visual performance and visual development. Results: A total of 66 patients were included. The majority of patients had either light perception or no light perception. In the remaining group of 14 patients, median BCVA was 20/195 Snellen (0.99 LogMAR; range 0.12-1.90) for the right eye, and 20/148 Snellen (0.87 LogMAR; range 0.22-1.90) for the left. Homozygous patients tended to be more likely to develop light perception than the more severely affected compound heterozygous patients (P = 0.080), and were more likely to improve from no light perception to light perception (P = 0.022) before the age of 6 years. OCT data were available in 12 patients, 11 of whom had retained foveal ONL and EZ integrity up to 48 years (median 23 years) of age. Conclusions: Homozygous patients seem less severely affected compared to their compound-heterozygous peers. Improvement of visual function may occur in the early years of life, suggesting a time window for therapeutic intervention up to the approximate age of 17 years. This period may be extended by an intact foveal ONL and EZ on OCT.
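
    The Snellen-to-LogMAR conversions quoted in the abstract follow the usual definition, LogMAR = log10(1 / decimal acuity); a minimal check of the two median values:

```python
import math

def snellen_to_logmar(numerator, denominator):
    """Convert a Snellen fraction (e.g. 20/195) to LogMAR:
    LogMAR = log10(1 / decimal acuity) = log10(denominator / numerator)."""
    return math.log10(denominator / numerator)

# Median acuities reported in the abstract
print(round(snellen_to_logmar(20, 195), 2))  # right eye: 0.99
print(round(snellen_to_logmar(20, 148), 2))  # left eye: 0.87
```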

    Implementation of ADI schemes on MIMD parallel computers

    In order to simulate the effects of the impingement of hot exhaust jets from high-performance aircraft on landing surfaces, a multi-disciplinary computation coupling flow dynamics to heat conduction in the runway needs to be carried out. Such simulations, which are essentially unsteady, require very large computational power in order to be completed within a reasonable time frame, of the order of an hour. Such power can be furnished by the latest generation of massively parallel computers. These remove the bottleneck of ever more congested data paths to one or a few highly specialized central processing units (CPU's) by having many off-the-shelf CPU's work independently on their own data, and exchange information only when needed. During the past year the first phase of this project was completed, in which the optimal strategy for mapping an ADI algorithm for the three-dimensional unsteady heat equation to a MIMD parallel computer was identified. This was done by implementing and comparing three different domain decomposition techniques that define the tasks for the CPU's in the parallel machine. These implementations were done for a Cartesian grid and Dirichlet boundary conditions. The most promising technique was then used to implement the heat equation solver on a general curvilinear grid with a suite of nontrivial boundary conditions. Finally, this technique was also used to implement the Scalar Penta-diagonal (SP) benchmark, which was taken from the NAS Parallel Benchmarks report. All implementations were done in the programming language C on the Intel iPSC/860 computer.
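
    The core of each ADI sweep is a set of independent tridiagonal solves along grid lines, which is what makes the domain decompositions compared above natural. A serial sketch of that kernel (in Python rather than the C of the original implementation, and on a 1D Dirichlet model problem as a simplifying assumption):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system (Thomas algorithm): sub-diagonal a,
    diagonal b, super-diagonal c, right-hand side d. This is the
    per-grid-line kernel that an ADI sweep applies many times."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat_sweep(u, r):
    """One implicit sweep for u_t = u_xx with walls held at zero
    (Dirichlet); r = dt / dx**2. In the MIMD version, each CPU would
    own a block of such lines and solve them independently."""
    n = len(u)
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    d = list(u)
    # identity rows pin the boundary values at zero
    b[0] = b[-1] = 1.0
    c[0] = 0.0
    a[-1] = 0.0
    d[0] = d[-1] = 0.0
    return thomas_solve(a, b, c, d)
```

    Because each line's system is independent, distributing blocks of lines across processors needs no communication during the solve, only when the sweep direction changes.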

    Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the Fast Fourier Transform (FFT) routine dominates the computational cost and because that routine itself achieves less than ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.
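
    The slower-than-linear speedup attributed to the dominant FFT routine is the classic Amdahl effect; a sketch with illustrative (assumed, not measured) parallel fractions:

```python
def amdahl_speedup(p, parallel_fraction):
    """Amdahl's law: achievable speedup on p processors when only
    `parallel_fraction` of the runtime scales with p."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / p)

# Even a small non-scaling share caps the speedup hard on 32 processors:
for f in (1.0, 0.95, 0.80):
    print(f, round(amdahl_speedup(32, f), 1))
```

    With only 80% of the runtime scaling, 32 processors deliver under a 5x speedup; a routine that both dominates the cost and scales imperfectly, as the FFT does here, produces exactly this shape of behavior.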

    Enhancing aeropropulsion research with high-speed interactive computing

    NASA-Lewis has committed to a long-range goal of creating a numerical test cell for aeropropulsion research and development. Efforts are underway to develop a first-generation Numerical Propulsion System Simulation (NPSS). The NPSS will provide a unique capability to numerically simulate advanced propulsion systems from nose to tail. Two essential ingredients of the NPSS are: (1) experimentally validated Computational Fluid Dynamics (CFD) codes; and (2) high-performance computing systems (hardware and software) that will permit those codes to be used efficiently. To this end, NASA-Lewis is using high-speed, interactive computing as a means for achieving Integrated CFD and Experiments (ICE). The development of a prototype ICE system for multistage compressor flow physics research is described.