
    Numerical solution of the Navier-Stokes equations about three-dimensional configurations: A survey

    The numerical solution of the Navier-Stokes equations about three-dimensional configurations is reviewed. Formulational and computational requirements for the various Navier-Stokes approaches are examined for typical problems, including the viscous flow-field solution about a complete aerospace vehicle. Recent computed results, with experimental comparisons where available, are presented to illustrate the discussion. The field of three-dimensional Navier-Stokes applications is seen to be expanding rapidly across a broad front, covering internal and external flows and the entire speed regime from incompressible to hypersonic. Prospects for the future are described and recommendations for areas of concentrated research are indicated.
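
    For reference, a common starting point for the solvers surveyed here is the incompressible form of the Navier-Stokes equations (the review itself also covers compressible and hypersonic regimes):

        \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
            = -\frac{1}{\rho}\nabla p + \nu \nabla^{2} \mathbf{u},
        \qquad \nabla \cdot \mathbf{u} = 0,

    where \mathbf{u} is the velocity field, p the pressure, \rho the density, and \nu the kinematic viscosity.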

    Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications

    NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the relationship of theory to practice, and performance measurement. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for the space station, EOS, and the Great Observatories era.

    Ensemble Dynamics and Bred Vectors

    We introduce a new concept, the Ensemble Bred Vector (EBV), to assess the sensitivity of model outputs to changes in initial conditions for weather forecasting. The new algorithm is based on collective dynamics in essential ways. By construction, the EBV algorithm produces one or more dominant vectors. We investigate the performance of the EBV, comparing it to the bred vector (BV) algorithm as well as to finite-time Lyapunov vectors. We give a theoretical justification for the observed fact that the vectors produced by BV, EBV, and the finite-time Lyapunov vectors are similar for small amplitudes. Numerical comparisons of BV and EBV for the 3-equation Lorenz model and for a forced, dissipative partial differential equation of Cahn-Hilliard type that arises in modeling the thermohaline circulation demonstrate that the EBV yields a size-ordered description of the perturbation field and is more robust than the BV in the highly nonlinear regime. The EBV yields insight into the fractal structure of the Lorenz attractor, and of the inertial manifold for the Cahn-Hilliard-type partial differential equation. Comment: Submitted to Monthly Weather Review.
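
    For illustration, here is a minimal Python sketch of the classical breeding cycle (the BV algorithm that the EBV generalises, not the ensemble-coupled EBV itself) on the 3-equation Lorenz model; the step sizes, cycle lengths, and breeding amplitude are illustrative assumptions:

        import numpy as np

        def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            """Right-hand side of the Lorenz-63 system."""
            return np.array([
                sigma * (x[1] - x[0]),
                x[0] * (rho - x[2]) - x[1],
                x[0] * x[1] - beta * x[2],
            ])

        def step(x, dt=0.01):
            """One fourth-order Runge-Kutta step."""
            k1 = lorenz63(x)
            k2 = lorenz63(x + 0.5 * dt * k1)
            k3 = lorenz63(x + 0.5 * dt * k2)
            k4 = lorenz63(x + dt * k3)
            return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        def breed(x0, delta0=1e-3, n_cycles=200, steps_per_cycle=8):
            """Classical breeding: run a control and a perturbed trajectory,
            rescale their difference to a fixed amplitude each cycle."""
            rng = np.random.default_rng(0)
            x = x0.copy()
            p = delta0 * rng.standard_normal(3)      # initial perturbation
            for _ in range(n_cycles):
                xc, xp = x.copy(), x + p
                for _ in range(steps_per_cycle):
                    xc, xp = step(xc), step(xp)
                p = xp - xc
                p *= delta0 / np.linalg.norm(p)      # rescale to breeding amplitude
                x = xc
            return p / np.linalg.norm(p)             # unit bred vector

        bv = breed(np.array([1.0, 1.0, 1.0]))
        print("bred vector:", bv)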

    From constraint programming to heterogeneous parallelism

    The scaling limitations of multi-core processor development have led to a diversification of the processor cores used within individual computers. Heterogeneous computing has become widespread, involving the cooperation of several structurally different processor cores. Central processing unit (CPU) cores are most frequently complemented with graphics processors (GPUs), which despite their name are suitable for many highly parallel computations besides computer graphics. Furthermore, deep learning accelerators are rapidly gaining relevance. Many applications could profit from heterogeneous computing but are held back by the surrounding software ecosystems. Heterogeneous systems are a challenge for compilers in particular, which usually target only the increasingly marginalised homogeneous CPU cores. Therefore, heterogeneous acceleration is primarily accessible via libraries and domain-specific languages (DSLs), requiring application rewrites and resulting in vendor lock-in. This thesis presents a compiler method for automatically targeting heterogeneous hardware from existing sequential C/C++ source code. A new constraint programming method enables the declarative specification and automatic detection of computational idioms within compiler intermediate representation code. Examples of computational idioms are stencils, reductions, and linear algebra. Computational idioms denote algorithmic structures that commonly occur in performance-critical loops. Consequently, well-designed accelerator DSLs and libraries support computational idioms with their programming models and function interfaces. Detecting computational idioms in the middle end enables compilers to incorporate DSL and library backends for code generation. These backends leverage domain knowledge for the efficient utilisation of heterogeneous hardware. The constraint programming methodology is first derived on an abstract model and then implemented as an extension to LLVM. Two constraint programming languages are designed to target this implementation: the Compiler Analysis Description Language (CAnDL) and the extended Idiom Detection Language (IDL). These languages are evaluated on a range of different compiler problems, culminating in a complete heterogeneous acceleration pipeline integrated with the Clang C/C++ compiler. This pipeline was evaluated on the established benchmark collections NPB and Parboil. The approach was applicable to 10 of the benchmark programs, resulting in significant speedups from 1.26× on “histo” to 275× on “sgemm” when starting from sequential baseline versions. In summary, this thesis shows that the automatic recognition of computational idioms during compilation enables the heterogeneous acceleration of sequential C/C++ programs. Moreover, the declarative specification of computational idioms is expressed in novel declarative programming languages, and it is demonstrated that constraint programming on Static Single Assignment intermediate code is a suitable method for their automatic detection.
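
    As a rough illustration of the underlying idea (a hypothetical Python sketch, not the CAnDL/IDL notation or the LLVM implementation described in the thesis), detecting a scalar add-reduction idiom in a toy SSA-like instruction list amounts to checking structural constraints on the instructions:

        from dataclasses import dataclass

        @dataclass
        class Instr:
            name: str        # SSA value defined by this instruction
            opcode: str      # e.g. "phi", "add", "load"
            operands: tuple  # names of SSA values used

        # Toy SSA form of a loop body computing  acc += a[i]
        loop_body = [
            Instr("acc.phi",  "phi",  ("acc.init", "acc.next")),
            Instr("elem",     "load", ("a.ptr",)),
            Instr("acc.next", "add",  ("acc.phi", "elem")),
        ]

        def is_reduction(instrs):
            """Constraint-style check for a scalar add-reduction idiom:
            a phi node and an add instruction that feed each other
            across loop iterations."""
            defs = {i.name: i for i in instrs}
            for phi in (i for i in instrs if i.opcode == "phi"):
                for upd_name in phi.operands:
                    upd = defs.get(upd_name)
                    if upd and upd.opcode == "add" and phi.name in upd.operands:
                        return True, (phi.name, upd.name)
            return False, None

        print(is_reduction(loop_body))   # (True, ('acc.phi', 'acc.next'))

    A real implementation would state such constraints declaratively and let a solver enumerate all satisfying assignments over the SSA graph, which is what the constraint programming languages described above are designed for.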

    Computational methods and software systems for dynamics and control of large space structures

    Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.
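
    For context, flexible multibody dynamics formulations of the kind referred to here are typically written as constrained equations of motion (a standard textbook form, not necessarily the specific formulation used in this work):

        M(q)\,\ddot{q} + \Phi_q^{T}(q)\,\lambda = Q(q, \dot{q}, t),
        \qquad \Phi(q, t) = 0,

    where q collects the rigid-body and elastic coordinates, M is the mass matrix, \Phi the kinematic constraints, \lambda the Lagrange multipliers, and Q the applied and velocity-dependent forces.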

    Quantum analogues of classical optimization algorithms.

    Master of Science in Physics, University of KwaZulu-Natal, Durban, 2017. This thesis explores quantum analogues of algorithms used in mathematical optimization. It focuses primarily on the iterative gradient search algorithm (an algorithm for finding the minimum or maximum of a function) and the Newton-Raphson algorithm, a method for finding approximations to the roots or zeroes of a real-valued function. The thesis introduces a new quantum gradient algorithm suggested by Professor Thomas Konrad and colleagues, as well as a quantum analogue of the Newton-Raphson method. The quantum gradient algorithm and the quantum Newton-Raphson algorithm are shown to give a polynomial speed-up over their classical analogues.
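
    For context, the classical Newton-Raphson iteration that the quantum analogue is compared against updates an estimate x by x <- x - f(x)/f'(x). A minimal Python sketch (illustrative, not taken from the thesis):

        def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
            """Classical Newton-Raphson root finding: x <- x - f(x)/f'(x)."""
            x = x0
            for _ in range(max_iter):
                fx = f(x)
                if abs(fx) < tol:
                    return x
                x -= fx / df(x)
            return x

        # Example: root of f(x) = x^2 - 2, i.e. sqrt(2)
        root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
        print(root)   # ~1.4142135623730951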

    End-to-end provisioning in multi-domain/multi-layer networks

    The last decade has seen many advances in high-speed networking technologies. At the Layer 1 fiber-optic level, dense wavelength division multiplexing (DWDM) has seen fast growth in long-haul backbone/metro sectors. At the Layer 1.5 level, revamped next-generation SONET/SDH (NGS) has gained strong traction in the metro space as a highly flexible 'sub-rate' aggregation and grooming solution. Meanwhile, ubiquitous Ethernet (Layer 2) and IP (Layer 3) technologies have also seen the introduction of new quality of service (QoS) paradigms via the differentiated services (DiffServ) and integrated services (IntServ) frameworks. In recent years, various control provisioning standards have also been developed to provision these new networks, e.g., via efforts within the IETF, ITU-T, and OIF organizations. As these network technologies gain traction, there is an increasing need to internetwork multiple domains operating at different technology layers, e.g., IP, Ethernet, SONET, DWDM. However, most existing studies have only looked at single-domain networks or multiple domains operating at the same technology layer. As a result, there is now a growing level of interest in developing expanded control solutions for multi-domain/multi-layer networks, i.e., IP-SONET-DWDM. Given the increase in the number of interconnected domains, it is difficult for a single entity to maintain complete 'global' information across all domains. Hence, related solutions must pursue a distributed approach to the multi-domain/multi-layer problem; in particular, key provisions are needed in the areas of inter-domain routing, path computation, and signaling. The work in this thesis addresses these very challenges. A hierarchical routing framework is first developed to incorporate the multiple link types/granularities encountered in different network domains. Commensurate topology abstraction algorithms and update strategies are then introduced to help condense domain-level state and propagate global views. Finally, distributed path computation and signaling setup schemes are developed to leverage the condensed global state information and make intelligent connection routing decisions. The work draws heavily on graph theory concepts and also addresses the inherent distributed grooming dimension of multi-layer networks. The performance of the proposed framework and algorithms is studied using discrete event simulation techniques. Specifically, a range of multi-domain/multi-layer network topologies are designed and tested. Findings show that the propagation of inter-domain tunneled link state has a large impact on connection blocking performance, lowering inter-domain connection blocking rates by a notable amount. More importantly, these gains are achieved without any notable increase in inter-domain routing loads. Furthermore, the results also show that topology abstraction is most beneficial at lower network load settings, and when used in conjunction with load-balancing routing.
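
    As a rough illustration of topology abstraction (a hypothetical Python sketch, not the thesis's actual algorithms), a domain can condense its internal topology into a full mesh of abstract links between its border nodes before advertising state to other domains:

        import heapq
        from itertools import combinations

        def dijkstra(graph, src, dst):
            """Shortest-path cost over a dict-of-dicts weighted graph."""
            dist, heap = {src: 0.0}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    return d
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, {}).items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return float("inf")

        def full_mesh_abstraction(domain_graph, border_nodes):
            """Condense a domain's internal topology into a full mesh of
            abstract links between its border nodes (a simple-node
            abstraction would collapse it further into one virtual node)."""
            mesh = {b: {} for b in border_nodes}
            for a, b in combinations(border_nodes, 2):
                cost = dijkstra(domain_graph, a, b)
                mesh[a][b] = cost
                mesh[b][a] = cost
            return mesh

        # Toy intra-domain topology with two border nodes B1, B2
        domain = {
            "B1": {"n1": 1.0},
            "n1": {"B1": 1.0, "n2": 2.0},
            "n2": {"n1": 2.0, "B2": 1.0},
            "B2": {"n2": 1.0},
        }
        print(full_mesh_abstraction(domain, ["B1", "B2"]))
        # {'B1': {'B2': 4.0}, 'B2': {'B1': 4.0}}

    Inter-domain path computation can then run a shortest-path search over the abstracted graph of advertised border-to-border links rather than over the full internal topology of every domain.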

    National Educators' Workshop: Update 1989 Standard Experiments in Engineering Materials Science and Technology

    Presented here is a collection of experiments presented and demonstrated at the National Educators' Workshop: Update 89, held October 17 to 19, 1989, at the National Aeronautics and Space Administration, Hampton, Virginia. The experiments related to the nature and properties of engineering materials and provided information to assist in teaching about materials in the education community.