187 research outputs found

    HPCCP/CAS Workshop Proceedings 1998

    This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.

    Approximation and Relaxation Approaches for Parallel and Distributed Machine Learning

    Large-scale machine learning requires tradeoffs. Commonly, these tradeoffs have led practitioners to choose simpler, less powerful models, e.g. linear models, in order to process more training examples in a limited time. In this work, we introduce parallelism to the training of non-linear models by leveraging a different tradeoff: approximation. We demonstrate various techniques by which non-linear models can be made amenable to larger data sets and significantly more training parallelism by strategically introducing approximation in certain optimization steps. For gradient boosted regression tree ensembles, we replace precise selection of tree splits with a coarse-grained, approximate split selection, yielding both faster sequential training and a significant increase in parallelism, in the distributed setting in particular. For metric learning with nearest neighbor classification, rather than explicitly train a neighborhood structure, we leverage the implicit neighborhood structure induced by task-specific random forest classifiers, yielding a highly parallel method for metric learning. For support vector machines, we follow existing work to learn a reduced basis set with extremely high parallelism, particularly on GPUs, via existing linear algebra libraries. We believe these optimization tradeoffs are widely applicable wherever machine learning is put in practice in large-scale settings. By carefully introducing approximation, we also introduce significantly higher parallelism and consequently can process more training examples for more iterations than competing exact methods. While seemingly learning the model with less precision, this tradeoff often yields noticeably higher accuracy under a restricted training time budget.
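The coarse-grained split selection mentioned for gradient boosted trees is in the same family as histogram-based split finding: candidate thresholds are restricted to a small set of bin edges, so per-bin statistics can be computed independently and in parallel. A minimal single-feature sketch under a squared-error loss (all names are illustrative, not the dissertation's actual code):

```python
import numpy as np

def approx_best_split(x, y, n_bins=16):
    """Pick a split threshold for one feature by scanning only
    quantile bin edges instead of every distinct feature value.

    The approximation: fewer candidate thresholds means less work per
    node, and each candidate's statistics can be evaluated in parallel.
    """
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    total_sum, total_cnt = y.sum(), len(y)
    parent_score = total_sum ** 2 / total_cnt
    best_gain, best_thr = -np.inf, None
    for thr in edges:
        left = y[x <= thr]
        if len(left) == 0 or len(left) == total_cnt:
            continue  # degenerate split, skip
        right_sum = total_sum - left.sum()
        right_cnt = total_cnt - len(left)
        # Variance-reduction gain for squared-error loss.
        gain = (left.sum() ** 2 / len(left)
                + right_sum ** 2 / right_cnt
                - parent_score)
        if gain > best_gain:
            best_gain, best_thr = gain, thr
    return best_thr, best_gain
```

On a toy step function the coarse scan still recovers the obvious threshold, which is why the accuracy cost of this approximation is often small in practice.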

    ANALYSIS AND APPLICATION OF CAPACITIVE DISPLACEMENT SENSORS TO CURVED SURFACES

    Capacitive displacement sensors have many applications where non-contact, high precision measurement of a surface is required. Because of their non-contact nature they can easily measure conductive surfaces that are flexible or otherwise unable to be measured using a contact probe. Since the output of the capacitance gage is electrical, data points can be collected quickly and averaged to improve statistics. It is often necessary for capacitive displacement sensors to gage the distance from a curved (non-flat) surface. Although displacements can easily be detected, the calibration of this output can vary considerably from the flat case. Since a capacitance gage is typically factory-calibrated against a flat reference, the experimental output contains errors in both gain and linearity. A series of calibration corrections is calculated for rectifying this output. Capacitance gages are also limited in their overall displacement travel. A support stage is described that, along with control electronics, allows the properties of the capacitance gage to be combined with an interferometer to overcome this displacement limitation. Finally, an application is proposed that would make use of the capacitance sensor and support stage assembly.
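The thesis's specific correction series is not given in the abstract, but the general shape of a gain-and-linearity correction can be sketched as a least-squares polynomial recalibration: readings from the flat-calibrated gage are fit against reference displacements measured on the curved target (function names and the cubic degree are assumptions for illustration):

```python
import numpy as np

def fit_correction(raw, true_disp, degree=3):
    """Fit a polynomial mapping raw gage readings (factory-calibrated
    against a flat reference) to reference displacements measured on
    the curved target. The linear term corrects gain; higher-order
    terms correct linearity. Coefficients are returned low-to-high."""
    return np.polynomial.polynomial.polyfit(raw, true_disp, degree)

def apply_correction(raw, coeffs):
    """Evaluate the correction polynomial on new raw readings."""
    return np.polynomial.polynomial.polyval(raw, coeffs)
```

In use, `true_disp` would come from an independent reference (e.g. the interferometer mentioned above), after which `apply_correction` rectifies subsequent gage output.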

    Progress Report: 1991-1994


    Structured grid generation for gas turbine combustion systems

    Commercial pressures to reduce time-scales encourage innovation in the design and analysis cycle of gas turbine combustion systems. The migration of Computational Fluid Dynamics (CFD) from the purview of the specialist into a routine analysis tool is crucial to achieve these reductions and forms the focus of this research. Two significant challenges were identified: reducing the time-scale for creating and solving a CFD prediction and reducing the level of expertise required to perform a prediction. The commercial pressure for the rapid production of CFD predictions, coupled with the desire to reduce the risk associated with adopting a new technology, led, following a review of available techniques, to the identification of structured grids as the current optimum methodology. It was decided that the task of geometry definition would be entirely performed within commercial Computer Aided Design (CAD) systems. A critical success factor for this research was the adoption of solid models for the geometry representation. Solids ensure consistency and accuracy, whilst eliminating the need for the designer to undertake difficult, and time consuming, geometry repair operations. The versatility of parametric CAD systems was investigated on the complex geometry of a combustion system and found to be useful in reducing the overhead in altering the geometry for a CFD prediction. Accurate and robust transfer between CAD and CFD systems was achieved by the use of direct translators. Restricting the geometry definition to solid models allowed a novel two-stage grid generator to be developed. In stage one an initial algebraic grid is created. This reduces user interaction to a minimum, by the employment of a series of logical rules based on the solid model to fill in any missing grid boundary condition data. In stage two the quality of the grid is improved by redistributing nodes using elliptic partial differential equations.
A unique approach to improving grid quality by simultaneously smoothing both internal and surface grids was implemented. The smoothing operation was responsible for quality, and therefore reduced the level of grid generation expertise required. The successful validation of this research was demonstrated using several test cases including a CFD prediction of a complete combustion system.
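The stage-two node redistribution can be illustrated with the simplest member of the elliptic-PDE smoothing family: Laplace smoothing by point-Jacobi iteration, where each interior node relaxes toward the average of its four neighbours while boundary nodes stay fixed. This is a minimal sketch, not the thesis's actual smoother (which also treats surface grids and uses more general elliptic systems):

```python
import numpy as np

def laplace_smooth(x, y, iters=200):
    """Smooth a structured 2-D grid (node coordinate arrays x, y) by
    Jacobi iteration on Laplace's equation. Interior nodes move to the
    average of their four neighbours; boundary nodes are held fixed.
    Arrays are modified in place and returned."""
    for _ in range(iters):
        # Each slice expression is evaluated on the current arrays
        # before assignment, so this is a pure Jacobi sweep.
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y
```

Because harmonic functions are linear on a rectangle with linearly spaced boundary nodes, a perturbed interior relaxes back to the uniform grid, which is the behaviour that makes elliptic smoothing forgiving of a rough initial algebraic grid.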

    Foundations of Software Science and Computation Structures

    This open access book constitutes the proceedings of the 22nd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The 29 papers presented in this volume were carefully reviewed and selected from 85 submissions. They deal with foundational research with a clear significance for software science.

    Compilation Optimizations to Enhance Resilience of Big Data Programs and Quantum Processors

    Modern computers can experience a variety of transient errors due to the surrounding environment, known as soft faults. Although the frequency of these faults is low enough to not be noticeable on personal computers, they become a considerable concern during large-scale distributed computations or in systems in more vulnerable environments like satellites. These faults occur as a bit flip of some value in a register, operation, or memory during execution. They surface as program crashes, hangs, or silent data corruption (SDC), each of which can waste time, money, and resources. Hardware methods, such as shielding or error-correcting memory (ECM), exist, though they can be difficult to implement, expensive, and may be limited to only protecting against errors in specific locations. Researchers have been exploring software detection and correction methods as an alternative, commonly trading overhead in either execution time or memory usage to protect against faults. Quantum computers, a relatively recent advancement in computing technology, experience similar errors on a much more severe scale. The errors are more frequent, costly, and difficult to detect and correct. Error correction algorithms like Shor's code promise to completely remove errors, but they cannot be implemented on current noisy intermediate-scale quantum (NISQ) systems due to the low number of available qubits. Until the physical systems become large enough to support error correction, researchers instead have been studying other methods to reduce and compensate for errors. In this work, we present two methods for improving the resilience of classical processes, both single- and multi-threaded. We then introduce quantum computing and compare the nature of errors and correction methods to previous classical methods. We further discuss two designs for improving compilation of quantum circuits.
One method, focused on quantum neural networks (QNNs), takes advantage of partial compilation to avoid recompiling the entire circuit each time. The other is a new approach to compiling quantum circuits that uses graph neural networks (GNNs) to improve resilience and increase fidelity. By using GNNs with reinforcement learning, we can train a compiler to provide improved qubit allocation that improves the success rate of quantum circuits.
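The abstract does not detail its classical resilience methods, but the software-redundancy family they belong to can be illustrated with the classic triple modular redundancy (TMR) pattern: run the computation multiple times and majority-vote, masking a single transient fault at the cost of execution-time overhead. This sketch is a generic illustration, not the dissertation's technique:

```python
def tmr(f, *args):
    """Triple modular redundancy: execute f three times and return the
    majority result. A single transient fault (one corrupted run) is
    masked; if no two runs agree, multiple faults are flagged."""
    a, b, c = f(*args), f(*args), f(*args)
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: multiple faults detected")
```

The 3x time cost is exactly the kind of overhead tradeoff the abstract mentions; detection-only schemes (two runs plus a comparison) halve that cost but can only flag, not correct, an SDC.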

    Acta Cybernetica: Volume 19, Number 2.
