    Highly parallel computation

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited for particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

    Automated CNC Tool Path Planning and Machining Simulation on Highly Parallel Computing Architectures

    This work has created a completely new geometry representation for the CAD/CAM area, designed from the start for a highly parallel, scalable environment. A methodology was also created for designing highly parallel and scalable algorithms on top of the developed geometry representation. The approach used in this work is to move parallel algorithm design complexity from the algorithm level to the data representation level. As a result, the developed methodology allows straightforward algorithm design with little concern for the underlying hardware, while the resulting algorithms remain highly parallel because the underlying geometry model is highly parallel. For validation purposes, the developed methodology and geometry representation were used to design CNC machine simulation and tool path planning algorithms. These algorithms were then implemented and tested on a multi-GPU system. Performance evaluation of the developed algorithms showed strong parallelizability and scalability, the main properties required of algorithms in modern highly parallel environments. It was also shown that GPUs can perform this work an order of magnitude faster than traditional central processors. The last part of the work demonstrates how the high performance that comes with highly parallel hardware can be used to develop the next level of automated CNC tool path planning systems. As a proof of concept, a fully automated tool path planning system capable of generating valid G-code programs for 5-axis CNC milling machines was developed. For validation purposes, the developed system was used to generate tool paths for several parts, and the results were used for machining simulation and experimental machining. The experimental results showed, on the one hand, that the developed system works and, on the other, that highly parallel hardware provides the computational resources for algorithms that were previously not even considered because of their computational requirements, yet can deliver the next level of automation for modern manufacturing systems.
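
    The core idea above, moving parallelism from the algorithm to the data representation, can be illustrated with a small sketch: a voxel-based workpiece in which material removal is a pure per-voxel test, so the same kernel maps onto thousands of GPU threads. This is a minimal illustration, not the thesis's implementation; the voxel pitch, stock size, ball-end cutter model, and NumPy vectorization (standing in for multi-GPU kernels) are all assumptions.

```python
import numpy as np

# Hypothetical sketch: a voxel-based workpiece in which material removal
# is evaluated independently per voxel. Each voxel test is a pure
# per-element kernel, which is what makes the representation "highly
# parallel"; NumPy vectorization stands in for GPU threads here.

PITCH = 0.5                        # assumed voxel edge length, mm
shape = (120, 120, 60)             # assumed stock size in voxels
solid = np.ones(shape, dtype=bool)

# Precompute every voxel centre once; each centre is an independent datum.
ix, iy, iz = np.indices(shape)
centers = np.stack([ix, iy, iz], axis=-1) * PITCH + PITCH / 2

def remove_material(solid, tool_tip, radius):
    """Clear all voxels covered by a ball-end cutter at one tool position."""
    ball_center = tool_tip + np.array([0.0, 0.0, radius])
    # Spherical tool tip.
    inside_ball = np.sum((centers - ball_center) ** 2, axis=-1) <= radius ** 2
    # Cylindrical shank above the ball.
    in_cylinder = (np.sum((centers[..., :2] - tool_tip[:2]) ** 2, axis=-1)
                   <= radius ** 2) & (centers[..., 2] >= ball_center[2])
    return solid & ~(inside_ball | in_cylinder)

# Simulate a short linear cut by sampling tool positions along a segment.
for t in np.linspace(0.0, 1.0, 50):
    tip = (1 - t) * np.array([10.0, 30.0, 20.0]) + t * np.array([50.0, 30.0, 20.0])
    solid = remove_material(solid, tip, radius=4.0)

print("voxels remaining:", int(solid.sum()))
```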

    Highly-Parallel, Highly-Compact Computing Structures Implemented in Nanotechnology

    In this paper, we describe work in which we are evaluating how the evolving properties of nano-electronic devices could best be utilized in highly parallel computing structures. Because of their combination of high performance, low power, and extreme compactness, such structures would have obvious applications in spaceborne environments, both for general mission control and for on-board data analysis. However, the anticipated properties of nano-devices mean that the optimum architecture for such systems is by no means certain. Candidates include single instruction multiple datastream (SIMD) arrays, neural networks, and multiple instruction multiple datastream (MIMD) assemblies.

    Automated Tool Selection and Tool Path Planning for Free-Form Surfaces in 3-Axis CNC Milling using Highly Parallel Computing Architecture

    This research presents a methodology to automatically select cutters and generate tool paths for all stages of 3-axis CNC milling of free-form surfaces. Tools are selected and tool paths are planned in order to minimize the total machining time. A generalized cutter geometry model is used to define the available cutters, and an arbitrary milling surface is initially defined by a triangular mesh. The decisions made by process engineers in selecting cutter geometry and generating tool paths dramatically influence the final result. Often the resulting tool path is non-optimal, because engineers cannot consider all the available information; these decisions can instead be delegated to a computing system that can find a better result. The developed methodology selects the cutters to use for milling from the set of all available cutters, assigns milling zones to every selected cutter based on its performance, and builds iso-scallop and contour-parallel tool paths for every cutter and its milling zone. After generating all tool paths for both milling stages (rough milling and finishing), the tool selection sequence is defined and all the tool paths for one tool are connected into a single tool path. The tool paths should be connected in the best possible manner in order to minimize the time spent on non-cutting CNC motions. This is similar to the travelling salesman problem with constraints, and a heuristic solution is provided here. At the end, the total machining time for one tool set is calculated. Finally, the set of cutters used is changed to minimize the total machining time. A digital, voxel-based model is used to represent the workpiece and the available tools. This model is selected so that the algorithms are simpler and can easily be parallelized across thousands of computing cores. The parallel processing framework is implemented to work with multiple graphics processing units. Tool paths generated by this framework are post-processed into G-code and the representative part is machined.
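
    The path-connection step described above is a travelling-salesman-like problem; a minimal sketch of one plausible greedy heuristic follows. The (start, end) path representation, Euclidean rapid-motion cost, and nearest-neighbour strategy are illustrative assumptions, not the thesis's actual heuristic, and real systems would add constraints such as zone precedence.

```python
import math

# Hypothetical greedy heuristic for ordering tool paths so that the total
# non-cutting (rapid traverse) motion stays short. Each path is a pair
# (start_point, end_point) of XYZ tuples.

def connect_paths(paths, home=(0.0, 0.0, 0.0)):
    """Order paths greedily: always jump to the path whose start point is
    nearest to the current tool position. O(n^2), a simple baseline for
    the constrained travelling-salesman-like connection problem."""
    remaining = list(paths)
    order, pos = [], home
    while remaining:
        best = min(remaining, key=lambda p: math.dist(pos, p[0]))
        remaining.remove(best)
        order.append(best)
        pos = best[1]                 # the tool ends at the path's end point
    return order

paths = [((0, 0, 5), (10, 0, 5)), ((50, 5, 5), (60, 5, 5)),
         ((12, 1, 5), (20, 8, 5)), ((21, 9, 5), (40, 9, 5))]
for start, end in connect_paths(paths):
    print("cut from", start, "to", end)
```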

    Comparison of Bayesian and particle swarm algorithms for hyperparameter optimisation in machine learning applications in high energy physics

    When using machine learning (ML) techniques, users typically need to choose a plethora of algorithm-specific parameters, referred to as hyperparameters. In this paper, we compare the performance of two algorithms, particle swarm optimisation (PSO) and Bayesian optimisation (BO), for the autonomous determination of these hyperparameters in applications to different ML tasks typical of the field of high energy physics (HEP). Our evaluation of the performance includes a comparison of the capability of the PSO and BO algorithms to make efficient use of the highly parallel computing resources that are characteristic of contemporary HEP experiments.
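
    One reason PSO parallelizes well, which the comparison above probes, is that every particle in a generation can be scored independently. The sketch below is a minimal, generic PSO, not the paper's implementation; the swarm size, coefficients, toy objective, and use of a process pool are assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def objective(x):
    # Toy stand-in for an expensive ML training/validation run.
    return float(np.sum((x - 0.7) ** 2))

def pso(dim=4, n_particles=16, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_val = pos[0].copy(), np.inf
    with ProcessPoolExecutor() as pool:
        for _ in range(iters):
            # Embarrassingly parallel step: one objective call per particle.
            vals = np.array(list(pool.map(objective, pos)))
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            if vals.min() < gbest_val:
                gbest_val, gbest = float(vals.min()), pos[vals.argmin()].copy()
            # Standard velocity update: inertia + pull toward personal
            # and global bests, with per-component random weights.
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
    return gbest, gbest_val

if __name__ == "__main__":
    best, val = pso()
    print("best point:", np.round(best, 3), "objective:", val)
```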

    A Decade of Neural Networks: Practical Applications and Prospects

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

    Reconfigurable video coding: a stream programming approach to the specification of new video coding standards

    Current video coding standards, and their reference implementations, are architected as large, monolithic and sequential algorithms, in spite of the considerable overlap of functionality between standards, and the fact that they are frequently implemented on highly parallel computing platforms. The former leads to unnecessary complexity in the standardization process, while the latter implies that implementations have to be rebuilt from the ground up to reflect the parallel nature of the target. The upcoming Reconfigurable Video Coding (RVC) standard currently developed at MPEG attempts to address these issues by building a framework that supports the construction of video standards as libraries of coding tools. These libraries can be incrementally updated and extended, and the tools in them can be aggregated to form complete codecs using a streaming (or dataflow) programming model, which preserves the inherent parallelism of the coding algorithm. This paper presents the RVC framework and its underlying dataflow programming model, along with the tool support and initial results.
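
    RVC's coding tools are specified in the CAL actor language; purely to illustrate the dataflow idea, that tools share no state and communicate only through FIFO channels, so independent actors can run concurrently, here is a hedged Python sketch with threads and queues standing in for a dataflow runtime. The actor names and token format are invented.

```python
from queue import Queue
from threading import Thread

# Illustrative dataflow sketch (RVC itself uses the CAL actor language):
# coding tools are actors that share no state and communicate only through
# FIFO channels, so independent actors can run concurrently.

def actor(fn, inbox, outbox):
    """Wrap a stateless coding tool as a thread consuming one FIFO and
    producing another. A None token marks end-of-stream."""
    def run():
        while True:
            token = inbox.get()
            if token is None:
                outbox.put(None)
                return
            outbox.put(fn(token))
    return Thread(target=run)

# Two invented "coding tools" chained into a pipeline.
def dequant(coeff):
    return coeff * 2          # stand-in for inverse quantisation

def idct(coeff):
    return coeff + 0.5        # stand-in for an inverse transform

a, b, c = Queue(), Queue(), Queue()
stages = [actor(dequant, a, b), actor(idct, b, c)]
for s in stages:
    s.start()

for token in [1, 2, 3, None]:
    a.put(token)

while (out := c.get()) is not None:
    print("decoded token:", out)
for s in stages:
    s.join()
```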

    Visualisation of flow fields in the web platform

    Visualization of vector fields plays an important role in research activities nowadays. Web applications allow fast, multi-platform and multi-device access to data, which results in the need for optimized applications that can run on both high- and low-performance devices. The computation of trajectories usually repeats calculations, because several points might lie over the same trajectory. This paper presents a new methodology to calculate point trajectories over a highly dense and uniformly distributed grid of points, in which the trajectories are forced to lie over the points in the grid. Its advantages lie in a highly parallel computing implementation and in the reduction of the computational effort needed to calculate the stream paths, since unnecessary calculations are avoided by reusing data across iterations. As a case study, the visualization of oceanic streams in the web platform is presented and analyzed, using WebGL as the parallel computing architecture and the rendering engine.
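
    A minimal sketch of the reuse idea: when trajectories are constrained to grid points, the successor of each point under the flow can be computed once, in parallel over the grid, and every trajectory passing through a point shares it. The grid size, toy vector field, and sign-based successor rule are assumptions; the paper's implementation performs the per-point step in WebGL.

```python
import numpy as np

# Grid-constrained trajectory tracing with data reuse. Each grid point's
# next grid point along the flow is computed once, in one independent
# per-point pass (hence trivially parallel); all trajectories then just
# follow the precomputed successors.

N = 64
ys, xs = np.mgrid[0:N, 0:N]
u = -(ys - N / 2)          # assumed toy vector field (rigid rotation), x-component
v = xs - N / 2             # y-component

# One pass over the grid: the successor of every point, clipped to bounds.
succ_x = np.clip(xs + np.sign(u).astype(int), 0, N - 1)
succ_y = np.clip(ys + np.sign(v).astype(int), 0, N - 1)

def trajectory(x, y, length=20):
    """Follow precomputed successors; no per-trajectory integration needed."""
    path = [(x, y)]
    for _ in range(length):
        x, y = int(succ_x[y, x]), int(succ_y[y, x])
        path.append((x, y))
    return path

print(trajectory(10, 10)[:8])
```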