
    SPH Simulations with Reconfigurable Hardware Accelerator

    We present a novel approach to accelerate astrophysical hydrodynamical simulations. In astrophysical many-body simulations, the GRAPE (GRAvity piPE) system has been widely used by many researchers. However, the function of the GRAPE systems is completely fixed, because a specially developed LSI is used as the computing engine. Instead of using such an LSI, we are developing a special-purpose computing system that uses Field Programmable Gate Array (FPGA) chips as the computing engine. Together with the programming system we have developed, we have implemented computing pipelines for the Smoothed Particle Hydrodynamics (SPH) method on our PROGRAPE-3 system. The SPH pipelines running on the PROGRAPE-3 system have a peak speed of 85 GFLOPS, and in a realistic setup the SPH calculation using one PROGRAPE-3 board is 5-10 times faster than the calculation on the host computer. Our results clearly show, for the first time, that we can accelerate SPH simulations of a simple astrophysical phenomenon using the considerable computing power offered by the hardware.
    Comment: 27 pages, 13 figures, submitted to PAS
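
    A rough, hedged sketch of the kind of per-particle-pair arithmetic such an SPH pipeline evaluates is given below. It is plain Python/NumPy, not the PROGRAPE-3 pipeline itself; the Gaussian kernel, particle count and smoothing length are illustrative assumptions, not values from the paper.

    import numpy as np

    def sph_density(pos, mass, h):
        """Naive O(N^2) SPH density sum: rho_i = sum_j m_j * W(|r_i - r_j|, h).

        Uses a Gaussian smoothing kernel; a hardware pipeline evaluates the same
        pairwise sum, with the inner loop unrolled into pipelined arithmetic.
        """
        n = len(mass)
        rho = np.zeros(n)
        norm = 1.0 / (np.pi ** 1.5 * h ** 3)          # 3-D Gaussian kernel normalisation
        for i in range(n):
            r2 = np.sum((pos - pos[i]) ** 2, axis=1)  # squared distances to every particle
            w = norm * np.exp(-r2 / h ** 2)           # kernel weights W(r, h)
            rho[i] = np.sum(mass * w)                 # density estimate at particle i
        return rho

    # Illustrative use with random particles (values are made up for the example).
    pos = np.random.rand(1000, 3)
    mass = np.full(1000, 1.0 / 1000)
    print(sph_density(pos, mass, h=0.05)[:5])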

    Communication costs in a multi-tiered MPSoC

    The amount of digital processing required for phased array beamformers is very large. It requires many parallel processors, which can be organized in a multi-tiered structure. Communication costs differ for each of the stages in such an architecture. For example, communication from the antenna front-end to the first processing stages is costly because of the number of connections and the data rate. Furthermore, there is a trade-off between sequential processing, which exploits locality of reference, and exploiting parallelism at the price of additional communication costs. Thus, the optimal architecture depends on the importance that is given to the different measures.
    A model is presented to determine the partitioning of a (beamforming) system based on communication costs. It is shown that different solutions can be explored based on the cost model and the incorporated quantitative and qualitative measures. Determining the importance of each measure depends on the situation and application. In this work a simple beamforming application, optimised for energy efficiency, is used.
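
    As a hedged illustration of how such a partitioning trade-off can be scored, the toy Python model below weighs I/O energy against connection count for two hypothetical partitionings; the cost terms, weights and numbers are invented for the example and are not the paper's model.

    def partition_cost(links, w_energy=1.0, w_wiring=0.5):
        """Score a candidate partitioning from its inter-tier links.

        Each link is (data_rate_gbps, energy_pj_per_bit, n_connections).
        Lower is better; the weights express the (subjective) importance
        given to each measure.
        """
        energy = sum(rate * e_bit for rate, e_bit, _ in links)  # proxy for I/O energy
        wiring = sum(n for _, _, n in links)                    # proxy for interconnect cost
        return w_energy * energy + w_wiring * wiring

    # Candidate A: beamform close to the antennas (later tiers see reduced data rates).
    a = [(10.0, 2.0, 64), (1.0, 5.0, 4)]
    # Candidate B: ship raw samples to a central tier (many high-rate links).
    b = [(40.0, 2.0, 256), (1.0, 5.0, 4)]
    print("cost A:", partition_cost(a), " cost B:", partition_cost(b))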

    A Fast General-Purpose Clustering Algorithm Based on FPGAs for High-Throughput Data Processing

    We present a fast general-purpose algorithm for high-throughput clustering of data "with a two-dimensional organization". The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes the algorithm especially well suited for problems where the data density is high, e.g. for tracking devices working under high-luminosity conditions such as those of the LHC or Super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors, but it has a much broader field of application. In fact, its core does not specifically rely on the kind of data or detector it is working on, while the second step can and should be tailored to a given application. Applications can thus be foreseen for other detectors and other scientific fields, ranging from HEP calorimeters to medical imaging. An additional advantage of this two-step approach is that the typical clustering-related calculations (second step) are separated from the combinatorial complications of clustering. This separation simplifies the design of the second step and enables it to perform sophisticated calculations, achieving high-quality results in online applications. The algorithm is general purpose in the sense that only minimal assumptions are made on the kind of clustering to be performed.
    Comment: 11th Frontier Detectors For Frontier Physics conference (2009)
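
    A minimal software sketch of the two-step structure described above follows: a generic core that groups adjacent 2-D hits into clusters, and an application-specific second step (here, a simple centroid). This is a plain-Python illustration of the idea, not the FPGA implementation; the 8-connectivity rule and the example hits are assumptions made for the sketch.

    from collections import deque

    def cluster_hits(hits):
        """Core step: group (x, y) hits that touch, including diagonally (8-connectivity)."""
        hit_set, seen, clusters = set(hits), set(), []
        for h in hits:
            if h in seen:
                continue
            cluster, queue = [], deque([h])
            seen.add(h)
            while queue:
                x, y = queue.popleft()
                cluster.append((x, y))
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        n = (x + dx, y + dy)
                        if n in hit_set and n not in seen:
                            seen.add(n)
                            queue.append(n)
            clusters.append(cluster)
        return clusters

    def centroid(cluster):
        """Second step: extract the desired quantity from each cluster, here its centroid."""
        xs, ys = zip(*cluster)
        return sum(xs) / len(xs), sum(ys) / len(ys)

    hits = [(0, 0), (0, 1), (1, 1), (5, 5), (6, 5)]   # two separate groups of pixel hits
    print([centroid(c) for c in cluster_hits(hits)])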

    A review of High Performance Computing foundations for scientists

    The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between the experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which would otherwise not be accessible, helps to improve experiments and provides new insights into the systems being analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues rather than on technological aspects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way that is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, discuss distributed computing, and cover essential aspects to take into account when running scientific calculations on computers.
    Comment: 33 pages
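
    As a small, self-contained example of one such essential aspect (memory access order and locality of reference), the snippet below sums the same NumPy array row-wise and column-wise; the array size is arbitrary and the timing contrast is only indicative, not a result from the review.

    import time
    import numpy as np

    a = np.random.rand(4000, 4000)   # C-ordered: rows are contiguous in memory

    t0 = time.perf_counter()
    row_total = sum(a[i, :].sum() for i in range(a.shape[0]))   # cache-friendly traversal
    t1 = time.perf_counter()
    col_total = sum(a[:, j].sum() for j in range(a.shape[1]))   # strided traversal
    t2 = time.perf_counter()

    # Both totals agree (up to rounding); the access pattern changes only the run time.
    print(f"row-wise: {t1 - t0:.3f} s, column-wise: {t2 - t1:.3f} s")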

    Proceedings of the third French-Ukrainian workshop on the instrumentation developments for HEP

    The reports collected in these proceedings were presented at the third French-Ukrainian workshop on instrumentation developments for high-energy physics, held at LAL, Orsay, on October 15-16. The workshop was conducted in the scope of the IDEATE International Associated Laboratory (LIA). Joint developments between French and Ukrainian laboratories and universities, as well as new proposals, were discussed. The main topics of the papers presented in the proceedings are developments for accelerator and beam monitoring, detector developments, joint developments for large-scale high-energy and astroparticle physics projects, and medical applications.
    Comment: 3rd French-Ukrainian workshop on the instrumentation developments for High Energy Physics, October 15-16, 2015, LAL, Orsay, France, 94 pages