Computational Particle Physics for Event Generators and Data Analysis
High-energy physics data analysis relies heavily on the comparison between
experimental and simulated data, as stressed recently by the Higgs search at the LHC
and the identification of a Higgs-like new boson. The first link in the
full simulation chain is event generation, both for the background and for the
expected signals. Nowadays event generators are based on the automatic
computation of the matrix element, or amplitude, for each process of interest.
Moreover, recent analysis techniques based on the matrix element likelihood
method assign to every event a probability of belonging to each of a given set of
possible processes. This method, originally used for the top quark mass measurement,
is computationally intensive but has shown its power at the LHC in extracting the new
boson signal from the background.
Serving both needs, the automatic calculation of matrix elements is therefore
more than ever of prime importance for particle physics. Initiated in the
eighties, the techniques have matured for the lowest-order (tree-level)
calculations, but they become complex and CPU-time consuming when higher-order
calculations involving loop diagrams are necessary, as for QCD processes at the
LHC. New calculation techniques for next-to-leading order (NLO) have emerged,
making possible the generation of processes with many final-state particles (up
to 6). While NLO calculations are in many cases under control, although not yet
fully automatic, even higher-precision calculations involving processes at
two loops or more remain a major challenge.
After a short introduction to particle physics and to the related theoretical
framework, we review some of the computing techniques that have been
developed to make these calculations automatic. The main available packages and
some of the most important applications for simulation and data analysis, in
particular at the LHC, are also summarized.
Comment: 19 pages, 11 figures, Proceedings of CCP (Conference on Computational
Physics), Oct. 2012, Osaka (Japan), in IOP Journal of Physics: Conference Series
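As an illustration of the matrix element likelihood method mentioned in the abstract, the following minimal Python sketch assigns each event a probability of belonging to a signal or a background hypothesis. The toy densities standing in for the integrated |M|^2, the mass and width values, and the prior are hypothetical choices made for illustration, not the actual method used in the cited analyses.

```python
# Toy sketch of the matrix element likelihood method (illustrative only):
# each event is assigned a probability of belonging to each hypothesis by
# comparing per-process densities that stand in for the integrated |M|^2.
import math

def p_signal(m, m_higgs=125.0, width=4.0):
    """Toy 'matrix element' for the signal: Breit-Wigner in the invariant mass."""
    return (width / (2 * math.pi)) / ((m - m_higgs) ** 2 + (width / 2) ** 2)

def p_background(m, slope=0.02, m_min=100.0):
    """Toy 'matrix element' for a smoothly falling background."""
    return slope * math.exp(-slope * (m - m_min))

def process_probabilities(m, prior_signal=0.1):
    """Posterior probability for one event to belong to each process."""
    ps = prior_signal * p_signal(m)
    pb = (1.0 - prior_signal) * p_background(m)
    total = ps + pb
    return {"signal": ps / total, "background": pb / total}

if __name__ == "__main__":
    for mass in (110.0, 125.0, 140.0):  # toy reconstructed invariant masses in GeV
        print(mass, process_probabilities(mass))
```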
Investigating grid computing technologies for use with commercial simulation packages
As simulation experimentation in industry becomes more computationally demanding, grid computing can be seen as a promising technology with the potential to bind together the computational resources needed to quickly execute such simulations. To investigate how this might be possible, this paper reviews the grid technologies that can be used together with commercial-off-the-shelf simulation packages (CSPs) used in industry. The paper identifies two specific forms of grid computing (Public Resource Computing and Enterprise-wide Desktop Grid Computing) and the middleware associated with them (BOINC and Condor) as being suitable for grid-enabling existing CSPs. It further proposes three different CSP-grid integration approaches and identifies one of them as the most appropriate. It is hoped that this research will encourage simulation practitioners to consider grid computing as a technologically viable means of executing CSP-based experiments faster.
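As a rough illustration of the task-farming pattern behind such grid-enabled experimentation, the sketch below runs independent simulation replications in parallel on a single machine. The toy queueing model and the local process pool are stand-ins chosen for illustration; they are not the BOINC or Condor APIs nor any actual CSP integration.

```python
# Minimal sketch of farming out independent simulation replications,
# standing in for BOINC/Condor-style job submission (illustrative only).
import random
from concurrent.futures import ProcessPoolExecutor

def run_replication(seed: int) -> float:
    """Stand-in for one CSP experiment run: a toy single-server queue estimate."""
    rng = random.Random(seed)
    waits, queue_time = [], 0.0
    for _ in range(10_000):
        interarrival = rng.expovariate(1.0)   # arrival rate 1.0
        service = rng.expovariate(1.25)       # service rate 1.25
        queue_time = max(0.0, queue_time - interarrival) + service
        waits.append(queue_time)
    return sum(waits) / len(waits)

if __name__ == "__main__":
    seeds = range(8)  # eight independent replications, one "grid task" each
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_replication, seeds))
    print("mean waiting time per replication:", results)
```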
Machine Learning-Based Elastic Cloud Resource Provisioning in the Solvency II Framework
The Solvency II Directive (Directive 2009/138/EC) is a European Directive issued in November 2009 and effective from January 2016, which has been enacted by the European Union to regulate the insurance and reinsurance sector through the discipline of risk management. Solvency II requires European insurance companies to conduct consistent evaluation and continuous monitoring of risks, a process which is computationally complex and extremely resource-intensive. To this end, companies are required to equip themselves with adequate IT infrastructures, which entails a significant outlay.
In this paper we present the design and the development of a Machine Learning-based approach to transparently deploy on a cloud environment the most resource-intensive portion of the Solvency II-related computation. Our proposal targets DISAR®, a Solvency II-oriented system initially designed to work on a grid of conventional computers. We show how our solution makes it possible to reduce the overall expenses associated with the computation, without hampering the privacy of the companies' data (making it suitable for conventional public cloud environments), while meeting the strict temporal requirements imposed by the Directive. Additionally, the system is organized as a self-optimizing loop, which uses information gathered from actual (useful) computations and thus requires a shorter training phase. We present an experimental study conducted on Amazon EC2 to assess the validity and the efficiency of our proposal.
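A minimal sketch of such a self-optimizing provisioning loop is given below, assuming a deliberately simple runtime model (runtime proportional to workload divided by the number of VMs) that is updated from observed runs. The model, the deadline value, and the workload numbers are invented for illustration and are not details of DISAR® or of the system described in the paper.

```python
# Sketch of a self-optimizing provisioning loop: predict the runtime of the
# next batch from past (workload, #VMs, runtime) observations and pick the
# smallest VM count predicted to meet the deadline (illustrative only).

class RuntimeModel:
    """Tiny online model: runtime ≈ alpha * workload / n_vms, fit by a running mean."""
    def __init__(self):
        self.alpha, self.n = 1.0, 0

    def update(self, workload: float, n_vms: int, runtime: float) -> None:
        observed_alpha = runtime * n_vms / workload
        self.n += 1
        self.alpha += (observed_alpha - self.alpha) / self.n  # running mean

    def predict(self, workload: float, n_vms: int) -> float:
        return self.alpha * workload / n_vms

def choose_vms(model: RuntimeModel, workload: float, deadline: float, max_vms: int = 64) -> int:
    """Smallest number of VMs whose predicted runtime meets the deadline."""
    for n in range(1, max_vms + 1):
        if model.predict(workload, n) <= deadline:
            return n
    return max_vms

if __name__ == "__main__":
    model = RuntimeModel()
    # feedback from actual (useful) runs, as in the self-optimizing loop
    for workload, n_vms, runtime in [(1000, 4, 260.0), (2000, 8, 255.0), (1500, 6, 250.0)]:
        model.update(workload, n_vms, runtime)
    print("VMs for next batch:", choose_vms(model, workload=2500, deadline=300.0))
```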
Many-Task Computing and Blue Waters
This report discusses many-task computing (MTC) generically and in the
context of the proposed Blue Waters system, which is planned to be the largest
NSF-funded supercomputer when it begins production use in 2012. The aim of this
report is to inform the BW project about MTC, including understanding aspects
of MTC applications that can be used to characterize the domain and
understanding the implications of these aspects for middleware and policies.
Many MTC applications do not neatly fit the stereotypes of high-performance
computing (HPC) or high-throughput computing (HTC) applications. Like HTC
applications, by definition MTC applications are structured as graphs of
discrete tasks, with explicit input and output dependencies forming the graph
edges. However, MTC applications have significant features that distinguish
them from typical HTC applications. In particular, different engineering
constraints for hardware and software must be met in order to support these
applications. HTC applications have traditionally run on platforms such as
grids and clusters, through either workflow systems or parallel programming
systems. MTC applications, in contrast, will often demand a short time to
solution, may be communication intensive or data intensive, and may comprise
very short tasks. Therefore, hardware and software for MTC must be engineered
to support the additional communication and I/O and must minimize task dispatch
overheads. The hardware of large-scale HPC systems, with its high degree of
parallelism and support for intensive communication, is well suited for MTC
applications. However, HPC systems often lack a dynamic resource-provisioning
feature, are not ideal for task communication via the file system, and have an
I/O system that is not optimized for MTC-style applications. Hence, additional
software support is likely to be required to gain full benefit from the HPC
hardware.
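To make the "graph of discrete tasks" structure concrete, the following small Python sketch dispatches a toy DAG of short tasks whose explicit input/output dependencies form the edges. The task bodies, the graph, and the thread-pool dispatcher are illustrative assumptions, not part of Blue Waters or of any MTC middleware.

```python
# Illustrative MTC-style workload: a DAG of many short tasks with explicit
# input/output dependencies, dispatched once their inputs are available.
from concurrent.futures import ThreadPoolExecutor

def make_task(name):
    def task(*inputs):
        # a deliberately short task: dispatch overhead dominates, as in MTC
        return f"{name}({','.join(inputs)})"
    return task

# edges: each task lists the tasks whose outputs it consumes
graph = {
    "split":  [],
    "map_a":  ["split"],
    "map_b":  ["split"],
    "reduce": ["map_a", "map_b"],
}

if __name__ == "__main__":
    futures = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        # submit tasks in dependency order, wiring outputs to inputs explicitly
        for name, deps in graph.items():
            inputs = [futures[d].result() for d in deps]   # wait for producers
            futures[name] = pool.submit(make_task(name), *inputs)
        print(futures["reduce"].result())
```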
Towards Lattice Quantum Chromodynamics on FPGA devices
In this paper we describe a single-node, double-precision Field Programmable
Gate Array (FPGA) implementation of the Conjugate Gradient algorithm in the
context of Lattice Quantum Chromodynamics. As a benchmark of our proposal we
numerically invert the Dirac-Wilson operator on a 4-dimensional grid on three
Xilinx hardware solutions: the Zynq Ultrascale+ evaluation board, the Alveo U250
accelerator, and the largest device available on the market, the VU13P device.
In our implementation we separate the software and hardware parts in such a way that
the entire multiplication by the Dirac operator is performed in hardware, while
the rest of the algorithm runs on the host. We find that the FPGA
implementation can offer performance comparable with that obtained using
current CPUs or Intel's many-core Xeon Phi accelerators. A possible multiple-node
FPGA-based system is discussed, and we argue that power-efficient High
Performance Computing (HPC) systems can be implemented using FPGA devices only.
Comment: 17 pages, 4 figures
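A minimal sketch of the software/hardware split described above is given below: the host-side Conjugate Gradient loop treats the operator application as an opaque callable, which in the paper's design would run on the FPGA. The NumPy stand-in operator is a generic symmetric positive-definite matrix, not the actual Dirac-Wilson kernel or any FPGA interface, and the sketch ignores host/device data transfer.

```python
# Sketch of the host-side Conjugate Gradient loop with the operator application
# abstracted behind a callable, as in a software/hardware split (illustrative).
import numpy as np

def conjugate_gradient(apply_op, b, tol=1e-10, max_iter=1000):
    """Solve apply_op(x) = b for a symmetric positive-definite operator."""
    x = np.zeros_like(b)
    r = b - apply_op(x)
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = apply_op(p)              # the part that would be offloaded to hardware
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 64
    m = rng.standard_normal((n, n))
    a = m @ m.T + n * np.eye(n)       # SPD stand-in for the lattice operator
    apply_op = lambda v: a @ v        # in the paper this multiplication runs on the FPGA
    b = rng.standard_normal(n)
    x = conjugate_gradient(apply_op, b)
    print("residual:", np.linalg.norm(a @ x - b))
```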
Parallel statistical model checking for safety verification in smart grids
By using small computing devices deployed at user premises, Autonomous Demand Response (ADR) adapts users' electricity consumption to given time-dependent electricity tariffs. This allows end-users to save on their electricity bills and Distribution System Operators to optimise, through suitable time-dependent tariffs, the management of the electric grid by avoiding demand peaks.
Unfortunately, even with ADR, users' power consumption may deviate from the expected (minimum-cost) one, e.g., because ADR devices fail to correctly forecast energy needs at user premises. As a result, the aggregated power demand may present undesirable peaks.
In this paper we address this problem by presenting methods, and a software tool (APD-Analyser) implementing them, that enable Distribution System Operators to effectively verify that a given time-dependent electricity tariff achieves the desired goals even when end-users deviate from their expected behaviour.
We show the feasibility of the proposed approach through a realistic scenario from a medium-voltage Danish distribution network.
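As a hedged illustration of the statistical model checking involved, the sketch below estimates by Monte Carlo sampling the probability that aggregated demand exceeds a peak threshold when users deviate randomly from their scheduled consumption, together with a simple confidence interval. The demand model, the deviation model, and the threshold are invented for illustration and are not those of APD-Analyser or the Danish network study.

```python
# Illustrative statistical model checking: estimate, by Monte Carlo sampling,
# the probability that aggregated demand exceeds a peak threshold when users
# deviate randomly from their ADR-scheduled consumption.
import math
import random

def sample_aggregate_demand(rng, n_users=500, scheduled_kw=1.2, deviation_std=0.4):
    """One sampled scenario: each user deviates randomly from its schedule."""
    return sum(max(0.0, rng.gauss(scheduled_kw, deviation_std)) for _ in range(n_users))

def estimate_peak_probability(threshold_kw, n_samples=5000, seed=1):
    rng = random.Random(seed)
    hits = sum(sample_aggregate_demand(rng) > threshold_kw for _ in range(n_samples))
    p = hits / n_samples
    half_width = 1.96 * math.sqrt(p * (1 - p) / n_samples)  # 95% normal-approx CI
    return p, half_width

if __name__ == "__main__":
    p, hw = estimate_peak_probability(threshold_kw=620.0)
    print(f"P(aggregate demand > 620 kW) ≈ {p:.4f} ± {hw:.4f}")
```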