Performance Improvements for the ATLAS Detector Simulation Framework
Many physics and performance studies carried out with the ATLAS detector at the Large Hadron Collider (LHC) require very large event samples. A detailed simulation of the detector, however, requires a large amount of CPU resources. Fast simulation techniques and new setups are therefore developed and used extensively to supply large event samples. Beyond these new techniques and setups, performance improvements can still be found in the existing simulation technologies.
This work shows possible ways to increase the performance of different full and fast ATLAS detector simulation setups, using new libraries and code improvements in the ATLAS detector simulation framework. Alongside these improvements, measured time consumption of the different setups is presented, and possible further improvements are the other main focus of this project.
Systematically Exploring High-Performance Representations of Vector Fields Through Compile-Time Composition
We present a novel benchmark suite for implementations of vector fields in high-performance computing environments to aid developers in quantifying and ranking their performance. We decompose the design space of such benchmarks into access patterns and storage backends, the latter of which can be further decomposed into components with different functional and non-functional properties. Through compile-time meta-programming, we generate a large number of benchmarks with minimal effort and ensure the extensibility of our suite. Our empirical analysis, based on real-world applications in high-energy physics, demonstrates the feasibility of our approach on CPU and GPU platforms, and highlights that our suite is able to evaluate performance-critical design choices. Finally, we propose that our work towards composing vector fields from elementary components is not only useful for the purposes of benchmarking, but that it naturally gives rise to a novel library for implementing such fields in domain applications.
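The compile-time composition idea can be sketched in a few lines of C++. This is not the authors' suite; the names `host_vector_backend`, `row_major_layout` and `vector_field` are hypothetical, and the sketch only illustrates how a storage backend and an indexing component combine into a single field type whose behaviour is fixed at compile time.

```cpp
#include <array>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical storage backend: keeps the field samples in a flat host vector.
struct host_vector_backend {
    using value_type = std::array<float, 3>;
    explicit host_vector_backend(std::size_t n) : data(n) {}
    value_type& at(std::size_t i) { return data[i]; }
    std::vector<value_type> data;
};

// Hypothetical indexing component: maps a 3D index to a flat offset (row-major).
struct row_major_layout {
    std::array<std::size_t, 3> sizes;
    std::size_t operator()(std::size_t x, std::size_t y, std::size_t z) const {
        return (x * sizes[1] + y) * sizes[2] + z;
    }
};

// The composed field: its behaviour is fixed at compile time by the template
// parameters, so different backend/layout combinations can be generated and
// benchmarked without any runtime dispatch.
template <typename Backend, typename Layout>
class vector_field {
public:
    vector_field(Backend b, Layout l) : backend_(std::move(b)), layout_(l) {}
    typename Backend::value_type& at(std::size_t x, std::size_t y, std::size_t z) {
        return backend_.at(layout_(x, y, z));
    }
private:
    Backend backend_;
    Layout layout_;
};

int main() {
    vector_field field(host_vector_backend(64 * 64 * 64),
                       row_major_layout{{64, 64, 64}});
    field.at(1, 2, 3) = {0.0f, 1.0f, 0.0f};  // single-element access pattern
}
```

Swapping in a different backend or layout then only changes a template argument, which is the property that lets a large number of benchmark variants be generated mechanically.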
Potentiality of automatic parameter tuning suite available in ACTS track reconstruction software framework
Particle tracking is among the most sophisticated and complex parts of the full event reconstruction chain. A number of reconstruction algorithms work in sequence to build these trajectories from detector hits. Each of these algorithms uses many configuration parameters that need to be fine-tuned to properly account for the detector and experimental setup, the available CPU budget and the desired physics performance. Examples of such parameters include the cut values limiting the search space of an algorithm, the approximations accounting for complex phenomena, and the parameters controlling algorithm performance. The most popular method to tune these parameters is hand-tuning using brute-force techniques. These techniques can be inefficient and raise issues for the long-term maintainability of such algorithms. The open-source track reconstruction software framework known as "A Common Tracking Software (ACTS)" offers an alternative solution to these parameter tuning techniques through the use of automatic parameter optimization algorithms. ACTS comes equipped with an auto-tuning suite that provides the necessary setup for optimizing the input parameters of track reconstruction algorithms. The user can choose the tunable parameters in a flexible way and define a cost/benefit function for optimizing the full reconstruction chain. The fast execution speed of ACTS allows the user to run several iterations of the optimization within a reasonable time bracket. The performance of these optimizers has been demonstrated on different track reconstruction algorithms, such as trajectory seed reconstruction and selection, particle vertex reconstruction and the generation of simplified material maps, and on different detector geometries such as the Generic Detector and the Open Data Detector (ODD). We aim to bring this approach to all aspects of trajectory reconstruction through a more flexible integration of tunable parameters within ACTS.
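A minimal sketch of the tune-and-score loop described above, assuming hypothetical parameter names and a synthetic reconstruction response; the ACTS suite provides proper optimization algorithms rather than the plain random search used here for illustration.

```cpp
#include <random>

struct seeding_config {      // hypothetical tunable seeding cuts
    double max_impact;       // impact-parameter cut [mm]
    double min_pt;           // transverse-momentum cut [GeV]
};

struct tracking_result {     // summary produced by the reconstruction chain
    double efficiency;
    double fake_rate;
};

// Placeholder with a synthetic response; a real study would run the full
// reconstruction chain and compare its output to the simulated truth.
tracking_result run_reconstruction(const seeding_config& cfg) {
    double eff  = 0.95 - 0.02 * cfg.min_pt - 0.001 * cfg.max_impact;
    double fake = 0.05 + 0.002 * cfg.max_impact / (0.5 + cfg.min_pt);
    return {eff, fake};
}

// User-defined cost/benefit function balancing efficiency against fakes.
double score(const tracking_result& r) {
    return r.efficiency - 2.0 * r.fake_rate;
}

// Each trial runs the chain once and keeps the configuration with the best score.
seeding_config tune(int n_trials) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> impact(1.0, 20.0), pt(0.3, 2.0);
    seeding_config best{};
    double best_score = -1e30;
    for (int i = 0; i < n_trials; ++i) {
        seeding_config cfg{impact(rng), pt(rng)};
        if (double s = score(run_reconstruction(cfg)); s > best_score) {
            best_score = s;
            best = cfg;
        }
    }
    return best;
}

int main() { seeding_config cfg = tune(200); (void)cfg; }
```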
Using Evolutionary Algorithms to Find Cache-Friendly Generalized Morton Layouts for Arrays
The layout of multi-dimensional data can have a significant impact on the efficacy of hardware caches and, by extension, the performance of applications. Common multi-dimensional layouts include the canonical row-major and column-major layouts as well as the Morton curve layout. In this paper, we describe how the Morton layout can be generalized to a very large family of multi-dimensional data layouts with widely varying performance characteristics. We posit that this design space can be efficiently explored using a combinatorial evolutionary methodology based on genetic algorithms. To this end, we propose a chromosomal representation for such layouts as well as a methodology for estimating the fitness of array layouts using cache simulation. We show that our fitness function correlates with kernel running time on real hardware, and that our evolutionary strategy allows us to find candidates with favorable simulated cache properties in four out of the eight real-world applications under consideration in a small number of generations. Finally, we demonstrate that the array layouts found using our evolutionary method perform well not only in simulated environments but that they can effect significant performance gains (up to a factor of ten in extreme cases) in real hardware.
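A brief sketch of the underlying idea, under the assumption that a generalized Morton layout is described by the order in which the bits of the per-dimension indices are interleaved into the flat offset; the function name and encoding below are illustrative, not taken from the paper.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One way to encode a generalized Morton layout: order[k] names the dimension
// whose next least-significant index bit is placed at bit k of the flat
// offset. The classic 2D Morton curve is the alternating order {0,1,0,1,...};
// placing all bits of one dimension before the other recovers row- or
// column-major order.
std::uint64_t generalized_morton(std::vector<std::uint32_t> index,
                                 const std::vector<std::size_t>& order) {
    std::uint64_t offset = 0;
    for (std::size_t k = 0; k < order.size(); ++k) {
        const std::size_t d = order[k];
        offset |= static_cast<std::uint64_t>(index[d] & 1u) << k;
        index[d] >>= 1;  // consume the bit that was just placed
    }
    return offset;
}
```

Each admissible interleaving order defines a distinct layout, which is what makes the design space large enough to call for an evolutionary search.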
Finding Morton-Like Layouts for Multi-Dimensional Arrays Using Evolutionary Algorithms
The layout of multi-dimensional data can have a significant impact on the efficacy of hardware caches and, by extension, the performance of applications. Common multi-dimensional layouts include the canonical row-major and column-major layouts as well as the Morton curve layout. In this paper, we describe how the Morton layout can be generalized to a very large family of multi-dimensional data layouts with widely varying performance characteristics. We posit that this design space can be efficiently explored using a combinatorial evolutionary methodology based on genetic algorithms. To this end, we propose a chromosomal representation for such layouts as well as a methodology for estimating the fitness of array layouts using cache simulation. We show that our fitness function correlates with kernel running time on real hardware, and that our evolutionary strategy allows us to find candidates with favorable simulated cache properties in four out of the eight real-world applications under consideration in a small number of generations. Finally, we demonstrate that the array layouts found using our evolutionary method perform well not only in simulated environments but that they can effect significant performance gains (up to a factor of ten in extreme cases) in real hardware.
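The evolutionary search itself can be illustrated with a stripped-down (1+1) mutation-and-selection loop; the paper's method is a full genetic algorithm whose fitness comes from cache simulation, so the toy `fitness` below is only a placeholder, and the chromosome encoding follows the interleaving-order sketch given after the previous abstract.

```cpp
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// A chromosome is a bit-interleaving order: one dimension id per offset bit.
using chromosome = std::vector<std::size_t>;

// Toy stand-in for the cache-simulation fitness used in the paper: reward
// assigning low offset bits to dimension 0 (contiguity in the fastest index).
double fitness(const chromosome& c) {
    double f = 0.0;
    for (std::size_t k = 0; k < c.size(); ++k)
        if (c[k] == 0) f += 1.0 / (1.0 + static_cast<double>(k));
    return f;
}

// Mutation operator: swap two interleaving slots, preserving the number of
// bits contributed by each dimension.
chromosome mutate(chromosome c, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, c.size() - 1);
    std::swap(c[pick(rng)], c[pick(rng)]);
    return c;
}

// (1+1) evolutionary loop: keep the mutant only if it improves the fitness.
chromosome evolve(chromosome seed, int generations, std::mt19937& rng) {
    chromosome best = std::move(seed);
    double best_fit = fitness(best);
    for (int g = 0; g < generations; ++g) {
        chromosome candidate = mutate(best, rng);
        if (double f = fitness(candidate); f > best_fit) {
            best_fit = f;
            best = std::move(candidate);
        }
    }
    return best;
}

int main() {
    std::mt19937 rng(7);
    chromosome morton_2d = {0, 1, 0, 1, 0, 1, 0, 1};  // start from 2D Morton
    chromosome tuned = evolve(morton_2d, 100, rng);
    (void)tuned;
}
```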
Altered Serum IgG Levels to α-Synuclein in Dementia with Lewy Bodies and Alzheimer’s Disease.
Natural self-reactive antibodies in the peripheral blood may play a considerable role in the control of potentially toxic proteins that may otherwise accumulate in the aging brain. The significance of serum antibodies reactive against α-synuclein is not well known. We explored serum IgG levels to monomeric α-synuclein in dementia with Lewy bodies (DLB) and Alzheimer’s disease (AD) with a novel, validated and highly sensitive ELISA. Antibody levels revealed stark differences in patients compared to healthy subjects and were dependent on diagnosis, disease duration and age. Anti-α-synuclein IgG levels were increased in both patient groups, but in early DLB to a much greater extent than in AD. Increased antibody levels were most evident in younger patients, whereas at advanced age relatively low levels were observed, similar to healthy individuals, whose antibody levels remained stable independent of age. Our data show the presence of differentially altered IgG levels against α-synuclein in DLB and AD, which may relate to a disturbed α-synuclein homeostasis triggered by the disease process. These observations may foster the development of novel, possibly preclinical biomarkers and immunotherapeutic strategies that target α-synuclein in neurodegenerative disease.
TrackML high-energy physics tracking challenge on Kaggle
The High-Luminosity LHC (HL-LHC) is expected to reach unprecedented collision intensities, which in turn will greatly increase the complexity of tracking within the event reconstruction. To reach out to computer science specialists, a tracking machine learning challenge (TrackML) was set up on Kaggle by a team of ATLAS, CMS and LHCb physicists, tracking experts and computer scientists, building on the experience of the successful Higgs Machine Learning challenge in 2014. A training dataset based on a simulation of a generic HL-LHC experiment tracker has been created, listing for each event the measured 3D points and the list of 3D points associated with each true track. The participants in the challenge are asked to find the tracks in the test dataset, which means building the list of 3D points belonging to each track. The emphasis is on exposing innovative approaches, rather than hyper-optimising known approaches. A metric reflecting the accuracy of a model at finding the proper associations that matter most to physics analysis will make it possible to select good candidates to augment or replace existing algorithms.
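The kind of metric the abstract alludes to can be sketched with a simplified, unweighted double-majority criterion: a reconstructed track only contributes if most of its hits come from one true particle and most of that particle's hits lie on the track. The actual challenge metric is weighted so that the associations that matter most to physics analysis dominate; the function below is only an illustration.

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Hit i was assigned to reconstructed track reco[i] and originates from true
// particle truth[i]. A track counts only if the majority of its hits come
// from one particle and the majority of that particle's hits are on the track.
double double_majority_score(const std::vector<int>& reco,
                             const std::vector<int>& truth) {
    std::map<int, int> track_size, particle_size;
    std::map<std::pair<int, int>, int> shared;
    for (std::size_t i = 0; i < reco.size(); ++i) {
        ++track_size[reco[i]];
        ++particle_size[truth[i]];
        ++shared[{reco[i], truth[i]}];
    }
    std::size_t matched_hits = 0;
    for (const auto& [key, n] : shared) {
        if (2 * n > track_size[key.first] && 2 * n > particle_size[key.second])
            matched_hits += static_cast<std::size_t>(n);
    }
    return reco.empty() ? 0.0
                        : static_cast<double>(matched_hits) /
                              static_cast<double>(reco.size());
}
```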
Track reconstruction at LHC as a collaborative data challenge use case with RAMP
Charged particle track reconstruction is a major component of data processing in high-energy physics experiments such as those at the Large Hadron Collider (LHC), and it is foreseen to become increasingly challenging as collision rates grow. A simplified two-dimensional version of the track reconstruction problem has been set up on a collaborative platform, RAMP, so that developers can prototype and test new ideas. A small-scale competition was held during the Connecting The Dots / Intelligent Trackers 2017 (CTDWIT 2017) workshop. Despite the short time scale, a number of different approaches were developed and compared using a single score metric, which was kept generic enough to summarize performance in terms of both efficiency and fake rate.