The ATLAS Data Acquisition and High Level Trigger system
This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented, and design choices are discussed.
Virtualisation of Grid Resources and Prospects of the Measurement of Z Boson Production in Association with Jets at the LHC
At the Large Hadron Collider, a large number of events containing Z bosons will be available, enabling the calibration of the absolute jet energy scale for the first time. In this thesis, such a calibration is derived within the CMS experiment, including the investigation of effects from the underlying event and the jet size parameter. In addition, virtualisation of operating systems is applied to improve the utilisation, stability and maintainability of local grid computing infrastructures.
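The calibration described above rests on the Z+jet pT-balance idea: in an event where a jet recoils against a well-measured Z boson, the ratio of jet to Z transverse momentum measures the jet energy response. A minimal sketch follows; the event values are illustrative, not taken from the thesis.

```python
# Sketch of the Z+jet pT-balance method for jet energy scale calibration.
# The (jet_pt, z_pt) pairs below are illustrative values, not thesis data.

def jet_response(jet_pt, z_pt):
    """Response R = pT(jet) / pT(Z); R = 1 means a perfectly calibrated jet."""
    return jet_pt / z_pt

# Each event: (reconstructed jet pT, reconstructed Z boson pT) in GeV.
events = [(98.0, 100.0), (47.5, 50.0), (205.0, 200.0)]

responses = [jet_response(j, z) for j, z in events]
mean_response = sum(responses) / len(responses)

# A multiplicative correction restores the mean response to unity.
correction = 1.0 / mean_response
```

In practice the response is measured in bins of Z pT and corrected for underlying-event and jet-size effects, which is precisely where the studies mentioned in the abstract come in.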
Systems and algorithms for low-latency event reconstruction for upgrades of the Level-1 trigger of the CMS experiment at CERN
With the increasing centre-of-mass energy and luminosity of the Large Hadron Collider (LHC), the Compact Muon Solenoid (CMS) experiment is undertaking upgrades to its triggering system in order to maintain its data-taking efficiency. In 2016, the Phase-1 upgrade to the CMS Level-1 Trigger (L1T) was commissioned, which required the development of tools for validating changes to the trigger algorithm firmware and for ongoing monitoring of the trigger system during data-taking. A Phase-2 upgrade to the CMS L1T is currently underway, in preparation for the High-Luminosity upgrade of the LHC (HL-LHC). The HL-LHC environment is expected to be particularly challenging for the CMS L1T due to the increased number of simultaneous interactions per bunch crossing, known as pileup. To mitigate the effect of pileup, the CMS Phase-2 Outer Tracker is being upgraded with capabilities that will allow it to provide tracks to the L1T for the first time.

A key to mitigating pileup is the ability to identify the location and decay products of the signal vertex in each event. For this purpose, two conventional algorithms have been investigated, with a baseline proposed and demonstrated in FPGA hardware. To extend and complement the baseline vertexing algorithm, machine learning techniques were used to evaluate how different track parameters can be included in the vertex reconstruction process. This work culminated in a deep convolutional neural network capable of both position reconstruction and track-to-vertex association, built on the intermediate storage of tracks in a z histogram in which the optimal weighting of each track can be learned. The position reconstruction part of this end-to-end model was implemented and, when compared to the baseline algorithm, a 30% improvement in the vertex position resolution in tt̄ events was observed.
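The histogram-based vertexing described above can be sketched in a few lines: bin the tracks' longitudinal impact parameters (z0) into a weighted histogram, take the peak as the vertex position, and associate nearby tracks to it. The binning, pT weighting, and association window below are illustrative assumptions, not the actual parameters of the thesis.

```python
import numpy as np

# Minimal sketch of histogram-based primary-vertex finding, assuming each
# track is a (z0, pt) pair. Bin count, z range, and the one-bin association
# window are illustrative choices.

def find_vertex(tracks, z_range=(-15.0, 15.0), n_bins=256):
    z0 = np.array([t[0] for t in tracks])
    w = np.array([t[1] for t in tracks])          # weight tracks by pT
    hist, edges = np.histogram(z0, bins=n_bins, range=z_range, weights=w)
    peak = np.argmax(hist)                        # bin with the largest summed pT
    z_vtx = 0.5 * (edges[peak] + edges[peak + 1])  # bin centre as vertex position
    # Associate tracks within one bin width of the found vertex.
    bin_width = (z_range[1] - z_range[0]) / n_bins
    in_vtx = np.abs(z0 - z_vtx) < bin_width
    return z_vtx, in_vtx

# Two hard-scatter tracks near z = 2 cm and two soft pileup tracks near -5 cm.
z_vtx, mask = find_vertex([(2.0, 50.0), (2.05, 30.0), (-5.0, 2.0), (-5.1, 1.0)])
```

The convolutional model in the thesis replaces the fixed pT weighting with learned per-track weights before the histogram is filled, which is what allows extra track parameters to enter the reconstruction.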
Machine learning as a service for high energy physics (MLaaS4HEP): a service for ML-based data analyses
With the CERN LHC program underway, there has been an acceleration of data growth in the High Energy Physics (HEP) field, and the usage of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been successfully used in many areas of HEP; nevertheless, developing an ML project and implementing it for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data is stored in the ROOT data format, which is mostly unknown outside of the HEP community.
The work presented in this thesis is focused on the development of a ML as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly from ROOT files of arbitrary size, held in local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool to apply ML techniques in their analyses in a streamlined manner.
Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. A service with APIs was then developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows that produce trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements to be added to the INFN Cloud portfolio of services.
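A workflow submission of the kind described above might look as follows. This is a hypothetical sketch: the payload keys, endpoint URL, and token handling are assumptions for illustration, not the actual MLaaS4HEP service API.

```python
import json

# Hypothetical sketch of preparing an MLaaS4HEP-style workflow for HTTP
# submission. Payload keys, the endpoint, and auth handling are assumptions.

def build_workflow(data_files, labels, model="keras_mlp", params=None):
    """Assemble a JSON workflow description: input ROOT files, target labels,
    a user-provided model name, and training parameters."""
    return json.dumps({
        "files": data_files,      # ROOT files, local or remote (e.g. XRootD)
        "labels": labels,         # target branch(es) for supervised training
        "model": model,           # name of the user-supplied ML model
        "params": params or {"epochs": 5, "batch_size": 100},
    })

payload = build_workflow(["root://eos.example/user/data.root"], ["target"])

# Submission itself (authenticated, as the abstract describes) would be an
# HTTP POST, e.g. with the `requests` library:
# requests.post("https://mlaas.example/submit", data=payload,
#               headers={"Authorization": "Bearer <token>"})
```

The point of the service layer is exactly this separation: the user supplies data locations, labels, and a model, while reading, preprocessing, and training happen server-side on ROOT files of arbitrary size.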
Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures
One of the significant shifts in next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting the BD and large-scale data analytics realm using the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed using access controls. An Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for data streaming to the cloud are the main contributions of this thesis.
User-Influenced/Machine-Controlled Playback: The variPlay Music App Format for Interactive Recorded Music
This paper presents an autoethnography of the five-year 'variPlay' project. The project drew on three consecutive rounds of research funding to develop an app format that could host both user interactivity, to change the sound of recorded music in real time, and a machine-driven mode that could autonomously remix, playing back a different version of a song upon every listen or changing part way through on user demand. The final funded phase involved commercialization, with the release of three apps using artists from the roster of project partner Warner Music Group. The concept and operation of the app are discussed, alongside reflection on salient matters such as product development, music production, mastering, and issues encountered during commercialization itself. The final apps received several thousand downloads around the world, in territories such as France, the USA, and Mexico. Opportunities for future development are also presented.