6 research outputs found

    Sensor Fusion for Location Estimation Technologies

    No full text
    Location estimation performance is not always satisfactory, and improving it can be expensive. Performance can be increased by refining the existing location estimation technologies, but a better way is to use multiple technologies and combine the data they provide in order to obtain better results. Maintaining one's location privacy while using location estimation technology is a further challenge. To make it easier to perform sensor fusion on the available data and to speed up development, a flexible framework centered around a component-based architecture was designed. To test the performance of location estimation using the proposed sensor fusion framework, the framework and all the necessary components were implemented and tested. To address the location privacy issues, a comprehensive design is proposed that considers all aspects of the problem, from the physical aspects of using radio transmissions to the communication and use of location data. The experimental results show that sensor fusion always increases the availability of location estimation and increases its accuracy on average. The results also allow the framework's time and energy consumption to be profiled: on average, time consumption splits into 0.32% results overhead, 17.06% engine overhead, 5.05% component communication time and 77.58% component execution time. The more measurements the data-gathering components collect, the more the component execution time grows relative to the other contributions, because it is the only one that increases while the others remain constant.
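
    The abstract describes the framework only at a high level. As a purely illustrative sketch (the Estimate, Component and fuse names, and the inverse-variance weighting rule, are assumptions rather than the thesis framework's actual API), a component-based location fusion engine could look roughly like this:

        # Minimal, hypothetical sketch of a component-based location fusion engine.
        # Names and the fusion rule are illustrative, not the framework's real API.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Estimate:
            x: float        # position estimate in a local frame (meters)
            y: float
            error: float    # estimated accuracy (standard deviation, meters)

        class Component:
            """A data-gathering component wrapping one location technology (Wi-Fi, GPS, ...)."""
            def estimate(self) -> Optional[Estimate]:
                """Return the current estimate, or None if the technology is unavailable."""
                raise NotImplementedError

        def fuse(estimates: List[Estimate]) -> Optional[Estimate]:
            """Combine the available estimates by inverse-variance weighting."""
            if not estimates:
                return None
            weights = [1.0 / (e.error ** 2) for e in estimates]
            total = sum(weights)
            x = sum(w * e.x for w, e in zip(weights, estimates)) / total
            y = sum(w * e.y for w, e in zip(weights, estimates)) / total
            return Estimate(x, y, (1.0 / total) ** 0.5)

    In such a scheme, availability improves because a fused result is produced whenever at least one component reports, and accuracy improves on average because more precise estimates receive larger weights.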

    Sensor fusion for location estimation technologies

    No full text
    Location estimation performance is not always satisfactory, and improving it can be expensive. Performance can be increased by refining the existing location estimation technologies, but a better way is to use multiple technologies and combine the data they provide in order to obtain better results. Maintaining one's location privacy while using location estimation technology is a further challenge. To make it easier to perform sensor fusion on the available data and to speed up development, a flexible framework centered around a component-based architecture was designed. To test the performance of location estimation using the proposed sensor fusion framework, the framework and all the necessary components were implemented and tested. To address the location privacy issues, a comprehensive design is proposed that considers all aspects of the problem, from the physical aspects of using radio transmissions to the communication and use of location data. The experimental results show that sensor fusion always increases the availability of location estimation and increases its accuracy on average. The results also allow the framework's time and energy consumption to be profiled: on average, time consumption splits into 0.32% results overhead, 17.06% engine overhead, 5.05% component communication time and 77.58% component execution time. The more measurements the data-gathering components collect, the more the component execution time grows relative to the other contributions, because it is the only one that increases while the others remain constant.

    Evaluating InfluxDB and ClickHouse database technologies for improvements of the ATLAS operational monitoring data archiving

    No full text
    The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN is currently composed of a large number of distributed hardware and software components (about 3000 machines and more than 25000 applications) which, in a coordinated manner, provide the data-taking functionality of the overall system. During data-taking runs, a huge flow of operational data is produced in order to constantly monitor the system and allow proper detection of anomalies or misbehaviors. The Persistent Back-End for the ATLAS Information System of TDAQ (P-BEAST) is a system based on a custom-built time-series database, used to archive operational monitoring data and retrieve it for applications. P-BEAST stores about 18 TB of highly compacted and compressed raw monitoring data per year, acquired at a 200 kHz average information update rate during ATLAS data-taking periods. In the four years since P-BEAST was put into production, several promising database technologies for fast access to time-series and column-oriented data have become available. InfluxDB and ClickHouse were the most promising candidates for improving the performance and functionality of the current implementation of P-BEAST. This poster presents the testing methodology and setup and the first batch of results, along with some preliminary conclusions and an outlook on further work.
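
    The poster abstract does not detail the ingestion path, but as a hedged illustration of what writing a single operational monitoring sample into InfluxDB could look like in such tests (the database name, measurement, tags, field and endpoint below are invented; a local InfluxDB 1.x instance with its HTTP write API is assumed):

        # Hypothetical sketch: POSTing one monitoring sample to InfluxDB 1.x using the
        # line protocol. Measurement, tag and field names are invented for illustration.
        import time
        import urllib.request

        INFLUX_WRITE_URL = "http://localhost:8086/write?db=tdaq_monitoring"  # assumed test instance

        def write_sample(application: str, attribute: str, value: float) -> None:
            """Encode one sample in line protocol and send it to the InfluxDB write endpoint."""
            line = (f"operational_data,application={application},attribute={attribute} "
                    f"value={value} {time.time_ns()}")
            req = urllib.request.Request(INFLUX_WRITE_URL, data=line.encode("utf-8"), method="POST")
            with urllib.request.urlopen(req) as resp:
                assert resp.status == 204  # InfluxDB 1.x answers 204 No Content on success

        write_sample("HLT-Supervisor-1", "eventRate", 95321.5)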

    Evaluating InfluxDB and ClickHouse database technologies for improvements of the ATLAS operational monitoring data archiving

    No full text
    The Trigger and Data Acquisition system of the ATLAS experiment at the Large Hadron Collider at CERN is composed of a large number of distributed hardware and software components which provide the data-taking functionality of the overall system. During data taking, huge amounts of operational data are created in order to constantly monitor the system. The Persistent Back-End for the ATLAS Information System of TDAQ (P-BEAST) is a system based on a custom-built time-series database, used to archive operational monitoring data and retrieve it for the applications requesting it. P-BEAST stores about 18 TB of highly compacted and compressed raw monitoring data per year. Since P-BEAST's creation, several promising database technologies for fast access to time-series data have become available. InfluxDB and ClickHouse were the most promising candidates for improving the performance and functionality of the current implementation of P-BEAST. This paper presents a short description of the main features of both technologies and of the tests run on both database systems. The results of the performance testing, performed using a subset of archived ATLAS operational monitoring data, are then presented and compared.
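
    As a complementary, equally hypothetical sketch (table, column and attribute names are invented, and a local ClickHouse server reachable over its HTTP interface on port 8123 is assumed, not the actual P-BEAST test setup), a time-series-oriented ClickHouse schema and a typical retrieval query might look like this:

        # Hypothetical sketch of a ClickHouse schema and query for time-series monitoring data.
        import urllib.request

        CLICKHOUSE_URL = "http://localhost:8123/"  # assumed local server, HTTP interface

        def run(query: str) -> str:
            """Send one SQL statement to ClickHouse over HTTP and return the raw response."""
            req = urllib.request.Request(CLICKHOUSE_URL, data=query.encode("utf-8"), method="POST")
            with urllib.request.urlopen(req) as resp:
                return resp.read().decode("utf-8")

        # A MergeTree table ordered by (application, attribute, time) keeps each series
        # contiguous on disk, which favors the range scans typical of monitoring queries.
        run("""
        CREATE TABLE IF NOT EXISTS operational_data (
            application LowCardinality(String),
            attribute   LowCardinality(String),
            time        DateTime64(6),
            value       Float64
        ) ENGINE = MergeTree()
        ORDER BY (application, attribute, time)
        """)

        # Example retrieval: per-minute average of one attribute over a one-hour window.
        print(run("""
        SELECT toStartOfMinute(time) AS t, avg(value)
        FROM operational_data
        WHERE application = 'HLT-Supervisor-1' AND attribute = 'eventRate'
          AND time BETWEEN '2018-06-01 00:00:00' AND '2018-06-01 01:00:00'
        GROUP BY t ORDER BY t
        """))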

    The Controls and Configuration Software of the ATLAS Data Acquisition System for LHC Run 2

    No full text
    The ATLAS experiment at the Large Hadron Collider (LHC) operated very successfully in the years 2008 to 2013, identified as Run 1. It achieved an overall data-taking efficiency of 94%, largely constrained by the irreducible dead-time introduced to accommodate the limitations of the detector read-out electronics. Of the 6% dead-time, only about 15% could be attributed to the central trigger and DAQ system, and of that, a negligible fraction was due to the Control and Configuration sub-system. Despite these achievements, and in order to improve the efficiency of the whole DAQ system in Run 2 (2015-2018), the first long LHC shutdown (2013-2014) was used to carry out a complete revision of the control and configuration software. The goals were threefold: properly accommodate additional requirements that could not be seamlessly included during steady operation of the system; re-factor software that had been repeatedly modified to include new features and had thus become less maintainable; and seize the opportunity to modernize software written in the early 2000s, profiting from the rapid evolution of IT technologies. This upgrade was carried out while retaining the important constraint of minimally impacting the mode of operation of the system and its public APIs, in order to maximize the acceptance of the changes by the large user community. This paper presents, using a few selected examples, how the work was approached, which new technologies were introduced into the ATLAS DAQ system, and how they performed in the course of Run 2. Although these are specific to this system, many of the solutions can be adapted to other distributed DAQ systems.
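
    A quick, hedged reading of the figures quoted above (assuming the 15% refers to a share of the 6% dead-time rather than of the total running time) puts the central trigger and DAQ contribution at roughly 1% of the total data-taking time:

        # Back-of-envelope interpretation of the Run 1 efficiency figures quoted above.
        dead_time = 1.0 - 0.94                      # 94% efficiency -> 6% dead-time overall
        tdaq_share_of_dead_time = 0.15              # ~15% of the dead-time attributed to trigger/DAQ
        tdaq_dead_time = dead_time * tdaq_share_of_dead_time
        print(f"Central trigger/DAQ dead-time: {tdaq_dead_time:.1%} of data-taking time")  # ~0.9%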

    Control and Configuration Software for the ATLAS DAQ system in LHC Run 2

    No full text
    The ATLAS experiment at the Large Hadron Collider (LHC) operated successfully from 2008 to 2018, a period that included Run 1 (2008-2013), a shutdown and Run 2 (2016-2018). In the course of Run 2, ATLAS achieved an overall data-taking efficiency of 97%, largely constrained by the irreducible dead-time introduced to accommodate the limitations of the detector read-out electronics. Less than 1% of the dead-time could be attributed to the central trigger and DAQ system, and of that, a negligible fraction was due to the Controls and Configuration sub-system. The first long LHC shutdown (LS1, 2014-2015) was used to carry out a complete revision of the Controls and Configuration software, in order to suitably accommodate additional requirements that could not be seamlessly included during steady operation of the system. The software was also refactored, since code that had been repeatedly modified to include new features had become less maintainable. In addition, LS1 was an opportunity to modernize software written in the early 2000s, profiting from the rapid evolution of IT technologies. This upgrade was carried out while retaining the critical constraint of minimally impacting the public APIs and the operation mode of the system, in order to maximize the acceptance of the changes by the large user community. This paper summarizes and illustrates, using a few selected examples, how the work was approached and which new technologies were introduced into the ATLAS DAQ system and used in the course of LHC Run 2. Although these are specific to the system, many of the solutions can be adapted to other distributed DAQ systems. The paper also focuses on the behavior of the Controls and Configuration services throughout Run 2, with particular emphasis on robustness, reliability and performance.