The ATLAS Metadata Interface
AMI was chosen as the ATLAS dataset selection interface in July 2006. It is the main interface for searching for ATLAS data using physics metadata criteria. AMI has been implemented as a generic database management framework which allows parallel searching over many catalogues, which may have differing schemas. The main features of the web interface will be described, in particular the powerful graphic query builder. The use of XML/XSLT technology ensures that all commands can be used either on the web or from a command line interface via a web service. We also describe the overall architecture of ATLAS metadata, the different actors and granularities involved, and the place of AMI within this architecture. We discuss the problems involved in correlating metadata of differing granularity and propose a solution for information mediation.
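For orientation only, the following minimal sketch (not AMI code) illustrates the idea of fanning one query out to several catalogues whose schemas differ and mediating the results into a common record shape; the catalogue contents and field names are purely hypothetical.

```python
# Minimal sketch (not AMI code): parallel search over several catalogues
# whose schemas differ, with results mediated into one common record shape.
# The catalogue contents and field names below are purely hypothetical.
from concurrent.futures import ThreadPoolExecutor

CATALOGUES = {
    "prod_catalogue": [{"ds_name": "data.physics", "n_events": 1200}],
    "user_catalogue": [{"dataset": "user.analysis", "events": 40}],
}

# Per-catalogue mapping from the native schema to a common (mediated) schema.
SCHEMA_MAP = {
    "prod_catalogue": lambda r: {"dataset": r["ds_name"], "events": r["n_events"]},
    "user_catalogue": lambda r: {"dataset": r["dataset"], "events": r["events"]},
}

def search(catalogue, pattern):
    """Search one catalogue and return records in the common schema."""
    rows = [r for r in CATALOGUES[catalogue]
            if pattern in str(r)]          # stand-in for a real catalogue query
    return [SCHEMA_MAP[catalogue](r) for r in rows]

def search_all(pattern):
    """Send the same query to all catalogues in parallel and merge the results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(search, c, pattern) for c in CATALOGUES]
        return [rec for f in futures for rec in f.result()]

if __name__ == "__main__":
    print(search_all("physics"))   # -> [{'dataset': 'data.physics', 'events': 1200}]
```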
Second-order modelling of variable density turbulent jets: evaluation in the near field region
This paper is concerned with a complete second-order model of variable density turbulent jets. Emphasis is given to the near-field region of the flow, where the influence of the density variations is found to be quite important, resulting in complex behaviour of both the mean and turbulent velocity fields. Particular attention has been paid to the mesh grid and the initial conditions so that a quantitative comparison can be made with the experimental data obtained in the companion study carried out at I.M.S.T. Only results relative to the velocity field are reported here, since rather few studies have so far focused on the near-field region, where the model shows shortcomings that may not be visible in far-field results, where pseudo-similarity is attained.
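For context (standard notation, not quoted from the paper): second-order closures for variable-density flows are usually formulated with density-weighted (Favre) averages, and the model then transports the Favre-averaged Reynolds stresses rather than relying on an eddy-viscosity assumption.

```latex
% Standard Favre (density-weighted) decomposition used in variable-density
% second-order closures; shown for orientation only, not taken from the paper.
\tilde{u}_i = \frac{\overline{\rho u_i}}{\overline{\rho}}, \qquad
u_i = \tilde{u}_i + u_i'', \qquad
R_{ij} \equiv \widetilde{u_i'' u_j''} = \frac{\overline{\rho\, u_i'' u_j''}}{\overline{\rho}} ,
```

where the second-order model solves transport equations for the stresses \(R_{ij}\) themselves.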
Deploying the ATLAS Metadata Interface (AMI) stack in a Docker Compose or Kubernetes environment
ATLAS Metadata Interface (AMI) is a generic ecosystem for metadata aggregation, transformation and cataloging. This paper describes how a renewed architecture and integration with modern technologies ease the use and deployment of a complete AMI stack. It describes how to deploy AMI in a Docker Compose or Kubernetes environment, with particular emphasis on the registration of existing databases, the addition of further metadata sources, and the generation of high-level Web search interfaces using dedicated wizards.
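As an illustration only, a container stack of this kind can be brought up programmatically with the Docker SDK for Python, mirroring what a Docker Compose file would declare; the image names, environment variables and ports below are assumptions, not official AMI settings.

```python
# Hypothetical sketch: start a two-service "AMI-like" stack (database + web
# application) with the Docker SDK for Python. Image names, environment
# variables and ports are illustrative assumptions, not official AMI settings.
import docker

client = docker.from_env()

# Private network so the services can reach each other by container name.
client.networks.create("ami-net", driver="bridge")

db = client.containers.run(
    "mariadb:10.6",                         # assumed backing database image
    name="ami-db",
    detach=True,
    network="ami-net",
    environment={"MARIADB_ROOT_PASSWORD": "changeme",
                 "MARIADB_DATABASE": "ami"},
)

web = client.containers.run(
    "example/ami-server:latest",            # placeholder image name
    name="ami-web",
    detach=True,
    network="ami-net",
    environment={"AMI_DB_HOST": "ami-db"},  # hypothetical variable
    ports={"8080/tcp": 8080},               # expose the web interface
)

print(db.status, web.status)
```

In a Kubernetes environment the same services would instead be declared as Deployments and Services, but the registration pattern (database first, then the web tier pointing at it) is the same.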
Using MQTT and Node-RED to monitor the ATLAS Meta-data Interface (AMI) stack and define metadata aggregation tasks in a pipelined way
ATLAS Metadata Interface (AMI) is a generic ecosystem for metadata aggregation, transformation and cataloging. Each sub-system of the stack has recently been improved in order to acquire messaging/telemetry capabilities. This paper describes the monitoring of the whole stack with the Message Queuing Telemetry Transport (MQTT) protocol and Node-RED, a tool for wiring together hardware and software devices. Finally, it shows how Node-RED is used to graphically define metadata aggregation tasks, in a pipelined way, without introducing any single point of failure.
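For orientation only, a consumer of such telemetry could look like the following sketch using the paho-mqtt client; the broker address and topic hierarchy are hypothetical, not the ones used by the AMI stack.

```python
# Hypothetical sketch: subscribe to telemetry published over MQTT by the
# services of a stack. Broker address and topic names are illustrative only.
import paho.mqtt.client as mqtt

BROKER = "mqtt.example.org"        # assumed broker host
TOPIC = "ami/+/telemetry"          # assumed layout: ami/<service>/telemetry

def on_connect(client, userdata, flags, rc):
    print(f"connected (rc={rc}), subscribing to {TOPIC}")
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # In Node-RED the equivalent would be an 'mqtt in' node wired to a
    # dashboard or to further processing nodes.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()             # paho-mqtt 1.x callback style; 2.x needs a
                                   # CallbackAPIVersion argument here
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()              # block and dispatch callbacks
```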
Organization and management of ATLAS offline software releases
ATLAS is one of the largest collaborations ever undertaken in the physical sciences. This paper explains how the software infrastructure is organized to manage collaborative code development by around 300 developers with varying degrees of expertise, situated in 30 different countries. ATLAS offline software currently consists of about 2 million source lines of code contained in 6800 C++ classes, organized in almost 1000 packages. We will describe how releases of the offline ATLAS software are built, validated and subsequently deployed to remote sites. Several software management tools have been used, the majority of which are not ATLAS specific; we will show how they have been integrated.
Organization and management of ATLAS software releases
ATLAS is one of the largest collaborations ever undertaken in the physical sciences. This paper explains how the software infrastructure is organized to manage collaborative code development by around 300 developers with varying degrees of expertise, situated in 30 different countries. We will describe how successive releases of the software are built, validated and subsequently deployed to remote sites. Several software management tools have been used, the majority of which are not ATLAS specific; we will show how they have been integrated. ATLAS offline software currently consists of about 2 MSLOC contained in 6800 C++ classes, organized in almost 1000 packages.
A step towards a computing grid for the LHC experiments: ATLAS data challenge 1
The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. A series of Data Challenges was therefore started in 2002, with the goals of validating the Computing Model, the complete software suite and the data model, and of ensuring the correctness of the technical choices to be made for the final offline computing environment. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples as a worldwide distributed activity. It should be noted that running the complete production at CERN was not an option, even had we wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organising and carrying out this large-scale production at a significant number of sites around the world therefore had to be faced. The benefits of this are manifold, however: apart from realising the required computing resources, the exercise created worldwide momentum for ATLAS computing as a whole. This report describes in detail the main steps carried out in DC1 and what has been learned from them as a step towards a computing Grid for the LHC experiments.
Housing metadata for the common physicist using a relational database
SAM was developed as a data handling system for Run II at Fermilab. SAM is a collection of services, each described by metadata. The metadata are modeled on a relational database and implemented in ORACLE. SAM, originally deployed in production for the D0 Run II experiment, has now also been deployed at CDF and is being commissioned at MINOS. This illustrates that the metadata decomposition of its services has a broader applicability than just one experiment. A joint working group on metadata, with representatives from ATLAS, BaBar, CDF, CMS, D0 and LHCb, in cooperation with EGEE, has examined this metadata decomposition in the light of general HEP user requirements. Greater understanding of the required services of a performant data handling system has emerged from Run II experience. This experience is being merged with the understanding developed in the course of the LHC data challenges and use-case discussions. We describe the SAM schema and the commonalities of function and service support between this schema and proposals for the LHC experiments. We describe the support structure required for SAM schema updates and the use of development, integration and production instances. We are also looking at the LHC proposals for the evolution of the schema using keyword-value pairs that are then transformed into a normalized, performant database schema.
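As a hedged illustration of that last idea (neither the SAM nor any LHC schema; the table, keywords and values are hypothetical), keyword-value metadata can be promoted into typed columns of a normalized relational table so that common attributes become indexable and efficiently queryable.

```python
# Illustrative sketch (not the SAM or LHC schema): metadata captured as
# keyword-value pairs is folded into a normalized relational table so that
# well-known attributes can be typed, indexed and queried efficiently.
import sqlite3

kv_records = [  # hypothetical keyword-value metadata for two datasets
    {"dataset": "run2_raw_001", "pairs": {"run_number": "2001", "trigger": "minbias"}},
    {"dataset": "run2_raw_002", "pairs": {"run_number": "2002", "trigger": "jet20"}},
]

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dataset (id INTEGER PRIMARY KEY, name TEXT UNIQUE,
                          run_number INTEGER, trigger TEXT);
""")

for rec in kv_records:
    p = rec["pairs"]
    # Promote well-known keywords to typed columns of the normalized table.
    con.execute("INSERT INTO dataset (name, run_number, trigger) VALUES (?, ?, ?)",
                (rec["dataset"], int(p["run_number"]), p["trigger"]))

for row in con.execute("SELECT name, run_number FROM dataset WHERE trigger = 'minbias'"):
    print(row)   # -> ('run2_raw_001', 2001)
```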