How the Open University library uses Facebook Live to reach, engage and support students
The Open University's Library Services (http://www.open.ac.uk/library/) has a specialist team of seven librarians, the Live Engagement team, who design and deliver real-time online teaching using Adobe Connect. There is a programme of generic library and module-specific sessions. Whilst we reach a lot of students through these sessions, we like to try out new ways of engaging with students and hope to reach those who don't attend formal library training.
Software Management in the LHCb Online System
LHCb has a large online IT infrastructure with thousands of servers and embedded systems, network routers and switches, databases and storage appliances. These systems run a large number of different applications on various operating systems. The dominant operating systems are Linux and MS-Windows. This large heterogeneous environment, operated by a small number of administrators, requires that new software and updates can be pushed quickly, reliably and in as automated a way as possible. We present here the general design of LHCb's software management along with the main tools, LinuxFC/Quattor and Microsoft SMS, describe how they have been adapted and integrated, and discuss experiences and problems.
Control and Monitoring of the Online Computer Farm for Offline Processing in LHCb
ISBN 978-3-95450-139-7 - http://accelconf.web.cern.ch/AccelConf/ICALEPCS2013/papers/tuppc063.pdf
LHCb, one of the four experiments at the LHC accelerator at CERN, uses approximately 1500 PCs (averaging 12 cores each) for processing the High Level Trigger (HLT) during physics data taking. During periods when data acquisition is not required, most of these PCs are idle. In these periods it is possible to profit from the unused processing capacity to run offline jobs, such as Monte Carlo simulation. The LHCb offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control). In LHCbDIRAC, job agents are started on Worker Nodes, pull waiting tasks from the central WMS (Workload Management System) and process them on the available resources. A Control System was developed which is able to launch, control and monitor the job agents for the offline data processing on the HLT Farm. This control system is based on the existing Online System Control infrastructure, the PVSS SCADA and the FSM toolkit. It has been extensively used, launching and monitoring more than 22,000 agents simultaneously, and more than 850,000 jobs have already been processed in the HLT Farm. This paper describes the deployment of and experience with the Control System in the LHCb experiment.
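The pull-based agent pattern summarised above can be illustrated with a short sketch. The following Python fragment is purely illustrative: the endpoint URL and the fetch_task, run_payload and agent_loop names are invented placeholders and do not correspond to the actual LHCbDIRAC or PVSS/FSM interfaces.

# Minimal sketch of the pull-based agent pattern: an agent launched on an idle
# farm node repeatedly asks a central workload-management service for a waiting
# task and runs it. All names are hypothetical, not the LHCbDIRAC API.
import json
import subprocess
import time
import urllib.request

WMS_URL = "https://wms.example.cern.ch/matcher"  # hypothetical endpoint

def fetch_task():
    """Ask the central WMS for a waiting task; return None if the queue is empty."""
    with urllib.request.urlopen(f"{WMS_URL}/next-task") as resp:
        task = json.load(resp)
    return task or None

def run_payload(task):
    """Execute the payload (e.g. a Monte Carlo production step) locally."""
    return subprocess.run(task["command"], shell=True, check=False).returncode

def agent_loop(idle_check, poll_interval=60):
    """Keep pulling and processing tasks while the node is not needed for data taking."""
    while idle_check():
        task = fetch_task()
        if task is None:
            time.sleep(poll_interval)
            continue
        run_payload(task)

In the system described in the paper, starting and stopping such agents is itself driven by the control system, so the loop only runs while data acquisition does not need the node.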
HEP Applications Evaluation of the EDG Testbed and Middleware
Workpackage 8 of the European DataGrid project was formed in January 2001 with representatives from the four LHC experiments, and with experiment-independent people from five of the six main EDG partners. In September 2002 WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of ATLAS and CMS in integrating the use of the EDG Testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid-2003.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics Conference (CHEP03), La Jolla, CA, USA, March 2003, 7 pages. PSN THCT00
The migration to a standardized architecture for developing systems on the Glance project
The Glance project is responsible for over 20 systems across three CERN experiments: ALICE [1], ATLAS [2] and LHCb [3]. Students, engineers, physicists and technicians have been using systems designed and managed by Glance on a daily basis for 20 years. In order to produce quality products continuously, considering internal stakeholders' ever-evolving requests, there is a need for standardization. The adoption of such a standard had to take into account not only future developments but also legacy systems of the three experiments. These systems were created using an in-house built framework and, as they scaled, became difficult to maintain due to the framework's lack of documentation and its use of technologies that were becoming obsolete. Migrating them to a new architecture would mean speeding up the development process, avoiding rework and integrating more widely with CERN systems. Since many of the core functionalities of the systems are shared between them, both on the frontend and on the backend, the architecture had to assure modularity and reusability. In this architecture, the principles behind Hexagonal Architecture are followed and the systems' codebase is split into two applications: a JavaScript client and a REST backend server. The open-source framework Vue.js was chosen for the frontend. Its versatility, approachability and extensive documentation made it the ideal tool for creating components that are reused throughout Glance applications. The backend uses PHP libraries created by the team to expose information through REST APIs both internally, allowing easier integration between the systems, and externally, making information managed by the team available to users outside Glance.
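As a rough illustration of the port-and-adapter idea behind Hexagonal Architecture mentioned above, the sketch below is written in Python for brevity (the actual Glance backend libraries are PHP, and every class, table and column name here is invented): the application service depends only on an abstract port, while database access lives in a replaceable adapter.

# Illustrative port-and-adapter sketch; names are hypothetical, not Glance code.
from abc import ABC, abstractmethod

class MemberRepository(ABC):  # port: what the domain logic needs
    @abstractmethod
    def find_by_institute(self, institute: str) -> list[dict]: ...

class OracleMemberRepository(MemberRepository):  # adapter: one concrete backend
    def __init__(self, connection):
        self.connection = connection

    def find_by_institute(self, institute: str) -> list[dict]:
        cursor = self.connection.cursor()
        cursor.execute("SELECT name, role FROM members WHERE institute = :i", {"i": institute})
        return [{"name": name, "role": role} for name, role in cursor]

def list_members(repo: MemberRepository, institute: str) -> list[dict]:
    # application service used by a REST controller; it never touches SQL directly
    return repo.find_by_institute(institute)

A REST endpoint would be a second adapter on the driving side, calling list_members without knowing which repository implementation sits behind the port; that separation is what lets shared core functionality be reused across systems.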
Glance Search Library
The LHCb experiment is one of the four large LHC experiments at CERN. With more than 1500 members and tens of thousands of assets, the Collaboration requires systems that allow the extraction of data from many databases according to some very specific criteria. In LHCb there are four production web applications responsible for managing members and institutes, tracking assets and their current status, presenting radiological information about the cavern, and supporting the management of cables. A common requirement shared across all these systems is to allow searching information based on logical expressions. Therefore, in order to avoid rework, the Glance Search Library was created with the goal of providing components for applications to deploy frontend search interfaces capable of generating standardized queries based on users' input, and backend utility functions that compile such queries into a SQL clause. The Glance Search Library is split into two smaller libraries maintained in different GitLab repositories. The first one only contains Vue components and JavaScript modules and, in LHCb, it is included as a dependency of the SPAs (Single Page Applications). The second is a PHP object-oriented library, mainly used by REST APIs that are required to expose large amounts of data stored in their relational databases. This separation provides greater flexibility and more agile deployments. It also enables lighter applications with no graphical interface to build command-line tools solely on top of the backend classes and predefined queries.
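To make the frontend/backend split concrete, here is a toy sketch, in Python rather than the library's actual JavaScript and PHP code, of how a structured logical expression built by a search interface might be compiled into a parameterized SQL clause; the compile_query helper, the field names and the query format are all invented for illustration.

# Toy query compiler: a nested field/operator/value structure with AND/OR
# logic is turned into a parameterized SQL WHERE clause. Hypothetical names.
def compile_query(node):
    """Recursively turn a nested query description into (sql, params)."""
    if "logic" in node:                      # {"logic": "AND", "children": [...]}
        parts, params = [], []
        for child in node["children"]:
            sql, child_params = compile_query(child)
            parts.append(f"({sql})")
            params.extend(child_params)
        return f" {node['logic']} ".join(parts), params
    # leaf node: {"field": "institute", "op": "=", "value": "CERN"}
    return f"{node['field']} {node['op']} ?", [node["value"]]

query = {
    "logic": "AND",
    "children": [
        {"field": "institute", "op": "=", "value": "CERN"},
        {"logic": "OR", "children": [
            {"field": "status", "op": "=", "value": "active"},
            {"field": "role", "op": "=", "value": "student"},
        ]},
    ],
}
where, params = compile_query(query)
# where  -> "(institute = ?) AND ((status = ?) OR (role = ?))"
# params -> ["CERN", "active", "student"]

Keeping the query description serializable like this is what allows the same structure to be produced by a browser search interface, shipped over a REST API, and compiled to SQL on the server, or fed directly to a command-line tool with no graphical interface.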
Differential branching fraction and angular analysis of the decay B⁰ → K*⁰μ⁺μ⁻
The angular distribution and differential branching fraction of the decay B⁰ → K*⁰μ⁺μ⁻ are studied using a data sample, collected by the LHCb experiment in pp collisions at √s = 7 TeV, corresponding to an integrated luminosity of 1.0 fb⁻¹. Several angular observables are measured in bins of the dimuon invariant mass squared, q². A first measurement of the zero-crossing point of the forward-backward asymmetry of the dimuon system is also presented. The zero-crossing point is measured to be q₀² = 4.9 ± 0.9 GeV²/c⁴, where the uncertainty is the sum of statistical and systematic uncertainties. The results are consistent with the Standard Model predictions.
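For readers unfamiliar with the observable, the forward-backward asymmetry whose zero-crossing point is quoted above can be written in its standard textbook form (this expression is a conventional definition, not reproduced from the paper itself):

% Standard definition of the dimuon forward-backward asymmetry; theta_l is the
% angle between the mu+ and the B0 flight direction in the dimuon rest frame.
A_{\mathrm{FB}}(q^2) =
  \left( \int_{0}^{1} - \int_{-1}^{0} \right)
  \frac{\mathrm{d}^2\Gamma}{\mathrm{d}q^2\,\mathrm{d}\cos\theta_\ell}\,
  \mathrm{d}\cos\theta_\ell
  \Big/ \frac{\mathrm{d}\Gamma}{\mathrm{d}q^2},
  \qquad
  A_{\mathrm{FB}}(q_0^2) = 0 .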
Opposite-side flavour tagging of B mesons at the LHCb experiment
The calibration and performance of the opposite-side flavour tagging algorithms used for the measurements of time-dependent asymmetries at the LHCb experiment are described. The algorithms have been developed using simulated events and optimized and calibrated with B⁺ → J/ψK⁺, B⁰ → J/ψK*⁰ and B⁰ → D*⁻μ⁺νμ decay modes with 0.37 fb⁻¹ of data collected in pp collisions at √s = 7 TeV during the 2011 physics run. The opposite-side tagging power is determined in the B⁺ → J/ψK⁺ channel to be (2.10 ± 0.08 ± 0.24) %, where the first uncertainty is statistical and the second is systematic.
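For context, the tagging power quoted above is conventionally the effective tagging efficiency; the following is the standard definition, not a formula taken from the paper:

% epsilon_tag is the fraction of candidates with a tagging decision and
% omega is the mistag fraction of those decisions.
\varepsilon_{\mathrm{eff}} = \varepsilon_{\mathrm{tag}} \, (1 - 2\omega)^2

so the opposite-side combination corresponds to an effective tagging efficiency of (2.10 ± 0.08 ± 0.24) % in the B⁺ → J/ψK⁺ channel.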
- …