A note on comonotonicity and positivity of the control components of decoupled quadratic FBSDE
In this short note we are concerned with the solution of Forward-Backward Stochastic Differential Equations (FBSDE) with drivers that grow quadratically in the control component (quadratic growth FBSDE, or qgFBSDE). The main theorem is a comparison result that allows a componentwise comparison of the signs of the control processes of two different qgFBSDE. As a byproduct, one obtains conditions that establish the positivity of the control process.
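For orientation, the decoupled qgFBSDE setting can be sketched in standard notation (a generic form only; the note's precise assumptions on b, σ, f and g are not reproduced here):

    X_t = x + \int_0^t b(s, X_s)\,\mathrm{d}s + \int_0^t \sigma(s, X_s)\,\mathrm{d}W_s,
    Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,\mathrm{d}s - \int_t^T Z_s\,\mathrm{d}W_s,

with a driver of quadratic growth in the control variable, e.g. |f(t, x, y, z)| \le C(1 + |y| + |z|^2). "Decoupled" means the forward equation does not involve (Y, Z); the comparison theorem then relates, component by component, the signs of the control processes Z of two such systems.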
Transverse sphericity of primary charged particles in minimum bias proton–proton collisions at √s = 0.9, 2.76 and 7 TeV
Measurements of the sphericity of primary charged particles in minimum bias proton–proton collisions at √s = 0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is measured in the plane perpendicular to the beam direction using primary charged tracks with p_T > 0.5 GeV/c in |η| < 0.8. The mean sphericity as a function of the charged particle multiplicity at mid-rapidity (N_ch) is reported for events with different p_T scales ("soft" and "hard") defined by the transverse momentum of the leading particle. In addition, the mean charged particle transverse momentum versus multiplicity is presented for the different event classes, and the sphericity distributions in bins of multiplicity are presented. The data are compared with calculations of standard Monte Carlo event generators. The transverse sphericity is found to grow with multiplicity at all collision energies, with a steeper rise at low N_ch, whereas the event generators show an opposite tendency. The combined study of the sphericity and the mean p_T with multiplicity indicates that most of the tested event generators produce events with higher multiplicity by generating more back-to-back jets resulting in decreased sphericity (and isotropy). The PYTHIA6 generator with tune PERUGIA-2011 exhibits a noticeable improvement in describing the data, compared to the other tested generators.
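For reference, transverse sphericity is commonly built from the linearized transverse momentum matrix (a sketch of the standard definition; the exact conventions are those of the publication itself):

    S_{xy} = \frac{1}{\sum_i p_{T,i}} \sum_i \frac{1}{p_{T,i}}
    \begin{pmatrix} p_{x,i}^2 & p_{x,i}\,p_{y,i} \\ p_{x,i}\,p_{y,i} & p_{y,i}^2 \end{pmatrix},
    \qquad
    S_T = \frac{2\lambda_2}{\lambda_1 + \lambda_2},

where λ_1 ≥ λ_2 are the eigenvalues of S_{xy}. S_T → 0 for pencil-like (back-to-back) topologies and S_T → 1 for isotropic events, which is why generating more back-to-back jets lowers the mean sphericity.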
CE in corporate sustainability reporting – general insights based on Polish companies listed on the Warsaw Stock Exchange
The circular economy (CE) represents a new paradigm in management. The introduction of the Corporate Sustainability Reporting Directive (CSRD) and the accompanying delegated acts (the European Sustainability Reporting Standards, ESRS) highlight that CE is also an integral part of legal requirements at the European Union level. The primary objective of this study is to assess the CE indicators covered in the ESG reports of Polish companies listed on the Warsaw Stock Exchange. The study also reviews the sustainability strategies of selected companies in terms of CE objectives, policies and action programmes. The findings suggest that, despite a growing trend towards circularity in the economy, the analysed companies do not demonstrate a high degree of consistency and standardisation. The analysis concluded that some enterprises approach circularity comprehensively and thoughtfully, implementing concrete actions and investment expenditures, while others merely mention CE, proposing a single, selective indicator whose definition is unclear. Within the framework of this article, groups of indicators are proposed that could serve in analysing double materiality, one of which is the indicator for reducing material and energy costs.
Applying the XVCL language to build a class diagram repository
The paper describes the idea of class diagram evolution management with attribute-driven versioning and the XVCL language, an XML dialect for variant description and generative programming. The proposed repository is designed as a three-layer hierarchy of XVCL frames. This fairly universal structure can also be used to manage changes of other project artifacts.
The Evolution of the ALICE Detector Control System
The ALICE Detector Control System has provided its services since 2007. Its operation in the past years proved that the initial design of the system fulfilled all expectations and allowed it to follow the evolution of the detectors and of the operational requirements. In order to minimize the impact of the human factor, many procedures have been optimized and new tools have been introduced, allowing the operator to supervise about 1 000 000 parameters from a single console. In parallel with the preparation for new runs after the LHC shutdown, prototyping has started for system extensions that shall be ready in 2018. New detectors will require new approaches to their control and configuration. The conditions data, currently collected after each run, will be provided continuously to a farm containing 100 000 CPU cores and tens of PB of storage. This paper describes the DCS design, the deployed technologies, and the experience gained during the 7 years of operation, and compares the initial assumptions with the current setup. The current status of the developments for the upgraded system, which will be put into operation in less than 3 years from now, is also described.
Service Asset and Configuration Management in ALICE Detector Control System
ALICE (A Large Ion Collider Experiment) is one of the big LHC (Large Hadron Collider) detectors at CERN. It is composed of 19 sub-detectors constructed by different institutes participating in the project. Each of these subsystems has a dedicated control system based on the commercial SCADA package "WinCC Open Architecture" and numerous other software and hardware components delivered by external vendors. The task of the central controls coordination team is to supervise integration, to provide shared services (e.g. database, gas monitoring, safety systems) and to manage the complex infrastructure (including over 1200 network devices and 270 VME and power supply crates) that is used by over 100 developers around the world. Due to the scale of the control system, it is essential to ensure that reliable and accurate information about all the components required to deliver these services, along with the relationships between the assets, is properly stored and controlled. In this paper we present the techniques and tools that were implemented to achieve this goal, together with the experience gained from their use and the plans for their improvement.
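As an illustration of the kind of record such a configuration management database has to keep, consider a minimal sketch (types and field names are hypothetical, not the tooling actually used by the ALICE team):

    #include <string>
    #include <vector>

    // Hypothetical minimal data model for a configuration item:
    // each asset carries identity, ownership and links to the assets
    // it depends on, so that service impact can be traced.
    struct ConfigurationItem {
        std::string id;                      // e.g. inventory tag of a VME crate
        std::string type;                    // "network-device", "power-supply-crate", ...
        std::string owner;                   // institute or sub-detector responsible
        std::vector<std::string> dependsOn;  // ids of supporting assets
    };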
ADAPOS: An architecture for publishing ALICE DCS conditions data
ALICE Data Point Service (ADAPOS) is a software architecture being developed for the Run 3 period of the LHC, as part of the effort to transmit conditions data from the ALICE Detector Control System (DCS) to the GRID for distributed processing. ADAPOS uses Distributed Information Management (DIM), 0MQ, and the ALICE Data Point Processing Framework (ADAPRO). DIM and 0MQ are multi-purpose application-level network protocols. DIM and ADAPRO are being developed and maintained at CERN. ADAPRO is a multi-threaded application framework, supporting remote control as well as real-time features such as thread affinities, records aligned with cache line boundaries, and memory locking. ADAPOS and ADAPRO are written in C++14 using OSS tools, Pthreads, and the Linux API. The key processes of ADAPOS, Engine and Terminal, run on separate machines, facing different networks. Devices connected to the DCS publish their state as DIM services. Engine gets updates to the services and converts them into a binary stream. Terminal receives it over 0MQ and maintains an image of the DCS state. At regular intervals, it sends copies of the image, over another 0MQ connection, to a readout process of the ALICE Data Acquisition.
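The Engine/Terminal split can be pictured with a short sketch of a Terminal-like loop using the libzmq C API (endpoints, message layout and snapshot cadence are assumptions for illustration; the actual ADAPOS code is a C++14 framework application and differs in detail):

    #include <zmq.h>
    #include <cstring>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        void* ctx = zmq_ctx_new();

        // Incoming binary update stream produced by the Engine.
        void* updates = zmq_socket(ctx, ZMQ_PULL);
        zmq_connect(updates, "tcp://engine-host:5555");    // hypothetical endpoint

        // Outgoing full-image snapshots towards the DAQ readout process.
        void* snapshots = zmq_socket(ctx, ZMQ_PUSH);
        zmq_connect(snapshots, "tcp://readout-host:5556"); // hypothetical endpoint

        std::map<std::string, std::vector<char>> image;    // image of the DCS state
        long received = 0;

        for (;;) {
            // Assumed message layout: service name, '\0', binary payload.
            char buf[4096];
            const int n = zmq_recv(updates, buf, sizeof buf, 0);
            if (n <= 0) continue;
            const char* sep = static_cast<const char*>(
                std::memchr(buf, '\0', static_cast<size_t>(n)));
            if (sep == nullptr) continue;
            image[std::string(buf, sep)].assign(sep + 1, buf + n);

            // ADAPOS sends snapshots at regular time intervals; a simple
            // update counter stands in for the timer in this sketch.
            if (++received % 1000 == 0) {
                for (const auto& kv : image)
                    zmq_send(snapshots, kv.second.data(), kv.second.size(),
                             ZMQ_SNDMORE);
                zmq_send(snapshots, "", 0, 0);  // closes the multipart snapshot
            }
        }
    }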
Visualization Tools to Monitor Structure and Growth of an Existing Control System
The ALICE experiment at the LHC has been in operation for 15 years, and during its life several detectors have been replaced, new instruments installed, and some technologies changed. The control system has therefore also had to adapt, evolve and expand, sometimes departing from the symmetry and compactness of the original design. In a large collaboration, different groups contribute to the development of the control system of their detector. For the central coordination it is important to maintain an overview of the integrated control system to assure its coherence. Tools that visualize the structure and other critical aspects of the system can be of great help and can highlight problems or features of the control system, such as deviations from the agreed architecture. This paper shows that existing tools, such as graphical widgets available in the public domain, or techniques typical of scientific analysis, can be adapted to help assess the coherence of the development, revealing weaknesses and highlighting the interdependence of parts of the system. We show how we have used some of these techniques to analyse the coherence of the ALICE control system, and how this contributed to pointing out criticalities and key points.
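A simple example of such an off-the-shelf technique is dumping the component hierarchy as a Graphviz DOT graph and rendering it with standard tools (a generic sketch with hypothetical node names, not the actual ALICE tooling):

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Emit a DOT graph of parent->child links in a control system tree;
    // rendering it (e.g. "dot -Tsvg tree.dot -o tree.svg") exposes
    // orphaned nodes or links that bypass the agreed hierarchy.
    int main() {
        std::vector<std::pair<std::string, std::string>> edges = {
            {"ALICE_DCS", "TPC"}, {"ALICE_DCS", "ITS"},  // hypothetical nodes
            {"TPC", "TPC_HV"},    {"TPC", "TPC_FEE"},
        };
        std::cout << "digraph dcs {\n";
        for (const auto& e : edges)
            std::cout << "  \"" << e.first << "\" -> \"" << e.second << "\";\n";
        std::cout << "}\n";
    }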
Challenges of the ALICE Detector Control System for the LHC RUN3
The ALICE Detector Control System (DCS) has provided its services to the experiment for 10 years. During this period it has ensured uninterrupted operation of the experiment and guaranteed stable conditions for the data taking. The DCS was designed to cope with detector requirements compatible with LHC operation during its RUN1 and RUN2 phases. The decision to extend the lifetime of the experiment beyond this horizon requires a redesign of the DCS data flow and represents a major challenge. The interaction rates of the LHC in ALICE during the RUN3 period will increase by a factor of 100. The detector readout will be upgraded and will provide 3.4 TB/s of data, carried by 10 000 optical links to a first-level processing farm consisting of 1 500 computer nodes and ~100 000 CPU cores. A compressed volume of 20 GB/s will be transferred to the computing GRID facilities. The detector conditions, consisting of about 100 000 parameters acquired by the DCS, need to be merged with the primary data stream and transmitted to the first-level farm every 50 ms. This requirement results in an increase of the DCS data publishing rate by a factor of 5000. The new system does not allow for any DCS downtime during the data taking, nor for data retrofitting. Redundancy, proactive monitoring, and improved quality checking must therefore complement the data flow redesign. The major challenges of the system upgrade are presented in this paper.
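The factor of 5000 follows directly from the change of publishing cadence; as a rough check (the baseline of one publication per ~250 s is an assumption, since the abstract does not state the RUN1/RUN2 interval):

    \frac{250\ \mathrm{s}}{50\ \mathrm{ms}} = 5000,
    \qquad
    \frac{100\,000\ \text{parameters}}{50\ \mathrm{ms}} = 2 \times 10^{6}\ \text{values/s}.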
