
    A note on comonotonicity and positivity of the control components of decoupled quadratic FBSDE

    In this small note we are concerned with the solution of Forward-Backward Stochastic Differential Equations (FBSDE) with drivers that grow quadratically in the control component (quadratic-growth FBSDE, or qgFBSDE). The main theorem is a comparison result that allows a componentwise comparison of the signs of the control processes of two different qgFBSDE. As a byproduct, one obtains conditions that establish the positivity of the control process. Comment: accepted for publication
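    For orientation, a decoupled FBSDE with a quadratic-growth driver is typically written as below; the notation is standard and illustrative, not taken verbatim from the note itself:

```latex
% Forward (decoupled) component:
dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, \qquad X_0 = x,
% Backward component, with control process Z:
Y_t = \xi(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s,
% Quadratic growth of the driver in the control component:
|f(t, x, y, z)| \le C\,\bigl(1 + |y| + |z|^2\bigr).
```

    The comparison result concerns the signs of the components of the control process Z for two such systems.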

    Measurement of the muon flux in the tunnels of Doss Trento hill

    In the context of astroparticle physics, nuclear astrophysics, and quantum computing projects, it is important to identify underground laboratories where the cosmogenic background is suppressed. Located about 500 m from the center of Trento (Italy), the Piedicastello tunnels are covered by 100 m of limestone rock of the Doss Trento hill. The site exceeds 6000 m² in surface area and currently hosts events, temporary exhibitions, and educational activities. The cosmogenic background was measured in different locations within the Piedicastello tunnels with three portable scintillator telescopes having different geometrical acceptances. The muon flux measured in the deepest part was found to be about two orders of magnitude lower than the surface flux. This preliminary measurement suggests the use of the site as a facility in which a low environmental background is required.

    Long-range angular correlations on the near and away side in p–Pb collisions at


    Event-shape engineering for inclusive spectra and elliptic flow in Pb-Pb collisions at √sNN = 2.76 TeV

    Peer reviewed

    Production of He-4 and anti-He-4 in Pb-Pb collisions at √sNN = 2.76 TeV at the LHC

    Results on the production of He-4 and anti-He-4 nuclei in Pb-Pb collisions at √sNN = 2.76 TeV in the rapidity range |y| < 1, using the ALICE detector, are presented in this paper. The rapidity densities corresponding to 0-10% central events are found to be dN/dy(He-4) = (0.8 ± 0.4 (stat) ± 0.3 (syst)) × 10⁻⁶ and dN/dy(anti-He-4) = (1.1 ± 0.4 (stat) ± 0.2 (syst)) × 10⁻⁶, respectively. This is in agreement with the statistical thermal model expectation assuming the same chemical freeze-out temperature (T_chem = 156 MeV) as for light hadrons. The measured ratio of anti-He-4/He-4 is 1.4 ± 0.8 (stat) ± 0.5 (syst). Peer reviewed
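    As a rough back-of-the-envelope illustration of the thermal-model expectation (not a calculation from the paper): at vanishing baryochemical potential, the yield of a nucleus is suppressed by roughly a Boltzmann factor exp(-m_N / T_chem) for each added nucleon, which makes A = 4 nuclei extremely rare:

```python
import math

# Assumed inputs for this sketch:
m_N = 0.938      # nucleon mass in GeV
T_chem = 0.156   # chemical freeze-out temperature in GeV, as quoted above

# Approximate yield suppression per additional nucleon in the
# statistical thermal model at mu_B ~ 0.
penalty = math.exp(-m_N / T_chem)
print(f"suppression per added nucleon ~ {penalty:.2e} (about 1/{1 / penalty:.0f})")
```

    A suppression of a few hundred per nucleon, compounded over four nucleons, is consistent with the O(10⁻⁶) rapidity densities quoted in the abstract.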

    Underlying Event measurements in pp collisions at √s = 0.9 and 7 TeV with the ALICE experiment at the LHC


    Service Asset and Configuration Management in ALICE Detector Control System

    ALICE (A Large Ion Collider Experiment) is one of the big LHC (Large Hadron Collider) detectors at CERN. It is composed of 19 sub-detectors constructed by different institutes participating in the project. Each of these subsystems has a dedicated control system based on the commercial SCADA package "WinCC Open Architecture" and on numerous other software and hardware components delivered by external vendors. The task of the central controls coordination team is to supervise integration, to provide shared services (e.g. database, gas monitoring, safety systems), and to manage the complex infrastructure (including over 1200 network devices and 270 VME and power supply crates) that is used by over 100 developers around the world. Given the scale of the control system, it is essential to ensure that reliable and accurate information about all the components required to deliver these services, along with the relationships between the assets, is properly stored and controlled. In this paper we present the techniques and tools that were implemented to achieve this goal, together with the experience gained from their use and plans for their improvement.

    The Evolution of the ALICE Detector Control System

    The ALICE Detector Control System (DCS) has provided its service since 2007. Its operation in the past years has proved that the initial design of the system fulfilled all expectations and allowed it to follow the evolution of the detectors and of the operational requirements. To minimize the impact of the human factor, many procedures have been optimized and new tools have been introduced, allowing the operator to supervise about 1 000 000 parameters from a single console. In parallel with the preparation for new runs after the LHC shutdown, prototyping has started for system extensions that shall be ready in 2018. New detectors will require new approaches to their control and configuration. The conditions data, currently collected after each run, will be provided continuously to a farm containing 100 000 CPU cores and tens of PB of storage. In this paper the DCS design, the deployed technologies, and the experience gained during the 7 years of operation are described, and the initial assumptions are compared with the current setup. The current status of the developments for the upgraded system, which will be put into operation in less than 3 years from now, is also described.

    ADAPOS: An architecture for publishing ALICE DCS conditions data

    ALICE Data Point Service (ADAPOS) is a software architecture being developed for the Run 3 period of the LHC, as part of the effort to transmit conditions data from the ALICE Detector Control System (DCS) to the Event Processing Network (EPN) for distributed processing. ADAPOS uses Distributed Information Management (DIM), 0MQ, and the ALICE Data Point Processing Framework (ADAPRO). DIM and 0MQ are multi-purpose application-level network protocols; DIM and ADAPRO are developed and maintained at CERN. ADAPRO is a multi-threaded application framework supporting remote control and real-time features such as thread affinities, records aligned with cache-line boundaries, and memory locking. ADAPOS and ADAPRO are written in C++14 using OSS tools, Pthreads, and the Linux API. The key processes of ADAPOS, Engine and Terminal, run on separate machines facing different networks. Devices connected to the DCS publish their state as DIM services. Engine receives updates to these services and converts them into a binary stream. Terminal receives the stream over 0MQ and maintains an image of the DCS state; it sends copies of the image, at regular intervals, over another 0MQ connection, to a readout process of the ALICE Data Acquisition system.
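    The Engine/Terminal data flow described above can be sketched as follows. This is a minimal, illustrative Python simulation under assumed names and a made-up fixed-size record format; the actual ADAPOS code is C++14, and the DIM and 0MQ transports are replaced here by in-process calls:

```python
import struct

def encode_update(service_id, value):
    """Engine side (sketch): pack one service update into a
    fixed-size binary record (little-endian uint32 + double)."""
    return struct.pack("<Id", service_id, value)

def decode_update(record):
    """Terminal side (sketch): unpack one record from the stream."""
    return struct.unpack("<Id", record)

class Terminal:
    """Maintains an image of the DCS state from incoming updates."""
    def __init__(self):
        self.state = {}

    def receive(self, record):
        service_id, value = decode_update(record)
        self.state[service_id] = value  # latest update wins

    def snapshot(self):
        # A copy of the image, as would be sent at regular
        # intervals over 0MQ to the readout process.
        return dict(self.state)

terminal = Terminal()
for sid, val in [(1, 3.3), (2, 25.0), (1, 3.4)]:  # simulated DIM updates
    terminal.receive(encode_update(sid, val))
print(terminal.snapshot())
```

    In the real system the two sides run on separate machines facing different networks, so the binary stream crosses a 0MQ connection rather than a function call; the state-image pattern is the same.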