40 research outputs found

    Imaging Atmospheric Cherenkov Telescopes pointing determination using the trajectories of the stars in the field of view

    38th International Cosmic Ray Conference (ICRC2023), 26 July - 3 August 2023, Nagoya, Japan. Mykhailo Dalchenko and Matthieu Heller on behalf of the CTA-LST Project. Luis del Peral Gochicoa, Jose Julio Lozano Bahilo and Maria Dolores Rodriguez Frias belong to the CTA-LST Project. We present a new approach to the pointing determination of Imaging Atmospheric Cherenkov Telescopes (IACTs). This method is universal and can be applied to any IACT with minor modifications. It uses the trajectories of the stars in the field of view of the IACT's main camera and requires neither dedicated auxiliary hardware nor a specific data-taking mode. The method consists of two parts: first, we reconstruct individual star positions as a function of time, taking into account the point spread function of the telescope; second, we perform a simultaneous fit of all reconstructed star trajectories using the orthogonal distance regression method. The method does not assume any particular star trajectories, does not require a long integration time, and can be applied to any IACT observation mode. The performance of the method is assessed with commissioning data of the Large-Sized Telescope prototype (LST-1), demonstrating the method's stability and the remarkable pointing performance of the LST-1 telescope.
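
The second stage described above (a simultaneous orthogonal-distance fit of star trajectories) can be illustrated for a single track. The sketch below is a minimal total-least-squares line fit in plain NumPy, using entirely hypothetical star-centroid data; it is not the CTA-LST implementation, which fits all trajectories simultaneously and folds in the point spread function.

```python
import numpy as np

def fit_line_odr(points):
    """Fit a 2-D line minimizing orthogonal (not vertical) distances.

    Returns (centroid, direction): the best-fit line passes through the
    centroid of the points along the unit direction vector given by the
    leading principal axis of the centered point cloud.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered cloud: the first right-singular vector is the
    # direction minimizing the summed squared orthogonal distances.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

# Hypothetical star centroids drifting across the camera plane (pixels):
track = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.0)]
c, d = fit_line_odr(track)
print(c, d[1] / d[0])  # slope of the fitted track, approximately 2
```

Unlike ordinary least squares, this treats both coordinates symmetrically, which matters when neither axis is an error-free independent variable, as is the case for reconstructed star positions.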

    A Measurement of B Meson Production and Lifetime Using D ℓ− Events in Z0 Decays

    A study of B meson decays into D ℓ− X final states is presented. In these events, neutral and charged D mesons originate predominantly from B− and B̄0 decays, respectively. The dilution of this correlation due to D** production has been taken into account. From 263700 hadronic Z0 decays collected in 1991 with the DELPHI detector at the LEP collider, 92 D0 → K−π+, 35 D+ → K−π+π+ and 61 D*+ → D0π+ followed by D0 → K−π+ or D0 → K−π+π+π− are found with an associated lepton of the same charge as the kaon. From the D0 ℓ− and D*+ ℓ− events, the probability f_d that a b quark hadronizes into a B− (or B̄0) meson is found to be 0.44 ± 0.08 ± 0.09, corresponding to a total (B_s + Λ_b) hadronization fraction of 0.12 +0.24 −0.12. By reconstructing the energy of each B meson, the b quark fragmentation is directly measured for the first time. The mean value of the B meson energy fraction is ⟨X_E(B)⟩ = 0.695 ± 0.015 (stat.) ± 0.029 (syst.). Reconstructing D-lepton vertices, the following B lifetimes are measured: τ(B) = 1.27 +0.22 −0.18 (stat.) ± 0.15 (syst.) ps, where B̄ → D0 ℓ− X; τ(B) = 1.18 +0.39 −0.27 (stat.) ± 0.15 (syst.) ps, where B̄ → D+ ℓ− X; τ(B) = 1.19 +0.25 −0.19 (stat.) ± 0.15 (syst.) ps, where B̄ → D*+ ℓ− X; and an average τ(B) = 1.23 +0.14 −0.13 (stat.) ± 0.15 (syst.) ps is found. Allowing for decays into D** ℓ− ν̄, the B+ and B0 lifetimes are: τ(B+) = 1.30 +0.33 −0.29 (stat.) ± 0.15 (syst. exp.) ± 0.05 (syst. D**) ps, τ(B0) = 1.17 +0.29 −0.23 (stat.) ± 0.15 (syst. exp.) ± 0.05 (syst. D**) ps, and τ(B+)/τ(B0) = 1.11 +0.51 −0.39 (stat.) ± 0.05 (syst. exp.) ± 0.10 (syst. D**).

    Measurement of the triple-gluon vertex from 4-jet events at LEP

    From the combined 1990 and 1991 data of the DELPHI experiment at LEP, 13057 4-jet events are obtained and used to determine the contribution of the triple-gluon vertex. The relevant variables are the generalized Nachtmann-Reiter angle θ*_NR and the opening angle of the two least energetic jets. A fit to their two-dimensional distribution yields C_A/C_F = 2.12 ± 0.35 and N_C/N_A = 0.46 ± 0.19, where C_A/C_F is the ratio of the coupling strength of the triple-gluon vertex to that of gluon bremsstrahlung from quarks, and N_C/N_A the ratio of the number of quark colours to the number of gluons. This constitutes a convincing model-independent proof of the existence of the triple-gluon vertex, since its contribution is directly proportional to C_A/C_F. The results are in agreement with the values expected from QCD: C_A/C_F = 2.25 and N_C/N_A = 3/8.

    Study of the electronic decay channel of the tau lepton at LEP with the DELPHI detector

    This thesis presents a study of the decays τ → e ν ν̄ identified after a selection of τ+τ− events produced in e+e− collisions at LEP. These collisions, predominantly mediated by a Z0 boson, offer a good laboratory for the study of weak-current parameters. Moreover, the decays of the tau lepton, which proceed via a W boson, broaden the scope of the study. The experimental observables, which give indirect access to the coupling constants, include the polarization asymmetry and the forward-backward polarization asymmetry. The branching ratio of the electronic decay of the τ lepton, (18.24 ± 0.28 ± 0.32)%, yields a measurement of the strong coupling constant α_s at the scale of the Z0 boson mass (0.118 +0.004 −0.006 ± 0.006) and a test of lepton universality in weak currents (g_τ/g_μ = 1.000 ± 0.014). The spectra of momenta and electromagnetic energy depositions, used as estimators of the electron energy, allow the polarization asymmetries to be measured (P_τ = −0.16 ± 0.09 ± 0.05 and A^FB_pol = −0.19 ± 0.09 ± 0.01), giving corresponding values for the effective sin²θ_W (0.230 ± 0.013 and 0.218 ± 0.015). Comment: Doctoral thesis in Spanish. 172 pages including the index, the introduction and the bibliography. A PostScript version is available at http://evalu0.ific.uv.es/lozano/phd_e.htm

    An Information Aggregation and Analytics System for ATLAS Frontier

    ATLAS event processing requires access to centralized database systems where information about calibrations, detector status and data-taking conditions is stored. This processing is done on more than 150 computing sites on a worldwide computing grid, which access the database through the Squid-Frontier system. Some processing workflows have been found to overload the Frontier system due to the Conditions data model currently in use, specifically because some Conditions data requests have low caching efficiency. The underlying cause is that requests which are non-identical as far as the cache is concerned actually retrieve a much smaller number of unique payloads. While ATLAS is undertaking an adiabatic transition, during the LHC Long Shutdown 2 and Run 3, from the current COOL Conditions data model to a new data model called CREST for Run 4, it is important to identify the problematic Conditions queries with low caching efficiency and work with the detector subsystems to improve the storage of such data within the current data model. For this purpose, ATLAS put together an information aggregation and analytics system. The system is based on data aggregated from the Squid-Frontier logs using the Elasticsearch technology. This paper describes the components of this analytics system, from the Flask/Celery-based server application to the user interface, and how Spark SQL functionalities are used to filter data for making plots, store the caching-efficiency results in an Elasticsearch database and, finally, deploy the package via a Docker container.
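
The caching-efficiency notion described above (many non-identical requests retrieving few unique payloads) can be sketched with a toy computation. The log records, the query-key convention and the helper below are hypothetical illustrations, not the actual Squid-Frontier log format or the ATLAS analytics code.

```python
from collections import defaultdict

# Hypothetical parsed log records: (query_key, payload_hash).
records = [
    ("folderA?run=1", "p1"),
    ("folderA?run=2", "p1"),   # distinct query, identical payload
    ("folderA?run=3", "p1"),
    ("folderB?run=1", "p2"),
]

def caching_efficiency(rows):
    """Unique payloads per distinct request, grouped by folder.

    Values far below 1 flag folders where requests that look different
    to the cache keep fetching the same underlying payload.
    """
    by_folder = defaultdict(lambda: (set(), set()))
    for query, payload in rows:
        folder = query.split("?", 1)[0]
        queries, payloads = by_folder[folder]
        queries.add(query)
        payloads.add(payload)
    return {f: len(p) / len(q) for f, (q, p) in by_folder.items()}

print(caching_efficiency(records))  # folderA -> 1/3, folderB -> 1.0
```

In this toy example folderA would be a candidate for restructuring within the data model, since three cache-distinct queries all resolve to one payload.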

    LST-1 observations of an enormous flare of BL Lacertae in 2021

    38th International Cosmic Ray Conference (ICRC2023), 26 July - 3 August 2023, Nagoya, Japan. Seiya Nozaki, Katsuaki Asano, Juan Escudero, Gabriel Emery and Chaitanya Priyadarshi on behalf of the CTA-LST Project. Luis del Peral Gochicoa, Jose Julio Lozano Bahilo and Maria Dolores Rodriguez Frias belong to the CTA-LST Project. The first LST prototype (LST-1) for the Cherenkov Telescope Array has been in its commissioning phase since 2018 and has already started scientific observations, with a low energy threshold of around a few tens of GeV. In 2021, LST-1 observed BL Lac following alerts based on multi-wavelength observations and detected prominent gamma-ray flares. In addition to the daily flux variability, LST-1 also detected sub-hour-scale intra-night variability, with the flux above 100 GeV reaching 3–4 times that of the Crab Nebula. In these proceedings, we report the analysis results of the LST-1 observations of BL Lac in 2021, focusing in particular on flux variability.

    Understanding the evolution of conditions data access through Frontier for the ATLAS Experiment

    The ATLAS Distributed Computing system uses the Frontier system to access the Conditions, Trigger, and Geometry database data stored in the Oracle Offline Database at CERN by means of the HTTP protocol. All ATLAS computing sites use Squid web proxies to cache the data, greatly reducing the load on the Frontier servers and the databases. One feature of the Frontier client is that, in the event of failure, it retries with different services. While this allows transient errors and scheduled maintenance to pass transparently, it does open the system up to cascading failures if the load is high enough. Throughout LHC Run 2 there has been an ever-increasing demand on the Frontier service, and there have been multiple incidents where parts of the service failed under high load. A significant improvement in the monitoring of the Frontier service was therefore required, both to identify problematic tasks, which could then be killed or throttled, and to identify failing site services, since the consequences of a cascading failure are much higher. This presentation describes the implementation and features of the monitoring system.
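
The retry-on-failure behaviour discussed above can be sketched in a few lines. The server names and the `attempt` callback below are hypothetical stand-ins, not the real Frontier client API; the point is that failover silently shifts load from a failing service to the next one, which is exactly what makes cascading failures possible under sustained high load.

```python
def fetch_with_failover(servers, request, attempt):
    """Try each service in order and return the first successful reply.

    A failing server contributes no reply, so its entire load lands on
    the remaining servers (the cascading-failure risk described above).
    """
    for url in servers:
        reply = attempt(url, request)
        if reply is not None:
            return url, reply
    raise RuntimeError("all services failed for %r" % (request,))

# Simulated backends: the first service is overloaded and always fails.
def attempt(url, request):
    return None if url == "http://frontier1" else "payload:" + request

print(fetch_with_failover(["http://frontier1", "http://frontier2"],
                          "geom", attempt))
# -> ('http://frontier2', 'payload:geom')
```

Monitoring which requests routinely fall through to backup services is one way to spot an incipient cascade before the remaining servers saturate.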
