
    Track Reconstruction Performance in CMS

    The expected performance of track reconstruction with LHC events using the CMS silicon tracker is presented. Track finding and fitting is accomplished with Kalman Filter techniques that achieve efficiencies above 99% on single muons with pT > 1 GeV/c. Difficulties arise in the context of standard LHC events with a high density of charged particles, where the rate of fake combinatorial tracks is very large for low-pT tracks, and nuclear interactions in the tracker material reduce the tracking efficiency for charged hadrons. Recent improvements to the CMS track reconstruction now make it possible to efficiently reconstruct charged tracks with pT down to a few hundred MeV/c and as few as three crossed layers, with a very small fake fraction, by combining an optimal rejection of fake tracks with an iterative tracking procedure.
    Comment: 4 pages, 3 figures, proceedings of the 11th Topical Seminar on Innovative Particle and Radiation Detectors (IPRD08)
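    As an aside on the fitting machinery mentioned above, the sketch below shows a generic Kalman-filter predict/update step for a toy straight-line track. The state vector, noise matrices, and layer geometry are illustrative assumptions rather than the actual CMS tracker configuration; the per-hit chi-square it returns is the kind of quantity an iterative tracking procedure can use to reject fake tracks.

```python
# Minimal sketch of a Kalman-filter step as used in track fitting.
# All numbers below are toy values, not the CMS tracker configuration.
import numpy as np

def kalman_step(x, P, F, Q, H, R, m):
    """Propagate state x with covariance P to the next layer and update with measurement m."""
    # Prediction: transport the track parameters and inflate the covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q              # Q models process noise (e.g. multiple scattering)

    # Update: weigh the predicted state against the measured hit position
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    residual = m - H @ x_pred
    x_new = x_pred + K @ residual
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    chi2 = float(residual.T @ np.linalg.inv(S) @ residual)  # per-hit chi2, useful for fake rejection
    return x_new, P_new, chi2

# Toy example: straight-line track (position, slope) measured at successive layers
x = np.array([0.0, 0.1])                    # initial state: position, slope
P = np.diag([1.0, 0.1])                     # loose initial covariance
F = np.array([[1.0, 1.0], [0.0, 1.0]])      # transport over unit layer spacing
Q = np.diag([1e-4, 1e-4])                   # small process noise
H = np.array([[1.0, 0.0]])                  # only the position is measured
R = np.array([[0.01]])                      # hit resolution

for hit in [0.12, 0.21, 0.33, 0.41]:
    x, P, chi2 = kalman_step(x, P, F, Q, H, R, np.array([hit]))
print("fitted state:", x)
```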

    Kalman Filter Track Fits and Track Breakpoint Analysis

    We give an overview of track fitting using the Kalman filter method in the NOMAD detector at CERN, and emphasize how the wealth of by-product information can be used to analyze track breakpoints (discontinuities in track parameters caused by scattering, decay, etc.). After reviewing how this information has been previously exploited by others, we describe extensions which add power to breakpoint detection and characterization. We show how complete fits to the entire track, with breakpoint parameters added, can easily be obtained from the information of the unbroken fits. Tests inspired by the Fisher F-test can then be used to judge breakpoints. Signed quantities (such as the change in momentum at the breakpoint) can supplement unsigned quantities such as the various chi-squares. We illustrate the method with electrons from real data, and with Monte Carlo simulations of pion decays.
    Comment: 27 pages including 10 figures. To appear in NI
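    To illustrate the F-test idea in a toy form (the simulated data, the straight-line fit model, and the hit resolution below are invented for illustration, not the NOMAD setup), one can compare the chi-square of a single unbroken fit with that of two independent fits on either side of a candidate breakpoint:

```python
# Toy F-test for a track breakpoint: compare a single straight-line fit
# against two independent fits on either side of a candidate break.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
z = np.linspace(0.0, 10.0, 40)
# Simulated track: the slope changes at z = 5 (e.g. a scattering "kink")
y = np.where(z < 5.0, 0.10 * z, 0.5 + 0.20 * (z - 5.0)) + rng.normal(0.0, 0.02, z.size)

def chi2_linear(z, y, sigma=0.02):
    """Least-squares straight-line fit; return the chi-square."""
    coeffs = np.polyfit(z, y, 1)
    return np.sum(((y - np.polyval(coeffs, z)) / sigma) ** 2)

chi2_unbroken = chi2_linear(z, y)                 # one fit, 2 parameters
mask = z < 5.0
chi2_broken = chi2_linear(z[mask], y[mask]) + chi2_linear(z[~mask], y[~mask])  # 4 parameters

extra_params = 2
ndf_broken = z.size - 4
F = ((chi2_unbroken - chi2_broken) / extra_params) / (chi2_broken / ndf_broken)
p_value = f_dist.sf(F, extra_params, ndf_broken)
print(f"F = {F:.1f}, p-value = {p_value:.3g}")    # small p-value -> evidence for a breakpoint
```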

    Note on the Origin of the Highest Energy Cosmic Rays

    In this note we argue that the galactic magnetic field model chosen by E.-J. Ahn, G. Medina-Tanco, P.L. Biermann and T. Stanev in their paper discussing the origin of the highest energy cosmic rays is alone responsible for the focusing of positive particles towards the North galactic pole. We discuss the validity of this model, in particular in terms of field reversals and radial extensions. We conclude that with such a model one cannot retrieve any directional information from the observed directions of the cosmic rays. In particular, one cannot identify point sources at least up to energies of about 200 EeV. Therefore the apparent clustering of the back-traced highest energy cosmic rays observed to date cannot be interpreted as evidence for a point source, nor for the identification of M87, which happens to lie close to the North galactic pole, as being such a source.
    Comment: 3 pages, 2 figures

    Evaluation of the Primary Energy of UHE Photon-induced Atmospheric Showers from Ground Array Measurements

    A photon-induced shower at $E_{prim}\ge 10^{18}$ eV exhibits very specific features and differs from a hadronic one. At such energies, the LPM effect delays on average the first interactions of the photon in the atmosphere and hence slows down the whole shower development. Photon showers also have a smaller muonic content than hadronic ones. The response of a surface detector such as that of the Auger Observatory to these specific showers is therefore different and has to be accounted for so that potential photon candidates can be reconstructed correctly. The energy reconstruction in particular has to be adapted to the late development of photon showers. We propose in this article a method for reconstructing the energy of photon showers with a surface detector. The key feature of this method is that it relies explicitly on the development stage of the shower. This approach leads to very satisfactory results ($\simeq 20\%$). At even higher energies ($5\times 10^{19}$ eV and above) the probability for the photon to convert into an $e^+e^-$ pair in the geomagnetic field becomes non-negligible and requires a different function to evaluate the energy with the proposed method. We propose several approaches to deal with this issue in the context of establishing an upper bound on the photon fraction in UHECR.
    Comment: 10 pages
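    A rough sketch of what an energy estimator that relies explicitly on the development stage might look like is given below. The functional form, the constants, and the hypothetical photon_energy_estimate helper are illustrative assumptions, not the calibration derived in the paper.

```python
# Hedged sketch of an energy estimator that depends explicitly on the shower's
# development stage, here parameterised by the depth DX between shower maximum
# and the ground. Functional form and constants are purely illustrative.
import numpy as np

def photon_energy_estimate(s1000, dx_g_cm2, e_ref=1e18, s_ref=20.0, beta=1.05, l_att=500.0):
    """Convert the signal at 1000 m (s1000, in VEM) to a primary energy in eV.

    dx_g_cm2 : X_ground - X_max in g/cm^2; late-developing showers have small dx.
    The attenuation-like factor corrects s1000 back to a reference development stage.
    """
    stage_correction = np.exp(-dx_g_cm2 / l_att)   # toy dependence on shower stage
    s_corrected = s1000 / stage_correction
    return e_ref * (s_corrected / s_ref) ** (1.0 / beta)

# Example: the same s1000 yields different energies for showers at different stages
for dx in (200.0, 400.0, 600.0):
    print(dx, f"{photon_energy_estimate(30.0, dx):.2e} eV")
```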

    The Concurrent Track Evolution Algorithm: Extension for Track Finding in the Inhomogeneous Magnetic Field of the HERA-B Spectrometer

    The Concurrent Track Evolution method, which was introduced in a previous paper (DESY 97-054), has been further explored by applying it to the propagation of track candidates into an inhomogeneous magnetic-field volume equipped with tracking detectors, as is typical for forward B spectrometers like HERA-B or LHCb. Compared to the field-free case, the method was extended to three-dimensional propagation, with special measures necessary to achieve fast transport in the presence of a rapidly varying magnetic field. The performance of the method is tested on HERA-B Monte Carlo events with full detector simulation and a realistic spectrometer geometry.
    Comment: 26 pages (LaTeX), 11 figures (Postscript)
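    As an illustration of the kind of transport step such a method needs in an inhomogeneous field, here is a minimal fourth-order Runge-Kutta propagation of a track candidate. The toy field map and the unit bookkeeping are assumptions, not the HERA-B field parameterisation.

```python
# Minimal Runge-Kutta propagation of a charged-track candidate through an
# inhomogeneous magnetic field. The field map below is a made-up placeholder.
import numpy as np

def b_field(pos):
    """Toy inhomogeneous field (tesla): mostly B_y, falling off along z."""
    return np.array([0.0, 0.85 * np.exp(-pos[2] / 5.0), 0.0])

def lorentz_derivative(state, q_over_p):
    """d(state)/ds for state = (x, y, z, tx, ty, tz) with unit direction t."""
    pos, t = state[:3], state[3:]
    k = 0.299792458 * q_over_p                 # GeV, tesla, metre units
    dt = k * np.cross(t, b_field(pos))
    return np.concatenate([t, dt])

def rk4_step(state, q_over_p, h):
    """Classic fourth-order Runge-Kutta step of length h (metres) along the track."""
    k1 = lorentz_derivative(state, q_over_p)
    k2 = lorentz_derivative(state + 0.5 * h * k1, q_over_p)
    k3 = lorentz_derivative(state + 0.5 * h * k2, q_over_p)
    k4 = lorentz_derivative(state + h * k3, q_over_p)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Propagate a 5 GeV/c positive track through a few steps of the toy field
state = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])   # start at the origin, along +z
for _ in range(50):
    state = rk4_step(state, q_over_p=+1.0 / 5.0, h=0.1)
print("position after 5 m of path length:", state[:3])
```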

    The DELPHI Silicon Tracker in the global pattern recognition

    ALEPH and DELPHI were the first experiments to operate a silicon vertex detector at LEP. During the past 10 years of data taking the DELPHI Silicon Tracker was upgraded three times, both to follow the different tracking requirements of LEP 1 and LEP 2 and to improve the tracking performance. Several development steps in the pattern recognition software were taken in order to understand and fully exploit the silicon tracker information. This article gives an overview of the final algorithms and concepts of the track reconstruction using the Silicon Tracker in DELPHI.
    Comment: Talk given at the 8th International Workshop on Vertex Detectors, Vertex'99, Texel, The Netherlands

    Current Performance of the SLD VXD3

    During 1996, the SLD collaboration completed construction and began operation of a new charge-coupled device (CCD) vertex detector (VXD3). Since then, its performance has been studied in detail and a new topological vertexing technique has been developed. In this paper, we discuss the design of VXD3, the procedures for aligning it, and the tracking and vertexing improvements that have led to its world-record performance.
    Comment: 17 pages LaTeX including 10 figures, to appear in Proceedings of the Vertex99 Workshop

    Layered water Cherenkov detector for the study of ultra high energy cosmic rays

    We present a new design for the water Cherenkov detectors in use at various cosmic ray observatories. This novel design can provide a significant improvement in the independent measurement of the muonic and electromagnetic components of extensive air showers. From such multi-component data an event-by-event classification of the primary cosmic ray mass becomes possible. According to popular hadronic interaction models, such as EPOS-LHC or QGSJetII-04, the discriminating power between iron and hydrogen primaries reaches Fisher values of $\sim 2$ or above for energies in excess of $10^{19}$ eV with a detector array layout similar to that of the Pierre Auger Observatory.
    Comment: 17 pages, 15 figures, submitted to Nuclear Instruments and Methods
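    For context on the quoted Fisher values, the sketch below computes a Fisher linear discriminant and its separation for two toy Gaussian samples standing in for the muonic and electromagnetic observables; all numbers are invented for illustration, not taken from the simulated layered-detector signals of the paper.

```python
# Sketch of a Fisher linear discriminant and its separation ("Fisher value")
# between two toy samples standing in for proton- and iron-like showers.
import numpy as np

def fisher_discriminant(sample_a, sample_b):
    """Return the Fisher axis and the separation between two samples."""
    mean_a, mean_b = sample_a.mean(axis=0), sample_b.mean(axis=0)
    s_within = np.cov(sample_a, rowvar=False) + np.cov(sample_b, rowvar=False)
    w = np.linalg.solve(s_within, mean_b - mean_a)          # Fisher axis
    proj_a, proj_b = sample_a @ w, sample_b @ w
    separation = abs(proj_a.mean() - proj_b.mean()) / np.sqrt(
        0.5 * (proj_a.var() + proj_b.var()))
    return w, separation

rng = np.random.default_rng(1)
# Toy (muonic signal, electromagnetic signal) pairs for the two primaries
proton = rng.multivariate_normal([1.0, 1.0], [[0.04, 0.01], [0.01, 0.04]], size=5000)
iron = rng.multivariate_normal([1.4, 0.9], [[0.04, 0.01], [0.01, 0.04]], size=5000)
axis, sep = fisher_discriminant(proton, iron)
print("Fisher separation:", round(sep, 2))
```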

    A coverage independent method to analyze large scale anisotropies

    The arrival time distribution of cosmic ray events is well suited to extracting information on sky anisotropies. For an experiment with nearly constant exposure, the frequency resolution one can achieve is given by the inverse of the time $T$ during which the data were recorded. For $T$ larger than one calendar year the resolution becomes sufficient to resolve the sidereal and diurnal frequencies. Using a Fourier expansion in a modified time parameter, we show in this note that one can accurately extract sidereal modulations without knowledge of the experimental coverage. This procedure also gives the full frequency pattern of the event sample under study, which contains important information about possible systematics entering the sidereal analysis. We also show how this method allows those systematics to be corrected for. Finally, we show that a two-dimensional analysis, in the form of a spherical harmonic ($Y_l^m$) decomposition, can be performed under the same conditions for all $m\ne 0$.
    Comment: 8 pages, 6 figures
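    A toy version of a first-harmonic (Rayleigh-style) Fourier analysis of event arrival times at a trial frequency is sketched below; the simulated event sample and the injected 1% sidereal modulation are assumptions, and the modified-time construction of the paper is not reproduced.

```python
# Toy first-harmonic analysis of event arrival times at a trial frequency.
import numpy as np

SIDEREAL_FREQ = 1.0 / 86164.0905     # Hz, one cycle per sidereal day

def first_harmonic(times, freq):
    """Amplitude and phase of the first harmonic of the event rate at a trial frequency."""
    phases = 2.0 * np.pi * freq * times
    a = 2.0 * np.mean(np.cos(phases))
    b = 2.0 * np.mean(np.sin(phases))
    return np.hypot(a, b), np.arctan2(b, a)

rng = np.random.default_rng(2)
n_events = 200_000
t = rng.uniform(0.0, 3.15e7, n_events)       # roughly one year of uniform exposure
# Inject a 1% sidereal modulation by rejection sampling
keep = rng.uniform(size=n_events) < 0.5 * (1.0 + 0.01 * np.cos(2 * np.pi * SIDEREAL_FREQ * t))
amp, phase = first_harmonic(t[keep], SIDEREAL_FREQ)
print(f"recovered amplitude ~ {amp:.3f}, phase = {phase:.2f} rad")
```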

    Dethinning Extensive Air Shower Simulations

    We describe a method for restoring information lost during statistical thinning in extensive air shower simulations. By converting weighted particles from thinned simulations into swarms of particles with similar characteristics, we obtain a result that is essentially identical to the thinned shower and very similar to non-thinned simulations of showers. We call this method dethinning. Using non-thinned showers on a large scale is impossible because of unrealistic CPU time requirements, but with thinned showers that have been dethinned it is possible to carry out large-scale simulation studies of the detector response for ultra-high energy cosmic ray surface arrays. The dethinning method is described in detail and comparisons are presented with the parent thinned showers and with non-thinned showers.
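    A hedged illustration of the core dethinning operation is given below: one weighted particle is replaced by roughly weight-many unit-weight clones with slightly smeared positions and times. The particle record and the smearing widths are simplified stand-ins for what a full shower simulation provides.

```python
# Illustrative dethinning step: replace a weighted particle from a thinned
# simulation by a swarm of unit-weight particles with similar characteristics.
import numpy as np

rng = np.random.default_rng(3)

def dethin(particle, pos_sigma=5.0, time_sigma=10.0):
    """Replace a weighted particle dict with a list of unit-weight clones."""
    w = particle["weight"]
    n = int(w) + (rng.uniform() < w - int(w))          # preserve the weight on average
    clones = []
    for _ in range(n):
        clones.append({
            "x": particle["x"] + rng.normal(0.0, pos_sigma),   # metres
            "y": particle["y"] + rng.normal(0.0, pos_sigma),
            "t": particle["t"] + rng.normal(0.0, time_sigma),  # nanoseconds
            "energy": particle["energy"],
            "weight": 1.0,
        })
    return clones

thinned_particle = {"x": 850.0, "y": -120.0, "t": 3200.0, "energy": 0.4, "weight": 37.6}
swarm = dethin(thinned_particle)
print(len(swarm), "unit-weight particles from one weighted particle")
```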