
    Modelling and Operational Effectiveness Evaluation for Integrated Gun and Missile Anti-aircraft System

    The integrated gun and missile air defence weapon system is a comprehensive air defence weapon system that emerged with the development of air-raid and counter-air-raid operations, and it is an important component of the air defence equipment system. The system combines two air defence weapons, medium-to-low-altitude surface-to-air missiles and small-calibre anti-aircraft guns, in a defined configuration. Together they exploit the missile's high kill probability at medium and high altitudes and the gun's intense low-altitude firepower and strong resistance to jamming, yielding better results in air defence operations. Arguing the development case for an air defence weapon system requires a scientific method combining qualitative and quantitative analysis: identifying gaps in the system's combat capability, analysing the key factors that drive its effectiveness, and from these deriving the priorities, development directions, and operational-use principles for air defence equipment. Research on integrated gun and missile air defence systems covers many aspects; this thesis focuses on the operational models and effectiveness-evaluation techniques for such systems. The thesis analyses...
    Degree: Master of Engineering. Department: Department of Automation (Systems Engineering), School of Information Science and Technology. Student ID: 20033102
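    One simple way to see why combining the two layers pays off quantitatively is a kill-probability composition. Below is a minimal sketch assuming the missile and gun engagements can be treated as independent layers; the independence assumption, the function name, and the probability values are ours for illustration, not taken from the thesis.

```python
# Minimal sketch of a layered-defence effectiveness figure, assuming the
# missile and gun engagements act as independent layers. The probability
# values are illustrative placeholders, not results from the thesis.

def combined_kill_probability(p_missile: float, p_gun: float) -> float:
    """P(kill) when the target must survive both layers to get through."""
    return 1.0 - (1.0 - p_missile) * (1.0 - p_gun)

# Missile layer strong at medium/high altitude, gun layer at low altitude:
print(combined_kill_probability(0.7, 0.5))  # ~0.85, better than either alone
```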

    Alignment of the ALICE Inner Tracking System with cosmic-ray tracks

    37 pages, 15 figures, revised version, accepted by JINST. ALICE (A Large Ion Collider Experiment) is the LHC (Large Hadron Collider) experiment devoted to investigating the strongly interacting matter created in nucleus-nucleus collisions at LHC energies. The ALICE Inner Tracking System (ITS) consists of six cylindrical layers of silicon detectors using three different technologies; in the outward direction: two layers of pixel detectors, and two layers each of drift and strip detectors. About 13,000 parameters must be determined in the spatial alignment of the 2198 sensor modules of the ITS. The target alignment precision is well below 10 μm in some cases (pixels). The sources of alignment information include survey measurements and reconstructed tracks from cosmic rays and from proton-proton collisions. The main track-based alignment method uses the Millepede global approach; an iterative local method was developed and used as well. We present the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays collected during summer 2008, with the ALICE solenoidal magnet switched off. Peer reviewed
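    The abstract names two track-based approaches: the Millepede global fit and an iterative local method. Below is a toy sketch of the local idea under invented one-dimensional geometry: fit straight cosmic tracks through misaligned planes, then update per-module offsets from the mean residuals. All names and numbers are ours, not from the paper.

```python
# Toy illustration of track-based alignment: straight cosmic tracks are
# fitted through misaligned detector planes, and per-module offsets are
# updated from the mean residuals (an "iterative local method"; the real
# Millepede fits track and alignment parameters in one global least
# squares). All geometry and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_modules, n_tracks = 6, 500
z = np.linspace(0.0, 1.0, n_modules)             # plane positions
true_offsets = rng.normal(0.0, 0.1, n_modules)   # unknown misalignments

# Simulate straight tracks y = a + b*z and their measured hit positions.
a = rng.normal(0.0, 1.0, n_tracks)
b = rng.normal(0.0, 1.0, n_tracks)
hits = (a[:, None] + b[:, None] * z[None, :]
        + true_offsets[None, :]
        + rng.normal(0.0, 0.01, (n_tracks, n_modules)))

offsets = np.zeros(n_modules)
for _ in range(10):
    corrected = hits - offsets[None, :]
    coef = np.polynomial.polynomial.polyfit(z, corrected.T, 1)
    model = coef[0][:, None] + coef[1][:, None] * z[None, :]
    offsets = (hits - model).mean(axis=0)

# Offsets are recovered only up to an overall shift and tilt: these
# "weak modes" are invisible to straight tracks, as in real alignment.
print(np.round(offsets, 3))
print(np.round(true_offsets, 3))
```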

    Transverse momentum spectra of charged particles in proton-proton collisions at $\sqrt{s} = 900$ GeV with ALICE at the LHC

    The inclusive charged-particle transverse momentum distribution is measured in proton-proton collisions at $\sqrt{s} = 900$ GeV at the LHC using the ALICE detector. The measurement is performed in the central pseudorapidity region ($|\eta| < 0.8$) over the transverse momentum range $0.15 < p_{\rm T} < 10$ GeV/$c$. The correlation between transverse momentum and particle multiplicity is also studied. Results are presented for inelastic (INEL) and non-single-diffractive (NSD) events. The average transverse momentum for $|\eta| < 0.8$ is $\langle p_{\rm T}\rangle_{\rm INEL} = 0.483 \pm 0.001$ (stat.) $\pm 0.007$ (syst.) GeV/$c$ and $\langle p_{\rm T}\rangle_{\rm NSD} = 0.489 \pm 0.001$ (stat.) $\pm 0.007$ (syst.) GeV/$c$, respectively. The data exhibit a slightly larger $\langle p_{\rm T}\rangle$ than measurements in wider pseudorapidity intervals. The results are compared to simulations with the Monte Carlo event generators PYTHIA and PHOJET. Comment: 20 pages, 8 figures, 2 tables, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/390
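    As a worked illustration of how a headline number like $\langle p_{\rm T}\rangle$ comes out of a spectrum: the mean of a binned distribution is the yield-weighted average of the bin centres. The sketch below uses invented bin contents, not ALICE data.

```python
# Minimal sketch: the mean of a binned pT spectrum is the yield-weighted
# average of the bin centres. Bin edges and contents are invented for
# illustration; they are not ALICE data.
import numpy as np

edges = np.array([0.15, 0.5, 1.0, 2.0, 5.0, 10.0])   # GeV/c bin edges
dn_dpt = np.array([120.0, 45.0, 9.0, 0.8, 0.02])     # toy dN/dpT per bin

centers = 0.5 * (edges[:-1] + edges[1:])
yields = dn_dpt * np.diff(edges)                      # entries per bin

mean_pt = np.sum(centers * yields) / np.sum(yields)
print(f"<pT> = {mean_pt:.3f} GeV/c")
```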

    The ALICE experiment at the CERN LHC

    ALICE (A Large Ion Collider Experiment) is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma at extreme values of energy density and temperature in nucleus-nucleus collisions. Besides running with Pb ions, the physics programme includes collisions with lighter ions, lower-energy running, and dedicated proton-nucleus runs. ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC detectors. The ALICE detector has been built by a collaboration currently including over 1000 physicists and engineers from 105 institutes in 30 countries. Its overall dimensions are 16 × 16 × 26 m³, with a total weight of approximately 10 000 t. The experiment consists of 18 different detector systems, each with its own specific technology choice and design constraints, driven both by the physics requirements and by the experimental conditions expected at the LHC. The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high momentum resolution as well as excellent Particle Identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for the LHC. This will allow for comprehensive studies of hadrons, electrons, muons, and photons produced in the collision of heavy nuclei. Most detector systems are scheduled to be installed and ready for data taking by mid-2008, when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition Radiation Detector (TRD), and Electromagnetic Calorimeter (EMCal). These detectors will be completed for the high-luminosity ion run expected in 2010. This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.

    The ALICE DAQ Online transient data storage system

    ALICE is a dedicated heavy-ion detector built to exploit the physics potential of nucleus-nucleus (lead-lead) interactions at LHC energies. Running in heavy-ion mode, the data rate from event building to permanent storage is expected to be around 1.25 GB/s. To continue data recording even in the event of hardware failures or connection problems, a large disk pool has been installed at the experiment's site as a buffering layer between the DAQ and the remote (~5 km) tape facility in the CERN Computing Centre. This Transient Data Storage (TDS) disk pool has to provide enough bandwidth to simultaneously absorb data from the event-building machines and move data to the tape facility; its aggregated bandwidth is expected to exceed 3 GB/s in mixed I/O traffic. Extensive tests have been carried out on various hardware and software solutions with the goal of building a common file space shared by ~60 clients, while still providing maximum bandwidth per client (~400 MB/s, 4 Gb/s Fibre Channel), fail-over safety, and redundancy. This paper presents the chosen hardware and software solution, the configuration of the TDS pool, and the various modes of operation in the ALICE DAQ framework. It also presents the results of the performance tests carried out during the last ALICE Data Challenge.
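    A back-of-envelope check of the buffering argument: at the quoted 1.25 GB/s ingest rate, the pool capacity sets how long a tape-link outage can be absorbed. The pool size below is an assumed round number, not the actual TDS capacity.

```python
# Back-of-envelope buffering check. The ingest rate is from the text; the
# pool capacity is an assumed round number, not the real TDS size.
ingest_gb_s = 1.25                    # GB/s from event building
pool_tb = 75.0                        # assumed disk-pool capacity

survivable_h = pool_tb * 1024 / ingest_gb_s / 3600
print(f"a {pool_tb:.0f} TB pool buffers ~{survivable_h:.0f} h of tape outage")
# With tape migration running concurrently, the pool must absorb writes and
# serve reads at once, hence the ~3 GB/s mixed-I/O bandwidth target.
```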

    Online processing in the ALICE DAQ: the detector algorithms

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Specific calibration tasks are performed regularly for each of the 18 ALICE sub-detectors in order to achieve the most accurate physics measurements. These procedures involve event analysis in a wide range of experimental conditions, covering various trigger types, data throughputs, electronics settings, and algorithms, both during short sub-detector standalone runs and during long global physics runs. A framework was designed to collect statistics and compute some of the calibration parameters directly online, using the resources of the Data Acquisition System (DAQ) and benefiting from its inherently parallel architecture to process events. This system has been used at the experimental area for one year and now includes more than 30 calibration routines in production. This paper describes the framework architecture and the synchronization mechanisms involved at the level of the ALICE Experiment Control System (ECS). The software libraries interfacing detector algorithms (DAs) to the online data flow, configuration database, experiment logbook, and offline system are reviewed. The test protocols followed to integrate and validate each sub-detector component are also discussed, including the automatic build system and validation procedures used to ensure smooth deployment. The offline post-processing and archiving of the DA results is covered in a separate paper.
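    As a hypothetical sketch of the shape of a detector algorithm (DA): consume events during a run, accumulate statistics, and emit a calibration result at end of run. The class and method names below are invented; the real DAs are programs built against the DATE online libraries.

```python
# Hypothetical sketch of a DA's lifecycle: per-event processing while the
# run is ongoing, then a calibration result at end of run. Names are
# invented; this is not the actual DA interface.

class PedestalDA:
    """Toy DA computing per-channel pedestals from standalone-run events."""

    def __init__(self, n_channels: int):
        self.sums = [0.0] * n_channels
        self.counts = [0] * n_channels

    def process_event(self, adc_samples: list[float]) -> None:
        # Called once per event while the run is ongoing.
        for ch, value in enumerate(adc_samples):
            self.sums[ch] += value
            self.counts[ch] += 1

    def end_of_run(self) -> list[float]:
        # The result would be shipped to the configuration database and
        # the offline system for post-processing and archiving.
        return [s / c if c else 0.0 for s, c in zip(self.sums, self.counts)]

da = PedestalDA(n_channels=4)
da.process_event([50.2, 49.8, 51.0, 50.5])
da.process_event([49.8, 50.2, 50.8, 50.1])
print(da.end_of_run())
```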

    The ALICE data quality monitoring

    ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), designed especially for the study of heavy-ion collisions. The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering of monitored data, their analysis by user-defined algorithms, and their visualization. This paper presents the final design, as well as the latest and upcoming features, of ALICE's DQM software, AMORE (Automatic MonitoRing Environment). It describes the challenges faced during its implementation, including performance issues, and how they were tested and handled, in particular by using a scalable and robust publish-subscribe architecture. We also review the ongoing and increasing adoption of this tool within the ALICE collaboration, and the measures taken to develop, in synergy with the respective sub-detector teams, efficient monitoring modules. The packaging and release procedure needed by such a distributed framework is also described. We finally give an overview of the wide range of uses people make of this framework and review our own experience, before and during the LHC start-up, of monitoring data quality on both the sub-detector and the DAQ side in a real-world and challenging environment.
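    The publish-subscribe choice mentioned above decouples the processes producing monitor objects from the clients displaying them: publishers never need to know how many consumers exist. Below is a generic toy skeleton of the pattern, not the AMORE API (which is C++/ROOT based).

```python
# Toy publish-subscribe hub illustrating the decoupling used by a DQM
# framework: agents publish monitor objects, clients subscribe by topic.
# This is a generic pattern sketch, not the AMORE API.
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Publishers never see their consumers; the broker fans out."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, payload: Any) -> None:
        for callback in self._subs[topic]:
            callback(payload)

broker = Broker()
# A GUI client subscribes to a monitor object published by a DQM agent.
broker.subscribe("TPC/clusters", lambda h: print("GUI update:", h))
broker.publish("TPC/clusters", {"entries": 1024, "mean": 42.0})
```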

    The ALICE online data storage system

    The ALICE (A Large Ion Collider Experiment) Data Acquisition (DAQ) system has the unprecedented requirement to sustain a very high-volume data stream between the ALICE detector and the Permanent Data Storage (PDS) system, which is used as the main data repository for event processing and offline computing. The key component for this task is the Transient Data Storage (TDS) system, a set of data storage elements with associated hardware and software components. It supports raw data collection; conversion of the data into a format suitable for subsequent high-level analysis; storage of the result using highly parallelized architectures; access via a cluster file system capable of creating high-speed partitions through its affinity feature; and transfer to the final destination via dedicated data links. We describe the methods and components used to validate, test, implement, operate, and monitor the ALICE online data storage system, and the way it has been used in the early days of commissioning and operation of the ALICE detector. We also introduce the developments needed from next year onward, when the requirements on the ALICE DAQ system will shift from those of the test and commissioning phase to those imposed by long data-taking periods alternating with the shorter validation and maintenance windows needed to operate the ALICE experiment adequately.
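    One way to picture the "highly parallelized" storage layer is round-robin dispatch of event data across several storage elements, so that no single volume must sustain the full stream. The element names and dispatch policy below are illustrative, not the actual TDS implementation.

```python
# Illustrative round-robin dispatch of event data across storage elements,
# so no single volume must sustain the full 1.25 GB/s stream. Element
# names and policy are invented, not the actual TDS implementation.
from itertools import cycle

storage_elements = cycle(["tds01", "tds02", "tds03", "tds04"])

for event_id in range(8):
    element = next(storage_elements)
    print(f"event {event_id} -> {element}")
```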

    Commissioning of the ALICE data acquisition system

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A flexible, large-bandwidth Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time foreseen per year for heavy ions, and to accommodate the very different requirements originating from the 18 sub-detectors. The Data Acquisition and Test Environment (DATE) is the software framework handling the data from the detector electronics up to mass storage. This paper reviews the DAQ software and hardware architecture, including the latest features of the final design, such as the handling of the numerous calibration procedures in a common framework. We also discuss the large-scale tests conducted on the real hardware to assess the standalone DAQ performance and its interfaces with the other online systems, and the extensive commissioning performed in order to be ready for cosmic-ray data taking, scheduled to start in November 2007. The test protocols followed to integrate and validate each sub-detector with the DAQ and trigger hardware, synchronized by the Experiment Control System, are described. Finally, we give an overview of the experiment logbook and some operational aspects of the deployment of our computing facilities. The implementation of a Transient Data Storage able to cope with the 1.25 GB/s recorded by the event-building machines, and the data quality monitoring framework, are covered in separate papers.
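    As a loose illustration of the dataflow being commissioned, a producer/consumer pipeline with a bounded queue mimics the back-pressure between detector readout and event building. Sizes, counts, and names are invented; this is not the DATE framework itself.

```python
# Toy producer/consumer pipeline: a bounded queue between "readout" and
# "event building" provides back-pressure, as in a real DAQ dataflow.
# All sizes and counts are invented; this is not the DATE framework.
import queue
import threading

events: queue.Queue = queue.Queue(maxsize=100)   # bounded: gives back-pressure
EVENT_SIZE_MB = 2.5                              # toy average event size

def readout(n_events: int) -> None:
    """Stand-in for sub-detector readout feeding the event builder."""
    for i in range(n_events):
        events.put(i)                            # blocks when the queue is full
    events.put(None)                             # end-of-run marker

threading.Thread(target=readout, args=(1000,), daemon=True).start()

built = 0
while events.get() is not None:                  # event-building/recording stub
    built += 1
print(f"built {built} events (~{built * EVENT_SIZE_MB:.0f} MB)")
```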