    Time projection chambers for the T2K near detectors

    The T2K experiment is designed to study neutrino oscillation properties by directing a high-intensity neutrino beam produced at J-PARC in Tokai, Japan, towards the large Super-Kamiokande detector located 295 km away in Kamioka, Japan. The experiment includes a sophisticated near detector complex, 280 m downstream of the neutrino production target, in order to measure the properties of the neutrino beam and to better understand neutrino interactions at the energy scale below a few GeV. A key element of the near detectors is the ND280 tracker, consisting of two active scintillator-bar target systems surrounded by three large time projection chambers (TPCs) for charged-particle tracking. The data collected with the tracker are used to study charged-current neutrino interaction rates and kinematics prior to oscillation, in order to reduce uncertainties in the oscillation measurements made by the far detector. The tracker is surrounded by the former UA1/NOMAD dipole magnet, and the TPCs measure the charges, momenta, and particle types of charged particles passing through them. Novel features of the TPC design include its rectangular box layout constructed from composite panels, the use of bulk micromegas detectors for gas amplification, electronics readout based on a new ASIC, and a photoelectron calibration system. This paper describes the design and construction of the TPCs, the micromegas modules, the readout electronics, and the gas handling system, and presents the performance of the TPCs as deduced from measurements with particle beams, cosmic rays, and the calibration system.

    The Physics of the B Factories

    This work is on the Physics of the B Factories. Part A of this book contains a brief description of the SLAC and KEK B Factories as well as their detectors, BaBar and Belle, and data-taking-related issues. Part B discusses the tools and methods used by the experiments in order to obtain results. The results themselves can be found in Part C.

    Does capping social security harm health? A natural experiment in the UK

    In this paper, we examine the mental health effects of lowering the UK's benefit cap in 2016. This policy limits the total amount of social security a household with no one in full-time employment can receive. We treat the reduction in the cap as a natural policy experiment, comparing those at risk of being capped with those who were not, and examining the risk of experiencing poor mental health both before and after the cap was lowered. Drawing on data from ~900,000 individuals, we find that the prevalence of depression or anxiety among those at risk of being capped increased by 2.6 percentage points (95% confidence interval: 1.33–3.88) compared with those at a low risk of being capped. Capping social security may increase the risk of mental ill health and could have the unintended consequence of pushing out-of-work people even further away from the labour market.
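
    The design described above is, in effect, a difference-in-differences comparison: the change in prevalence among those at risk of being capped is set against the change among those at low risk, before and after the 2016 reduction. The sketch below is a minimal illustrative version of such an estimate; the column names, data file, and model form are assumptions for illustration, not the paper's actual specification.

        # Minimal difference-in-differences sketch. Column names and the model
        # form are assumptions for illustration, not the paper's specification.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("survey_extract.csv")  # hypothetical person-wave extract

        # gad_or_depression: 1 if screened positive for depression or anxiety
        # at_risk:           1 if the household was at risk of being capped
        # post:              1 for waves after the cap was lowered in 2016
        model = smf.ols("gad_or_depression ~ at_risk * post", data=df)
        result = model.fit(cov_type="cluster",
                           cov_kwds={"groups": df["household_id"]})

        # The at_risk:post coefficient is the difference-in-differences estimate;
        # multiply by 100 to express it in percentage points.
        print(result.params["at_risk:post"])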

    Repoman: A simple RESTful X.509 virtual machine image repository

    With the broader use of IaaS science clouds, the management of multiple Virtual Machine (VM) images is becoming increasingly daunting for the user. In a typical workflow, users work on a prototype VM, then clone it and upload it in preparation for building a virtual cluster of identical instances. We describe and benchmark a novel VM image repository (Repoman) which can be used to clone, update, manage, store, and distribute VM images to multiple clouds. Users authenticate against Repoman's simple REST API with X.509 grid proxy certificates. The lightweight Repoman CLI client tool has minimal Python dependencies and can be installed in seconds using standard Python tools. We show that Repoman removes the burden of image management from users while simplifying the deployment of user-specific virtual machines.
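
    As a rough sketch of the kind of interaction described above, the snippet below presents an X.509 grid proxy as a TLS client certificate to a REST image repository. The host name, endpoint paths, image names, and CA directory are placeholders, not Repoman's documented API.

        # Illustrative only: the base URL and endpoint paths are placeholders,
        # not Repoman's documented REST API.
        import os
        import requests

        proxy = "/tmp/x509up_u%d" % os.getuid()      # conventional grid proxy location
        base = "https://repoman.example.org/api"     # placeholder host

        # List available images, presenting the proxy as a client certificate.
        resp = requests.get(base + "/images", cert=proxy,
                            verify="/etc/grid-security/certificates")
        resp.raise_for_status()
        print(resp.json())

        # Upload a modified VM image back to the repository.
        with open("my-analysis-vm.img", "rb") as f:
            requests.put(base + "/images/my-analysis-vm", data=f, cert=proxy,
                         verify="/etc/grid-security/certificates").raise_for_status()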

    Simulation and user analysis of BaBar data in a distributed cloud

    We present a distributed cloud computing system that is being used for the simulation and analysis of data from the BaBar experiment. The clouds include academic and commercial computing sites across Canada and the United States that are utilized in a unified infrastructure. Users retrieve a virtual machine (VM) with pre-installed application code; they modify the VM for their analysis and store it in a repository. The users prepare their job scripts as they would in a standard batch environment and submit them to a Condor job scheduler. The job scripts contain a link to the VM required for the job. A separate component, called Cloud Scheduler, reads the job queue and boots the required VM on one of the available compute clouds. The system is able to utilize clouds configured with various cloud Infrastructure-as-a-Service software such as Nimbus, Eucalyptus, and Amazon EC2. We find that the analysis jobs are able to run with high efficiency even if the data are located at distant locations. We show that the distributed cloud system is an effective environment for user analysis and Monte Carlo simulation.
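
    To make the job-submission step concrete, a submit description in such a setup might look roughly like the sketch below. The custom +VMLoc attribute carrying the link to the user's VM image, the script names, and the repository URL are illustrative placeholders rather than the names this system actually defines.

        # Illustrative only: "+VMLoc", the URL and the script names are placeholders.
        import subprocess, textwrap

        submit = textwrap.dedent("""\
            universe   = vanilla
            executable = run_babar_analysis.sh
            arguments  = $(Process)
            output     = babar.$(Cluster).$(Process).out
            error      = babar.$(Cluster).$(Process).err
            log        = babar.log
            +VMLoc     = "http://repoman.example.org/api/images/my-analysis-vm"
            queue 100
            """)

        with open("babar.sub", "w") as f:
            f.write(submit)

        # Hand the description to the central Condor scheduler.
        subprocess.run(["condor_submit", "babar.sub"], check=True)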

    A batch system for HEP applications on a distributed IaaS cloud

    The emergence of academic and commercial Infrastructure-as-a-Service (IaaS) clouds is opening access to new resources for the HEP community. In this paper we describe a system we have developed for creating a single dynamic batch environment spanning multiple IaaS clouds of different types (e.g. Nimbus, OpenNebula, Amazon EC2). A HEP user interacting with the system submits a job description file with a pointer to their VM image. VM images can either be created by users directly or provided to them. We have created a new software component called Cloud Scheduler that detects waiting jobs and boots the required user VM on any one of the available cloud resources. As the user VMs appear, they are attached to the job queues of a central Condor job scheduler, which then submits the jobs to the VMs. The number of VMs available to the user is expanded and contracted dynamically depending on the number of user jobs. We present the motivation and design of the system, with particular emphasis on Cloud Scheduler. We show that the system provides the ability to exploit academic and commercial cloud sites in a transparent fashion.
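
    The behaviour described above amounts to a reconciliation loop: poll the job queue, boot VMs on whichever cloud still has capacity while jobs are waiting, and retire VMs once they go idle so the pool contracts again. The sketch below is a conceptual outline of such a loop using stand-in classes and a stubbed queue poll; it is not the actual Cloud Scheduler code.

        # Conceptual sketch of a Cloud-Scheduler-style loop. The Cloud and Vm
        # classes and poll_condor_queue() are stand-ins, not real interfaces.
        import time
        from dataclasses import dataclass, field

        @dataclass
        class Vm:
            image: str
            idle: bool = False

        @dataclass
        class Cloud:
            name: str
            capacity: int
            vms: list = field(default_factory=list)

            def has_capacity(self):
                return len(self.vms) < self.capacity

            def boot_vm(self, image):
                vm = Vm(image)            # Nimbus / OpenNebula / EC2 hidden behind one call
                self.vms.append(vm)
                return vm

        def poll_condor_queue():
            # Stand-in for querying the Condor scheduler: returns the VM image
            # link carried by each idle job's description file.
            return []

        def reconcile(clouds):
            # Boot one VM per waiting job on the first cloud with spare capacity.
            for image in poll_condor_queue():
                cloud = next((c for c in clouds if c.has_capacity()), None)
                if cloud is None:
                    break                 # every cloud is full; retry on the next pass
                cloud.boot_vm(image)
            # Retire idle VMs so the pool contracts again when jobs drain.
            for cloud in clouds:
                cloud.vms = [vm for vm in cloud.vms if not vm.idle]

        if __name__ == "__main__":
            clouds = [Cloud("nimbus-site", 20), Cloud("ec2-region", 50)]
            while True:
                reconcile(clouds)
                time.sleep(60)            # polling interval chosen arbitrarily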

    The T2K experiment

    The T2K experiment is a long-baseline neutrino oscillation experiment. Its main goal is to measure the last unknown lepton-sector mixing angle θ13 by observing νe appearance in a νμ beam. It also aims to make a precision measurement of the known oscillation parameters, Δm²23 and sin²2θ23, via νμ disappearance studies. Other goals of the experiment include various neutrino cross-section measurements and sterile neutrino searches. The experiment uses an intense proton beam generated by the J-PARC accelerator in Tokai, Japan, and is composed of a neutrino beamline, a near detector complex (ND280), and a far detector (Super-Kamiokande) located 295 km away from J-PARC. This paper provides a comprehensive review of the instrumentation aspects of the T2K experiment and a summary of the vital information for each subsystem.
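
    For context on why the disappearance channel constrains exactly these parameters, the standard two-flavour approximation of the muon-neutrino survival probability (a textbook formula, not quoted from the abstract) is:

        % Two-flavour approximation; L in km, E in GeV, \Delta m^2 in eV^2.
        P(\nu_\mu \to \nu_\mu) \;\simeq\; 1 - \sin^2 2\theta_{23}\,
            \sin^2\!\left(\frac{1.27\,\Delta m^{2}_{23}\, L}{E}\right)

    The depth of the disappearance dip measures sin²2θ23 and its position in L/E measures Δm²23, which is why the precision νμ disappearance measurement targets exactly that pair of parameters.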