
    The Muon Ionisation Cooling Experiment

    Outstanding areas of ambiguity within our present understanding of the nature and behaviour of neutrinos warrant the construction of a dedicated future facility capable of investigating the likely parameter space for the θ13 mixing angle and the Dirac CP-violating phase, and of clarifying the neutrino mass hierarchy. A number of potential discovery venues have been proposed, including the beta beam, superbeam and neutrino factory accelerator facilities. Of these, the neutrino factory significantly outperforms the others. A neutrino factory will deliver intense beams of 10^21 neutrinos per year, produced from muons decaying in storage rings. This specification, coupled with the constraint of the short muon lifetime, requires the inclusion of a novel cooling channel to reduce the phase space volume of the beam to within the acceptance of the acceleration system.

    Ionisation cooling is the only cooling technique viable within the lifetime of the muon; however, it has yet to be demonstrated in practice. In a full cooling channel, a muon beam traverses a periodic absorber and accelerator lattice consisting of low-Z absorbers enclosed by focusing coils and accelerating radio-frequency cavities. Energy loss in the absorbers reduces both transverse and longitudinal momentum; the latter is restored by the accelerating cavities, providing a net reduction in transverse momentum and consequently reducing the phase space volume of the muon beam.

    The Muon Ionisation Cooling Experiment (MICE), under construction at the ISIS synchrotron at the Rutherford Appleton Laboratory, seeks to provide both a first measurement and a systematic study of ionisation cooling, demonstrated within the context of a single-cell prototype of a cooling channel. The experiment will evolve incrementally toward its final configuration, with construction and scientific data-taking schedules proceeding in parallel. The stated goal of MICE is to measure a fractional change in emittance of order 10% to an error of 1%. This thesis constitutes research into several aspects of MICE: design and implementation of the MICE configuration database; determination of the statistical errors and alignment tolerances associated with cooling measurements made using MICE; simulations and data analysis studying the performance of the luminosity monitor; and a first analysis of MICE Step I data.

    A sophisticated information management solution based on a bi-temporal relational database and a web service suite has been designed, implemented and tested. This system will enable the experiment to record geometry, calibration and cabling information in addition to beamline settings (including, but not limited to, magnet and target settings) and alarm handler limits. This information is essential both to provide an experimental context to an analysis user studying the data at a later time and to experimenters seeking to reinstate previous settings. The database also allows corrections to be stored, for example to the geometry, whereby a later survey may clarify an incomplete description. The old and new geometries are both stored with reference to the same period of validity, indexed by the time at which they were added to the configuration database. This allows MICE users to recall, by default, the best-known geometry of the experiment at a given time, as well as the history of what was known about the geometry as required. Such functionality is two-dimensional in time, hence the choice of a bi-temporal database paradigm: the collaboration can run new analyses with the most up-to-date knowledge of the experimental configuration and can also repeat previous analyses which were based upon incomplete information (a minimal sketch of this pattern follows below).
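    The two independent time axes described above (the period a geometry was valid for, and the time the description was recorded) are the essence of any bi-temporal store. The following Python sketch illustrates only the lookup pattern such a configuration database needs; the class and field names are hypothetical, not the actual MICE schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch of a bi-temporal lookup, not the MICE implementation.
# Each record carries two independent time axes:
#   valid_from  - when the geometry was true of the experiment (validity time)
#   recorded_at - when the collaboration learnt it (record/transaction time)

@dataclass
class GeometryRecord:
    valid_from: datetime   # start of the period this geometry describes
    recorded_at: datetime  # when this description entered the database
    payload: str           # the geometry description itself

class BiTemporalStore:
    def __init__(self):
        self.records: list[GeometryRecord] = []

    def add(self, record: GeometryRecord):
        self.records.append(record)

    def geometry_at(self, valid_time, as_known_on=None):
        """Best-known geometry for `valid_time`. By default use everything
        recorded to date; pass `as_known_on` to reproduce an old analysis
        with only the knowledge available on that earlier date."""
        candidates = [r for r in self.records
                      if r.valid_from <= valid_time
                      and (as_known_on is None or r.recorded_at <= as_known_on)]
        # Latest validity period wins; within it, the newest correction wins.
        return max(candidates,
                   key=lambda r: (r.valid_from, r.recorded_at)).payload

store = BiTemporalStore()
store.add(GeometryRecord(datetime(2010, 1, 1), datetime(2010, 1, 1),
                         "geometry as first surveyed"))
# A later survey corrects the description of the *same* validity period:
store.add(GeometryRecord(datetime(2010, 1, 1), datetime(2010, 6, 1),
                         "geometry after corrective survey"))

run_time = datetime(2010, 3, 1)
print(store.geometry_at(run_time))                                    # corrected
print(store.geometry_at(run_time, as_known_on=datetime(2010, 2, 1)))  # original
```

    The second query reproduces what an analysis run in February 2010 would have seen, which is exactly the "repeat previous analyses" use case the abstract describes.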
    From Step III of MICE onwards, the phase space volume, or emittance, of the beam will be measured by two scintillating fibre trackers placed before and after the cooling cell. Since the two emittance measurements are made upon a similar sample of muons, the measurement errors are influenced by correlations. This thesis shows, through an empirical approach, that these correlations act to reduce the statistical error by an order of magnitude (a toy illustration of the effect is sketched after this abstract). In order to meet its goals, MICE must also quantify its systematic errors. A misalignment study is presented which investigates the sensitivity of the scintillating fibre trackers to translational and rotational misalignment. Tolerance limits of 1 mm and 0.3 mrad respectively allow MICE to meet the requirement that systematic errors due to misalignment of the trackers contribute no more than 10% of the total error.

    At present, MICE is in Step I of its development: building and commissioning a muon beamline which will be presented to a cooling channel in later stages of MICE. A luminosity monitor has been built and commissioned to measure particle production from the target, normalise the particle rate at all detectors, and verify the physics models which will be used throughout the lifetime of MICE and onwards through the development of a neutrino factory. Particle identification detectors have already been installed and allow particle species to be distinguished according to their time of flight. This has enabled a study of particle identification, particle momenta, and simulated and experimental beam profiles at each time-of-flight detector. The widths of the beam profiles are sensitive to multiple scattering and magnetic effects, providing an opportunity to quantify how well the simulations model these behaviours. The same comparison was used to detect offsets in the beam centre position, which can be caused by misalignments of the detectors or by relative misalignments of the magnets that skew the magnetic axis asymmetrically; these effects were quantified in this analysis. Particle identification, combined with the earlier statistical analysis, is used to show that the number of muons required to meet the statistical requirements of MICE can be produced within a realistic time frame for each beam configuration considered.
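    As a self-contained toy illustration of why the correlation helps (this is not the thesis analysis): when the emittance is measured on the same muons upstream and downstream, the two measurements share their sample fluctuations, which largely cancel in the fractional change. All numbers below are assumed toy values.

```python
import numpy as np

# Toy Monte Carlo, not the MICE analysis: compare the spread of the measured
# fractional emittance change when the downstream measurement is made on
# (a) the SAME muon sample as upstream (correlated, as in MICE), versus
# (b) an INDEPENDENT sample (uncorrelated).
rng = np.random.default_rng(42)
n_muons, n_experiments = 10_000, 1_000
cooling = 0.10   # assume ~10% reduction in transverse momentum spread
scatter = 0.05   # assumed extra spread from stochastic processes in the cell

def emittance(x, px):
    """RMS emittance of a 1D toy beam: sqrt of the determinant of the
    position/momentum covariance matrix."""
    return np.sqrt(np.linalg.det(np.cov(x, px)))

frac_same, frac_indep = [], []
for _ in range(n_experiments):
    x, px = rng.normal(0, 1, n_muons), rng.normal(0, 1, n_muons)
    eps_in = emittance(x, px)
    # Downstream measurement on the SAME muons, after cooling + scattering:
    px_out = px * (1 - cooling) + rng.normal(0, scatter, n_muons)
    frac_same.append((eps_in - emittance(x, px_out)) / eps_in)
    # Downstream measurement on an INDEPENDENT sample of cooled muons:
    x2, px2 = rng.normal(0, 1, n_muons), rng.normal(0, 1, n_muons)
    px2_out = px2 * (1 - cooling) + rng.normal(0, scatter, n_muons)
    frac_indep.append((eps_in - emittance(x2, px2_out)) / eps_in)

print("std of fractional change, same muons:       ", np.std(frac_same))
print("std of fractional change, independent muons:", np.std(frac_indep))
# The correlated (same-muon) case is far tighter: the sample fluctuations
# of the upstream and downstream emittances largely cancel in the ratio.
```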

    Hybrid algorithms for distributed constraint satisfaction.

    A Distributed Constraint Satisfaction Problem (DisCSP) is a CSP which is divided into several inter-related complex local problems, each assigned to a different agent. Thus, each agent has knowledge of the variables and corresponding domains of its local problem, together with the constraints relating its own variables (intra-agent constraints) and the constraints linking its local problem to other local problems (inter-agent constraints). DisCSPs have a variety of practical applications including, for example, meeting scheduling and sensor networks.

    Existing approaches to Distributed Constraint Satisfaction fall mainly into two families of algorithms: systematic search and local search. Systematic search algorithms are complete but may take exponential time. Local search algorithms often converge on a solution more quickly for large problems but are incomplete. Problem solving could therefore be improved through hybrid algorithms combining the completeness of systematic search with the speed of local search.

    This thesis explores hybrid (systematic + local search) algorithms which cooperate to solve DisCSPs. Three new hybrid approaches which combine both systematic and local search for Distributed Constraint Satisfaction are presented: (i) DisHyb; (ii) Multi-Hyb; and (iii) Multi-HDCS. These approaches use distributed local search to gather information about difficult variables and best values in the problem; distributed systematic search is then run with a variable and value ordering determined by the knowledge learnt through local search (a simplified sketch of this two-phase idea follows below). Two implementations of each of the three approaches are presented: (i) using penalties as the distributed local search strategy; and (ii) using breakout as the distributed local search strategy.

    The three approaches are evaluated on several problem classes. The empirical evaluation shows these distributed hybrid approaches to significantly outperform both systematic and local search DisCSP algorithms. DisHyb, Multi-Hyb and Multi-HDCS are shown to substantially speed up distributed problem solving, with distributed systematic search taking less time to run by using the information learnt by distributed local search. As a consequence, larger problems can now be solved in a more practical timeframe.
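    To make the two-phase idea concrete, here is a minimal, centralised Python sketch under stated assumptions: a penalty-based local search phase counts how often each variable is in conflict and remembers the best assignment seen, and a backtracking phase then branches on the most-penalised variables first, trying the remembered values first. This illustrates the general technique only; the actual DisHyb/Multi-Hyb/Multi-HDCS algorithms run these phases across cooperating agents, and all identifiers here are hypothetical.

```python
import operator
import random

def violated(assign, constraints):
    """Binary constraints: (x, y, pred) is violated when both ends are
    assigned and pred(assign[x], assign[y]) is False."""
    return [(x, y) for (x, y, pred) in constraints
            if x in assign and y in assign and not pred(assign[x], assign[y])]

def penalty_local_search(variables, domains, constraints, steps=2000):
    """Min-conflicts-style local search. Returns per-variable penalty counts
    (how often each variable was in conflict) and the best assignment seen,
    both later used to guide the systematic phase."""
    assign = {v: random.choice(domains[v]) for v in variables}
    penalties = {v: 0 for v in variables}
    best = dict(assign)
    for _ in range(steps):
        conflicts = violated(assign, constraints)
        if not conflicts:
            return penalties, assign            # solved during local search
        if len(conflicts) < len(violated(best, constraints)):
            best = dict(assign)
        v = random.choice(random.choice(conflicts))  # a conflicted variable
        penalties[v] += 1                       # mark it as "difficult"
        assign[v] = min(domains[v],
                        key=lambda val: len(violated({**assign, v: val},
                                                     constraints)))
    return penalties, best

def guided_backtracking(variables, domains, constraints, penalties, hints):
    """Complete search: most-penalised variables first, hinted values first."""
    order = sorted(variables, key=lambda v: -penalties[v])
    def extend(i, assign):
        if i == len(order):
            return dict(assign)
        v = order[i]
        # Try the value the local search phase found promising first.
        for val in sorted(domains[v], key=lambda x: x != hints.get(v)):
            assign[v] = val
            if not violated(assign, constraints):
                result = extend(i + 1, assign)
                if result is not None:
                    return result
            del assign[v]
        return None
    return extend(0, {})

# Tiny example: 3-colouring a triangle, expressed as a CSP.
variables = ["a", "b", "c"]
domains = {v: [0, 1, 2] for v in variables}
constraints = [("a", "b", operator.ne), ("b", "c", operator.ne),
               ("a", "c", operator.ne)]
pens, hints = penalty_local_search(variables, domains, constraints)
print(guided_backtracking(variables, domains, constraints, pens, hints))
```

    The design point mirrors the abstract: the local search phase is cheap and incomplete, but the knowledge it gathers (penalties and hints) lets the complete systematic phase explore the hardest variables first, so it backtracks less.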

    Starburst or AGN Dominance in Submillimetre-Luminous Candidate AGN?

    It is widely believed that ultraluminous infrared (IR) galaxy and active galactic nucleus (AGN) activity is triggered by galaxy interactions and merging, with the peak of activity occurring at z~2, where submillimetre galaxies are thousands of times more numerous than local ULIRGs. In this evolutionary picture, submillimetre galaxies (SMGs) would host an AGN, which would eventually grow a black hole (BH) powerful enough to blow off all of the gas and dust, leaving an optically luminous QSO. To probe this evolutionary sequence we have focussed on the 'missing link' sources, which demonstrate both strong starburst (SB) and AGN signatures, in order to determine whether the SB is the main power source even in those SMGs whose IRAC colours give evidence that an AGN is present. The best way to determine whether a dominant AGN is present is to look for its signatures in the mid-infrared with the Spitzer IRS, since even deep X-ray observations often fail to identify the presence of an AGN in heavily dust-obscured SMGs. We present the results of our audit of the energy balance between star formation and AGN within this special sub-population of SMGs, where the BH has grown appreciably and begun heating the dust emission.

    Comment: 2 pages, 1 figure. To appear in "Hunting for the Dark: The Hidden Side of Galaxy Formation", Malta, 19-23 Oct. 2009, eds. V.P. Debattista and C.C. Popescu, AIP Conf. Ser., in press.

    The Bankart repair illustrated in cross-section

    The Bankart repair for chronic anterior shoulder instability effectively addresses the pathologic components responsible for repeated dislocation or subluxation. However, contrary to popular belief, the Bankart repair does not precisely restore the premorbid anatomy. The capsule is reattached to the bony rim of the anteroinferior glenoid, deep and lateral to the torn cartilaginous labrum, thus excluding the labrum from the joint anteriorly. This was demonstrated by cross-sectional cadaver dissections performed to illustrate this complex surgical anatomy to orthopaedic residents in training. In addition, when correlated with double-contrast computerized axial tomography, we noted five predominant patterns of anatomical lesions which by common use have been collectively termed the "Bankart lesion". These are: 1) the rare "classic" Bankart lesion, in which the cartilaginous labrum and capsular origin are torn from the glenoid rim; 2) the capsule stripped from the scapular neck, with the labrum detached from the glenoid rim but remaining fixed to the overlying capsule; 3) the capsule stripped from the scapular neck and the labrum separated from the glenoid rim, but separately; 4) the labrum abraded away and no longer radiographically detectable; and 5) glenoid rim fracture.

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/66610/2/10.1177_036354658901700507.pd

    Time to Conversion of Hemi/Total Shoulder Arthroplasty to Reverse Total Shoulder Arthroplasty: A Systematic Review of Longevity and Factors Influencing Conversion

    Background: The primary purpose of this study was to determine the average time from hemiarthroplasty (HA) or total shoulder arthroplasty (TSA) to conversion to reverse total shoulder arthroplasty (RTSA). The secondary purpose was to determine the factors leading to conversion to RTSA.

    Methods: A review of the literature regarding the existing evidence for conversion of HA/TSA to RTSA was performed using the Cochrane Database of Systematic Reviews, the Cochrane Central Register of Controlled Trials, PubMed (1980-present), and MEDLINE (1980-present). Inclusion criteria were as follows: reporting of conversion of a HA or TSA to RTSA with a follow-up of greater than 24 months, English language, and human studies. Articles that did not report a time to conversion surgery were excluded.

    Results: One hundred studies were initially retrieved, with 3 meeting the inclusion criteria. The review included 99 patients (31 male, 68 female) with a mean age of 69 years (range: 67-73). The average follow-up was 35.8 months (range: 32.3-37.4). The weighted mean time to conversion of HA/TSA to RTSA was 36.8 months (a sketch of this computation follows the abstract). Rotator cuff failure was the indication for conversion in 66% of cases (65/99), while component loosening (glenoid or humeral stem) was the indication in 14% of cases (14/99).

    Conclusions: The time to conversion of HA/TSA to RTSA is reported to be 36.8 months on average. The most common indication for conversion to RTSA was rotator cuff failure, suggesting the importance of evaluating pre-operative rotator cuff integrity when performing a primary HA or TSA.
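    For clarity about what "weighted mean" means here, the following minimal Python sketch computes a study-size-weighted mean time to conversion. The per-study numbers are purely illustrative placeholders (the abstract does not report them), chosen only so the total is 99 patients and the result lands near the reported 36.8 months.

```python
# Illustrative only: the three (n, months) pairs below are hypothetical
# placeholders, NOT values from the reviewed studies; they merely show how
# a study-size-weighted mean time to conversion is computed.
studies = [
    (40, 34.0),  # (patients in study, mean time to conversion in months)
    (35, 38.0),
    (24, 39.7),
]

total_patients = sum(n for n, _ in studies)
weighted_mean = sum(n * t for n, t in studies) / total_patients
print(f"Weighted mean time to conversion: {weighted_mean:.1f} months")
```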