
    Dual cathode system for electron beam instruments

    An electron beam source having a single electron optical axis is provided with two coplanar cathodes equally spaced on opposite sides of the electron optical axis. A switch permits selecting either cathode, and a deflection system composed of electromagnets, each with separate pole pieces equally spaced from the plane of the cathodes and the electron optical axis, first deflects the electron beam from the selected cathode toward the electron optical axis, and then in the opposite direction into convergence with the electron optical axis. The result is that the electron beam from either selected cathode undergoes a sigmoid deflection in two opposite directions, like the letter S, with the sigmoid deflection from one cathode being a mirror image of that from the other.

    Supersymmetry Without Prejudice at the 7 TeV LHC

    We investigate the model-independent nature of Supersymmetry search strategies at the 7 TeV LHC. To this end, we study the missing-transverse-energy-based searches developed by the ATLAS Collaboration that were essentially designed for mSUGRA. We simulate the signals for ~71k models in the 19-dimensional parameter space of the pMSSM. These models have been found to satisfy existing experimental and theoretical constraints and provide insight into general features of the MSSM without reference to a particular SUSY breaking scenario or any other assumptions at the GUT scale. Using backgrounds generated by ATLAS, we find that imprecise knowledge of these estimated backgrounds is a limiting factor in the potential discovery of these models and that some channels become systematics-limited at larger luminosities. As this systematic error is varied between 20% and 100%, roughly half to 90% of this model sample is observable with significance S>5 for 1 fb^{-1} of integrated luminosity. We then examine the model characteristics for the cases which cannot be discovered and find several contributing factors. We find that a blanket statement that squarks and gluinos are excluded with masses below a specific value cannot be made. We next explore possible modifications to the kinematic cuts in these analyses that may improve the pMSSM model coverage. Lastly, we examine the implications of a null search at the 7 TeV LHC in terms of the degree of fine-tuning that would be present in this model set and for sparticle production at the 500 GeV and 1 TeV Linear Collider. Comment: 51 pages, 26 figures
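
    A back-of-the-envelope sketch of why a channel becomes systematics-limited: a commonly used approximate significance adds the fractional background systematic in quadrature with the statistical fluctuation. The signal and background counts below are made-up placeholders, not the ATLAS prescription.

```python
import math

def significance(s, b, sys_frac):
    """Approximate significance with a fractional systematic uncertainty
    on the expected background: S = s / sqrt(b + (sys_frac * b)**2)."""
    return s / math.sqrt(b + (sys_frac * b) ** 2)

# Placeholder yields for one missing-energy channel at 1 fb^-1 (illustrative only).
s, b = 40.0, 100.0
for frac in (0.2, 0.5, 1.0):
    print(f"background systematic {frac:.0%}: S = {significance(s, b, frac):.2f}")

# Scaling s and b with luminosity L, the statistical term sqrt(b) grows like
# sqrt(L) while the systematic term sys_frac*b grows like L, so S plateaus
# near s/(sys_frac*b): beyond that point, more luminosity does not help.
```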

    Criteria for the use of omics-based predictors in clinical trials.

    The US National Cancer Institute (NCI), in collaboration with scientists representing multiple areas of expertise relevant to 'omics'-based test development, has developed a checklist of criteria that can be used to determine the readiness of omics-based tests for guiding patient care in clinical trials. The checklist criteria cover issues relating to specimens, assays, mathematical modelling, clinical trial design, and ethical, legal and regulatory aspects. Funding bodies and journals are encouraged to consider the checklist, which they may find useful for assessing study quality and evidence strength. The checklist will be used to evaluate proposals for NCI-sponsored clinical trials in which omics tests will be used to guide therapy.

    The Impact of Operation Bushmaster on Medical Student Decision-making in a High-Stress, Operational Environment.

    INTRODUCTION: Operation Bushmaster is a high-fidelity military medical field practicum for fourth-year medical students at the Uniformed Services University. During Operation Bushmaster, students treat live-actor and mannequin-based simulated patients in wartime scenarios throughout the five-day practicum. This study explored the impact of participating in Operation Bushmaster on students' decision-making in a high-stress, operational environment, a crucial aspect of their future role as military medical officers. MATERIALS AND METHODS: A panel of emergency medicine physician experts used a modified Delphi technique to develop a rubric for evaluating the participants' decision-making abilities under stress. The participants' decision-making was assessed before and after either participating in Operation Bushmaster (control group) or completing asynchronous coursework (experimental group). A paired-samples t-test was conducted to detect any difference between the means of the participants' pre- and posttest scores. This study was approved by the Institutional Review Board at the Uniformed Services University (#21-13079). RESULTS: A significant difference was detected between the pre- and posttest scores of students who attended Operation Bushmaster (P < .001), while there was no significant difference in the pre- and posttest scores of students who completed online, asynchronous coursework (P = .554). CONCLUSION: Participating in Operation Bushmaster significantly improved the control group participants' medical decision-making under stress. The results of this study confirm the effectiveness of high-fidelity simulation-based education for teaching decision-making skills to military medical students.
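
    For readers less familiar with the statistic, a paired-samples t-test compares each student's pre- and posttest scores directly. A minimal sketch using SciPy, with invented rubric scores rather than the study's data:

```python
import numpy as np
from scipy import stats

# Invented pre/post rubric scores for eight students, illustration only.
pre = np.array([12, 15, 11, 14, 13, 16, 10, 12])
post = np.array([16, 18, 14, 17, 15, 19, 13, 15])

# Paired-samples t-test: tests whether the mean per-student difference
# (post - pre) differs from zero.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```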

    Comparing Competing Theories on the Causes of Mandate Perceptions

    The discussion of presidential mandates is as certain as a presidential election itself. Journalists inevitably discuss whether the president-elect has a popular mandate. Because they see elections as too complex to allow the public to send a unitary signal, political scientists are more skeptical of mandates. Mandates, however, have received new attention from scholars asking whether perceptions of a mandate arise and lead representatives to act as if voters had sent a policy directive. Two explanations have emerged to account for why elected officials might react to such perceptions. One focuses on the president's strategic decision to declare a mandate; the second on how members of Congress read signals of changing preferences in the electorate from their own election results. We test these competing views to see which more accurately explains how members of Congress act in support of a perceived mandate. The results indicate that members respond more to messages about changing preferences than to the president's mandate declaration.

    K-corrections and spectral templates of Type Ia supernovae

    With the advent of large dedicated Type Ia supernova (SN Ia) surveys, K-corrections of SNe Ia and their uncertainties have become especially important in the determination of cosmological parameters. While K-corrections are largely driven by SN Ia broad-band colors, it is shown here that the diversity in spectral features of SNe Ia can also be important. For an individual observation, the statistical errors from the inhomogeneity in spectral features range from 0.01 mag (where the observed and rest-frame filters are aligned) to 0.04 mag (where the observed and rest-frame filters are misaligned). To minimize the systematic errors caused by an assumed SN Ia spectral energy distribution (SED), we outline a prescription for deriving a mean spectral template time series which incorporates a large and heterogeneous sample of observed spectra. We then remove the effects of broad-band colors and measure the remaining uncertainties in the K-corrections associated with the diversity in spectral features. Finally, we present a template spectroscopic sequence near maximum light for further improvement of the K-correction estimate. A library of ~600 observed spectra of ~100 SNe Ia from heterogeneous sources is used for the analysis. Comment: 40 pages, 14 figures, accepted for publication in ApJ
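
    To make the quantity concrete, here is a minimal numerical sketch of a single-band K-correction in its energy-counting form, with zero-point terms omitted; the Gaussian SED and boxcar filter are toy placeholders, not the spectral template derived in the paper. Cross-band K-corrections of the kind used for high-redshift SNe Ia additionally include the zero-point difference between the observed and rest-frame filters.

```python
import numpy as np

def integrate(y, x):
    """Trapezoidal integration, written out to avoid NumPy version differences."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def k_correction(wave, flux, filt_wave, filt_trans, z):
    """Single-band K-correction in magnitudes (zero points omitted):
    K = 2.5*log10(1+z) + 2.5*log10( int F(l) S(l) dl / int F(l/(1+z)) S(l) dl )."""
    S = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    rest = integrate(flux * S, wave)
    # F(l/(1+z)): the rest-frame SED redshifted onto the observer's wavelength grid.
    flux_z = np.interp(wave / (1.0 + z), wave, flux, left=0.0, right=0.0)
    obs = integrate(flux_z * S, wave)
    return 2.5 * np.log10(1.0 + z) + 2.5 * np.log10(rest / obs)

# Toy rest-frame SED (a Gaussian bump) and a boxcar filter, for illustration only.
wave = np.linspace(3000.0, 9000.0, 3001)   # Angstroms
flux = np.exp(-0.5 * ((wave - 5000.0) / 800.0) ** 2)
filt_trans = ((wave > 5800.0) & (wave < 7200.0)).astype(float)
print(f"K(z=0.3) = {k_correction(wave, flux, wave, filt_trans, 0.3):.3f} mag")
```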

    Supersymmetry Without Prejudice at the LHC

    The discovery and exploration of Supersymmetry in a model-independent fashion will be a daunting task due to the large number of soft-breaking parameters in the MSSM. In this paper, we explore the capability of the ATLAS detector at the LHC (√s = 14 TeV, 1 fb^{-1}) to find SUSY within the 19-dimensional pMSSM subspace of the MSSM using their standard missing transverse energy and long-lived particle searches that were essentially designed for mSUGRA. To this end, we employ a set of ~71k previously generated model points in the 19-dimensional parameter space that satisfy all of the existing experimental and theoretical constraints. Employing ATLAS-generated SM backgrounds and following their approach in each of 11 missing energy analyses as closely as possible, we explore all of these ~71k model points for a possible SUSY signal. To test our analysis procedure, we first verify that we faithfully reproduce the published ATLAS results for the signal distributions for their benchmark mSUGRA model points. We then show that, requiring all sparticle masses to lie below 1 (3) TeV, almost all (two-thirds) of the pMSSM model points are discovered with a significance S>5 in at least one of these 11 analyses, assuming a 50% systematic error on the SM background. If this systematic error can be reduced to only 20%, then this parameter space coverage is increased. These results indicate that the ATLAS SUSY search strategy is robust under a broad class of Supersymmetric models. We then explore in detail the properties of the kinematically accessible model points which remain unobservable by these search analyses in order to ascertain problematic cases which may arise in general SUSY searches. Comment: 69 pages, 40 figures, Discussion added
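
    The coverage statement above reduces to simple bookkeeping per model point: a point counts as discovered if any of the 11 analyses reaches S>5. A minimal sketch of that loop, with invented per-channel signal and background yields (the real study scans ~71k pMSSM points against the ATLAS analyses):

```python
import math

def significance(s, b, sys_frac=0.5):
    """Approximate significance with a 50% background systematic by default:
    S = s / sqrt(b + (sys_frac * b)**2)."""
    return s / math.sqrt(b + (sys_frac * b) ** 2)

# Invented (signal, background) yields in three analyses for two toy model points.
model_points = {
    "point_A": [(300.0, 80.0), (5.0, 40.0), (12.0, 25.0)],
    "point_B": [(3.0, 80.0), (2.0, 40.0), (1.0, 25.0)],
}

for name, channels in model_points.items():
    discovered = any(significance(s, b) > 5.0 for s, b in channels)
    print(f"{name}: {'discovered' if discovered else 'not observable'}")
```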

    Supernova Simulations and Strategies For the Dark Energy Survey

    We present an analysis of supernova light curves simulated for the upcoming Dark Energy Survey (DES) supernova search. The simulations employ a code suite that generates and fits realistic light curves in order to obtain distance modulus/redshift pairs that are passed to a cosmology fitter. We investigated several different survey strategies including field selection, supernova selection biases, and photometric redshift measurements. Using the results of this study, we chose a 30 square degree search area in the griz filter set. We forecast (1) that this survey will provide a homogeneous sample of up to 4000 Type Ia supernovae in the redshift range 0.05<z<1.2, and (2) that the increased red efficiency of the DES camera will significantly improve high-redshift color measurements. The redshift of each supernova with an identified host galaxy will be obtained from spectroscopic observations of the host. A supernova spectrum will be obtained for a subset of the sample, which will be utilized for control studies. In addition, we have investigated the use of combined photometric redshifts taking into account data from both the host and supernova. We have investigated and estimated the likely contamination from core-collapse supernovae based on photometric identification, and have found that a Type Ia supernova sample purity of up to 98% is obtainable given specific assumptions. Furthermore, we present systematic uncertainties due to sample purity, photometric calibration, dust extinction priors, filter-centroid shifts, and inter-calibration. We conclude by estimating the uncertainty on the cosmological parameters that will be measured from the DES supernova data. Comment: 46 pages, 30 figures, resubmitted to ApJ as Revision 2 (final author revision), which has subtle editorial differences compared to the published paper (ApJ, 753, 152). Note that this posting includes PDF only due to a bug in either the latex macros or the arXiv submission system. The source files are available in the DES document database: http://des-docdb.fnal.gov/cgi-bin/ShowDocument?docid=624
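
    As context for the "distance modulus/redshift pairs" handed to the cosmology fitter, a minimal sketch of the standard flat-LambdaCDM distance modulus (textbook formulae with assumed fiducial parameters, not the DES code suite):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def distance_modulus(z, omega_m=0.3, h0=70.0, steps=2000):
    """mu = 5*log10(d_L / 10 pc) in a flat LambdaCDM cosmology, with
    d_L = (1+z) * (c/H0) * int_0^z dz'/E(z') and E(z) = sqrt(Om*(1+z)^3 + 1-Om)."""
    zp = np.linspace(0.0, z, steps)
    inv_e = 1.0 / np.sqrt(omega_m * (1.0 + zp) ** 3 + (1.0 - omega_m))
    integral = float(np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zp)))
    d_l_mpc = (1.0 + z) * (C_KM_S / h0) * integral   # luminosity distance in Mpc
    return 5.0 * np.log10(d_l_mpc) + 25.0            # 25 = 5*log10(1 Mpc / 10 pc)

# Assumed fiducial parameters; spans the forecast DES redshift range 0.05 < z < 1.2.
for z in (0.05, 0.3, 0.6, 1.2):
    print(f"z = {z:.2f}: mu = {distance_modulus(z):.2f} mag")
```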

    Criteria for the use of omics-based predictors in clinical trials: explanation and elaboration

    High-throughput 'omics' technologies that generate molecular profiles for biospecimens have been extensively used in preclinical studies to reveal molecular subtypes and elucidate the biological mechanisms of disease, and in retrospective studies on clinical specimens to develop mathematical models to predict clinical endpoints. Nevertheless, the translation of these technologies into clinical tests that are useful for guiding management decisions for patients has been relatively slow. It can be difficult to determine when the body of evidence for an omics-based test is sufficiently comprehensive and reliable to support claims that it is ready for clinical use, or even that it is ready for definitive evaluation in a clinical trial in which it may be used to direct patient therapy. Reasons for this difficulty include the exploratory and retrospective nature of many of these studies, the complexity of these assays and their application to clinical specimens, and the many potential pitfalls inherent in the development of mathematical predictor models from the very high-dimensional data generated by these omics technologies. Here we present a checklist of criteria to consider when evaluating the body of evidence supporting the clinical use of a predictor to guide patient therapy. Included are issues pertaining to specimen and assay requirements, the soundness of the process for developing predictor models, expectations regarding clinical study design and conduct, and attention to regulatory, ethical, and legal issues. The proposed checklist should serve as a useful guide to investigators preparing proposals for studies involving the use of omics-based tests. The US National Cancer Institute plans to refer to these guidelines for review of proposals for studies involving omics tests, and it is hoped that other sponsors will adopt the checklist as well. http://deepblue.lib.umich.edu/bitstream/2027.42/134536/1/12916_2013_Article_1104.pd