
    Supersymmetric Higgs pair discovery prospects at hadron colliders

    We study the potential of hadron colliders in the search for the pair production of neutral Higgs bosons in the framework of the Minimal Supersymmetric Standard Model. Using analytical expressions for the relevant amplitudes, we perform a detailed signal and background analysis, working out efficient kinematical cuts for the extraction of the signal. The important role of squark loop contributions to the signal is emphasised. If the signal is sufficiently enhanced by these contributions, it could even be observable at the next run of the upgraded Tevatron collider in the near future. At the LHC the pair production of light and heavy Higgs bosons might be detectable simultaneously. Comment: 5 pages, hep99, 6 figures; Presented at the International Europhysics Conference on High Energy Physics, Tampere, Finland, 15-21 July 1999

    Impact of Seismic Risk on Lifetime Property Values

    This report presents a methodology for establishing the uncertain net asset value, NAV, of a real-estate investment opportunity considering both market risk and seismic risk for the property. It also presents a decision-making procedure to assist in making real-estate investment choices under conditions of uncertainty and risk-aversion. It is shown that market risk, as measured by the coefficient of variation of NAV, is at least 0.2 and may exceed 1.0. In a situation of such high uncertainty, where potential gains and losses are large relative to a decision-maker's risk tolerance, it is appropriate to adopt a decision-analysis approach to real-estate investment decision-making. A simple equation for doing so is presented. The decision-analysis approach uses the certainty equivalent, CE, as opposed to NAV as the basis for investment decision-making. That is, when faced with multiple investment alternatives, one should choose the alternative that maximizes CE. It is shown that CE is less than the expected value of NAV by an amount proportional to the variance of NAV and the inverse of the decision-maker's risk tolerance, ρ. The procedure for establishing NAV and CE is illustrated in parallel demonstrations by CUREE and Kajima research teams. The CUREE demonstration is performed using a real 1960s-era hotel building in Van Nuys, California. The building, a 7-story non-ductile reinforced-concrete moment-frame building, is analyzed using the assembly-based vulnerability (ABV) method, developed in Phase III of the CUREE-Kajima Joint Research Program. The building is analyzed three ways: in its condition prior to the 1994 Northridge Earthquake, with a hypothetical shearwall upgrade, and with earthquake insurance. 
This is the first application of ABV to a real building, and the first time ABV has incorporated stochastic structural analyses that consider uncertainties in the mass, damping, and force-deformation behavior of the structure, along with uncertainties in ground motion, component damageability, and repair costs. New fragility functions are developed for the reinforced concrete flexural members using published laboratory test data, and new unit repair costs for these components are developed by a professional construction cost estimator. Four investment alternatives are considered: do not buy; buy; buy and retrofit; and buy and insure. It is found that the best alternative for most reasonable values of discount rate, risk tolerance, and market risk is to buy and leave the building as-is. However, risk tolerance and market risk (variability of income) both materially affect the decision. That is, for certain ranges of each parameter, the best investment alternative changes. This indicates that expected-value decision-making is inappropriate for some decision-makers and investment opportunities. It is also found that the majority of the economic seismic risk results from shaking with S_a < 0.3g, i.e., shaking with return periods on the order of 50 to 100 yr that causes primarily architectural damage, rather than from the strong, rare events of which common probable maximum loss (PML) measurements are indicative. The Kajima demonstration is performed using three Tokyo buildings. A nine-story, steel-reinforced-concrete building built in 1961 is analyzed as two designs: as-is, and with a steel-braced-frame structural upgrade. The third building is a 29-story 1999 steel-frame structure. The three buildings are intended to meet collapse-prevention, life-safety, and operational performance levels, respectively, in shaking with 10% exceedance probability in 50 years. The buildings are assessed using levels 2 and 3 of Kajima's three-level analysis methodology. 
These are semi-assembly based approaches, which subdivide a building into categories of components, estimate the loss of these component categories for given ground motions, and combine the losses for the entire building. The two methods are used to estimate annualized losses and to create curves that relate loss to exceedance probability. The results are incorporated in the input to a sophisticated program developed by the Kajima Corporation, called Kajima D, which forecasts cash flows for office, retail, and residential projects for purposes of property screening, due diligence, negotiation, financial structuring, and strategic planning. The result is an estimate of NAV for each building. A parametric study of CE for each building is presented, along with a simplified model for calculating CE as a function of mean NAV and coefficient of variation of NAV. The equation agrees with that developed in parallel by the CUREE team. Both the CUREE and Kajima teams collaborated with a number of real-estate investors to understand their seismic risk-management practices, and to formulate and to assess the viability of the proposed decision-making methodologies. Investors were interviewed to elicit their risk tolerance, ρ, using scripts developed and presented here in English and Japanese. Results of 10 such interviews are presented, which show that a strong relationship exists between a decision-maker's annual revenue, R, and his or her risk tolerance: ρ ≈ 0.0075R^1.34. The interviews show that earthquake risk is a marginal consideration in current investment practice. Probable maximum loss (PML) is the only earthquake risk parameter these investors consider, and they typically do not use seismic risk at all in their financial analysis of an investment opportunity. 
For competitive reasons, a public investor interviewed here would not wish to account for seismic risk in his financial analysis unless rating agencies required him to do so or such consideration otherwise became standard practice. However, in cases where seismic risk is high enough to significantly reduce return, a private investor expressed the desire to account for seismic risk via expected annualized loss (EAL) if it were inexpensive to do so, i.e., if the cost of calculating the EAL were not substantially greater than that of PML alone. The study results point to a number of interesting opportunities for future research, namely: improve the market-risk stochastic model, including comparison of actual long-term income with initial income projections; improve the risk-attitude interview; account for uncertainties in repair method and in the relationship between repair cost and loss; relate the damage state of structural elements with points on the force-deformation relationship; examine simpler dynamic analysis as a means to estimate vulnerability; examine the relationship between simplified engineering demand parameters and performance; enhance category-based vulnerability functions by compiling a library of building-specific ones; and work with lenders and real-estate industry analysts to determine the conditions under which seismic risk should be reflected in investors' financial analyses.
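The certainty-equivalent rule described above can be sketched in code. This is a minimal illustration assuming the standard exponential-utility (quadratic-penalty) form CE ≈ E[NAV] − Var(NAV)/(2ρ) that the abstract paraphrases; the alternative names and the mean/variance figures are hypothetical, not the report's numbers:

```python
def certainty_equivalent(mean_nav, std_nav, rho):
    # CE ≈ E[NAV] - Var(NAV) / (2 * rho): expected value less a penalty
    # proportional to the variance of NAV and to the inverse of the
    # decision-maker's risk tolerance rho (same currency units as NAV).
    return mean_nav - std_nav**2 / (2.0 * rho)

def risk_tolerance_from_revenue(annual_revenue):
    # Empirical fit reported from the investor interviews: rho ≈ 0.0075 * R^1.34.
    return 0.0075 * annual_revenue**1.34

# Hypothetical alternatives: (mean NAV, standard deviation of NAV), in dollars.
alternatives = {
    "buy":              (10.0e6, 6.0e6),
    "buy_and_retrofit": ( 9.5e6, 3.0e6),
    "buy_and_insure":   ( 9.0e6, 2.0e6),
}

rho = risk_tolerance_from_revenue(50e6)  # investor with $50M annual revenue
best = max(alternatives,
           key=lambda k: certainty_equivalent(*alternatives[k], rho))
```

With a risk tolerance this large relative to the variances, the highest-mean alternative ("buy") maximizes CE, matching the report's finding for the Van Nuys building; shrinking ρ eventually flips the choice toward the lower-variance alternatives, which is exactly why expected-value decision-making can mislead.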

    B_{s,d} -> l^+ l^- and K_L -> l^+ l^- in SUSY models with non-minimal sources of flavour mixing

    We present a general analysis of B_{s,d} -> l^+ l^- and K_L -> l^+ l^- decays in supersymmetric models with non-minimal sources of flavour mixing. In spite of the existing constraints on off-diagonal squark mass terms, these modes could still receive sizeable corrections, mainly because of Higgs-mediated FCNCs arising at large tan(beta). The severe limits on scenarios with large tan(beta) and non-negligible \tilde{d}^i_{R(L)}-\tilde{d}^j_{R(L)} mixing imposed by the present experimental bounds on these modes and Delta B=2 observables are discussed in detail. In particular, we show that scalar-current contributions to K_L -> l^+ l^- and B-{bar B} mixing set non-trivial constraints on the possibility that B_s -> l^+ l^- and B_d -> l^+ l^- receive large corrections. Comment: 18 pages, 4 figures (v2: minor changes, published version)

    Sneutrino Mass Measurements at e+e- Linear Colliders

    It is generally accepted that experiments at an e+e- linear collider will be able to extract the masses of the selectron as well as the associated sneutrinos with a precision of ~ 1% by determining the kinematic end points of the energy spectrum of daughter electrons produced in their two body decays to a lighter neutralino or chargino. Recently, it has been suggested that by studying the energy dependence of the cross section near the production threshold, this precision can be improved by an order of magnitude, assuming an integrated luminosity of 100 fb^-1. It is further suggested that these threshold scans also allow the masses of even the heavier second and third generation sleptons and sneutrinos to be determined to better than 0.5%. We re-examine the prospects for determining sneutrino masses. We find that the cross sections for the second and third generation sneutrinos are too small for a threshold scan to be useful. An additional complication arises because the cross section for a sneutrino pair to decay into any visible final state(s) necessarily depends on an unknown branching fraction, so that the overall normalization is unknown. This reduces the precision with which the sneutrino mass can be extracted. We propose a different strategy to optimize the extraction of m(\tilde{\nu}_\mu) and m(\tilde{\nu}_\tau) via the energy dependence of the cross section. We find that even with an integrated luminosity of 500 fb^-1, these can be determined with a precision no better than several percent at the 90% CL. We also examine the measurement of m(\tilde{\nu}_e) and show that it can be extracted with a precision of about 0.5% (0.2%) with an integrated luminosity of 120 fb^-1 (500 fb^-1). Comment: RevTex, 46 pages, 15 eps figures
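The endpoint method invoked above follows from standard two-body decay kinematics. As an illustration (a sketch of the textbook relations, not code from the paper, and neglecting the daughter lepton mass), the slepton and neutralino masses are recovered from the two measured energy endpoints:

```python
import math

def endpoints(sqrt_s, m_slepton, m_chi):
    # Daughter-lepton energy endpoints for pair-produced sleptons, each
    # decaying via slepton -> lepton + neutralino (lepton mass neglected).
    beta = math.sqrt(1.0 - 4.0 * m_slepton**2 / sqrt_s**2)  # slepton velocity
    e0 = (sqrt_s / 4.0) * (1.0 - m_chi**2 / m_slepton**2)   # mean endpoint energy
    return e0 * (1.0 - beta), e0 * (1.0 + beta)

def masses_from_endpoints(sqrt_s, e_min, e_max):
    # Invert the endpoint relations: the two measured endpoints determine
    # both the parent slepton mass and the daughter neutralino mass.
    m_slepton = sqrt_s * math.sqrt(e_min * e_max) / (e_min + e_max)
    m_chi = m_slepton * math.sqrt(1.0 - 2.0 * (e_min + e_max) / sqrt_s)
    return m_slepton, m_chi
```

Propagating the endpoint measurement errors through masses_from_endpoints is what yields the ~1% mass precision quoted above; the threshold-scan alternative instead fits the rise of the cross section near sqrt(s) = 2 m_slepton.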

    Updated Constraints on the Minimal Supergravity Model

    Recently, refinements have been made on both the theoretical and experimental determinations of i.) the mass of the lightest Higgs scalar (m_h), ii.) the relic density of cold dark matter in the universe (Omega_CDM h^2), iii.) the branching fraction for radiative B decay BF(b \to s \gamma), iv.) the muon anomalous magnetic moment (a_\mu), and v.) the flavor violating decay B_s \to \mu^+\mu^-. Each of these quantities can be predicted in the MSSM, and each depends in a non-trivial way on the spectra of SUSY particles. In this paper, we present updated constraints from each of these quantities on the minimal supergravity (mSUGRA) model as embedded in the computer program ISAJET. The combination of constraints points to certain favored regions of model parameter space where collider and non-accelerator SUSY searches may be more focussed. Comment: 20 pages, 6 figures. Version published in JHEP

    Pulse-shape discrimination potential of new scintillator material: La-GPS:Ce

    (Gd,La)_2Si_2O_7:Ce (La-GPS:Ce) is a new scintillator material with high light output, high energy resolution, and fast decay time. Moreover, the scintillator retains a good light output even at high temperature (up to 150°C) and is non-hygroscopic; it is therefore especially suitable for underground resource exploration. Particle identification greatly expands the possible applications of a scintillator, and for resource exploration the identification must be performed on each single pulse. We confirmed the pulse-shape discrimination capability of this scintillator, comparing two methods: a double-gate method and a digital-filter method. Using the digital-filter method (shape indicator), the F-measure evaluating the separation between α and γ particles was 0.92 at 0.66 MeVee. Comment: 9 pages, 9 figures
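A minimal sketch of the two ingredients named above, assuming the conventional double-gate charge ratio (tail integral over total integral) and the standard precision/recall F-measure for an α-vs-γ cut; the gate windows and event values below are illustrative, not the paper's:

```python
import numpy as np

def double_gate_psd(pulse, total_gate, tail_gate):
    # Double-gate PSD parameter: fraction of the pulse integral falling in the
    # tail gate. Gates are (start, stop) sample indices -- illustrative values.
    total = pulse[total_gate[0]:total_gate[1]].sum()
    tail = pulse[tail_gate[0]:tail_gate[1]].sum()
    return tail / total

def f_measure(alpha_values, gamma_values, threshold):
    # Standard F-measure (harmonic mean of precision and recall) for a simple
    # cut on the PSD parameter, treating alpha events as the "signal" class.
    tp = np.sum(alpha_values >= threshold)   # alphas kept
    fp = np.sum(gamma_values >= threshold)   # gammas leaking past the cut
    fn = np.sum(alpha_values < threshold)    # alphas lost
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

An F-measure of 1.0 would mean a cut that keeps every α and rejects every γ; the 0.92 at 0.66 MeVee quoted above indicates good but imperfect separation from a single pulse.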

    Expansion of anti-AFP Th1 and Tc1 responses in hepatocellular carcinoma occur in different stages of disease

    Copyright @ 2010 Cancer Research UK. This work is licensed under the Creative Commons Attribution-NonCommercial-Share Alike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/. Background: α-Fetoprotein (AFP) is a tumour-associated antigen in hepatocellular carcinoma (HCC) and is a target for immunotherapy. However, there is little information on the pattern of CD4 (Th1) and CD8 (Tc1) T-cell response to AFP in patients with HCC and their association with the clinical characteristics of patients. Methods: We therefore analysed CD4 and CD8 T-cell responses to a panel of AFP-derived peptides in a total of 31 HCC patients and 14 controls, using an intracellular cytokine assay for IFN-γ. Results: Anti-AFP Tc1 responses were detected in 28.5% of controls, as well as in 25% of HCC patients with Okuda I (early tumour stage) and in 31.6% of HCC patients with stage II or III (late tumour stages). An anti-AFP Th1 response was detected only in HCC patients (58.3% with Okuda stage I tumours and 15.8% with Okuda stage II or III tumours). Anti-AFP Th1 response was mainly detected in HCC patients who had normal or mildly elevated serum AFP concentrations (P=0.00188), whereas there was no significant difference between serum AFP concentrations in these patients and the presence of an anti-AFP Tc1 response. A Th1 response was detected in 44% of HCC patients with a Child–Pugh A score (early stage of cirrhosis), whereas this was detected in only 15% with a B or C score (late-stage cirrhosis). In contrast, a Tc1 response was detected in 17% of HCC patients with a Child–Pugh A score and in 46% with a B or C score. Conclusion: These results suggest that anti-AFP Th1 responses are more likely to be present in patients who are in an early stage of disease (for both tumour stage and liver cirrhosis), whereas anti-AFP Tc1 responses are more likely to be present in patients with late-stage liver cirrhosis. 
Therefore, these data provide valuable information for the design of vaccination strategies against HCC. Funding: Association for International Cancer Research and Polkemmet Fund, London Clinic

    Probing Slepton Mass Non-Universality at e^+e^- Linear Colliders

    There are many models with non-universal soft SUSY breaking sfermion mass parameters at the grand unification scale. Even in the mSUGRA model scalar mass unification might occur at a scale closer to M_Planck, and renormalization effects would cause a mass splitting at M_GUT. We identify an experimentally measurable quantity Delta that correlates strongly with delta m^2 = m^2_{selectron_R}(M_GUT) - m^2_{selectron_L}(M_GUT), and which can be measured at electron-positron colliders provided both selectrons and the chargino are kinematically accessible. We show that if these sparticle masses can be measured with a precision of 1% at a 500 GeV linear collider, the resulting precision in the determination of Delta may allow experiments to distinguish scalar mass unification at the GUT scale from the corresponding unification at Q ~ M_Planck. Experimental determination of Delta would also provide a distinction between the mSUGRA model and the recently proposed gaugino-mediation model. Moreover, a measurement of Delta (or a related quantity Delta') would allow for a direct determination of delta m^2. Comment: 15 pages, RevTeX, 4 postscript figures

    Analysis of Long-Lived Slepton NLSP in GMSB model at Linear Collider

    We performed an analysis on the detection of a long-lived slepton at a linear collider with \sqrt{s} = 500 GeV. In GMSB models a long-lived NLSP is predicted for large values of the supersymmetry breaking scale \sqrt{F}. Furthermore, in a large portion of the parameter space this particle is a stau. Such heavy charged particles will leave a track in the tracking volume and hit the muonic detector. In order to disentangle this signal from the muon background, we explore kinematic and particle-identification tools: a time-of-flight device, dE/dx, and Cherenkov devices. We show that a linear collider will be able to detect long-lived staus with masses up to the kinematical limit of the machine. We also present our estimate of the sensitivity to the stau lifetime. Comment: Minor changes, Ref. 10 fixed. 12 pages, RevTex, 4 eps figures
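Of the particle-identification tools listed above, time of flight is the simplest to sketch: a heavy stau at the same momentum as a muon arrives measurably later, and the standard relation m = p·sqrt(1/β² − 1) recovers its mass. The flight path and momenta below are illustrative numbers, not the actual detector parameters:

```python
import math

C_M_PER_NS = 0.299792458  # speed of light in metres per nanosecond

def beta_from_tof(path_m, tof_ns):
    # Velocity in units of c from a measured flight time over a known path.
    return path_m / (C_M_PER_NS * tof_ns)

def mass_from_p_beta(p_gev, beta):
    # Relativistic relation m = p * sqrt(1/beta^2 - 1); momentum and mass in GeV.
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)
```

For example, a 150 GeV stau carrying 100 GeV of momentum has β ≈ 0.55, so over a 3 m flight path it arrives roughly 8 ns after a relativistic muon, a delay well within the reach of a time-of-flight device.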