
    p p -> j j e+/- mu+/- nu nu and j j e+/- mu-/+ nu nu at O(\alpha_{em}^6) and O(\alpha_{em}^4 \alpha_s^2) for the Study of the Quartic Electroweak Gauge Boson Vertex at LHC

    We analyze the potential of the CERN Large Hadron Collider (LHC) to study the structure of quartic vector-boson interactions through the pair production of electroweak gauge bosons via weak boson fusion q q -> q q W W. In order to study these couplings we have performed a partonic level calculation of all processes p p -> j j e+/- mu+/- nu nu and p p -> j j e+/- mu-/+ nu nu at the LHC using the exact matrix elements at O(\alpha_{em}^6) and O(\alpha_{em}^4 \alpha_s^2) as well as a full simulation of the t tbar plus 0 to 2 jets backgrounds. A complete calculation of the scattering amplitudes is necessary not only for a correct description of the process but also to preserve all correlations between the final state particles, which can be used to enhance the signal. Our analyses indicate that the LHC can improve by more than one order of magnitude the bounds arising at present from indirect measurements. Comment: 26 pages, 8 figures, revised version with some typos corrected, and some comments and references added

    Impact of Seismic Risk on Lifetime Property Values

    This report presents a methodology for establishing the uncertain net asset value, NAV, of a real-estate investment opportunity considering both market risk and seismic risk for the property. It also presents a decision-making procedure to assist in making real-estate investment choices under conditions of uncertainty and risk-aversion. It is shown that market risk, as measured by the coefficient of variation of NAV, is at least 0.2 and may exceed 1.0. In a situation of such high uncertainty, where potential gains and losses are large relative to a decision-maker's risk tolerance, it is appropriate to adopt a decision-analysis approach to real-estate investment decision-making. A simple equation for doing so is presented. The decision-analysis approach uses the certainty equivalent, CE, as opposed to NAV, as the basis for investment decision-making. That is, when faced with multiple investment alternatives, one should choose the alternative that maximizes CE. It is shown that CE is less than the expected value of NAV by an amount proportional to the variance of NAV and the inverse of the decision-maker's risk tolerance, ρ. The procedure for establishing NAV and CE is illustrated in parallel demonstrations by the CUREE and Kajima research teams. The CUREE demonstration is performed using a real 1960s-era hotel building in Van Nuys, California. The building, a 7-story non-ductile reinforced-concrete moment-frame building, is analyzed using the assembly-based vulnerability (ABV) method, developed in Phase III of the CUREE-Kajima Joint Research Program. The building is analyzed three ways: in its condition prior to the 1994 Northridge Earthquake, with a hypothetical shearwall upgrade, and with earthquake insurance.
This is the first application of ABV to a real building, and the first time ABV has incorporated stochastic structural analyses that consider uncertainties in the mass, damping, and force-deformation behavior of the structure, along with uncertainties in ground motion, component damageability, and repair costs. New fragility functions are developed for the reinforced concrete flexural members using published laboratory test data, and new unit repair costs for these components are developed by a professional construction cost estimator. Four investment alternatives are considered: do not buy; buy; buy and retrofit; and buy and insure. It is found that the best alternative for most reasonable values of discount rate, risk tolerance, and market risk is to buy and leave the building as-is. However, risk tolerance and market risk (variability of income) both materially affect the decision. That is, for certain ranges of each parameter, the best investment alternative changes. This indicates that expected-value decision-making is inappropriate for some decision-makers and investment opportunities. It is also found that the majority of the economic seismic risk results from shaking of S_a < 0.3 g, i.e., shaking with return periods on the order of 50 to 100 yr that causes primarily architectural damage, rather than from the strong, rare events of which common probable maximum loss (PML) measurements are indicative. The Kajima demonstration is performed using three Tokyo buildings. A nine-story, steel-reinforced-concrete building built in 1961 is analyzed as two designs: as-is, and with a steel-braced-frame structural upgrade. The third building is a 29-story steel-frame structure built in 1999. The three buildings are intended to meet collapse-prevention, life-safety, and operational performance levels, respectively, in shaking with 10% exceedance probability in 50 years. The buildings are assessed using levels 2 and 3 of Kajima's three-level analysis methodology.
These are semi-assembly based approaches, which subdivide a building into categories of components, estimate the loss of these component categories for given ground motions, and combine the losses for the entire building. The two methods are used to estimate annualized losses and to create curves that relate loss to exceedance probability. The results are incorporated in the input to a sophisticated program developed by the Kajima Corporation, called Kajima D, which forecasts cash flows for office, retail, and residential projects for purposes of property screening, due diligence, negotiation, financial structuring, and strategic planning. The result is an estimate of NAV for each building. A parametric study of CE for each building is presented, along with a simplified model for calculating CE as a function of mean NAV and coefficient of variation of NAV. The equation agrees with that developed in parallel by the CUREE team. Both the CUREE and Kajima teams collaborated with a number of real-estate investors to understand their seismic risk-management practices, and to formulate and to assess the viability of the proposed decision-making methodologies. Investors were interviewed to elicit their risk tolerance, ρ, using scripts developed and presented here in English and Japanese. Results of 10 such interviews are presented, which show that a strong relationship exists between a decision-maker's annual revenue, R, and his or her risk tolerance, ρ ≈ 0.0075R^1.34. The interviews show that earthquake risk is a marginal consideration in current investment practice. Probable maximum loss (PML) is the only earthquake risk parameter these investors consider, and they typically do not use seismic risk at all in their financial analysis of an investment opportunity.
For competitive reasons, a public investor interviewed here would not wish to account for seismic risk in his financial analysis unless rating agencies required him to do so or such consideration otherwise became standard practice. However, in cases where seismic risk is high enough to significantly reduce return, a private investor expressed the desire to account for seismic risk via expected annualized loss (EAL) if it were inexpensive to do so, i.e., if the cost of calculating the EAL were not substantially greater than that of PML alone. The study results point to a number of interesting opportunities for future research, namely: improve the market-risk stochastic model, including comparison of actual long-term income with initial income projections; improve the risk-attitude interview; account for uncertainties in repair method and in the relationship between repair cost and loss; relate the damage state of structural elements with points on the force-deformation relationship; examine simpler dynamic analysis as a means to estimate vulnerability; examine the relationship between simplified engineering demand parameters and performance; enhance category-based vulnerability functions by compiling a library of building-specific ones; and work with lenders and real-estate industry analysts to determine the conditions under which seismic risk should be reflected in investors' financial analyses.
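The certainty-equivalent rule described above can be sketched numerically. The relation CE = E[NAV] - Var(NAV)/(2ρ) is the standard exponential-utility approximation the abstract describes, and the risk-tolerance fit ρ ≈ 0.0075R^1.34 comes from the interview results; all NAV figures and the alternative names below are purely illustrative, not from the study.

```python
# Sketch of certainty-equivalent (CE) decision-making under risk aversion.
# CE = E[NAV] - Var(NAV) / (2 * rho), with rho the decision-maker's risk tolerance.

def risk_tolerance(annual_revenue):
    """Empirical fit rho ~ 0.0075 * R^1.34 from the interview results (same currency units)."""
    return 0.0075 * annual_revenue ** 1.34

def certainty_equivalent(mean_nav, cov_nav, rho):
    """CE = E[NAV] - Var(NAV)/(2*rho); cov_nav is the coefficient of variation of NAV."""
    var_nav = (cov_nav * mean_nav) ** 2
    return mean_nav - var_nav / (2.0 * rho)

# Hypothetical comparison of investment alternatives for an investor with
# $50M annual revenue; one should choose the alternative maximizing CE.
rho = risk_tolerance(50e6)
alternatives = {
    "buy as-is":        certainty_equivalent(10.0e6, 0.8, rho),
    "buy and retrofit": certainty_equivalent(9.0e6, 0.5, rho),
    "buy and insure":   certainty_equivalent(8.5e6, 0.3, rho),
}
best = max(alternatives, key=alternatives.get)
```

Note how a high coefficient of variation penalizes an otherwise attractive mean NAV: with a small enough ρ, the risk-adjusted ranking can differ from the expected-value ranking, which is the study's point about expected-value decision-making being inappropriate for some investors.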

    Detection of soluble interleukin-2 receptor and soluble intercellular adhesion molecule-1 in the effusion of otitis media with effusion

    We measured sIL-2R, TNF-α and sICAM-1 in the sera and middle ear effusions (MEEs) of patients with otitis media with effusion (OME). Although there was no significant difference between the sIL-2R levels of the serous and mucoid MEEs, they were significantly higher than serum sIL-2R levels of OME patients and healthy controls. TNF-α levels of the mucoid MEEs were significantly higher than those of the serous type. However, TNF-α was rarely detected in the sera of OME patients or healthy controls. We observed significant differences between the serous and mucoid MEEs with respect to their sICAM-1 levels, which were also higher than serum sICAM-1 levels of OME patients and healthy controls. Our findings suggested that IL-2, TNF-α and ICAM-1 could be significantly involved in the pathogenesis of OME through the cytokine network.

    Probing Slepton Mass Non-Universality at e^+e^- Linear Colliders

    There are many models with non-universal soft SUSY breaking sfermion mass parameters at the grand unification scale. Even in the mSUGRA model scalar mass unification might occur at a scale closer to M_Planck, and renormalization effects would cause a mass splitting at M_GUT. We identify an experimentally measurable quantity Delta that correlates strongly with delta m^2 = m^2_{selectron_R}(M_GUT) - m^2_{selectron_L}(M_GUT), and which can be measured at electron-positron colliders provided both selectrons and the chargino are kinematically accessible. We show that if these sparticle masses can be measured with a precision of 1% at a 500 GeV linear collider, the resulting precision in the determination of Delta may allow experiments to distinguish scalar mass unification at the GUT scale from the corresponding unification at Q ~ M_Planck. Experimental determination of Delta would also provide a distinction between the mSUGRA model and the recently proposed gaugino-mediation model. Moreover, a measurement of Delta (or a related quantity Delta') would allow for a direct determination of delta m^2. Comment: 15 pages, RevTeX, 4 postscript figures

    Sneutrino Mass Measurements at e+e- Linear Colliders

    It is generally accepted that experiments at an e+e- linear collider will be able to extract the masses of the selectron as well as the associated sneutrinos with a precision of ~ 1% by determining the kinematic end points of the energy spectrum of daughter electrons produced in their two-body decays to a lighter neutralino or chargino. Recently, it has been suggested that by studying the energy dependence of the cross section near the production threshold, this precision can be improved by an order of magnitude, assuming an integrated luminosity of 100 fb^-1. It is further suggested that these threshold scans also allow the masses of even the heavier second and third generation sleptons and sneutrinos to be determined to better than 0.5%. We re-examine the prospects for determining sneutrino masses. We find that the cross sections for the second and third generation sneutrinos are too small for a threshold scan to be useful. An additional complication arises because the cross section for a sneutrino pair to decay into any visible final state(s) necessarily depends on an unknown branching fraction, so that the overall normalization is unknown. This reduces the precision with which the sneutrino mass can be extracted. We propose a different strategy to optimize the extraction of m(\tilde{\nu}_\mu) and m(\tilde{\nu}_\tau) via the energy dependence of the cross section. We find that even with an integrated luminosity of 500 fb^-1, these can be determined with a precision no better than several percent at the 90% CL. We also examine the measurement of m(\tilde{\nu}_e) and show that it can be extracted with a precision of about 0.5% (0.2%) with an integrated luminosity of 120 fb^-1 (500 fb^-1). Comment: RevTex, 46 pages, 15 eps figures
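The kinematic end-point technique mentioned above has a simple closed form. For pair production at fixed √s each slepton carries energy √s/2, and the daughter lepton from the two-body decay to a massless lepton plus a neutralino has box-shaped energy spectrum with endpoints E± = γE*(1 ± β); inverting the two endpoints recovers both masses. The sketch below illustrates this standard kinematics with made-up mass values; it is not code from the paper.

```python
import math

def endpoints(sqrt_s, m_slepton, m_chi):
    """Daughter-lepton energy endpoints for e+e- -> slepton pair,
    slepton -> lepton + neutralino (lepton mass neglected)."""
    e_sl = sqrt_s / 2.0                                     # slepton energy in pair production
    beta = math.sqrt(1.0 - (m_slepton / e_sl) ** 2)         # slepton velocity
    gamma = e_sl / m_slepton                                # slepton boost
    e_star = (m_slepton**2 - m_chi**2) / (2.0 * m_slepton)  # lepton energy in slepton rest frame
    return gamma * e_star * (1.0 - beta), gamma * e_star * (1.0 + beta)

def masses_from_endpoints(sqrt_s, e_min, e_max):
    """Invert the measured endpoints to recover (m_slepton, m_chi):
    E+ * E- = E*^2 and E+ + E- = 2*gamma*E* fix both masses."""
    e_star = math.sqrt(e_min * e_max)
    m_slepton = sqrt_s * e_star / (e_min + e_max)
    m_chi = math.sqrt(m_slepton * (m_slepton - 2.0 * e_star))
    return m_slepton, m_chi
```

For a sneutrino the daughter is invisible or the visible decay carries an unknown branching fraction, which is precisely why the abstract argues the normalization, and hence this clean inversion, is degraded.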

    Updated Constraints on the Minimal Supergravity Model

    Recently, refinements have been made on both the theoretical and experimental determinations of the i.) mass of the lightest Higgs scalar (m_h), ii.) relic density of cold dark matter in the universe (Omega_CDM h^2), iii.) branching fraction for radiative B decay BF(b \to s \gamma), iv.) muon anomalous magnetic moment (a_\mu), and v.) flavor violating decay B_s \to \mu^+\mu^-. Each of these quantities can be predicted in the MSSM, and each depends in a non-trivial way on the spectra of SUSY particles. In this paper, we present updated constraints from each of these quantities on the minimal supergravity (mSUGRA) model as embedded in the computer program ISAJET. The combination of constraints points to certain favored regions of model parameter space where collider and non-accelerator SUSY searches may be more focussed. Comment: 20 pages, 6 figures. Version published in JHEP

    Bounds on second generation scalar leptoquarks from the anomalous magnetic moment of the muon

    We calculate the contribution of second generation scalar leptoquarks to the anomalous magnetic moment of the muon (AMMM). In the near future, E-821 at Brookhaven will reduce the experimental error on this parameter to \Delta a_\mu^{exp} < 4 \times 10^{-10}, an improvement of a factor of 20 over its current value. With this new experimental limit we obtain a lower mass limit of m_{\Phi_L} > 186 GeV for the second generation scalar leptoquark, when its Yukawa-like coupling \lambda_{\Phi_L} to quarks and leptons is taken to be of the order of the electroweak coupling g_2. Comment: 5 pages, plain tex, 1 figure (not included, available upon request)
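The quoted bound can be rescaled to other coupling choices under the generic assumption that a one-loop heavy-particle contribution to a_mu scales as λ²/m², so the mass limit at fixed Δa_mu scales linearly with the coupling. This scaling and the g_2 value below are standard assumptions for illustration, not results from the paper.

```python
# Rough rescaling of the leptoquark mass bound, assuming the one-loop
# contribution to a_mu scales as lambda^2 / m^2 (generic heavy-particle scaling).

G2 = 0.65      # SU(2) gauge coupling at the weak scale (approximate)
M_REF = 186.0  # GeV: lower mass limit quoted above for lambda = g_2

def mass_limit(coupling):
    """Lower leptoquark mass limit in GeV for a given Yukawa-like coupling,
    obtained by rescaling the reference bound linearly in the coupling."""
    return M_REF * coupling / G2
```

So a coupling twice as large pushes the limit to roughly 372 GeV, while a much weaker coupling leaves the leptoquark essentially unconstrained by this observable.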

    Analysis of Long-Lived Slepton NLSP in GMSB model at Linear Collider

    We performed an analysis of the detection of a long-lived slepton at a linear collider with \sqrt{s} = 500 GeV. In GMSB models a long-lived NLSP is predicted for large values of the supersymmetry breaking scale \sqrt{F}. Furthermore, in a large portion of the parameter space this particle is a stau. Such heavy charged particles will leave a track in the tracking volume and hit the muon detector. In order to disentangle this signal from the muon background, we explore kinematics and particle identification tools: time-of-flight devices, dE/dx, and Cherenkov devices. We show that a linear collider will be able to detect long-lived staus with masses up to the kinematical limit of the machine. We also present our estimate of the sensitivity to the stau lifetime. Comment: Minor changes, Ref. 10 fixed. 12 pages, RevTex, 4 eps figures
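The time-of-flight discrimination mentioned above rests on elementary kinematics: a heavy stau at a given momentum has β = p/√(p² + m²) well below 1, so it arrives measurably later than a muon of the same momentum. The sketch below uses illustrative numbers (a 200 GeV stau, a 2 m flight path), not values from the paper.

```python
import math

C = 0.299792458  # speed of light in m/ns

def tof_delay(momentum_gev, mass_gev, path_m):
    """Time-of-flight delay in ns, relative to a beta = 1 particle,
    over path_m metres for a particle of given momentum and mass (GeV)."""
    beta = momentum_gev / math.hypot(momentum_gev, mass_gev)  # p / sqrt(p^2 + m^2)
    return (path_m / C) * (1.0 / beta - 1.0)

# Illustrative comparison: a 200 GeV stau vs a muon, both at 150 GeV momentum,
# over a 2 m flight path to the timing layer.
stau_delay = tof_delay(150.0, 200.0, 2.0)   # several ns: easily resolved
muon_delay = tof_delay(150.0, 0.1057, 2.0)  # sub-picosecond: effectively zero
```

A nanosecond-scale delay is large compared with typical detector timing resolution, which is why time of flight, together with anomalous dE/dx and the absence of Cherenkov light, separates slow heavy staus from the muon background.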

    Higgs-mediated leptonic decays of B_s and B_d mesons as probes of supersymmetry

    If tan(beta) is large, down-type quark mass matrices and Yukawa couplings cannot be simultaneously diagonalized, and flavour violating couplings of the neutral Higgs bosons are induced at the 1-loop level. These couplings lead to Higgs-mediated contributions to the decays B_s -> mu+ mu- and B_d -> tau+ tau-, at a level that might be of interest for the current Tevatron run, or possibly, at B-factories. We evaluate the branching ratios for these decays within the framework of minimal gravity-, gauge- and anomaly-mediated SUSY breaking models, and also in SU(5) supergravity models with non-universal gaugino mass parameters at the GUT scale. We find that the contribution from gluino loops, which seems to have been left out in recent phenomenological analyses, is significant. We explore how the branching fraction varies in these models, emphasizing parameter regions consistent with other observations. Comment: Revised to accommodate minor changes in original text and updated references