    Radiative Corrections to Neutralino and Chargino Masses in the Minimal Supersymmetric Model

    We determine the neutralino and chargino masses in the MSSM at one loop. We perform a Feynman diagram calculation in the on-shell renormalization scheme, including quark/squark and lepton/slepton loops. We find that the corrections are generically of order 6%; for a 20 GeV neutralino they can be larger than 20%. The corrections change the region of $(\mu,\ M_2,\ \tan\beta)$ parameter space which is ruled out by LEP data. We demonstrate that, e.g., for a given $\mu$ and $\tan\beta$ the lower limit on the parameter $M_2$ can shift by 20 GeV.

    Comment: 11 pages, JHU-TIPAC-930030, PURD-TH-93-13, uses epsf.sty, 6 uuencoded postscript figures, added one sentence and a reference
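
    For orientation, the generic structure of such an on-shell correction can be written schematically as below. This is only a sketch of the standard pole-mass condition, not the paper's full expressions; sign and scheme conventions vary.

    ```latex
    % Schematic one-loop pole mass for a neutralino or chargino: the
    % fermion self-energy is split into vector and scalar pieces,
    %   \Sigma(p) = \slashed{p}\,\Sigma_V(p^2) + \Sigma_S(p^2),
    % and the tree-level mass is shifted by (conventions vary)
    m_{\tilde\chi_i}^{\rm pole} \;\simeq\; m_{\tilde\chi_i}^{\rm tree}
      + {\rm Re}\!\left[\, \Sigma_{S,ii}(m_i^2)
      + m_i\,\Sigma_{V,ii}(m_i^2) \,\right].
    ```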

    Complete two-loop effective potential approximation to the lightest Higgs scalar boson mass in supersymmetry

    I present a method for accurately calculating the pole mass of the lightest Higgs scalar boson in supersymmetric extensions of the Standard Model, using a mass-independent renormalization scheme. The Higgs scalar self-energies are approximated by supplementing the exact one-loop results with the second derivatives of the complete two-loop effective potential in Landau gauge. I discuss the dependence of this approximation on the choice of renormalization scale, and note the existence of particularly poor choices which fortunately can be easily identified and avoided. For typical input parameters, the variation in the calculated Higgs mass over a wide range of renormalization scales is found to be of order a few hundred MeV or less, a significant improvement over previous approximations.

    Comment: 5 pages, 1 figure. References added, sample test model parameters listed, minor wording changes
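
    The approximation described here can be summarised schematically as follows; this is a hedged sketch of the structure only, with sign and gauge-fixing conventions suppressed.

    ```latex
    % The self-energy entering the pole-mass equation is approximated by
    % the exact one-loop piece plus the momentum-independent second
    % derivative of the two-loop effective potential:
    \Pi_{hh}(p^2) \;\approx\; \Pi^{(1)}_{hh}(p^2)
      + \left. \frac{\partial^2 V^{(2)}_{\rm eff}}{\partial h^2} \right|_{\rm min},
    \qquad
    M_h^2 \;=\; m_{h,\,{\rm tree}}^2 + \Pi_{hh}(M_h^2).
    ```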

    Relating the CMSSM and SUGRA models with GUT scale and Super-GUT scale Supersymmetry Breaking

    While the constrained minimal supersymmetric standard model (CMSSM), with universal gaugino masses, m_{1/2}, scalar masses, m_0, and A-terms, A_0, defined at some high energy scale (usually taken to be the GUT scale), is motivated by general features of supergravity models, it does not carry all of the constraints imposed by minimal supergravity (mSUGRA). In particular, the CMSSM does not impose a relation between the trilinear and bilinear soft supersymmetry-breaking terms, B_0 = A_0 - m_0, nor does it impose the relation between the soft scalar masses and the gravitino mass, m_0 = m_{3/2}. As a consequence, in mSUGRA $\tan\beta$ is computed given values of the other input parameters. By considering a Giudice-Masiero (GM) extension to mSUGRA, one can introduce new parameters to the Kähler potential which are associated with the Higgs sector and recover many of the standard CMSSM predictions. However, depending on the value of A_0, one may have a gravitino or a neutralino dark matter candidate. We also consider the consequences of imposing the universality conditions above the GUT scale. This GM extension provides a natural UV completion for the CMSSM.

    Comment: 16 pages, 11 figures; added erratum correcting several equations and results in Sec. 2; Secs. 3 and 4 remain unaffected and conclusions unchanged
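
    The boundary conditions being contrasted can be collected in one place; the relations below are exactly those quoted in the abstract, imposed at the input scale (usually the GUT scale).

    ```latex
    % CMSSM: universal m_{1/2}, m_0, A_0 at the input scale, with
    % tan(beta) a free input and B_0 an output of the minimisation.
    % mSUGRA additionally imposes
    B_0 = A_0 - m_0, \qquad m_0 = m_{3/2},
    % so that tan(beta) is computed rather than chosen.
    ```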

    Formation and growth of nucleated particles into cloud condensation nuclei: Model-measurement comparison

    Aerosol nucleation occurs frequently in the atmosphere and is an important source of particle number. Observations suggest that nucleated particles are capable of growing to sufficiently large sizes that they act as cloud condensation nuclei (CCN), but some global models have reported that CCN concentrations are only modestly sensitive to large changes in nucleation rates. Here we present a novel approach for using long-term size distribution observations to evaluate a global aerosol model's ability to predict formation rates of CCN from nucleation and growth events. We derive from observations at five locations nucleation-relevant metrics such as the nucleation rate of particles at a diameter of 3 nm (J3), the diameter growth rate (GR), the particle survival probability (SP), the condensation and coagulation sinks, and the CCN formation rate (J100). These quantities are also derived for a global microphysical model, GEOS-Chem-TOMAS, and compared to the observations on a daily basis. Using GEOS-Chem-TOMAS, we simulate nucleation events predicted by ternary (with a 10^-5 tuning factor) or activation nucleation over one year and find that the model slightly understates the observed annual-average CCN formation, mostly due to bias in the nucleation rate predictions, but by no more than 50% in the ternary simulations. At the two locations expected to be most impacted by large-scale regional nucleation, Hyytiälä and San Pietro Capofiume, predicted annual-average CCN formation rates are within 34% and 2% of the observations, respectively. Model-predicted annual-average growth rates are within 25% across all sites but also show a slight tendency to underestimate the observations, at least in the ternary nucleation simulations. On days when the growing nucleation mode reaches 100 nm, median single-day survival probabilities to 100 nm for the model and measurements range from less than 1% to 6% across the five locations we considered; however, this does not include particles that may eventually grow to 100 nm after the first day. This detailed exploration of new particle formation and growth dynamics adds support to the use of global models as tools for assessing the contribution of microphysical processes such as nucleation to the total number and CCN budget.
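
    As a rough illustration of how the metrics above fit together, the Python sketch below composes them; the function names and numbers are hypothetical, and the real analysis derives each quantity from measured and simulated size distributions.

    ```python
    import numpy as np

    # Hypothetical sketch (not the GEOS-Chem-TOMAS analysis code) of how
    # the nucleation-relevant metrics defined above combine.

    def growth_rate(t_hours, mode_diam_nm):
        """Diameter growth rate GR (nm/h) from a linear fit to the growing
        nucleation-mode diameter versus time."""
        slope, _intercept = np.polyfit(t_hours, mode_diam_nm, 1)
        return slope

    def ccn_formation_rate(J3, survival_prob):
        """CCN formation rate J100: the rate at which 3 nm particles form,
        times the probability SP that they survive coagulational scavenging
        while growing to 100 nm."""
        return J3 * survival_prob

    # Made-up example: J3 = 1.5 cm^-3 s^-1 and SP(3 -> 100 nm) = 3%.
    print(ccn_formation_rate(1.5, 0.03))  # 0.045 cm^-3 s^-1
    ```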

    Colliders and Cosmology

    Dark matter in variations of constrained minimal supersymmetric standard models will be discussed. Particular attention will be given to the comparison between accelerator and direct detection constraints.

    Comment: Submitted for the SUSY07 proceedings, 15 pages, LaTeX, 26 eps figures

    Disentangling Dimension Six Operators through Di-Higgs Boson Production

    New physics near the TeV scale can generate dimension-six operators that modify the production rate and branching ratios of the Higgs boson. Here, we show how Higgs boson pair production can yield complementary information on dimension-six operators involving the gluon field strength. For example, the invariant mass distribution of the Higgs boson pair can show the extent to which the masses of exotic TeV-scale quarks come from electroweak symmetry breaking. We discuss both the current Tevatron bounds on these operators and the most promising LHC measurement channels for two different Higgs masses: 120 GeV and 180 GeV. We argue that the operators considered in this paper are the ones most likely to yield interesting Higgs pair physics at the LHC.

    Comment: 20 pages, 7 figures; v2: to match JHEP version
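
    As a point of reference, operators involving the gluon field strength of the kind discussed here have the generic dimension-six structure sketched below; normalisations differ between papers, so this is illustrative only.

    ```latex
    % Generic dimension-six coupling of the Higgs doublet H to gluons:
    \mathcal{L}_6 \;\supset\; \frac{c_g}{\Lambda^2}\, H^\dagger H\,
      G^a_{\mu\nu} G^{a\,\mu\nu},
    % which modifies both single-Higgs (gg -> h) and pair (gg -> hh)
    % production once H acquires its vacuum expectation value.
    ```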

    Higgs boson mass limits in perturbative unification theories

    Motivated in part by recent demonstrations that electroweak unification into a simple group may occur at a low scale, we detail the requirements on the Higgs mass if the unification is to be perturbative. We do this for the Standard Model effective theory, minimal supersymmetry, and next-to-minimal supersymmetry with an additional singlet field. Within the Standard Model framework, we find that perturbative unification with $\sin^2\theta_W = 1/4$ occurs at $\Lambda = 3.8$ TeV and requires $m_h < 460$ GeV, whereas perturbative unification with $\sin^2\theta_W = 3/8$ requires $m_h < 200$ GeV. In supersymmetry, the presentation of the Higgs mass predictions can be significantly simplified, yet remain meaningful, by using a single supersymmetry-breaking parameter $\Delta_S$. We present Higgs mass limits in terms of $\Delta_S$ for the minimal supersymmetric model and the next-to-minimal supersymmetric model. We show that in next-to-minimal supersymmetry, the Higgs mass upper limit can be as large as 500 GeV even for moderate supersymmetry masses if the perturbative unification scale is low (e.g., $\Lambda = 10$ TeV).

    Comment: 20 pages, latex, 6 figures, references added
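
    The logic behind such a bound in the Standard Model framework can be sketched in one line; this is the standard textbook relation, not the paper's detailed analysis.

    ```latex
    % The quartic coupling fixes the Higgs mass at the weak scale,
    m_h^2 = 2\,\lambda(m_h)\, v^2, \qquad v \simeq 246~\mathrm{GeV},
    % and demanding that lambda remain perturbative under
    % renormalisation-group running up to the unification scale Lambda
    % bounds m_h from above.
    ```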

    Radiative Corrections to the Higgs Boson Mass for a Hierarchical Stop Spectrum

    An effective theory approach is used to compute analytically the radiative corrections to the mass of the light Higgs boson of the Minimal Supersymmetric Standard Model when there is a hierarchy in the masses of the stops ($m_{\tilde t_1} \gg m_{\tilde t_2} \gg m_t$, with moderate stop mixing). The calculation includes up to two-loop leading and next-to-leading logarithmic corrections dependent on the QCD and top-Yukawa couplings, and is further completed by two-loop non-logarithmic corrections extracted from the effective potential. The results presented disagree already at the two-loop leading-log level with widely used findings of the previous literature. Our formulas can be used as the starting point for a full numerical resummation of logarithmic corrections to all loops, which would be mandatory if the hierarchy between the stop masses is large.

    Comment: 42 pages, LaTeX, 13 figures
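
    For orientation, the familiar single-scale one-loop leading-log correction is sketched below; the paper generalises this to a split stop spectrum and discusses resummation, so this formula is only a reference point.

    ```latex
    % One-loop leading-log stop correction for a common stop scale M_S
    % (v \simeq 246 GeV; the paper's split-spectrum results refine this):
    \Delta m_h^2 \;\simeq\; \frac{3\, m_t^4}{4\pi^2 v^2}\,
      \ln\frac{M_S^2}{m_t^2}.
    ```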

    SO(10) unified models and soft leptogenesis

    Motivated by the fact that, in some realistic models combining SO(10) GUTs and flavour symmetries, it is not possible to achieve the required baryon asymmetry through the CP asymmetry generated in the decay of right-handed neutrinos, we take a fresh look at how deep this connection is in SO(10). The common characteristics of these models are that they use the see-saw mechanism with right-handed neutrinos, they predict a normal hierarchy for the masses of the neutrinos observed in oscillation experiments, and, in the basis where the right-handed Majorana mass matrix is diagonal, the charged-lepton mixings are tiny. In addition, these models link the up-quark Yukawa matrix to the neutrino Yukawa matrix $Y^\nu$, with the special feature that $Y^\nu_{11} \to 0$. Using this condition, we find that the required baryon asymmetry of the Universe can be explained by soft leptogenesis using the soft B parameter of the second-lightest right-handed neutrino, whose mass turns out to be around 10^8 GeV. It is pointed out that a natural way to do so is to use no-scale supergravity, where the value of B ~ 1 GeV is set through gauge-loop corrections.

    Comment: 26 pages, 2 figures. Added references, new appendix with a relevant fit, and improved comments
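
    The see-saw structure these models rely on is the standard type-I relation, sketched below with conventions suppressed; only the features quoted in the abstract are assumed.

    ```latex
    % Type-I see-saw for the light neutrino mass matrix (sign and
    % normalisation conventions vary):
    m_\nu \;\simeq\; -\, v_u^2\, Y^\nu\, M_R^{-1}\, (Y^\nu)^T,
    % with M_R diagonal in the basis described above, Y^\nu linked to
    % the up-quark Yukawa matrix, and Y^\nu_{11} -> 0.
    ```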

    A probabilistic compressive sensing framework with applications to ultrasound signal processing

    The field of Compressive Sensing (CS) has provided algorithms to reconstruct signals from a much lower number of measurements than specified by the Nyquist-Shannon theorem. There are two fundamental concepts underpinning the field of CS. The first is the use of random transformations to project high-dimensional measurements onto a much lower-dimensional domain. The second is the use of sparse regression to reconstruct the original signal. This assumes that a sparse representation exists for this signal in some known domain, manifested by a dictionary. The original formulation of CS specifies the use of an $\ell_1$-penalised regression method, the Lasso. While this has worked well in the literature, it suffers from two main drawbacks. First, the level of sparsity must be specified by the user or tuned using sub-optimal approaches. Secondly, and most importantly, the Lasso is not probabilistic; it cannot quantify uncertainty in the signal reconstruction. This paper aims to address these two issues; it presents a framework for performing compressive sensing based on sparse Bayesian learning. Specifically, the proposed framework introduces the use of the Relevance Vector Machine (RVM), an established sparse kernel regression method, as the signal reconstruction step within the standard CS methodology. The framework is developed with ultrasound signal processing in mind, and so examples and results of compression and reconstruction of ultrasound pulses are presented. The dictionary learning strategy is key to the successful application of any CS framework, and even more so in the probabilistic setting used here; therefore, a detailed discussion of this step is also included in the paper. The key contributions of this paper are a computationally efficient framework for a Bayesian approach to compressive sensing, alongside a discussion of uncertainty quantification in CS and of different strategies for dictionary learning. The methods are demonstrated on an example dataset collected from an aerospace composite panel. Quantifying uncertainty in the signal reconstruction reveals that it grows as the level of compression increases. This is key when deciding on appropriate compression levels, or on whether to trust a reconstructed signal in applications of engineering and scientific interest.
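
    A minimal sketch of this pipeline is given below, assuming a DCT dictionary and using scikit-learn's ARDRegression (the sparse Bayesian learning machinery underlying the RVM) as the reconstruction step; the paper's own dictionary-learning strategy is not reproduced here.

    ```python
    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(0)
    N, M, K = 256, 64, 5                 # signal length, measurements, sparsity

    # Synthesis dictionary (assumed here: orthonormal DCT basis).
    D = idct(np.eye(N), axis=0, norm="ortho")
    c_true = np.zeros(N)
    c_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    x = D @ c_true                       # synthetic sparse "pulse"

    # Random projection onto M << N measurements, plus noise.
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ x + 0.01 * rng.standard_normal(M)

    # Sparse Bayesian reconstruction: regress y on the projected dictionary.
    model = ARDRegression(fit_intercept=False)
    model.fit(Phi @ D, y)
    x_hat = D @ model.coef_              # reconstructed signal

    # Predictive uncertainty (measurement domain); it grows as M shrinks,
    # i.e. as the compression level increases.
    _, y_std = model.predict(Phi @ D, return_std=True)
    print(np.linalg.norm(x - x_hat) / np.linalg.norm(x), y_std.mean())
    ```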