Light Higgsino in Heavy Gravitino Scenario with Successful Electroweak Symmetry Breaking
We consider, in the context of the minimal supersymmetric standard model, the
case where the gravitino weighs 10^6 GeV or more, a range favored for evading
the cosmological difficulties associated with unstable gravitinos. Despite the
large Higgs mixing parameter B and its little hierarchy relative to the other
soft supersymmetry-breaking masses, a light higgsino with an electroweak-scale mass
leads to successful electroweak symmetry breaking, at the price of fine-tuning
the higgsino mixing mu parameter. Furthermore, the light higgsinos produced in
the decays of gravitinos can constitute the dark matter of the universe. The
heavy squark mass spectrum of O(10^4) GeV can increase the Higgs boson mass to
about 125 GeV or higher. Comment: 13 pages, 3 figures; v2: version to appear in JHEP.
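The fine-tuning of mu referred to above can be seen from the standard tree-level electroweak symmetry breaking condition of the MSSM (a textbook relation, not quoted in the abstract):

```latex
\frac{M_Z^2}{2} \;=\; \frac{m_{H_d}^2 - m_{H_u}^2 \tan^2\beta}{\tan^2\beta - 1} \;-\; |\mu|^2 .
```

With soft masses of O(10^4) GeV on the right-hand side, reproducing M_Z ~ 91 GeV requires |mu|^2 to cancel the large soft terms to high accuracy; that cancellation is the fine-tuning which leaves a higgsino at the electroweak scale.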
Absolutely stable proton and lowering the gauge unification scale
A unified model is constructed, based on flipped SU(5), in which the proton is absolutely stable. The model requires the existence of new leptons with masses of order the weak scale. The possibility that the unification scale could be extremely low is also discussed.
Testing the Nambu-Goldstone Hypothesis for Quarks and Leptons at the LHC
The hierarchy of the Yukawa couplings is an outstanding problem of the
standard model. We present a class of models in which the first and second
generation fermions are SUSY partners of pseudo-Nambu-Goldstone bosons that
parameterize a non-compact Kahler manifold, explaining the small values of
these fermion masses relative to those of the third generation. We also provide
an example of such a model. We find that various regions of the parameter space
in this scenario can give the correct dark matter abundance, and that nearly
all of these regions evade other phenomenological constraints. We show that for
gluino mass ~700 GeV, model points from these regions can be easily
distinguished from other mSUGRA points at the LHC with only 7 fb^(-1) of
integrated luminosity at 14 TeV. The most striking signatures are a dearth of
b- and tau-jets, a great number of multi-lepton events, and either an
"inverted" slepton mass hierarchy, narrowed slepton mass hierarchy, or
characteristic small-mu spectrum. Comment: Corresponds to published version.
Long-lived stops in MSSM scenarios with a neutralino LSP
This work investigates the possibility of a long-lived stop squark in
supersymmetric models with the neutralino as the lightest supersymmetric
particle (LSP). We study the implications of meta-stable stops on the sparticle
mass spectra and the dark matter density. We find that in order to obtain a
sufficiently long stop lifetime so as to be observable as a stable R-hadron at
an LHC experiment, we need to fine-tune the mass degeneracy between the stop
and the LSP considerably. This increases the stop-neutralino coannihilation
cross section, leaving the neutralino relic density lower than what is expected
from the WMAP results for stop masses ~1.5 TeV/c^2. However, if such scenarios
are realised in nature we demonstrate that the long-lived stops will be
produced at the LHC and that stop-based R-hadrons with masses up to 1 TeV/c^2
can be detected after one year of running at design luminosity.
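The sensitivity of the relic density to the stop-neutralino mass splitting can be illustrated with the standard Griest-Seckel effective cross section used in coannihilation calculations. The sketch below uses arbitrary cross-section units, and all numerical values in the usage note are hypothetical:

```python
import math

def sigma_eff(x, delta, sig_chichi, sig_chist, sig_stst, g_chi=2, g_st=6):
    """Griest-Seckel effective annihilation cross section for a neutralino
    (chi) coannihilating with a stop; x = m_chi/T and
    delta = (m_stop - m_chi)/m_chi.  The stop's Boltzmann weight carries an
    exp(-x*delta) suppression, so a small mass splitting strongly enhances
    the effective cross section."""
    w_chi = g_chi                                              # neutralino weight
    w_st = g_st * (1.0 + delta) ** 1.5 * math.exp(-x * delta)  # stop weight
    g_eff = w_chi + w_st
    return (sig_chichi * w_chi ** 2
            + 2.0 * sig_chist * w_chi * w_st
            + sig_stst * w_st ** 2) / g_eff ** 2
```

Because stop-stop annihilation proceeds through the strong interaction (sig_stst much larger than sig_chichi), shrinking delta raises sigma_eff and hence lowers the relic density, which is the effect the abstract describes.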
Source description and sampling techniques in PEREGRINE Monte Carlo calculations of dose distributions for radiation oncology
We outline the techniques used within PEREGRINE, a 3D Monte Carlo code calculation system, to model the photon output from medical accelerators. We discuss the methods used to reduce the phase-space data to a form that is accurately and efficiently sampled.
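The abstract does not spell out the reduction scheme, but one common way to make binned phase-space data accurately and efficiently sampled is inverse-CDF sampling from the stored histogram. The sketch below (all names hypothetical, not PEREGRINE's actual code) shows the idea for a one-dimensional binned energy spectrum:

```python
import bisect
import random

def build_sampler(bin_edges, weights):
    """Return a function that draws energies from a binned spectrum by
    inverse-CDF sampling: pick a bin with probability proportional to its
    weight, then sample uniformly within that bin."""
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0                       # guard against float round-off
    def sample(rng=random):
        i = bisect.bisect_left(cdf, rng.random())   # choose the bin
        lo, hi = bin_edges[i], bin_edges[i + 1]
        return lo + rng.random() * (hi - lo)        # uniform within the bin
    return sample
```

The same idea extends to the multi-dimensional distributions (energy, position, direction) a real phase-space file stores.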
A Profile Likelihood Analysis of the Constrained MSSM with Genetic Algorithms
The Constrained Minimal Supersymmetric Standard Model (CMSSM) is one of the
simplest and most widely-studied supersymmetric extensions to the standard
model of particle physics. Nevertheless, current data do not sufficiently
constrain the model parameters in a way completely independent of priors,
statistical measures and scanning techniques. We present a new technique for
scanning supersymmetric parameter spaces, optimised for frequentist profile
likelihood analyses and based on Genetic Algorithms. We apply this technique to
the CMSSM, taking into account existing collider and cosmological data in our
global fit. We compare our method to the MultiNest algorithm, an efficient
Bayesian technique, paying particular attention to the best-fit points and
implications for particle masses at the LHC and dark matter searches. Our
global best-fit point lies in the focus point region. We find many
high-likelihood points in both the stau co-annihilation and focus point
regions, including a previously neglected section of the co-annihilation region
at large m_0. We show that there are many high-likelihood points in the CMSSM
parameter space commonly missed by existing scanning techniques, especially at
high masses. This has a significant influence on the derived confidence regions
for parameters and observables, and can dramatically change the entire
statistical inference of such scans. Comment: 47 pages, 8 figures; Fig. 8, Table 7 and more discussion added to
Sec. 3.4.2 in response to referee's comments; accepted for publication in
JHEP.
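As a concrete illustration of the kind of scanner described (not the authors' actual implementation, which per the abstract is optimised for profile-likelihood scans of the CMSSM), here is a minimal genetic algorithm that maximizes a log-likelihood over a box-bounded parameter space; the population size, selection scheme, and mutation scale are arbitrary choices:

```python
import random

def ga_maximize(loglike, bounds, pop_size=60, gens=80, mut=0.1, seed=1):
    """Minimal genetic algorithm: keep the best quarter of the population,
    recombine pairs of elites, and mutate within the bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loglike, reverse=True)
        elite = pop[: pop_size // 4]                      # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]  # blend crossover
            for i, (lo, hi) in enumerate(bounds):          # Gaussian mutation
                child[i] += rng.gauss(0.0, mut * (hi - lo))
                child[i] = min(max(child[i], lo), hi)
            children.append(child)
        pop = elite + children
    return max(pop, key=loglike)
```

Keeping the elites unmutated means the best point found never degrades, which is what makes such a scan suitable for profile-likelihood analyses: the quantity of interest is the maximum likelihood reached in each slice of parameter space.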
Treatment of patient-dependent beam modifiers in photon treatments by the Monte Carlo dose calculation code PEREGRINE
The goal of the PEREGRINE Monte Carlo Dose Calculation Project is to deliver a Monte Carlo package that is both accurate and sufficiently fast for routine clinical use. One of the operational requirements for photon-treatment plans is a fast, accurate method of describing the photon phase-space distribution at the surface of the patient. The open-field case is computationally the most tractable; we know, a priori, for a given machine and energy, the locations and compositions of the relevant accelerator components (i.e., target, primary collimator, flattening filter, and monitor chamber). Therefore, we can precalculate and store the expected photon distributions. For any open-field treatment plan, we then evaluate these existing photon phase-space distributions at the patient's surface, and pass the obtained photons to the dose calculation routines within PEREGRINE. We neglect any effect of the intervening air column, including attenuation of the photons and production of contaminant electrons. In principle, for treatment plans requiring jaws, blocks, and wedges, we could precalculate and store photon phase-space distributions for various combinations of field sizes and wedges. This has the disadvantage that we would have to anticipate those combinations and that subsequently PEREGRINE would not be able to treat other plans. Therefore, PEREGRINE tracks photons through the patient-dependent beam modifiers. The geometric and physics methods used to do this are described here. 4 refs., 8 figs.
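The abstract only names the approach; as a heavily simplified illustration of what tracking photons through a patient-dependent modifier involves, the sketch below applies exponential attenuation through a wedge of linearly varying thickness. The attenuation coefficient, the geometry, and the neglect of scatter are all simplifying assumptions of this sketch, not PEREGRINE's actual physics:

```python
import math
import random

def survives_wedge(x, mu, t_max, width, rng=random):
    """Decide whether a photon crossing a wedge at lateral position x
    (0 <= x <= width) survives.  The wedge thickness grows linearly,
    t(x) = t_max * x / width, and the photon survives attenuation with
    probability exp(-mu * t(x)); scattered photons are simply discarded."""
    t = t_max * x / width
    return rng.random() < math.exp(-mu * t)
```

A Monte Carlo estimate of the transmitted fluence behind any point of the wedge then follows by counting survivors among many sampled photons.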
Models of quintessence coupled to the electromagnetic field and the cosmological evolution of alpha
We study the change of the effective fine structure constant in cosmological
models of a scalar field with a non-vanishing coupling to the
electromagnetic field. Combining cosmological data and terrestrial observations
we place empirical constraints on the size of the possible coupling and explore
a large class of models that exhibit tracking behavior. The change of the fine
structure constant implied by the quasar absorption spectra, together with the
requirement of tracking behavior, imposes a lower bound on the size of this
coupling. Furthermore, the transition to the quintessence regime implies a
narrow window for this coupling, in units of the inverse Planck
mass. We also propose a non-minimal coupling between electromagnetism and
quintessence which has the effect of leading only to changes of alpha
determined from atomic physics phenomena, but leaving no observable
consequences through nuclear physics effects. In doing so we are able to
reconcile the claimed cosmological evidence for a changing fine structure
constant with the tight constraints emerging from the Oklo natural nuclear
reactor. Comment: 13 pages, 10 figures, RevTeX; new references added.
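A standard way to write the coupling studied in this class of models (schematic; conventions vary between papers) is a Bekenstein-type gauge kinetic function:

```latex
\mathcal{L} \supset -\frac{1}{4}\, B_F(\phi)\, F_{\mu\nu}F^{\mu\nu},
\qquad
\alpha_{\rm eff}(\phi) = \frac{\alpha}{B_F(\phi)},
\qquad
\frac{\Delta\alpha}{\alpha} \simeq -\,\Delta B_F
\quad \text{for } |\Delta B_F| \ll 1 .
```

For a linear gauge kinetic function B_F(phi) = 1 + zeta*kappa*(phi - phi_0), an observed Delta alpha / alpha directly constrains the slope zeta in units of the inverse Planck mass kappa, which is the coupling the abstract bounds.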
Implications of a Modified Higgs to Diphoton Decay Width
Motivated by recent results from Higgs searches at the Large Hadron Collider,
we consider possibilities to enhance the diphoton decay width of the Higgs
boson over the Standard Model expectation, without modifying either its
production rate or the partial widths in the WW and ZZ channels. Studying
effects of new charged scalars, fermions and vector bosons, we find that
significant variations in the diphoton width may be possible if the new
particles have light masses of the order of a few hundred GeV and sizeable
couplings to the Higgs boson. Such couplings could arise naturally if there is
large mass mixing between two charged particles that is induced by the Higgs
vacuum expectation value. In addition, there is generically also a shift in the
Z + Gamma partial width, which in the case of new vector bosons tends to be of
similar magnitude as the shift in the diphoton partial width, but smaller in
other cases. Therefore simultaneous measurements in these two channels could
reveal properties of new charged particles at the electroweak scale. Comment: 29 pages, 8 figures; v2: updated references and minor improvements in
presentation; v3: sign of the scalar contribution to Z+Gamma amplitudes
fixed. Related figures updated.
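The statement that large Higgs-induced mass mixing can generate a sizeable diphoton coupling can be made quantitative with the low-energy Higgs theorem (a standard result; normalization conventions differ across the literature):

```latex
\mathcal{L}_{h\gamma\gamma} \;\simeq\; \frac{\alpha}{16\pi}\,\frac{h}{v}
\left(\sum_i b_i\, Q_i^2\,
\frac{\partial}{\partial \log v}\,
\log \det \mathcal{M}_i^\dagger \mathcal{M}_i(v)\right)
F_{\mu\nu}F^{\mu\nu},
```

where b_i is the QED beta-function coefficient of species i and M_i(v) its mass matrix. A v-dependent mixing between two charged states makes the logarithmic derivative of det(M†M) large, which is the enhancement mechanism described in the abstract.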
Dilaton dominance relaxes LHC and cosmological constraints in supersymmetric models
It has been pointed out recently that the presence of a dilaton field in the
early Universe can dilute the neutralino dark matter (DM) abundance, through
its dissipative-like coupling to DM, if the Universe is not radiation dominated
at DM decoupling. In this scenario two basic mechanisms compete,
the modified Hubble expansion rate tending to increase the relic density and a
dissipative force that tends to decrease it. The net effect can lead to an
overall dramatic decrease of the predicted relic abundance, sometimes by
factors of O(10^2). This feature is rather generic,
independent of any particular assumption on the underlying string dynamics,
provided the dilaton dominates at early eras after the end of inflation but before
Big Bang Nucleosynthesis (BBN). The latter ensures that BBN is not upset by the
presence of the dilaton. In this paper, within the context of such a scenario,
we study the phenomenology of the constrained minimal supersymmetric standard model
(CMSSM) by taking into account all recent experimental constraints, including
those from the LHC searches. We find that the allowed parameter space is
greatly enlarged and includes regions that are beyond the reach of LHC. The
allowed regions are compatible with Direct Dark Matter searches, since the small
neutralino annihilation rates, which are now in accord with the cosmological
data on the relic density, imply small neutralino-nucleon cross sections below
the sensitivities of the Direct Dark Matter experiments. It is also important
that the new cosmologically accepted regions are compatible with Higgs boson
masses larger than 120 GeV, as indicated by the LHC experimental data.
The smaller annihilation cross sections needed to explain the WMAP data require
that the detector performance of current and planned indirect DM search
experiments using gamma rays be greatly improved in order to probe these
CMSSM regions. Comment: 20 pages, 10 eps figures. Revised and extended version
to appear in JHEP; a section on gamma rays added.
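The competition between the two mechanisms can be illustrated with a toy Boltzmann integration in x = m/T for the yield Y = n/s: a boosted Hubble rate enters as h_boost, and the dissipative dilaton coupling as a rate Gamma/H. Everything here (the schematic equilibrium yield, the parameter values, the Euler stepping) is a toy illustration of the competition described above, not the dynamics of the paper:

```python
import math

def relic_yield(lam, h_boost=1.0, gamma_over_h=0.0,
                x_start=10.0, x_end=1000.0, steps=20000):
    """Toy freeze-out integration of
    dY/dx = -(lam / (h_boost * x^2)) * (Y^2 - Yeq^2) - (gamma_over_h / x) * Y
    with a schematic Yeq ~ x^{3/2} e^{-x}.  A larger h_boost (faster
    expansion) raises the final yield; a nonzero dissipative rate drains it."""
    dlnx = math.log(x_end / x_start) / steps
    x = x_start
    y = x ** 1.5 * math.exp(-x)          # start on the equilibrium curve
    for _ in range(steps):
        y_eq = x ** 1.5 * math.exp(-x)
        dydx = (-(lam / (h_boost * x * x)) * (y * y - y_eq * y_eq)
                - (gamma_over_h / x) * y)
        y += dydx * x * dlnx             # dx = x * dlnx on a log grid
        x *= math.exp(dlnx)
    return y
```

Running it with a hypothetical annihilation strength lam = 1e7, the h_boost = 100 case freezes out earlier and lands well above the standard yield, while switching on gamma_over_h pulls it back down, mirroring the competition the abstract describes.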