
    What next for the CMSSM and the NUHM: Improved prospects for superpartner and dark matter detection

    We present an updated analysis of the CMSSM and the NUHM using the latest experimental data and numerical tools. We map out favored regions of Bayesian posterior probability in light of data from the LHC, flavor observables, the relic density and dark matter searches. Relative to our previous analyses we include several updates: we include the effects of corrections to the light Higgs mass beyond the 2-loop order using FeynHiggs v2.10.0; we include in the likelihood the latest limits from direct searches for squarks and gluinos at ATLAS with ~20/fb; and the latest constraints on the spin-independent scattering cross section of the neutralino from LUX are applied, taking into account uncertainties in the nuclear form factors. We find that in the CMSSM the posterior distribution now tends to favor smaller values of Msusy than in the previous analyses. As a consequence, the statistical weight of the A-resonance region increases to about 30% of the total probability, with interesting new prospects for the 14 TeV run at the LHC. The most favored region, on the other hand, still features multi-TeV squarks and gluinos, and ~1 TeV higgsino dark matter whose prospects for detection by current and one-tonne detectors look very promising. The same region is predominant in the NUHM, although the A-resonance region is also present there, as well as a new solution of neutralino-stau coannihilation through the channel stau stau -> hh at very large \mu. We derive the expected sensitivity of the future CTA experiment to ~1 TeV higgsino dark matter and show that the prospects for probing both models are realistically good. We comment on the complementarity of this search with planned one-tonne direct detection experiments. Comment: 37 pages, 15 figures. Appendix added showing the future constraints on the CMSSM, including an updated calculation of the sensitivity of CTA presented in arXiv:1411.521
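
    As an illustration of how a region's statistical weight can be read off a Bayesian scan of this kind, the sketch below computes the posterior mass contained in a hypothetical A-resonance selection from weighted posterior samples. It is a minimal Python example; the input file, the column layout and the 2*m_chi ~ m_A window are assumptions made for the illustration, not the paper's actual pipeline.

        import numpy as np

        # Hypothetical posterior samples from a CMSSM scan: each row holds a
        # posterior weight plus two derived masses in GeV (column layout assumed).
        samples = np.loadtxt("cmssm_posterior_samples.txt")   # hypothetical file
        weight, m_chi, m_A = samples[:, 0], samples[:, 1], samples[:, 2]

        # A-resonance (A-funnel) region: annihilation through the pseudoscalar A
        # is efficient when 2*m_chi is close to m_A (a 10% window is assumed here).
        in_A_funnel = np.abs(2.0 * m_chi - m_A) < 0.1 * m_A

        # Statistical weight of the region = fraction of the total posterior mass.
        region_weight = weight[in_A_funnel].sum() / weight.sum()
        print(f"A-resonance region holds {100 * region_weight:.1f}% of the posterior")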

    Cumulant mapping as the basis of multi-dimensional spectrometry

    Cumulant mapping employs a statistical reconstruction of the whole by sampling its parts. The theory developed in this work formalises and extends ad hoc methods of `multi-fold' or `multi-dimensional' covariance mapping. Explicit formulae have been derived for the expected values of up to the 6th cumulant, and the variance has been calculated for up to the 4th cumulant. A method of extending these formulae to higher cumulants is described. The formulae take into account reduced fragment detection efficiency and a background from uncorrelated events. The number of samples needed to suppress the statistical noise to a required level can be estimated using a Matlab code included in the Supplemental Material. The theory can be used to assess the experimental feasibility of studying molecular fragmentations induced by femtosecond or x-ray free-electron lasers. It is also relevant for extending the conventional mass spectrometry of biomolecules to multiple dimensions. Comment: 13 pages + Popular summary, 6 figures
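
    For readers unfamiliar with the covariance mapping that this work generalises, the sketch below builds a plain two-fold covariance map from simulated shot-by-shot spectra. It is a minimal Python illustration of the lowest-order (second-cumulant) case only, not the Matlab code from the Supplemental Material; the number of shots, spectrum length and correlated-pair model are invented for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        n_shots, n_bins = 20000, 64          # invented dimensions

        # Simulate shot-by-shot spectra: uncorrelated Poisson background plus a
        # correlated pair of fragments appearing together in bins 10 and 40.
        spectra = rng.poisson(0.3, size=(n_shots, n_bins)).astype(float)
        pair = rng.poisson(0.2, size=n_shots).astype(float)
        spectra[:, 10] += pair
        spectra[:, 40] += pair

        # Two-fold covariance map: cov(X, Y) = <XY> - <X><Y>, averaged over shots.
        mean = spectra.mean(axis=0)
        cov_map = spectra.T @ spectra / n_shots - np.outer(mean, mean)

        # The correlated fragments appear as an off-diagonal island at (10, 40).
        print(f"cov[10, 40] = {cov_map[10, 40]:.3f}, background ~ {cov_map[5, 50]:.3f}")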

    Blind Spots for Direct Detection with Simplified DM Models and the LHC

    Using the existing simplified model framework, we build several dark matter models which have a suppressed spin-independent scattering cross section. We show that the scattering cross section can vanish due to interference effects in models obtained by simple combinations of simplified models. For weakly interacting massive particle (WIMP) masses ≳ 10 GeV, collider limits are usually much weaker than the direct detection limits coming from LUX or XENON100. However, for our model combinations, analyses at the Large Hadron Collider (LHC) are more competitive in some parts of the parameter space. The regions with direct detection blind spots can be strongly constrained through the complementary use of several LHC searches, such as mono-jet, jets + missing transverse energy, and heavy vector resonance searches. We evaluate the strongest limits for combinations of scalar + vector, "squark" + vector, and scalar + "squark" mediators, and present the LHC 14 TeV projections. Comment: 9 pages, talk presented at the conference "Varying Constants and Fundamental Cosmology - VARCOSMOFUN'16" (Szczecin, Poland), published in Universe (proceedings of VARCOSMOFUN'16)
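
    A schematic worked equation may help convey the interference mechanism behind such blind spots; the couplings g_1, g_2 and mediator masses M_1, M_2 below are generic illustration symbols, not notation taken from the paper.

        % Schematic spin-independent WIMP-nucleon amplitude with two mediators.
        % At low momentum transfer each mediator contributes a contact-like term
        % ~ g_i^2 / M_i^2; with opposite relative sign the two terms can interfere.
        \begin{equation}
          \mathcal{A}_{\rm SI} \;\propto\; \frac{g_1^2}{M_1^2} - \frac{g_2^2}{M_2^2},
          \qquad
          \sigma_{\rm SI} \;\propto\; \left|\mathcal{A}_{\rm SI}\right|^2 .
        \end{equation}
        % Blind spot: the direct detection rate vanishes when g_1^2/M_1^2 = g_2^2/M_2^2,
        % while collider signatures (mono-jet, jets + MET, resonances) remain.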

    Bayesian Implications of Current LHC Supersymmetry and Dark Matter Detection Searches for the Constrained MSSM

    We investigate the impact of recent limits from LHC searches for supersymmetry and from direct and indirect searches for dark matter on global Bayesian inferences of the parameter space of the Constrained Minimal Supersymmetric Standard Model (CMSSM). In particular, we apply recent exclusion limits from the CMS \alpha_T analysis of 1.1/fb of integrated luminosity, the current direct detection dark matter limit from XENON100, and recent experimental constraints on \gamma-ray fluxes from dwarf spheroidal satellite galaxies of the Milky Way from the Fermi-LAT telescope, in addition to updating values for other non-LHC experimental constraints. We extend the range of scanned parameters to include a significant fraction of the focus point/hyperbolic branch region. While we confirm earlier conclusions that at present the LHC limits provide the strongest constraints on the model's parameters, we also find that, when the uncertainties are not treated in an excessively conservative way, the new bounds from dwarf spheroidal galaxies have the power to significantly constrain the focus point/hyperbolic branch region. Their effect is then comparable to, if not stronger than, that from XENON100. We further analyze the effect of the projected one-year sensitivity to the neutrino flux from the Sun of the 86-string IceCube+DeepCore configuration at the South Pole. We show that data on neutrinos from the Sun, expected over the next few months at IceCube and DeepCore, have the potential to further constrain the same region of parameter space independently of the LHC and can yield additional investigating power for the model. Comment: 27 pages, 7 figures, version published in PR
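
    As a toy illustration of how heterogeneous constraints enter such a global Bayesian fit, the sketch below combines a Gaussian likelihood for a measured observable with a smeared step-function likelihood for an experimental upper limit. The functional forms, observable names and numerical values are generic assumptions for the example, not the exact likelihoods used in the paper.

        import numpy as np
        from scipy.stats import norm

        def lnlike_gaussian(pred, measured, sigma):
            """Gaussian likelihood for a measured observable (e.g. the relic density)."""
            return norm.logpdf(pred, loc=measured, scale=sigma)

        def lnlike_upper_limit(pred, limit, sigma):
            """Smeared step function for an upper limit (e.g. a scattering cross
            section): close to 1 well below the limit, falling off smoothly above it."""
            return np.log(norm.sf(pred, loc=limit, scale=sigma) + 1e-300)

        def lnlike_total(point):
            """Composite log-likelihood for one hypothetical parameter-space point;
            'point' maps observable names to predictions (all numbers are invented)."""
            return (lnlike_gaussian(point["relic_density"], 0.119, 0.012)
                    + lnlike_upper_limit(point["sigma_SI_pb"], 1e-8, 3e-9))

        print(lnlike_total({"relic_density": 0.12, "sigma_SI_pb": 2e-9}))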

    Fuzzy sets predict flexural strength and density of silicon nitride ceramics

    In this work, we use fuzzy set theory to evaluate and predict the flexural strength and density of NASA 6Y silicon nitride ceramic. The processing variables of milling time, sintering time, and sintering nitrogen pressure are used as inputs to the fuzzy system; flexural strength and density are the output parameters of the system. Data from 273 Si3N4 modulus-of-rupture bars tested at room temperature and 135 bars tested at 1370 °C are used in this study. A generalized mean operator and the Hamming distance are used to build the fuzzy predictive model. The maximum test error does not exceed 3.3 percent for density and 7.1 percent for flexural strength, compared with errors of 1.72 percent and 11.34 percent, respectively, obtained using neural networks. These results demonstrate that fuzzy set theory can be incorporated into the process of designing materials such as ceramics, especially for assessing complex relationships between processing variables and properties, like strength, that are governed by the randomness of manufacturing processes.
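
    To make the two fuzzy operators named here concrete, the sketch below shows a generalized (power) mean used for aggregation and a normalised Hamming distance used to match a new processing condition against stored fuzzy prototypes. The membership values, the prototype data and the simple nearest-prototype prediction rule are illustrative assumptions, not the paper's exact model.

        import numpy as np

        def generalized_mean(values, p=2.0):
            """Generalized (power) mean; p=1 is arithmetic, p=-1 is harmonic."""
            values = np.asarray(values, dtype=float)
            return np.mean(values ** p) ** (1.0 / p)

        def hamming_distance(a, b):
            """Normalised Hamming distance between two fuzzy membership vectors."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            return np.mean(np.abs(a - b))

        # Hypothetical prototypes: memberships of (milling time, sintering time,
        # nitrogen pressure) in a "high" fuzzy set, with an associated strength (MPa).
        prototypes = {
            "batch_A": (np.array([0.9, 0.7, 0.8]), 640.0),
            "batch_B": (np.array([0.3, 0.5, 0.4]), 510.0),
        }

        new_condition = np.array([0.8, 0.6, 0.9])          # invented test condition
        # Predict strength from the closest prototype in Hamming distance.
        best = min(prototypes.values(),
                   key=lambda pv: hamming_distance(new_condition, pv[0]))
        aggregate = generalized_mean(new_condition, p=2.0)  # aggregated membership
        print(f"predicted strength ~ {best[1]} MPa, aggregate membership {aggregate:.2f}")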

    Ge-substitutional defects and the √3x√3 <--> 3x3 transition in alpha-Sn/Ge(111)

    The structure and energetics of Ge substitutional defects on the alpha-Sn/Ge(111) surface are analyzed using Density Functional Theory (DFT) molecular dynamics (MD) simulations. An isolated Ge defect induces a very local distortion of the 3x3 reconstruction, confined to a significant downward displacement (-0.31 A) at the defect site and a modest upward displacement (0.05 A) of the three Sn nearest neighbours with partially occupied dangling bonds. Dynamical fluctuations between the two degenerate ground states yield the six-fold symmetry observed around a defect in room-temperature experiments. Defect-defect interactions are controlled by the energetics of the deformation of the 3x3 structure: they are negligible for defects on the honeycomb lattice and quite large for a third defect on the hexagonal lattice, explaining the low-temperature defect ordering. Comment: 4 pages, RevTeX, 7 Encapsulated PostScript figures, uses epsf.sty. Submitted to Phys. Rev. Lett.

    Error representation of the time-marching DPG scheme

    In this article, we introduce an error representation function to perform time adaptivity for the recently developed time-marching Discontinuous Petrov-Galerkin (DPG) scheme. We first provide an analytical expression for the error, namely the Riesz representation of the residual. Then, we approximate the error by enriching the test space so that it contains the optimal test functions. The local error contributions can be efficiently computed by adding a few equations to the time-marching scheme. We analyze the quality of this approximation by constructing a Fortin operator and providing an a posteriori error estimate. The time-marching scheme proposed in this article provides an optimal solution along with a set of efficient and reliable local error contributions for performing adaptivity. We validate our method on both parabolic and hyperbolic problems.
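
    The error representation referred to here is, in standard DPG terms, the Riesz representation of the residual in the test space; a generic statement is sketched below, with the symbols u_h, b, \ell and the test inner product (.,.)_V chosen for illustration rather than copied from the article.

        % Riesz representation of the residual: given the discrete trial solution u_h,
        % find the error representation function \varepsilon in the test space V with
        \begin{equation}
          (\varepsilon, v)_V \;=\; \ell(v) - b(u_h, v) \qquad \forall\, v \in V ,
        \end{equation}
        % so that the norm of \varepsilon equals the dual norm of the residual:
        \begin{equation}
          \|\varepsilon\|_V \;=\; \sup_{v \in V \setminus \{0\}}
            \frac{\ell(v) - b(u_h, v)}{\|v\|_V} .
        \end{equation}
        % In practice V is replaced by an enriched finite-dimensional subspace that
        % contains the (approximate) optimal test functions, and the local
        % contributions to \|\varepsilon\|_V drive the time adaptivity.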

    Ground state energy of the modified Nambu-Goto string

    We calculate, using the zeta function regularization method, the semiclassical energy of the Nambu-Goto string supplemented with a boundary Gauss-Bonnet term in the action, and discuss the tachyonic ground state problem. Comment: 10 pages, LaTeX, 2 figures
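
    As a reminder of how zeta function regularization works in its simplest setting, the standard open-string mode-sum example is sketched below; it illustrates the method only and is not the modified Nambu-Goto computation carried out in the paper.

        % Semiclassical ground state energy as a divergent sum over mode frequencies,
        % here for the D-2 transverse oscillators of an open string of length L:
        \begin{equation}
          E_0 \;=\; \frac{D-2}{2}\sum_{n=1}^{\infty}\frac{\pi n}{L}
              \;\longrightarrow\; \frac{(D-2)\,\pi}{2L}\,\zeta(-1)
              \;=\; -\,\frac{(D-2)\,\pi}{24\,L},
        \end{equation}
        % using the analytic continuation \zeta(-1) = -1/12 of the Riemann zeta function.
        % The paper applies the same regularization to the spectrum of the string with
        % the boundary Gauss-Bonnet term added to the action.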

    Prospects for dark matter searches in the pMSSM

    We investigate the prospects for detection of neutralino dark matter in the 19-parameter phenomenological MSSM (pMSSM). We explore very wide ranges of the pMSSM parameters but pay particular attention to the higgsino-like neutralino at the ~1 TeV scale, which has been shown to be a well motivated solution in many constrained supersymmetric models, as well as to a wino-dominated solution with mass in the range of 2-3 TeV. After summarising the present bounds on the parameter space from direct and indirect detection experiments, we focus on the prospects for detection with the Cherenkov Telescope Array (CTA). To this end, we derive a realistic assessment of the sensitivity of CTA to photon fluxes from dark matter annihilation by means of a binned likelihood analysis for the Einasto and Navarro-Frenk-White halo profiles. We use the most up-to-date instrument response functions and background simulation model provided by the CTA Collaboration. We find that, with 500 hours of observation and under the Einasto profile, CTA will be able to exclude at the 95% C.L. almost all of the ~1 TeV higgsino region of the pMSSM, effectively closing the window for heavy supersymmetric dark matter in many realistic models. CTA will be able to probe the vast majority of cases corresponding to a spin-independent scattering cross section below the reach of one-tonne underground detector searches for dark matter, in fact even well below the irreducible neutrino background for direct detection. On the other hand, many points lying beyond the sensitivity of CTA will be within the reach of one-tonne detectors, and some within collider reach. Altogether, CTA will provide a highly sensitive way of searching for dark matter that is partially overlapping and partially complementary with one-tonne detector and collider searches, and will thus be instrumental in effectively exploring nearly the full parameter space of the pMSSM. Comment: 35 pages, 14 figures, minor corrections and citations added, version to appear in JHE
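
    As a minimal illustration of the kind of binned likelihood analysis mentioned here, the sketch below forms a Poisson likelihood ratio between a background-only and a signal-plus-background hypothesis over a few energy bins. The bin contents are invented and the statistical treatment is far simpler than the full CTA analysis (no nuisance parameters, no morphological binning).

        import numpy as np
        from scipy.stats import poisson

        # Invented expected counts per energy bin for 500 h of observation.
        background = np.array([120.0, 80.0, 45.0, 20.0, 8.0])
        signal     = np.array([  6.0,  9.0,  7.0,  3.0, 1.0])  # hypothetical DM signal

        observed = np.random.default_rng(1).poisson(background)  # background-only pseudo-data

        def lnL(expected, data):
            """Binned Poisson log-likelihood."""
            return poisson.logpmf(data, expected).sum()

        # Likelihood-ratio test statistic for excluding the signal hypothesis.
        ts = 2.0 * (lnL(background, observed) - lnL(background + signal, observed))
        print(f"test statistic TS = {ts:.2f}  (TS > ~2.71 corresponds to a one-sided 95% C.L.)")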