    Processor Allocation for Optimistic Parallelization of Irregular Programs

    Optimistic parallelization is a promising approach for the parallelization of irregular algorithms: potentially interfering tasks are launched dynamically, and the runtime system detects conflicts between concurrent activities, aborting and rolling back conflicting tasks. However, parallelism in irregular algorithms is very complex. In a regular algorithm like dense matrix multiplication, the amount of parallelism can usually be expressed as a function of the problem size, so it is reasonably straightforward to determine how many processors should be allocated to execute a regular algorithm of a certain size (this is called the processor allocation problem). In contrast, parallelism in irregular algorithms can be a function of input parameters, and the amount of parallelism can vary dramatically during the execution of the irregular algorithm. Therefore, the processor allocation problem for irregular algorithms is very difficult. In this paper, we describe the first systematic strategy for addressing this problem. Our approach is based on a construct called the conflict graph, which (i) provides insight into the amount of parallelism that can be extracted from an irregular algorithm, and (ii) can be used to address the processor allocation problem for irregular algorithms. We show that this problem is related to a generalization of the unfriendly seating problem and, by extending Turán's theorem, we obtain a worst-case class of problems for optimistic parallelization, which we use to derive a lower bound on the exploitable parallelism. Finally, using some theoretically derived properties and some experimental facts, we design a quick and stable control strategy for solving the processor allocation problem heuristically. Comment: 12 pages, 3 figures, extended version of a SPAA 2011 brief announcement.
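
    As a rough illustration of the conflict-graph idea (not the paper's actual control strategy), a Turán-type bound guarantees that any conflict graph with n tasks and average degree d contains an independent set, i.e. a group of mutually non-conflicting tasks that can commit in parallel, of size at least n/(d+1). A minimal Python sketch of this estimate, with the adjacency-list input format and function names chosen here purely for illustration:

        def parallelism_lower_bound(conflict_graph):
            # conflict_graph: dict mapping task id -> set of conflicting task ids
            n = len(conflict_graph)
            avg_deg = sum(len(nbrs) for nbrs in conflict_graph.values()) / max(n, 1)
            return n / (avg_deg + 1)  # Turan-style lower bound on independent tasks

        def greedy_independent_tasks(conflict_graph):
            # Process tasks in increasing-degree order; keep a task only if none of
            # its conflicts was already kept.  The kept tasks can run and commit
            # concurrently without triggering any rollback.
            kept = set()
            for task in sorted(conflict_graph, key=lambda t: len(conflict_graph[t])):
                if conflict_graph[task].isdisjoint(kept):
                    kept.add(task)
            return kept

        # Toy conflict graph: task 0 conflicts with tasks 1 and 2; task 3 is free.
        g = {0: {1, 2}, 1: {0}, 2: {0}, 3: set()}
        print(parallelism_lower_bound(g))   # 4 / (1 + 1) = 2.0
        print(greedy_independent_tasks(g))  # {1, 2, 3}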

    Systematics of Coupling Flows in AdS Backgrounds

    We give an effective field theory derivation, based on the running of Planck brane gauge correlators, of the large logarithms that arise in the predictions for low energy gauge couplings in compactified AdS_5 backgrounds, including the one-loop effects of bulk scalars, fermions, and gauge bosons. In contrast to the case of charged scalars coupled to Abelian gauge fields that has been considered previously in the literature, the one-loop corrections are not dominated by a single 4D Kaluza-Klein mode. Nevertheless, in the case of gauge field loops, the amplitudes can be reorganized into a leading logarithmic contribution that is identical to the running in 4D non-Abelian gauge theory, and a term which is not logarithmically enhanced and is analogous to a two-loop effect in 4D. In a warped GUT model broken by the Higgs mechanism in the bulk, we show that the matching scale that appears in the large logarithms induced by the non-Abelian gauge fields is m_{XY}^2/k, where m_{XY} is the bulk mass of the XY bosons and k is the AdS curvature. This is in contrast to the UV scale in the logarithmic contributions of scalars, which is simply the bulk mass m. Our results are summarized in a set of simple rules that can be applied to compute the leading logarithmic predictions for coupling constant relations within a given warped GUT model. We present results for both bulk Higgs and boundary breaking of the GUT gauge group. Comment: 22 pages, LaTeX, 3 figures. Comments and references added.

    Kaluza-Klein supergravity on AdS_3 x S^3

    We construct a Chern-Simons type gauged N=8 supergravity in three spacetime dimensions with gauge group SO(4) x T_\infty over the infinite dimensional coset space SO(8,\infty)/(SO(8) x SO(\infty)), where T_\infty is an infinite dimensional translation subgroup of SO(8,\infty). This theory describes the effective interactions of the (infinitely many) supermultiplets contained in the two spin-1 Kaluza-Klein towers arising in the compactification of N=(2,0) supergravity in six dimensions on AdS_3 x S^3 with the massless supergravity multiplet. After the elimination of the gauge fields associated with T_\infty, one is left with a Yang-Mills type gauged supergravity with gauge group SO(4), and in the vacuum the symmetry is broken to the (super-)isometry group of AdS_3 x S^3, with infinitely many fields acquiring masses by a variant of the Brout-Englert-Higgs effect. Comment: LaTeX2e, 24 pages; v2: references updated.

    Orbifolds and Flows from Gauged Supergravity

    We examine orbifolds of the IIB string via gauged supergravity. For the gravity duals of the A_{n-1} quiver gauge theories, we extract the massless degrees of freedom and assemble them into multiplets of N=4 gauged supergravity in five dimensions. We examine the embedding of the gauge group into the isometry group of the scalar manifold, as well as the symmetries of the scalar potential. From this we find that there is a large SU(1,n) symmetry group which relates different RG flows in the dual quiver gauge theory. We find that this symmetry implies an extension of the usual duality between ten-dimensional IIB solutions which involves exchanging geometric moduli with background fluxes. Comment: 37 pages, harvmac.

    Charged AdS Black Holes and Catastrophic Holography

    We compute the properties of a class of charged black holes in anti-de Sitter space-time, in diverse dimensions. These black holes are solutions of consistent Einstein-Maxwell truncations of gauged supergravities, which are shown to arise from the inclusion of rotation in the transverse space. We uncover rich thermodynamic phase structures for these systems, which display classic critical phenomena, including structures isomorphic to the van der Waals-Maxwell liquid-gas system. In that case, the phases are controlled by the universal `cusp' and `swallowtail' shapes familiar from catastrophe theory. All of the thermodynamics is consistent with field theory interpretations via holography, where the dual field theories can sometimes be found on the world volumes of coincident rotating branes. Comment: 19 pages, revtex, psfig, 6 multicomponent figures; typos, references and a few remarks have been repaired and added.

    A Constrained Standard Model from a Compact Extra Dimension

    An SU(3) \times SU(2) \times U(1) supersymmetric theory is constructed with a TeV-sized extra dimension compactified on the orbifold S^1/(Z_2 \times Z_2'). The compactification breaks supersymmetry, leaving a set of zero modes which correspond precisely to the states of the one-Higgs-doublet standard model. Supersymmetric Yukawa interactions are localized at orbifold fixed points. The top quark hypermultiplet radiatively triggers electroweak symmetry breaking, yielding a Higgs potential which is finite and exponentially insensitive to physics above the compactification scale. This potential depends on only a single free parameter, the compactification scale, yielding a Higgs mass prediction of 127 \pm 8 GeV. The masses of all the superpartners and the Kaluza-Klein excitations are also predicted. The lightest supersymmetric particle is a top squark of mass 197 \pm 20 GeV. The top Kaluza-Klein tower leads to the \rho parameter having quadratic sensitivity to unknown physics in the ultraviolet. Comment: 31 pages, LaTeX, 2 eps figures, minor corrections.

    Slepton and Neutralino/Chargino Coannihilations in MSSM

    Within the low-energy effective Minimal Supersymmetric extension of the Standard Model (effMSSM), we calculated the neutralino relic density taking into account slepton-neutralino and neutralino-chargino/neutralino coannihilation channels. We performed a comparative study of these channels and found that both give sizable contributions to the reduction of the relic density. Due to these coannihilation processes, some models (mostly with large neutralino masses) enter the cosmologically interesting region for the relic density, while other models leave this region. Nevertheless, in general, the predictions for direct and indirect dark matter detection rates in the effMSSM are not strongly affected by these coannihilation channels. Comment: 12 pages, 9 figures, revtex.
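
    For context, relic-density calculations with coannihilation of this kind are conventionally organized around the Griest-Seckel effective annihilation cross section, in which each coannihilating species i is Boltzmann-suppressed according to its mass splitting from the lightest neutralino. This is the standard textbook form, not an expression quoted from the paper:

        \sigma_{\rm eff} = \sum_{i,j} \sigma_{ij}\,\frac{n_i^{\rm eq}\,n_j^{\rm eq}}{\big(\sum_k n_k^{\rm eq}\big)^2},
        \qquad
        \frac{n_i^{\rm eq}}{\sum_k n_k^{\rm eq}} \propto g_i\,(1+\Delta_i)^{3/2}\,e^{-x\Delta_i},
        \qquad
        \Delta_i = \frac{m_i - m_{\chi}}{m_{\chi}},\quad x = \frac{m_{\chi}}{T}.

    The exponential suppression explains why slepton and chargino/neutralino coannihilations matter only when the corresponding masses lie close to the neutralino mass.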

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} are able to do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information Based Complexity. Comment: 37 pages.
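
    The decision-theoretic formulation the abstract alludes to is Wald's worst-case (minimax) criterion: among admissible estimators/models, select one whose worst-case risk over the set of scenarios compatible with the available information is smallest. Schematically, with notation chosen here rather than taken from the paper,

        \theta^{\dagger} \in \operatorname*{arg\,min}_{\theta \in \Theta}\; \sup_{\mu \in \mathcal{A}} \mathcal{R}(\theta, \mu),
        \qquad
        \mathcal{R}(\theta, \mu) = \mathbb{E}_{X \sim \mu}\big[ L\big(\theta(X), \Phi(\mu)\big) \big],

    where \mathcal{A} is the (typically infinite-dimensional) set of admissible scenarios, \Phi(\mu) the quantity of interest, and L a loss function; the computational challenge described in points (1) and (2) is that of reducing this optimization over \mathcal{A} to a finite, well-posed problem.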

    Supersymmetric Dark Matter and Yukawa Unification

    An analysis of supersymmetric dark matter under the Yukawa unification constraint is given. The analysis utilizes the recently discovered region of the parameter space of models with gaugino mass nonuniversalities where large negative supersymmetric corrections to the b quark mass appear to allow $b$-$\tau$ unification for a positive $\mu$ sign consistent with the $b \to s + \gamma$ and $g_{\mu}-2$ constraints. In the present analysis we use the revised theoretical determination of $a_{\mu}^{SM}$ ($a_{\mu} = (g_{\mu}-2)/2$) in computing the difference $a_{\mu}^{exp} - a_{\mu}^{SM}$, which takes account of a reevaluation of the light-by-light contribution, which has a positive sign. The analysis shows that the region of the parameter space with nonuniversalities of the gaugino masses which allows for unification of Yukawa couplings also contains regions which allow satisfaction of the relic density constraint. Specifically, we find that the lightest neutralino mass consistent with the relic density constraint, $b$-$\tau$ unification for SU(5), and $b$-$t$-$\tau$ unification for SO(10), in addition to other constraints, lies in the region below 80 GeV. An analysis of the maximum and the minimum neutralino-proton scalar cross section for the allowed parameter space, including the effect of a new determination of the pion-nucleon sigma term, is also given. It is found that the full parameter space for this class of models can be explored in the next generation of proposed dark matter detectors. Comment: 28 pages, LaTeX, including 5 figures.

    Cosmological distance indicators

    We review three distance measurement techniques beyond the local universe: (1) gravitational lens time delays, (2) baryon acoustic oscillations (BAO), and (3) HI intensity mapping. We describe the principles and theory behind each method, the ingredients needed for measuring such distances, the current observational results, and future prospects. Time delays from strongly lensed quasars currently provide constraints on $H_0$ with < 4% uncertainty, with 1% within reach from ongoing surveys and efforts. Recent exciting discoveries of strongly lensed supernovae hold great promise for time-delay cosmography. BAO features have been detected in redshift surveys up to z <~ 0.8 with galaxies and z ~ 2 with the Ly-$\alpha$ forest, providing precise distance measurements and $H_0$ with < 2% uncertainty in flat $\Lambda$CDM. Future BAO surveys will probe the distance scale with percent-level precision. HI intensity mapping has great potential to map BAO distances at z ~ 0.8 and beyond with precision of a few percent. The next few years will be exciting as various cosmological probes reach 1% uncertainty in determining $H_0$, allowing an assessment of the current tension in $H_0$ measurements that could indicate new physics. Comment: Review article accepted for publication in Space Science Reviews (Springer), 45 pages, 10 figures. Chapter of a special collection resulting from the May 2016 ISSI-BJ workshop on Astronomical Distance Determination in the Space Age.
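
    As an illustration of the first technique, the standard time-delay cosmography relation (textbook form, not specific to this review) ties the measured delay between lensed images to a combination of angular diameter distances that scales inversely with $H_0$:

        \Delta t = \frac{D_{\Delta t}}{c}\,\Delta\phi,
        \qquad
        D_{\Delta t} \equiv (1 + z_{\rm d})\,\frac{D_{\rm d} D_{\rm s}}{D_{\rm ds}} \;\propto\; \frac{1}{H_0},

    where \Delta\phi is the Fermat-potential difference between the images and D_{\rm d}, D_{\rm s}, D_{\rm ds} are the angular diameter distances to the deflector, to the source, and from deflector to source; a measured \Delta t combined with a lens-mass model therefore yields D_{\Delta t} and hence H_0.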