
    The Fraternal WIMP Miracle

    We identify and analyze thermal dark matter candidates in the fraternal twin Higgs model and its generalizations. The relic abundance of fraternal twin dark matter is set by twin weak interactions, with a scale tightly tied to the weak scale of the Standard Model by naturalness considerations. As such, the dark matter candidates benefit from a "fraternal WIMP miracle," reproducing the observed dark matter abundance for dark matter masses between 50 and 150 GeV. However, the couplings dominantly responsible for dark matter annihilation do not lead to interactions with the visible sector. The direct detection rate is instead set via fermionic Higgs portal interactions, which are likewise constrained by naturalness considerations but parametrically weaker than those leading to dark matter annihilation. The predicted direct detection cross section is close to current LUX bounds and presents an opportunity for the next generation of direct detection experiments.
    Comment: 22 pages, 6 figures. v2: Relic abundance calculations revised and improved, citations added. Conclusions largely unchanged. v3: Minor changes, accepted by JCAP.
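    For orientation, the "WIMP miracle" logic invoked here is the standard thermal freeze-out estimate: the relic abundance scales inversely with the annihilation cross section, and a weak-strength coupling at a weak-scale mass lands near the required value. The expression below is only the textbook back-of-the-envelope form, not the paper's relic abundance calculation; the parametrization of the annihilation cross section is an illustrative assumption.

```latex
% Textbook thermal freeze-out estimate (not the paper's detailed calculation)
\Omega_\chi h^2 \;\approx\; 0.12 \,
  \frac{3\times 10^{-26}\ \mathrm{cm^3\,s^{-1}}}{\langle\sigma v\rangle},
\qquad
\langle\sigma v\rangle \;\sim\; \frac{\pi\,\alpha^2}{m_\chi^2}.
```

    For a coupling of (twin) weak-interaction strength and a mass of order 100 GeV, this estimate of the cross section falls within an order of magnitude of the thermal target, which is the sense in which a twin-weak-scale candidate naturally reproduces the observed abundance.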

    Naturalness in the Dark at the LHC

    We revisit the Twin Higgs scenario as a "dark" solution to the little hierarchy problem, identify the structure of a minimal model and its viable parameter space, and analyze its collider implications. In this model, dark naturalness generally leads to Hidden Valley phenomenology. The twin particles, including the top partner, are all Standard-Model-neutral, but naturalness favors the existence of twin strong interactions -- an asymptotically-free force that confines not far above the Standard Model QCD scale -- and a Higgs portal interaction. We show that, taken together, these typically give rise to exotic decays of the Higgs to twin hadrons. Across a substantial portion of the parameter space, certain twin hadrons have visible and often displaced decays, providing a potentially striking LHC signature. We briefly discuss appropriate experimental search strategies.
    Comment: 64 pages, 10 figures; v2: minor changes, references added.

    Geometry considerations for high-order finite-volume methods on structured grids with adaptive mesh refinement

    Computational fluid dynamics (CFD) is an invaluable tool for engineering design. Meshing complex geometries with accuracy and efficiency is vital to a CFD simulation. In particular, using structured grids with adaptive mesh refinement (AMR) will be invaluable to engineering optimization, where automation is critical. For high-order (fourth-order and above) finite-volume methods (FVMs), discrete representation of complex geometries adds extra challenges, and high-order methods are not trivially extended to complex geometries of engineering interest. To accommodate geometric complexity with structured AMR in the context of high-order FVMs, this work develops three new methods. First, a robust method is developed for bounding high-order interpolations between grid levels when using AMR. High-order interpolation is prone to numerical oscillations, which can result in unphysical solutions. To overcome this, localized interpolation bounds are enforced while maintaining solution conservation. This method provides great flexibility in how refinement may be used in engineering applications. Second, a mapped multi-block technique is developed, capable of representing moderately complex geometries with structured grids. This method works with high-order FVMs while still enabling AMR and retaining strict solution conservation. It interfaces with well-established engineering workflows for grid generation and interpolates generalized curvilinear coordinate transformations for each block. Solutions between blocks are then communicated by a generalized interpolation strategy while maintaining a single-valued flux. Finally, an embedded-boundary technique is developed for high-order FVMs. This method is particularly attractive since it automates mesh generation for any complex geometry. However, the algorithms on the resulting meshes require extra attention to achieve both stable and accurate results near boundaries. This is achieved by performing solution reconstructions using a weighted form of high-order interpolation that accounts for boundary geometry. These methods are verified, validated, and tested on complex configurations such as reacting flows in a bluff-body combustor and Stokes flows with complicated geometries. Results demonstrate that the new algorithms are effective for solving complex geometries at high-order accuracy with AMR. This study advances the geometric capability of CFD for efficient and effective engineering applications.
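    As a rough illustration of the first idea -- enforcing localized bounds on a high-order interpolation so that refinement does not introduce new extrema -- the sketch below fills fine-grid values from coarse-grid data with a fourth-order Lagrange stencil and then clips each value to the local coarse-data range. This is a minimal 1D sketch under simplifying assumptions (uniform grid, point-value interpolation, simple min/max bounds, no conservation correction); it is not the dissertation's algorithm.

```python
import numpy as np

def bounded_interpolate(coarse, ratio=2):
    """Fill a fine grid from coarse values with a 4-point (cubic) stencil,
    then clip each fine value to the local coarse min/max. Illustrative only."""
    n = len(coarse)
    fine = np.empty(n * ratio)
    for i in range(n):
        # 4-point stencil around cell i, clamped near domain boundaries.
        j0 = min(max(i - 1, 0), n - 4)
        stencil = coarse[j0:j0 + 4]
        xs = np.arange(j0, j0 + 4, dtype=float)      # coarse cell centres
        for k in range(ratio):
            x = i + (k + 0.5) / ratio - 0.5           # fine cell centre
            # Lagrange interpolation of the point value.
            val = sum(stencil[m] * np.prod([(x - xs[p]) / (xs[m] - xs[p])
                                            for p in range(4) if p != m])
                      for m in range(4))
            # Localized bound: stay within the neighbouring coarse data range.
            lo = coarse[max(i - 1, 0):i + 2].min()
            hi = coarse[max(i - 1, 0):i + 2].max()
            fine[i * ratio + k] = np.clip(val, lo, hi)
    return fine

# A step profile: unlimited cubic interpolation overshoots near the jump,
# while the clipped version stays within the coarse data bounds.
coarse = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(bounded_interpolate(coarse))
```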

    Throwing Out the Baby With the Bath Water: A Comment on Green, Kim and Yoon

    Donald P. Green, Soo Yeon Kim, and David H. Yoon contribute to the literature on estimating pooled time-series cross-section models in international relations (IR). They argue that such models should be estimated with fixed effects when such effects are statistically necessary. While we obviously have no disagreement that fixed effects are sometimes appropriate, we show here that they are pernicious for IR time-series cross-section models with a binary dependent variable and that they are often problematic for IR models with a continuous dependent variable. In the binary case, this perniciousness is the result of many pairs of nations always being scored zero and hence having no impact on the parameter estimates; for example, many dyads never come into conflict. In the continuous case, fixed effects are problematic in the presence of the temporally stable regressors that are common in IR applications, such as the dyadic democracy measures used by Green, Kim, and Yoon.
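    The binary-outcome point -- that dyads which never experience the event carry no information once fixed effects are conditioned out -- can be seen directly in the conditional (fixed-effects) logit likelihood. The toy calculation below is an illustration constructed for this listing, not the authors' analysis: it computes one dyad's conditional log-likelihood contribution and shows that it is identically zero (probability one) for any slope when the outcome never varies, so such dyads cannot influence the estimate.

```python
import numpy as np
from itertools import combinations

def cond_logit_loglik(y, x, beta):
    """Conditional (fixed-effects) logit log-likelihood contribution of one
    dyad, conditioning on the number of 1s it experienced. Toy illustration."""
    T, k = len(y), int(y.sum())
    num = np.exp(beta * x[y == 1].sum())
    den = sum(np.exp(beta * x[list(idx)].sum())
              for idx in combinations(range(T), k))
    return np.log(num / den)

x = np.array([0.2, -1.0, 0.7, 1.5])      # a time-varying regressor
peaceful = np.array([0, 0, 0, 0])         # dyad that never has a conflict
conflict = np.array([0, 1, 0, 1])         # dyad with some conflict years

for beta in (-1.0, 0.0, 2.0):
    print(beta,
          cond_logit_loglik(peaceful, x, beta),   # always 0.0: no information
          cond_logit_loglik(conflict, x, beta))   # varies with beta
```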

    Random Coefficient Models for Time-Series–Cross-Section Data

    This paper considers random coefficient models (RCMs) for time-series–cross-section data. These models allow for unit-to-unit variation in the model parameters. After laying out the various models, we assess several issues in specifying RCMs. We then consider the finite sample properties of some standard RCM estimators and show that the most common one, associated with Hsiao, has very poor properties. These analyses also show that a somewhat awkward combination of estimators based on Swamy’s work performs reasonably well; this awkward estimator and a Bayes estimator with an uninformative prior (due to Smith) seem to perform best. But we also see that estimators which assume full pooling perform well unless there is a large degree of unit-to-unit parameter heterogeneity. We also argue that the various data-driven methods (whether classical or empirical Bayes or Bayes with gentle priors) tend to lead to much more heterogeneity than most political scientists would like. We speculate that fully Bayesian models, with a variety of informative priors, may be the best way to approach RCMs.
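    For readers unfamiliar with the setup, an RCM lets each unit's slope deviate from a common mean, and the estimators compared here differ mainly in how strongly they shrink the unit-by-unit OLS slopes back toward a pooled value. The snippet below is a deliberately simplified, Swamy-flavoured shrinkage sketch (one regressor, crude method-of-moments variance estimates), not the estimators evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 30                              # units and time points
beta_mean, sigma_nu, sigma_eps = 1.0, 0.5, 1.0

# Simulate y_it = x_it * beta_i + eps_it with beta_i = beta_mean + nu_i.
beta_i = beta_mean + sigma_nu * rng.standard_normal(N)
x = rng.standard_normal((N, T))
y = x * beta_i[:, None] + sigma_eps * rng.standard_normal((N, T))

# Unit-by-unit OLS slopes and their sampling variances.
b_unit = (x * y).sum(axis=1) / (x * x).sum(axis=1)
resid = y - x * b_unit[:, None]
v_unit = (resid ** 2).sum(axis=1) / (T - 1) / (x * x).sum(axis=1)

# Swamy-flavoured shrinkage: estimate the between-unit slope variance,
# then pull each unit slope toward a precision-weighted mean.
tau2 = max(b_unit.var(ddof=1) - v_unit.mean(), 0.0)
w = tau2 / (tau2 + v_unit)                 # per-unit shrinkage weight
b_pooled = np.average(b_unit, weights=1.0 / (tau2 + v_unit))
b_shrunk = b_pooled + w * (b_unit - b_pooled)

print("unit-by-unit RMSE:", np.sqrt(((b_unit - beta_i) ** 2).mean()))
print("shrinkage RMSE:   ", np.sqrt(((b_shrunk - beta_i) ** 2).mean()))
```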

    Comment on 'What To Do (and Not To Do) with Times-Series-Cross-Section Data'

    Much as we would like to believe that the high citation count for this article is due to the brilliance and clarity of our argument, it is more likely that the count is due to our being in the right place (that is, the right part of the discipline) at the right time. In the 1960s and 1970s, serious quantitative analysis was used primarily in the study of American politics. But since the 1980s it has spread to the study of both comparative politics and international relations. In comparative politics we see in the 20 most cited Review articles Hibbs’s (1977) and Cameron’s (1978) quantitative analyses of the political economy of advanced industrial societies; in international relations we see Maoz and Russett’s (1993) analysis of the democratic peace; and these studies have been followed by myriad others. Our article contributed to the methodology for analyzing what has become the principal type of data used in the study of comparative politics; a related article (Beck, Katz, and Tucker 1998), which has also had a good citation history, dealt with analyzing this type of data with a binary dependent variable, data heavily used in conflict studies similar to that of Maoz and Russett. Thus the citations to our methodological discussions reflect the huge amount of work now being done in the quantitative analysis of both comparative politics and international relations.

    Random Coefficient Models for Time-Series-Cross-Section Data: Monte Carlo Experiments

    This article considers random coefficient models (RCMs) for time-series–cross-section data. These models allow for unit-to-unit variation in the model parameters. The heart of the article compares the finite sample properties of the fully pooled estimator, the unit-by-unit (unpooled) estimator, and the (maximum likelihood) RCM estimator. The maximum likelihood RCM estimator performs well, even where the data were generated so that the RCM would be problematic. In an appendix, we show that the most common feasible generalized least squares estimator of RCM models is always inferior to the maximum likelihood estimator, and in smaller samples dramatically so.
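    To make the design of such an experiment concrete, here is a stripped-down Monte Carlo (one regressor, spherical errors, and no maximum likelihood RCM estimator, so only two of the three estimators compared in the article) contrasting the fully pooled and unit-by-unit slopes as the unit-to-unit heterogeneity grows; the article's actual experiments are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, reps = 15, 20, 200

def one_draw(sigma_nu):
    """One synthetic TSCS data set with unit-specific slopes; returns the
    mean squared error of the pooled and unit-by-unit slope estimates."""
    beta_i = 1.0 + sigma_nu * rng.standard_normal(N)
    x = rng.standard_normal((N, T))
    y = x * beta_i[:, None] + rng.standard_normal((N, T))
    pooled = (x * y).sum() / (x * x).sum()                  # one slope for all units
    unpooled = (x * y).sum(axis=1) / (x * x).sum(axis=1)    # one slope per unit
    return ((pooled - beta_i) ** 2).mean(), ((unpooled - beta_i) ** 2).mean()

for sigma_nu in (0.0, 0.25, 0.5, 1.0):
    mse = np.mean([one_draw(sigma_nu) for _ in range(reps)], axis=0)
    print(f"sigma_nu={sigma_nu:.2f}  pooled RMSE={np.sqrt(mse[0]):.3f}  "
          f"unit-by-unit RMSE={np.sqrt(mse[1]):.3f}")
```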

    Throwing Out the Baby with the Bath Water: A Comment on Green, Yoon and Kim

    [No abstract]

    Modeling Dynamics in Time-Series–Cross-Section Political Economy Data

    This paper deals with a variety of dynamic issues in the analysis of time-series–cross-section (TSCS) data. While the issues raised are more general, we focus on applications to political economy. We begin with a discussion of specification and lay out the theoretical differences implied by the various types of time series models that can be estimated. It is shown that there is nothing pernicious in using a lagged dependent variable and that all dynamic models either implicitly or explicitly have such a variable; the differences between the models relate to assumptions about the speeds of adjustment of measured and unmeasured variables. When adjustment is quick it is hard to differentiate between the various models; with slower speeds of adjustment the various models make sufficiently different predictions that they can be tested against each other. As the speed of adjustment gets slower and slower, specification (and estimation) gets more and more tricky. We then turn to a discussion of estimation. It is noted that models with both a lagged dependent variable and serially correlated errors can easily be estimated; it is only OLS that is inconsistent in this situation. We then show, via Monte Carlo analysis, that for typical TSCS data fixed effects with a lagged dependent variable performs about as well as the much more complicated Kiviet estimator, and better than the Anderson-Hsiao estimator (both designed for panels).
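    As a simple illustration of the estimation point, the sketch below simulates a dynamic TSCS panel and compares least squares with unit effects and a lagged dependent variable (the within/LSDV setup) against the Anderson-Hsiao instrumental-variables estimator. It is a toy version with arbitrary parameter values, not the paper's Monte Carlo design.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 25, 20                       # TSCS-sized panel: moderate T, not a micro panel
phi, beta = 0.5, 1.0                # true dynamics and slope

# Simulate y_it = phi * y_i,t-1 + beta * x_it + alpha_i + eps_it.
alpha = rng.standard_normal(N)
x = rng.standard_normal((N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = phi * y[:, t - 1] + beta * x[:, t] + alpha + rng.standard_normal(N)

# (1) Fixed effects (within/LSDV) with the lagged dependent variable.
def demean(a):
    return a - a.mean(axis=1, keepdims=True)

Y = demean(y[:, 1:]).ravel()
X = np.column_stack([demean(y[:, :-1]).ravel(), demean(x[:, 1:]).ravel()])
lsdv = np.linalg.lstsq(X, Y, rcond=None)[0]

# (2) Anderson-Hsiao: first differences, instrument dy_{t-1} with y_{t-2}.
dy  = np.diff(y, axis=1)                   # dy_t for t = 1..T-1
dY  = dy[:, 1:].ravel()                    # dy_t for t = 2..T-1
dyl = dy[:, :-1].ravel()                   # dy_{t-1}
dx  = np.diff(x, axis=1)[:, 1:].ravel()    # dx_t
z   = y[:, :-2].ravel()                    # instrument y_{t-2}
W   = np.column_stack([dyl, dx])
Z   = np.column_stack([z, dx])
ah  = np.linalg.solve(Z.T @ W, Z.T @ dY)   # just-identified IV

print("true (phi, beta):    ", (phi, beta))
print("LSDV with lagged DV: ", lsdv)
print("Anderson-Hsiao (IV): ", ah)
```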