
    A LES-Langevin model for turbulence

    We propose a new model of turbulence for use in large-eddy simulations (LES). The turbulent force, represented here by the turbulent Lamb vector, is divided into two contributions. The contribution including only subfilter fields is deterministically modeled through a classical eddy viscosity. The other contribution, including both filtered and subfilter scales, is dynamically computed as the solution of a generalized (stochastic) Langevin equation. This equation is derived using Rapid Distortion Theory (RDT) applied to the subfilter scales. The general friction operator therefore includes both advection and stretching by the resolved scales. The stochastic noise is derived as the sum of a contribution from the energy cascade and a contribution from the pressure. The LES model is thus made of an equation for the resolved scales, including the turbulent force, and a generalized Langevin equation integrated on a twice-finer grid. The model is validated by comparison to DNS and is tested against classical eddy-viscosity LES models for homogeneous isotropic turbulence. We show that even in this situation, where no walls are present, our inclusion of backscatter through the Langevin equation results in a better description of the flow.

    Comment: 18 pages, 14 figures, to appear in Eur. Phys. J.
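    The stochastic part of such a model can be sketched with a simple Euler–Maruyama integrator. Here a scalar friction coefficient `gamma` stands in for the paper's full friction operator (advection plus stretching by the resolved scales), and a single amplitude `D` for the combined cascade and pressure noise; both are illustrative assumptions, not the paper's operators.

    ```python
    import numpy as np

    def langevin_step(v, gamma, D, dt, rng):
        """One Euler-Maruyama step of dv = -gamma*v dt + sqrt(2 D) dW.

        gamma stands in for the general friction operator and D for the
        combined cascade + pressure noise amplitude (illustrative scalars).
        """
        return v - gamma * v * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=np.shape(v))

    rng = np.random.default_rng(0)
    v = np.zeros(1000)             # 1000 independent subfilter modes
    gamma, D, dt = 1.0, 0.5, 1e-3
    for _ in range(20000):         # integrate well past the relaxation time 1/gamma
        v = langevin_step(v, gamma, D, dt, rng)
    # The stationary variance should approach D / gamma = 0.5.
    ```

    For the scalar Ornstein–Uhlenbeck case the fluctuation–dissipation balance fixes the stationary variance at D/gamma, which gives a quick sanity check on any implementation of the noise term.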

    Heavy Higgs Bosons at Low tan β: from the LHC to 100 TeV

    We present strategies to search for heavy neutral Higgs bosons decaying to top quark pairs, as often occurs at low tan β in type II two Higgs doublet models such as the Higgs sector of the MSSM. The resonant production channel is unsatisfactory due to interference with the SM background. We instead propose to utilize same-sign dilepton signatures arising from the production of heavy Higgs bosons in association with one or two top quarks and subsequent decay to a top pair. We find that for heavier neutral Higgs bosons the production in association with one top quark provides greater sensitivity than production in association with two top quarks. We obtain current limits at the LHC using Run I data at 8 TeV and forecast the sensitivity of a dedicated analysis during Run II at 14 TeV. Then we perform a detailed BDT study for the 14 TeV LHC and a future 100 TeV collider.

    Comment: published version, 22 pages, 15 figures, 3 tables

    Detecting a Boosted Diboson Resonance

    New light scalar particles with masses in the range of hundreds of GeV, decaying into a pair of W/Z bosons, can appear in several extensions of the SM. The focus of collider studies for such a scalar is often on its direct production, where the scalar is typically only mildly boosted. The observed W/Z bosons are therefore well separated, allowing analyses for the scalar resonance in a standard fashion as a low-mass diboson resonance. In this work we instead focus on the scenario where the direct production of the scalar is suppressed, and it is rather produced via the decay of a significantly heavier (a few TeV mass) new particle, in conjunction with SM particles. Such a process results in the scalar being highly boosted, rendering the W/Z bosons from its decay merged. The final state in such a decay is a "fat" jet, which can be either four-pronged (for fully hadronic W/Z decays), or may be like a W/Z jet, but with leptons buried inside (if one of the W/Z bosons decays leptonically). In addition, this fat jet has a jet mass that can be quite different from that of the W/Z/Higgs/top-quark-induced jet, and may be missed by existing searches. In this work, we develop dedicated algorithms for tagging such multi-layered "boosted dibosons" at the LHC. As a concrete application, we discuss an extension of the standard warped extra-dimensional framework where such a light scalar can arise. We demonstrate that the use of these algorithms gives sensitivity in mass ranges that are otherwise poorly constrained.

    Comment: 33 pages, 13 figures
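    The point about the fat jet's unusual mass can be illustrated with a toy jet-mass window cut: a merged scalar-to-VV jet reconstructs near the scalar mass, away from the W/Z (~80–91 GeV) and top (~173 GeV) peaks that standard taggers target. The window edges below are assumed for illustration and are not the paper's working points.

    ```python
    import numpy as np

    def jet_mass(p4s):
        """Invariant mass from summed constituent four-momenta (E, px, py, pz)."""
        E, px, py, pz = np.asarray(p4s).sum(axis=0)
        return float(np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0)))

    def in_mass_window(p4s, m_lo=150.0, m_hi=400.0):
        """Toy mass window for a merged scalar -> VV fat jet.

        m_lo/m_hi are hypothetical edges chosen to sit above the W/Z and
        top mass peaks; a real analysis would tune them to the signal.
        """
        return m_lo < jet_mass(p4s) < m_hi

    # Two hard, well-separated massless subjets -> large jet mass (tagged);
    # two nearly collinear ones -> small jet mass (rejected).
    wide = [[100.0, 100.0, 0.0, 0.0], [100.0, -100.0, 0.0, 0.0]]
    narrow = [[100.0, 100.0, 0.0, 0.0], [100.0, 99.0, 0.0, 0.0]]
    ```

    The jet mass here grows with the opening angle between the subjets, which is why a highly boosted (collimated) diboson system still needs substructure variables on top of the plain mass cut in a realistic tagger.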

    Fast Numerical Simulations of 2D Turbulence Using a Dynamic Model for Subgrid Motions

    We present numerical simulations of 2D turbulent flow using a new model for the subgrid scales, which are computed using a dynamic equation linking the subgrid scales with the resolved velocity. This equation is not postulated, but derived from the constitutive equations under the assumption that the non-linear interactions of subgrid scales among themselves are equivalent to a turbulent viscosity. The performance of our model is compared with Direct Numerical Simulations of decaying and forced turbulence. For the same resolution, numerical simulations using our model allow a significant reduction of the computational time (of the order of 100 in the case we consider), and allow significantly larger Reynolds numbers to be reached than with the direct method.

    Comment: 35 pages, 9 figures
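    The closure assumption above, that subgrid–subgrid interactions act as a turbulent viscosity, is the same idea behind the classical Smagorinsky model. As a minimal sketch (not the paper's dynamic equation), here is a Smagorinsky-type eddy viscosity computed from a resolved 2D velocity field on a periodic grid; the constant `Cs = 0.17` is an assumed textbook value.

    ```python
    import numpy as np

    def eddy_viscosity_2d(u, v, dx, Cs=0.17):
        """Smagorinsky-type turbulent viscosity nu_t = (Cs*dx)^2 * |S|
        from a 2D resolved velocity field on a periodic grid, with
        |S| = sqrt(2 S_ij S_ij) built from central differences."""
        dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
        dudy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
        dvdx = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
        dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dx)
        S = np.sqrt(2 * dudx**2 + 2 * dvdy**2 + (dudy + dvdx)**2)
        return (Cs * dx) ** 2 * S

    # Sanity check on a pure shear layer u(y) = sin(y): |S| = |cos(y)|,
    # so nu_t peaks at (Cs*dx)^2 where the shear is strongest.
    n = 64
    dx = 2 * np.pi / n
    y = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.tile(np.sin(y)[:, None], (1, n))
    vfield = np.zeros((n, n))
    nu_t = eddy_viscosity_2d(u, vfield, dx)
    ```

    A uniform flow gives zero eddy viscosity, so the model dissipates only where the resolved field is actually strained.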

    Interacting errors in large-eddy simulation: a review of recent developments

    The accuracy of large-eddy simulations is limited, among other factors, by the quality of the subgrid parameterisation and the numerical contamination of the smaller retained flow structures. We review the effects of discretisation and modelling errors from two different perspectives. We first show that spatial discretisation induces its own filter and compare the dynamic importance of this numerical filter to the basic large-eddy filter. The spatial discretisation modifies the large-eddy closure problem, as is expressed by the difference between the discrete 'numerical stress tensor' and the continuous 'turbulent stress tensor'. This difference consists of a high-pass contribution associated with the specific numerical filter. Several central differencing methods are analysed and the importance of the subgrid resolution is established. Second, we review a database approach to assess the total simulation error and its numerical and modelling contributions. The interaction between the different sources of error is shown to lead to their partial cancellation. From this analysis one may identify an 'optimal refinement strategy' for a given subgrid model, discretisation method and flow conditions, leading to minimal total simulation error at a given computational cost. We provide full detail for homogeneous decaying turbulence in a 'Smagorinsky fluid'. The optimal refinement strategy is compared with the error reduction that arises from grid refinement of the dynamic eddy-viscosity model. The main trends of the optimal refinement strategy as a function of resolution and Reynolds number are found to be adequately followed by the dynamic model. This yields significant error reduction upon grid refinement, although at coarse resolutions significant error levels remain. To address this deficiency, a new successive inverse polynomial interpolation procedure is proposed with which the optimal Smagorinsky constant may be efficiently approximated at a given resolution. The computational overhead of this optimisation procedure is shown to be well justified in view of the achieved reduction of the error level relative to the 'no-model' and dynamic model predictions.
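    The successive-interpolation idea can be sketched as a one-dimensional minimisation of the total-error curve e(C_s) by repeatedly fitting a parabola through three samples and moving to its vertex. The error functional below is a cheap quadratic surrogate with an assumed minimum at C_s = 0.17; in the paper each evaluation would be a full LES run.

    ```python
    def parabolic_vertex(xs, fs):
        """Vertex of the parabola through three (x, f(x)) samples."""
        (x0, x1, x2), (f0, f1, f2) = xs, fs
        den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
        if den == 0.0:               # degenerate triple: keep the middle point
            return x1
        num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
        return x1 - 0.5 * num / den

    def minimize_sip(err, xs, iters=8):
        """Successive parabolic interpolation: replace the worst of three
        samples by the fitted vertex, then return the best sample seen.
        `err` stands in for the total-simulation-error functional e(C_s)."""
        fs = [err(x) for x in xs]
        for _ in range(iters):
            xn = parabolic_vertex(xs, fs)
            worst = fs.index(max(fs))
            xs[worst], fs[worst] = xn, err(xn)
        return xs[fs.index(min(fs))]

    # Surrogate error curve with an (assumed) optimum at C_s = 0.17.
    best = minimize_sip(lambda c: (c - 0.17) ** 2 + 0.05, [0.05, 0.1, 0.3])
    ```

    Because each sample of e(C_s) is expensive, a low-order interpolation scheme that reuses all previous evaluations is exactly the kind of optimiser one wants here, which is the motivation the abstract gives for the procedure.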

    Collider phenomenology of Hidden Valley mediators of spin 0 or 1/2 with semivisible jets

    Many models of Beyond the Standard Model physics contain particles that are charged under both Standard Model and Hidden Valley gauge groups, yet very little effort has been put into establishing their experimental signatures. We provide a general overview of the collider phenomenology of spin 0 or 1/2 mediators with non-trivial gauge numbers under both the Standard Model and a single new confining group. Due to the possibility of many unconventional signatures, the focus is on direct production with semivisible jets. For the mediators to be able to decay, a global U(1) symmetry must be broken. This is best done by introducing a set of operators explicitly violating this symmetry. We find that there is only a finite number of such renormalizable operators and that the phenomenology can be classified into five distinct categories. We show that large regions of the parameter space are already excluded, while others are unconstrained by current search strategies. We also discuss how searches could be modified to better probe these unconstrained regions by exploiting special properties of semivisible jets.

    Comment: 40 pages, 11 figures, published version