84 research outputs found

    Optimisation-based refinement of genesis indices for tropical cyclones

    Tropical cyclone genesis indices are valuable tools for studying the relationship between large-scale environmental fields and the genesis of tropical cyclones, supporting the identification of future trends in cyclone genesis. However, their formulations are generally derived from simple statistical models (e.g., multiple linear regression) and are not optimised globally. In this paper, we present a simple framework for optimising genesis indices given a user-specified trade-off between two performance metrics, which measure how well an index captures the spatial and interannual variability of tropical cyclone genesis. We apply the proposed framework to the popular Emanuel and Nolan Genesis Potential Index, yielding new, optimised formulas that correspond to different trade-offs between spatial and interannual variability. Results show that our refined indices can improve the performance of the Emanuel and Nolan index by up to 8% for spatial variability and 16%-22% for interannual variability; this improvement was found to be statistically significant (p < 0.01). Lastly, by analysing the formulas found, we give some insights into the role of the different inputs of the index in maximising one metric or the other.
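    The abstract does not give the optimisation procedure itself, so the sketch below is only a hypothetical illustration of the general idea: take an Emanuel-Nolan-style GPI with free exponents, score it on a spatial-variability metric and an interannual-variability metric, and optimise a user-weighted combination of the two. The GPI functional form is the standard Emanuel and Nolan (2004) expression; the scoring functions, variable names, and data layout are assumptions, not the authors' code.

```python
# Hypothetical sketch of a trade-off optimisation over the exponents of an
# Emanuel-Nolan-style Genesis Potential Index (GPI). Scoring functions and data
# shapes (year x lat x lon arrays) are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def gpi(eta, rh, vpot, vshear, a=1.5, b=3.0, c=3.0, d=2.0):
    """Emanuel-Nolan-style GPI with tunable exponents a, b, c, d."""
    return (np.abs(1e5 * eta) ** a
            * (rh / 50.0) ** b
            * (vpot / 70.0) ** c
            * (1.0 + 0.1 * vshear) ** (-d))

def spatial_score(index_clim, genesis_clim):
    """Pattern correlation between the time-mean index map and observed genesis density."""
    return np.corrcoef(index_clim.ravel(), genesis_clim.ravel())[0, 1]

def interannual_score(index_yearly, tc_counts_yearly):
    """Correlation between basin-integrated yearly index and observed yearly TC counts."""
    return np.corrcoef(index_yearly, tc_counts_yearly)[0, 1]

def objective(exponents, fields, obs, weight=0.5):
    """Negative weighted sum of the two metrics (minimised by scipy)."""
    a, b, c, d = exponents
    idx = gpi(fields["eta"], fields["rh"], fields["vpot"], fields["vshear"], a, b, c, d)
    s = spatial_score(idx.mean(axis=0), obs["genesis_clim"])
    i = interannual_score(idx.sum(axis=(1, 2)), obs["annual_counts"])
    return -(weight * s + (1.0 - weight) * i)

# Example call (fields/obs would hold reanalysis inputs and observed genesis data):
# best = minimize(objective, x0=[1.5, 3.0, 3.0, 2.0],
#                 args=(fields, obs, 0.7), method="Nelder-Mead")
```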

    Tropical Cyclone Genesis Potential Indices in a New High‐Resolution Climate Models Ensemble: Limitations and Way Forward

    Genesis Potential Indices (GPIs) link the occurrence of Tropical Cyclones (TCs) to large-scale environmental conditions favorable for TC development. In the last few decades, they have been routinely used as a way to overcome the limitations of global climate models (GCMs), whose resolution is too coarse to produce realistic TCs. Recently, the first GCM ensemble with high enough horizontal resolution to realistically reproduce TCs was made available. Here, we address the questions of whether GPIs are still relevant in the era of TC-permitting climate model ensembles, and whether they have sufficient predictive skill. The predictive skill of GPIs is assessed against the TCs directly simulated in a climate model ensemble. We find that GPIs have poor skill in two key metrics: inter-annual variability and multi-decadal trends. We discuss possible ways to improve the understanding of the predictive skill of GPIs and therefore enhance their applicability in the era of TC-permitting GCMs.
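    The two skill metrics named in the abstract lend themselves to a simple check. The snippet below is an illustrative sketch, not the authors' analysis, of how one might compare a GPI-derived yearly genesis proxy against TC counts explicitly tracked in a high-resolution ensemble member, via inter-annual correlation and a comparison of multi-decadal linear trends; all variable names are assumptions.

```python
# Illustrative check of the two skill metrics mentioned in the abstract:
# inter-annual variability (correlation of yearly series) and multi-decadal trend
# (difference of least-squares slopes). Series are standardized so the trend
# comparison is not dominated by units.
import numpy as np

def standardize(x):
    return (x - x.mean()) / x.std()

def interannual_skill(gpi_yearly, tc_counts_yearly):
    """Pearson correlation between the yearly GPI proxy and tracked TC counts."""
    return np.corrcoef(gpi_yearly, tc_counts_yearly)[0, 1]

def trend_per_decade(series, years):
    """Least-squares linear trend, expressed per decade."""
    return 10.0 * np.polyfit(years, series, deg=1)[0]

# Hypothetical usage for one ensemble member:
# years = np.arange(1950, 2015)
# r = interannual_skill(gpi_yearly, tc_counts_yearly)
# trend_gap = (trend_per_decade(standardize(gpi_yearly), years)
#              - trend_per_decade(standardize(tc_counts_yearly), years))
```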

    On Fair Selection in the Presence of Implicit Variance

    Quota-based fairness mechanisms like the so-called Rooney rule or four-fifths rule are used in selection problems such as hiring or college admission to reduce inequalities based on sensitive demographic attributes. These mechanisms are often viewed as introducing a trade-off between selection fairness and utility. In recent work, however, Kleinberg and Raghavan showed that, in the presence of implicit bias in estimating candidates' quality, the Rooney rule can increase the utility of the selection process. We argue that even in the absence of implicit bias, the estimates of candidates' quality from different groups may differ in another fundamental way, namely, in their variance. We term this phenomenon implicit variance and we ask: can fairness mechanisms be beneficial to the utility of a selection process in the presence of implicit variance (even in the absence of implicit bias)? To answer this question, we propose a simple model in which candidates have a true latent quality that is drawn from a group-independent normal distribution. To make the selection, a decision maker receives an unbiased estimate of the quality of each candidate, with normally distributed noise whose variance depends on the candidate's group. We then compare the utility obtained by imposing a fairness mechanism that we term the γ-rule (it includes demographic parity and the four-fifths rule as special cases) to that of a group-oblivious selection algorithm that picks the candidates with the highest estimated quality independently of their group. Our main result shows that the demographic parity mechanism always increases the selection utility, while any γ-rule weakly increases it. We extend our model to a two-stage selection process where the true quality is observed at the second stage. We discuss multiple extensions of our results, in particular to different distributions of the true latent quality. Comment: 27 pages, 10 figures, Economics and Computation (EC'20).
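    The model in this abstract is concrete enough to simulate directly. Below is a minimal Monte-Carlo sketch, with illustrative parameter values of my own choosing, of the setting described: group-independent latent quality, unbiased estimates with group-dependent noise variance, and a comparison between group-oblivious top-k selection and a demographic-parity selection, where utility is the mean true quality of the selected candidates.

```python
# Minimal simulation of the model described in the abstract (parameter values are
# illustrative, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
n_a, n_b = 5000, 5000          # candidates per group
sigma_a, sigma_b = 0.2, 1.0    # group-dependent noise std: the "implicit variance"
k = 1000                       # number of candidates to select

# True latent quality: group-independent N(0, 1).
quality = rng.normal(0.0, 1.0, n_a + n_b)
# Unbiased estimates, with noise variance depending on the group.
noise = np.concatenate([rng.normal(0.0, sigma_a, n_a),
                        rng.normal(0.0, sigma_b, n_b)])
estimate = quality + noise
group = np.array([0] * n_a + [1] * n_b)

# Group-oblivious selection: top-k estimates regardless of group.
oblivious = np.argsort(estimate)[-k:]

# Demographic parity: select the top candidates within each group,
# proportionally to group size.
k_a = k * n_a // (n_a + n_b)
idx_a = np.where(group == 0)[0]
idx_b = np.where(group == 1)[0]
parity = np.concatenate([idx_a[np.argsort(estimate[idx_a])[-k_a:]],
                         idx_b[np.argsort(estimate[idx_b])[-(k - k_a):]]])

print("group-oblivious utility  :", quality[oblivious].mean())
print("demographic-parity utility:", quality[parity].mean())
```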

    Naturalness bounds in extensions of the MSSM without a light Higgs boson

    Adopting a bottom-up point of view, we make a comparative study of the simplest extensions of the MSSM with extra tree-level contributions to the lightest Higgs boson mass. We show to what extent a relatively heavy Higgs boson, up to 200-350 GeV, can be compatible with data and naturalness. The price to pay is that the theory undergoes some change of regime at a relatively low scale. Bounds on these models come from electroweak precision tests and naturalness, which often requires the scale at which the soft terms are generated to be relatively low. Comment: 18 pages, 5 figures. v2: minor revision, added references. v3, v4: some numerical corrections.

    SUSY, the Third Generation and the LHC

    We develop a bottom-up approach to studying SUSY with light stops and sbottoms, but with the other squarks and sleptons heavy and beyond the reach of the LHC. We discuss the range of squark, gaugino and Higgsino masses for which the electroweak scale is radiatively stable over the "little hierarchy" below ~10 TeV. We review and expand on indirect constraints on this scenario, in particular from flavor and CP tests. We emphasize that in this context, R-parity violation is very well motivated. The phenomenological differences between Majorana and Dirac gauginos are also discussed. Finally, we focus on the light subsystem of stops, sbottoms and neutralino with R-parity, in order to probe the current collider bounds. We find that the 1/fb LHC bounds are mild and large parts of the motivated parameter space remain open, while the 10/fb data can be much more decisive. Comment: 42 pages, 8 figures, 1 table. V2: minor corrections, references added.
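    For orientation, the radiative-stability argument summarised in this abstract is usually quantified with the standard one-loop estimate of the stop contribution to the up-type Higgs soft mass. The expression below is the familiar literature formula, quoted here for context rather than reproduced from the paper.

```latex
% One-loop stop contribution to the up-type Higgs soft mass (standard literature
% estimate; \Lambda is the scale up to which the little hierarchy is stabilized):
\[
  \delta m_{H_u}^2 \;\simeq\; -\,\frac{3 y_t^2}{8\pi^2}
  \left( m_{Q_3}^2 + m_{u_3}^2 + |A_t|^2 \right)
  \ln\frac{\Lambda}{m_{\tilde t}}
\]
% Keeping this of order the electroweak scale for \Lambda ~ 10 TeV is what requires
% light stops (and left-handed sbottom), while the other squarks and sleptons may be heavy.
```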

    Detecting the Higgs boson(s) in LambdaSUSY

    We reconsider the discovery potential of the Higgs bosons in the LambdaSUSY framework, in which the masses of the scalar particles are increased already at tree level via a largish supersymmetric coupling between the usual Higgs doublets and a singlet. We analyze in particular the interplay between the discovery potential of the lightest and of the next-to-lightest scalar, finding that the decay modes of the latter should be more easily detected at the LHC. Comment: 9 pages, 2 figures.
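    As background for the "largish supersymmetric coupling" mentioned here: in NMSSM-like models with a superpotential coupling λ S H_u H_d, the lightest Higgs mass receives a tree-level contribution on top of the MSSM bound. The expression below is the standard literature result, quoted for context; it is not taken from this paper.

```latex
% Standard tree-level bound in the presence of the singlet coupling \lambda S H_u H_d
% (literature result quoted for context):
\[
  m_h^2 \;\le\; m_Z^2 \cos^2 2\beta \;+\; \lambda^2 v^2 \sin^2 2\beta ,
  \qquad v \simeq 174~\mathrm{GeV},
\]
% so a largish \lambda lifts the lightest scalar well above the MSSM tree-level limit,
% into the mass range considered in LambdaSUSY.
```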

    A natural little hierarchy for RS from accidental SUSY

    We use supersymmetry to address the little hierarchy problem in Randall-Sundrum models by naturally generating a hierarchy between the IR scale and the electroweak scale. Supersymmetry is broken on the UV brane, which triggers the stabilization of the warped extra dimension at an IR scale of order 10 TeV. The Higgs and top quark live near the IR brane, whereas the light fermion generations are localized towards the UV brane. Supersymmetry breaking causes the first two sparticle generations to decouple, thereby avoiding the supersymmetric flavour and CP problems, while an accidental R-symmetry protects the gaugino mass. The resulting low-energy sparticle spectrum consists of stops, gauginos and Higgsinos, which are sufficient to stabilize the little hierarchy between the IR scale and the electroweak scale. Finally, the supersymmetric little hierarchy problem is ameliorated by introducing a singlet Higgs field on the IR brane. Comment: 37 pages, 3 figures; v2: minor corrections, version published in JHEP.