The Fate of Long-Lived Superparticles with Hadronic Decays after LHC Run 1
Supersymmetry searches at the LHC are both highly varied and highly
constraining, but the vast majority are focused on cases where the final-stage
visible decays are prompt. Scenarios featuring superparticles with
detector-scale lifetimes have therefore remained a tantalizing possibility for
sub-TeV SUSY, since explicit limits are relatively sparse. Nonetheless, the
extremely low backgrounds of the few existing searches for collider-stable and
displaced new particles facilitate recasting them into powerful long-lived
superparticle searches, even for models for which those searches are highly
non-optimized. In this paper, we assess the status of such models in the
context of baryonic R-parity violation, gauge mediation, and mini-split SUSY.
We explore a number of common simplified spectra where hadronic decays can be
important, employing recasts of LHC searches that utilize different detector
systems and final-state objects. The LSP/NLSP possibilities considered here
include generic colored superparticles such as the gluino and light-flavor
squarks, as well as the lighter stop and the quasi-degenerate Higgsino
multiplet motivated by naturalness. We find that complementary coverage over
large swaths of mass and lifetime is achievable by superimposing limits,
particularly from CMS's tracker-based displaced dijet search and heavy stable
charged particle searches. Adding in prompt searches, we find many cases where
a range of sparticle masses is now excluded from zero lifetime to infinite
lifetime with no gaps. In other cases, the displaced searches furnish the only
extant limits at any lifetime. Comment: 36 pages, 10 figures, plus appendix and references
Potential precision of a direct measurement of the Higgs boson total width at a muon collider
In the light of the discovery of a 126 GeV Standard-Model-like Higgs boson at
the LHC, we evaluate the achievable accuracies for direct measurements of the
width, mass, and the s-channel resonant production cross section of the Higgs
boson at a proposed muon collider. We find that with a beam energy resolution
of R=0.01% (0.003%) and integrated luminosity of 0.5 fb^{-1} (1 fb^{-1}), a
muon collider would enable us to determine the Standard-Model-like Higgs width
to +/- 0.35 MeV (+/- 0.15 MeV) by combining two complementary channels of the
WW^* and b\bar b final states. A non-Standard-Model Higgs with a broader width
is also studied. The unparalleled accuracy potentially attainable at a muon
collider would test the Higgs interactions to a high precision. Comment: 7 pages, 5 figures. Version appeared in Physical Review
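The measurement described in the abstract above hinges on how the beam-energy spread dilutes the s-channel resonance peak. As a rough numerical illustration (not the paper's analysis), the sketch below convolves a unit-height Breit-Wigner with a Gaussian beam-energy spread; the SM-like width of ~4.2 MeV, the Gaussian beam shape, and the mapping from the resolution R to the spread are assumptions made here for the sketch:

```python
import numpy as np

def breit_wigner(x, m, gamma):
    # Non-relativistic Breit-Wigner line shape, normalized to 1 at the pole.
    return (gamma**2 / 4.0) / ((x - m)**2 + gamma**2 / 4.0)

def smeared_peak(m, gamma, delta, n=400001):
    # Peak height after convolving the resonance with a Gaussian
    # beam-energy spread of standard deviation delta (pole value = 1).
    x = np.linspace(m - 10 * delta, m + 10 * delta, n)
    g = np.exp(-((x - m) ** 2) / (2 * delta**2)) / (np.sqrt(2 * np.pi) * delta)
    dx = x[1] - x[0]
    return float(np.sum(breit_wigner(x, m, gamma) * g) * dx)

m_h = 126.0        # GeV, the mass quoted in the abstract
gamma_h = 4.2e-3   # GeV; an assumed SM-like width of ~4.2 MeV
delta = 1e-4 * m_h / np.sqrt(2.0)   # assumed spread for R = 0.01%

# With delta comparable to gamma the peak survives at the O(10%) level,
# which is what makes a direct lineshape scan feasible at all.
print(smeared_peak(m_h, gamma_h, delta))
```

With a much finer energy resolution the smeared peak approaches the bare Breit-Wigner, which is why tightening R from 0.01% to 0.003% improves the quoted width precision.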
Running after Diphoton
A very plausible explanation for the recently observed diphoton excess at the
13 TeV LHC is a (pseudo)scalar with mass around 750 GeV, which couples to a
gluon pair and to a photon pair through loops involving vector-like quarks
(VLQs). To accommodate the observed rate, the required Yukawa couplings tend to
be large. A large Yukawa coupling would rapidly run up with the scale and
quickly reach the perturbativity bound, indicating that new physics, possibly
with a strong dynamics origin, is nearby. The case becomes stronger especially
if the ATLAS observation of a large width persists. In this paper we study the
implications for the scale of new physics from the 750 GeV diphoton excess using
the method of renormalization group running, with careful treatment of the
different contributions and of the perturbativity criterion. Our results suggest that the scale
of new physics is generically not much larger than the TeV scale, in particular
if the width of the hinted (pseudo)scalar is large. Introducing multiple copies
of VLQs, lowering the VLQ masses and enlarging the VLQ electric charges help reduce
the required Yukawa couplings and can push the cutoff scale to higher values.
Nevertheless, if the width of the 750 GeV resonance turns out to be larger than
about 1 GeV, it is very hard to increase the cutoff scale beyond a few TeV.
This is a strong hint that new particles in addition to the 750 GeV resonance
and the vector-like quarks should be around the TeV scale. Comment: 19 pages, 6
figures; v3: corrected Eqs. (2.6) and (3.1), updated references
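At one loop, the running-up of a Yukawa coupling described in the abstract above can be sketched analytically. The beta-function coefficient c and the perturbativity criterion y = sqrt(4*pi) below are schematic assumptions for illustration only, not the paper's careful treatment of the different contributions:

```python
import math

def cutoff_scale(y0, mu0=1.0, c=6.0, y_max=math.sqrt(4 * math.pi)):
    # Schematic one-loop RGE dy/dln(mu) = c * y**3 / (16 * pi**2),
    # which integrates to 1/y(mu)^2 = 1/y0^2 - c/(8*pi^2) * ln(mu/mu0).
    # Returns the scale (in units of mu0, e.g. TeV) at which the
    # coupling y reaches the assumed perturbativity bound y_max.
    ln_ratio = (8 * math.pi**2 / c) * (1 / y0**2 - 1 / y_max**2)
    return mu0 * math.exp(ln_ratio)

# A larger Yukawa coupling at mu0 = 1 TeV drives the cutoff down rapidly:
for y0 in (1.0, 2.0, 3.0):
    print(y0, cutoff_scale(y0))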
Study of the pure annihilation decays
In this work, we calculate the {\it CP}-averaged branching ratios and the
polarization fractions of the charmless hadronic decays
within the framework of the perturbative QCD (pQCD) approach, where is either a
light or axial-vector meson. These thirty-two decay modes can
occur through the annihilation topology only. Based on the perturbative
calculations and phenomenological analysis, we find the following results: (a)
the branching ratios of the considered thirty-two decays are
in the range of to ; (b) the B_c \to \bar{K}_1^0
K_1^+ and some other decays have sizable branching ratios and can be measured
at the LHC experiments; (c) the branching ratios of decays are generally much
larger than those of decays, by a factor of around 10-100; (d) the branching
ratios of B_c \to \bar{K}_1^0 K_1^+ decays are sensitive to the value of
, which will be tested by the running LHC and forthcoming SuperB
experiments; (e) the large longitudinal polarization contributions govern most
considered decays and play the dominant role.Comment: 19 pages, 1 eps figure. arXiv admin note: some text overlap with
arXiv:1003.392
Branching ratios of decays in the perturbative QCD approach
In this paper we calculate the branching ratios (BRs) of the 32 charmless
hadronic decays () by employing the perturbative
QCD (pQCD) factorization approach. These decay channels can only
occur via annihilation type diagrams in the standard model. From the numerical
calculations and phenomenological analysis, we found the following results: (a)
the pQCD predictions for the BRs of the considered decays are in the
range of to , while the CP-violating asymmetries are absent
because only one type of tree operator is involved here; (b) the BRs of
processes are generally much larger than those of ones, due to
the large CKM factor of ; (c) since the behavior for
meson is much different from that of meson, the BRs of decays are generally larger than those of
decays; (d) the pQCD predictions for the BRs of B_c \to (K_1(1270), K_1(1400))
\eta' and decays are rather sensitive to the value
of the mixing angle . Comment: 1+17 pages, 1 figure, refs. added and some
clarifications made, accepted for publication in Phys. Rev.
Multi-engine packet classification hardware accelerator
As line rates increase, the task of designing high-performance architectures with reduced power consumption for the processing of router traffic remains important. In this paper, we present a multi-engine packet classification hardware accelerator, which gives increased performance and reduced power consumption. It follows the basic idea of decision-tree-based packet classification algorithms, such as HiCuts and HyperCuts, in which the hyperspace represented by the ruleset is recursively divided into smaller subspaces according to some heuristics. Each classification engine consists of a Trie Traverser, which is responsible for finding the leaf node corresponding to the incoming packet, and a Leaf Node Searcher, which reports the matching rule in the leaf node. The packet classification engine utilizes the ultra-wide memory words provided by FPGA block RAM to store the decision-tree data structure, in an attempt to reduce the number of memory accesses needed for classification. Since the clock rate of an individual engine cannot keep up with that of the internal memory, multiple classification engines are used to increase the throughput. Implementations in two different FPGAs show that this architecture can reach a search speed of 169 million packets per second (Mpps) with synthesized ACL, FW and IPC rulesets. Further analysis reveals that, compared to state-of-the-art TCAM solutions, a power saving of up to 72% and an increase in throughput of up to 27% can be achieved.
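The cut-and-traverse idea the abstract above borrows from HiCuts/HyperCuts can be sketched in software. This is a hypothetical two-field simplification for illustration (a fixed four equal-width cuts per node and an invented rule format), not the paper's FPGA engine, which packs tree nodes into ultra-wide block-RAM words and selects cuts per node via heuristics:

```python
# Minimal HiCuts-style decision-tree classifier sketch over two fields.
BINTH = 2   # max rules kept in a leaf before cutting further
NCUTS = 4   # equal-width cuts per node (a real engine tunes this per node)

class Node:
    def __init__(self, rules, ranges, depth=0):
        # rules:  list of (rule_id, ((lo0, hi0), (lo1, hi1))), inclusive.
        # ranges: the box of the field space covered by this node.
        self.rules, self.children = rules, None
        if len(rules) > BINTH and depth < 8:
            # Heuristic: cut the widest dimension of this node's box.
            self.dim = max(range(2), key=lambda d: ranges[d][1] - ranges[d][0])
            self.lo, self.hi = ranges[self.dim]
            step = (self.hi - self.lo) / NCUTS
            self.children = []
            for i in range(NCUTS):
                c = (self.lo + i * step, self.lo + (i + 1) * step)
                # A rule overlapping two cuts is replicated into both children,
                # exactly as in HiCuts.
                sub = [r for r in rules
                       if r[1][self.dim][0] <= c[1] and r[1][self.dim][1] >= c[0]]
                child_ranges = list(ranges)
                child_ranges[self.dim] = c
                self.children.append(Node(sub, child_ranges, depth + 1))

def classify(root, pkt):
    node = root
    while node.children is not None:          # Trie Traverser
        step = (node.hi - node.lo) / NCUTS
        idx = min(NCUTS - 1, int((pkt[node.dim] - node.lo) / step))
        node = node.children[idx]
    for rid, rule in node.rules:              # Leaf Node Searcher
        if all(lo <= pkt[d] <= hi for d, (lo, hi) in enumerate(rule)):
            return rid                        # first listed = highest priority
    return None

rules = [(0, ((0, 63), (0, 255))),
         (1, ((64, 255), (0, 127))),
         (2, ((0, 255), (128, 255)))]
root = Node(rules, [(0.0, 256.0), (0.0, 256.0)])
print(classify(root, (10, 10)))
```

The hardware win comes from the traversal loop: each iteration is one memory read, so storing a whole node in one wide block-RAM word bounds the reads per packet by the tree depth.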
Field-based branch prediction for packet processing engines
Network processors have exploited many aspects of architecture design, such as multi-core, multi-threading and hardware accelerators, to support both the ever-increasing line rates and the higher complexity of network applications. Micro-architectural techniques like superscalar issue, deep pipelining and speculative execution provide an excellent means of improving performance without limiting either scalability or flexibility, provided that the branch penalty is well controlled. However, it is difficult for traditional branch predictors to keep increasing their accuracy by using larger tables, owing to the fewer variations in the branch patterns of packet processing. To improve prediction efficiency, we propose a flow-based prediction mechanism which caches the branch histories of packets with similar header fields, since such packets normally undergo the same execution path. For packets that cannot find a matching entry in the history table, a fallback gshare predictor is used to provide the branch direction. Simulation results show that our scheme achieves an average hit rate in excess of 97.5% on a selected set of network applications and real-life packet traces, with a chip area similar to that of the existing branch prediction architectures used in modern microprocessors.
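The two-level scheme in the abstract above can be sketched as a flow-indexed history cache backed by a classic gshare predictor. This is a hypothetical software simplification (dictionary flow table, 10-bit gshare, names invented here), not the paper's hardware design:

```python
# Sketch of a flow-based branch predictor with a gshare fallback.
GSHARE_BITS = 10
MASK = (1 << GSHARE_BITS) - 1

class Predictor:
    def __init__(self):
        self.flow_table = {}                      # flow_id -> {pc: last outcome}
        self.counters = [2] * (1 << GSHARE_BITS)  # 2-bit counters, weakly taken
        self.ghist = 0                            # global history register

    def _gidx(self, pc):
        # gshare index: branch address XOR global history.
        return (pc ^ self.ghist) & MASK

    def predict(self, flow_id, pc):
        hist = self.flow_table.get(flow_id)
        if hist is not None and pc in hist:
            return hist[pc]            # flow hit: replay the cached outcome
        return self.counters[self._gidx(pc)] >= 2   # fallback gshare

    def update(self, flow_id, pc, taken):
        # Record the resolved outcome in both the flow table and gshare.
        self.flow_table.setdefault(flow_id, {})[pc] = taken
        i = self._gidx(pc)
        self.counters[i] = min(3, self.counters[i] + 1) if taken \
                           else max(0, self.counters[i] - 1)
        self.ghist = ((self.ghist << 1) | int(taken)) & MASK
```

In hardware, flow_id would be a hash of the packet header fields (e.g. the 5-tuple), so packets of the same flow, which follow the same execution path, hit the same entry.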
Heavy Higgs bosons and the 2 TeV boson
The hints from the LHC for the existence of a boson of mass around 1.9
TeV point towards a certain gauge
theory with an extended Higgs sector. We show that the decays of the boson
into heavy Higgs bosons have sizable branching fractions. Interpreting the
ATLAS excess events in the search for same-sign lepton pairs plus jets as
arising from cascade decays, we estimate that the masses of the heavy
Higgs bosons are in the 400--700 GeV range. Comment: 22 pages; v2: Eqs. 3.6 and
3.8 corrected, clarifications and references added