Heavy QQ(bar) "Fireball" Annihilation to Multiple Vector Bosons
Drawing an analogy that replaces the nucleon by a heavy chiral quark Q, the pion by the Goldstone boson (the longitudinal weak boson), and the pion-nucleon coupling by the Yukawa coupling, we construct a statistical model for QQ(bar) annihilation into multiple longitudinal weak bosons. This analogy is becoming prescient, since the LHC direct bound on the heavy-quark mass implies a strong Yukawa coupling. For annihilation energies at the TeV scale, the mean boson multiplicity ranges from 6 to over 10, with negligible two- or three-boson production. With individual heavy-quark decays suppressed either by phase space or by quark mixing, and given the strong Yukawa coupling, multi-boson annihilation is the likely outcome for very heavy quark pair production at the LHC. Comment: 4 pages, 1 figure
A statistical framework for the design of microarray experiments and effective detection of differential gene expression
Four reasons why you might wish to read this paper: 1. We have devised a new statistical T-test to determine differentially expressed genes (DEGs) in the context of microarray experiments. This statistical test adds a new member to the traditional T-test family. 2. An exact formula for calculating the detection power of this T-test is presented, which can also be fairly easily modified to cover the traditional T-tests. 3. We present an accurate yet computationally very simple method to estimate the fraction of non-DEGs in a set of genes being tested. This method is superior to an existing one, which is computationally much more involved. 4. We approach the multiple-testing problem from a fresh angle and discuss its relation to the classical Bonferroni procedure and to the FDR (false discovery rate) approach. This is most useful in the analysis of microarray data, where typically several thousand genes are tested simultaneously. Comment: 9 pages, 1 table; to appear in Bioinformatics
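For orientation, the multiple-testing setting described above can be sketched with the standard Welch T-test rather than the paper's new test (whose form is not given in this abstract); the gene counts, group sizes, and significance level below are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_per_group, alpha = 5000, 6, 0.05      # illustrative sizes only
control = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
treated = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
treated[:250] += 1.5                             # pretend the first 250 genes are truly differential

# One Welch T-test per gene: several thousand simultaneous tests.
t_stat, p_val = stats.ttest_ind(treated, control, axis=1, equal_var=False)

# Bonferroni: controls the family-wise error rate across all genes.
bonferroni_rejections = int(np.sum(p_val < alpha / n_genes))

# Benjamini-Hochberg: controls the false discovery rate (FDR).
p_sorted = np.sort(p_val)
thresholds = alpha * np.arange(1, n_genes + 1) / n_genes
below = p_sorted <= thresholds
bh_rejections = int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0

print("Bonferroni rejections:", bonferroni_rejections)
print("BH (FDR) rejections:  ", bh_rejections)

With several thousand genes, the Bonferroni cut-off becomes very stringent, which is why the FDR viewpoint the abstract mentions is usually preferred for microarray screens.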
Energy Models for One-Carrier Transport in Semiconductor Devices
Moment models of carrier transport, derived from the Boltzmann equation, made possible the simulation of certain key effects through such realistic assumptions as energy-dependent mobility functions. This type of global dependence permits the observation of velocity overshoot in the vicinity of device junctions, not discerned via classical drift-diffusion models, which are primarily local in nature. It was found that a critical role is played in the hydrodynamic model by the heat conduction term. When ignored, the overshoot is inappropriately damped. When the standard choice of the Wiedemann-Franz law is made for the conductivity, spurious overshoot is observed. Agreement with Monte Carlo simulation in this regime required empirical modification of this law, or nonstandard choices. Simulations of the hydrodynamic model in one and two dimensions, as well as simulations of a newly developed energy model, the RT model, are presented. The RT model, intermediate between the hydrodynamic and drift-diffusion models, was developed to eliminate the parabolic energy band and Maxwellian distribution assumptions, and to reduce the spurious overshoot with physically consistent assumptions. The algorithms employed for both models are essentially non-oscillatory (ENO) shock-capturing algorithms. Some mathematical results are presented and contrasted with the highly developed state of the drift-diffusion model.
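For reference, the Wiedemann-Franz closure mentioned above ties the carrier heat conductivity to the electrical conductivity. In its textbook (degenerate electron gas) form, and with the understanding that hydrodynamic device codes adjust the prefactor, as the abstract's empirical modifications suggest:

\[
\kappa = L\,\sigma\,T, \qquad L = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2, \qquad \sigma = e\,n\,\mu(E),
\]

where \kappa is the thermal conductivity entering the heat conduction term, \sigma the electrical conductivity, T the carrier temperature, n the carrier density, and \mu(E) the energy-dependent mobility.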
A Monte Carlo Evaluation of the Efficiency of the PCSE Estimator
Panel data characterized by groupwise heteroscedasticity, cross-sectional correlation, and AR(1) serial correlation pose problems for econometric analyses. It is well known that the asymptotically efficient FGLS estimator (Parks) sometimes performs poorly in finite samples. In a widely cited paper, Beck and Katz (1995) claim that their estimator (PCSE) is able to produce more accurate coefficient standard errors without any loss in efficiency in "practical research situations." This study disputes that claim. We find that the PCSE estimator is usually less efficient than Parks -- and substantially so -- except when the number of time periods is close to the number of cross-sections. Keywords: panel data estimation; Monte Carlo analysis; FGLS; Parks; PCSE; finite sample
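As a rough sketch of the kind of Monte Carlo comparison described above (not the paper's design): the code below simulates panels with groupwise heteroscedasticity and AR(1) errors, then compares the sampling spread of the OLS slope (the point estimator underlying PCSE) with a GLS benchmark that uses the true error covariance in place of the Parks FGLS estimate. All sample sizes and parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_units, n_periods, n_reps = 10, 20, 2000
beta, rho = 1.0, 0.7
sigmas = np.linspace(0.5, 2.0, n_units)          # groupwise heteroscedasticity

def simulate_panel():
    # Single regressor, unit-major stacking: unit 0's T periods first, then unit 1's, ...
    x = rng.normal(size=(n_units, n_periods))
    e = np.empty((n_units, n_periods))
    e[:, 0] = rng.normal(scale=sigmas / np.sqrt(1 - rho**2))   # stationary start
    for t in range(1, n_periods):
        e[:, t] = rho * e[:, t - 1] + rng.normal(scale=sigmas)
    y = beta * x + e
    return x.ravel(), y.ravel()

# True error covariance (block diagonal over units): sigma_i^2 * rho^|t-s| / (1 - rho^2).
lags = np.abs(np.subtract.outer(np.arange(n_periods), np.arange(n_periods)))
ar1_block = rho**lags / (1.0 - rho**2)
omega_inv = np.linalg.inv(np.kron(np.diag(sigmas**2), ar1_block))

ols_draws, gls_draws = [], []
for _ in range(n_reps):
    x, y = simulate_panel()
    ols_draws.append((x @ y) / (x @ x))                              # OLS slope (PCSE point estimate)
    gls_draws.append((x @ omega_inv @ y) / (x @ omega_inv @ x))      # GLS with known covariance

print("sd of OLS slope:", np.std(ols_draws))
print("sd of GLS slope:", np.std(gls_draws))

Comparing the two standard deviations across replications gives the finite-sample efficiency loss of the OLS/PCSE point estimator relative to the (here infeasible) GLS benchmark; the feasible Parks estimator replaces the known covariance with an estimate, which is where its finite-sample problems arise.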
A Revisit to Top Quark Forward-Backward Asymmetry
We analyze various models for the top quark forward-backward asymmetry (A_FB) at the Tevatron, using the latest CDF measurements of the different asymmetries and the total cross section. The axigluon model of Ref. \cite{paul} has difficulties explaining the large rapidity-dependent asymmetry and the mass-dependent asymmetry simultaneously, and the parameter space relevant to A_FB is ruled out by the latest dijet search at ATLAS. In contrast to Ref. \cite{cp}, we demonstrate that the large parameter space in this model with a flavor symmetry is not ruled out by flavor physics. The t-channel flavor-violating Z' \cite{hitoshi}, W' \cite{waiyee} and diquark \cite{tim} models all have parameter regions that satisfy the different measurements within 1 sigma. However, the heavy Z' model, which can be marginally consistent with the total cross section, is severely constrained by the Tevatron direct search for same-sign top quark pairs. The diquark model suffers from a too-large total cross section and has difficulty fitting the top-pair invariant mass distribution. The electroweak precision constraints on the W' model, based on gauge-boson mixing, are estimated and found to be rather weak. Therefore, the heavy W' model seems to give the best fit to all the measurements. The model predicts a resonance signal from associated production at 10%-50% of the SM rate at the 7 TeV LHC; such a resonance can serve as a direct test of the model. Comment: 25 pages, 7 figures, 1 table
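For reference, and assuming the standard rapidity-based definition used in the Tevatron analyses (the abstract itself does not spell it out), the forward-backward asymmetry is

\[
A_{FB} = \frac{N(\Delta y > 0) - N(\Delta y < 0)}{N(\Delta y > 0) + N(\Delta y < 0)},
\qquad \Delta y = y_t - y_{\bar t},
\]

where y_t and y_{\bar t} are the top and antitop rapidities; the "rapidity-dependent" and "mass-dependent" asymmetries mentioned above are this quantity measured in bins of |\Delta y| and of the top-pair invariant mass.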