The 125 GeV boson: A composite scalar?
Assuming that the 125 GeV particle observed at the LHC is a composite scalar
and responsible for the electroweak gauge symmetry breaking, we consider the
possibility that the bound state is generated by a non-Abelian gauge theory
with dynamically generated gauge boson masses and a specific chiral symmetry
breaking dynamics motivated by confinement. The scalar mass is computed with
the use of the Bethe-Salpeter equation and its normalization condition as a
function of the SU(N) group and the respective fermionic representation. If the
fermions that form the composite state are in the fundamental representation of
the SU(N) group, we can generate such a light boson only for one specific
number of fermions for each group. In the case of small groups, like SU(2) to
SU(5), with two fermions in the adjoint representation, we find that it is
quite improbable to generate such a light composite scalar.
Comment: 24 pages, 5 figures; discussion extended, references added; version
to appear in Phys. Rev.
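For orientation, the homogeneous Bethe-Salpeter equation for a fermion-antifermion bound state of total momentum P has the generic schematic form below; the interaction kernel K and the dressed propagators S stand in for the paper's specific dynamics (dynamically massive gauge bosons, confinement-motivated chiral symmetry breaking), which are not reproduced here.

    % Schematic homogeneous Bethe-Salpeter equation for the bound-state
    % amplitude \chi (generic kernel K, dressed fermion propagators S):
    \chi(p, P) = \int \frac{d^4 k}{(2\pi)^4} \, K(p, k; P)\,
        S\!\left(k + \tfrac{P}{2}\right) \chi(k, P)\,
        S\!\left(k - \tfrac{P}{2}\right)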
The use of heterocaryons in the maintenance of slime stocks of Neurospora crassa, and a method for the re-isolation of slime from heterocaryons
The Importance of Statistical Theory in Outlier Detection
We explore the performance of the outlier-sum statistic (Tibshirani and Hastie, Biostatistics 2007, 8:2--8), a proposed method for identifying genes for which only a subset of a group of samples or patients exhibits differential expression levels. Our discussion focuses on this method as an example of how inattention to standard statistical theory can lead to approaches that exhibit some serious drawbacks. In contrast to the results presented by those authors, when comparing this method to several variations of the t-test, we find that the proposed method offers little benefit even in the most idealized scenarios, and suffers from a number of limitations, including difficulty of calibration, high false positive rates owing to its asymmetric treatment of groups, poor power or discriminatory ability under many alternatives, and poorly defined application to one-sample settings. Further issues in the Tibshirani and Hastie paper concern the presentation and accuracy of their simulation results; we were unable to reproduce their findings, and we discuss several undesirable and implausible aspects of their results.
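To make the statistic under discussion concrete, here is a minimal Python sketch of the outlier-sum computation for a single gene, assuming the median/MAD standardization and IQR-based outlier threshold of the published construction; the function name and the simulated data are illustrative only, not the authors' code.

    import numpy as np

    def outlier_sum(control, disease):
        """Outlier-sum statistic for one gene (sketch of the
        Tibshirani-Hastie construction; details may differ)."""
        x = np.concatenate([control, disease])
        med = np.median(x)
        mad = np.median(np.abs(x - med))            # median absolute deviation
        z = (x - med) / mad                         # robust standardization
        q25, q75 = np.percentile(z, [25, 75])
        thresh = q75 + (q75 - q25)                  # IQR-based outlier cutoff
        z_disease = z[len(control):]
        return z_disease[z_disease > thresh].sum()  # sum of outlying values

    rng = np.random.default_rng(0)
    control = rng.normal(0, 1, 20)
    disease = rng.normal(0, 1, 20)
    disease[:3] += 4.0                              # only a subset is shifted
    print(outlier_sum(control, disease))

Note that the statistic sums standardized values only from the "disease" group; this asymmetry is one of the limitations raised in the abstract above.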
Genetic nature of the slime variant of Neurospora crassa
Some Observations on the Wilcoxon Rank Sum Test
This manuscript presents some general comments about the Wilcoxon rank sum test. Even the most casual reader will gather that I am not too impressed with the scientific usefulness of the Wilcoxon test. However, the actual motivation is more to illustrate differences between parametric, semiparametric, and nonparametric (distribution-free) inference, and to use this example to illustrate how many misconceptions have been propagated through a focus on (semi)parametric probability models as the basis for evaluating commonly used statistical analysis models. The document itself arose as a teaching tool for courses aimed at graduate students in biostatistics and statistics, with parts of the document originally written for applied biostatistics classes and parts written for a course in mathematical statistics. Hence, some of the material is also meant to provide an illustration of common methods of deriving moments of distributions, etc.
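For readers who want to experiment with the test being critiqued, the sketch below runs the Wilcoxon rank sum test on simulated two-sample data using SciPy's Mann-Whitney U implementation, which is equivalent for two independent samples; the data and seed are arbitrary.

    import numpy as np
    from scipy.stats import mannwhitneyu  # Wilcoxon rank sum / Mann-Whitney U

    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, 30)   # group 1
    y = rng.normal(0.5, 1.0, 30)   # group 2, shifted location
    stat, p = mannwhitneyu(x, y, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.4f}")

A rejection here is often read as evidence of a location shift, but, as the manuscript argues, the test's behavior under more general alternatives depends on which probability model one takes as the frame of reference.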
Scalable Noise Estimation with Random Unitary Operators
We describe a scalable stochastic method for the experimental measurement of
generalized fidelities characterizing the accuracy of the implementation of a
coherent quantum transformation. The method is based on the motion reversal of
random unitary operators. In the simplest case our method enables direct
estimation of the average gate fidelity. The more general fidelities are
characterized by a universal exponential rate of fidelity loss. In all cases
the measurable fidelity decrease is directly related to the strength of the
noise affecting the implementation, as quantified by the trace of the
superoperator describing the non-unitary dynamics. While the scalability of
our stochastic protocol makes it most relevant in large Hilbert spaces (where
quantum process tomography is infeasible), our method should be immediately
useful for evaluating the degree of control that is achievable in any prototype
quantum processing device. By varying over different experimental arrangements
and error-correction strategies, additional information about the noise can be
determined.
Comment: 8 pages; v2: published version (typos corrected; reference added)
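As a toy numerical illustration of the motion-reversal idea (not the authors' protocol), the single-qubit simulation below applies a random unitary, a depolarizing error of strength p, and then the inverse unitary, and records the average probability of returning to the initial state; the survival probability decreases linearly with the noise strength, here 1 - p/2.

    import numpy as np

    def random_unitary_2x2(rng):
        """Haar-distributed 2x2 unitary via QR of a complex Gaussian matrix."""
        z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        q, r = np.linalg.qr(z)
        d = np.diagonal(r)
        return q * (d / np.abs(d))  # fix column phases

    def survival_probability(p_depol, trials, rng):
        """Average probability of returning to |0> after U, depolarizing
        noise of strength p_depol, then U^dagger (motion reversal)."""
        psi0 = np.array([1.0, 0.0], dtype=complex)
        total = 0.0
        for _ in range(trials):
            u = random_unitary_2x2(rng)
            psi = u @ psi0
            rho = np.outer(psi, psi.conj())
            rho = (1 - p_depol) * rho + p_depol * np.eye(2) / 2  # depolarize
            rho = u.conj().T @ rho @ u                           # reverse motion
            total += rho[0, 0].real
        return total / trials

    rng = np.random.default_rng(42)
    for p in (0.0, 0.1, 0.3):
        print(f"p = {p:.1f}: survival = {survival_probability(p, 500, rng):.3f}")

In larger Hilbert spaces the same reversal-and-measure loop can be repeated over sequences of random unitaries, with the fidelity decay rate serving as the noise estimate, which is the scalability argument made in the abstract.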