Fermion masses in the economical 3-3-1 model
We show that, in the framework of the economical 3-3-1 model, all fermions
get masses. At tree level, one up-quark and two down-quarks are massless, but
the one-loop corrections give all quarks consistent masses. This conclusion
contradicts a previous analysis in which a third scalar triplet was
introduced. The result is based on two key properties of the model. First,
there are three quite different scales of vacuum expectation values:
$\omega \sim \mathcal{O}(1)\,\mathrm{TeV}$, $v \approx 246\,\mathrm{GeV}$, and
a third, much smaller scale. Second, there exist two types of Yukawa couplings
with different strengths: the lepton-number conserving couplings and the
lepton-number violating ones, with the latter much smaller than the former.
With an acceptable set of parameters, numerical evaluation shows that in
this model the masses of the exotic quarks also have different scales: one
exotic quark gains a mass quoted in GeV, while the $D_\alpha$ exotic quarks
($q_{D_\alpha} = -1/3$) have masses at the TeV scale:
$m_{D_\alpha} \in 10 \div 80$ TeV.
Comment: 20 pages, 8 figures
A deep level set method for image segmentation
This paper proposes a novel image segmentation approach that integrates fully
convolutional networks (FCNs) with a level set model. Compared with an FCN,
the integrated method can incorporate smoothing and prior information to
achieve an accurate segmentation. Furthermore, rather than using the level set
model as a post-processing tool, we integrate it into the training phase to
fine-tune the FCN. This allows the use of unlabeled data during training in a
semi-supervised setting. Using two types of medical imaging data (liver CT and
left ventricle MRI data), we show that the integrated method achieves good
performance even when little training data is available, outperforming the
FCN or the level set model alone.
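The level set machinery that such an integration builds on can be illustrated with a minimal sketch: one explicit evolution step of a level set function moving along its normal, written in NumPy. The grid size, constant speed, and function names below are illustrative assumptions, not the paper's actual formulation (which derives the speed from FCN outputs).

```python
import numpy as np

def level_set_step(phi, force, dt=0.1):
    """One explicit update of the level set function: phi moves along its
    normal with speed `force` (in the paper's setting the force would come
    from FCN probabilities), approximating d(phi)/dt = -force * |grad phi|."""
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2) + 1e-8  # avoid division issues later
    return phi - dt * force * grad_norm

# Toy example: a circle of radius 8, represented as a signed distance
# function, evolving outward under a constant unit speed.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 8.0
for _ in range(20):
    phi = level_set_step(phi, force=1.0, dt=0.5)
```

Since the initial `phi` is a signed distance function, `|grad phi|` is close to 1 and the zero level set advances by roughly `dt` per step.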
Refinement Type Inference via Horn Constraint Optimization
We propose a novel method for inferring refinement types of higher-order
functional programs. The main advantage of the proposed method is that it can
infer maximally preferred (i.e., Pareto optimal) refinement types with respect
to a user-specified preference order. The flexible optimization of refinement
types enabled by the proposed method paves the way for interesting
applications, such as inferring most-general characterization of inputs for
which a given program satisfies (or violates) a given safety (or termination)
property. Our method reduces such a type optimization problem to a Horn
constraint optimization problem by using a new refinement type system that can
flexibly reason about non-determinism in programs. Our method then solves the
constraint optimization problem by repeatedly improving a current solution
until convergence via template-based invariant generation. We have implemented
a prototype inference system based on our method, and obtained promising
results in preliminary experiments.
Comment: 19 pages
Concept, realization and characterization of serially powered pixel modules
We prove and demonstrate here, for the example of the large-scale pixel detector of ATLAS, that Serial Powering of pixel modules is a viable alternative. Serial Powering has been devised and implemented for ATLAS pixel modules using dedicated on-chip voltage regulators and modified flex hybrid circuits. The equivalent of a pixel ladder consisting of six serially powered pixel modules with about 0.3 Mpixels has been built, and its performance with respect to noise, threshold stability, and operation failures has been studied. We believe that Serial Powering in general will be necessary for future large-scale tracking detectors.
Neutrino masses in the economical 3-3-1 model
We show that, in the framework of the economical 3-3-1 model, a suitable
pattern of neutrino masses arises from three quite different sources - the
lepton-number conserving, the spontaneous lepton-number breaking, and the
explicit lepton-number violating ones - spanning mass scales that range from
the GUT scale down through $\omega \sim \mathcal{O}(1)\,\mathrm{TeV}$ and
below. At tree level, the model contains three Dirac neutrinos: one massless
and two with degenerate masses of the order of the electron mass. At the
one-loop level, the left-handed and right-handed neutrinos obtain degenerate
Majorana masses, while the Dirac masses receive a large reduction through a
finite mass renormalization. In this model the contributions of new physics
are strongly manifest; the mass degeneracies and the remaining hierarchy
between the Majorana and Dirac masses can be completely removed by heavy
particles. All the neutrinos acquire mass and can fit the data.
Comment: 15 pages, 8 figures
Lorentz violating kinematics: Threshold theorems
Recent tentative experimental indications, and the subsequent theoretical
speculations, regarding possible violations of Lorentz invariance have
attracted a vast amount of attention. An important technical issue that
considerably complicates detailed calculations in any such scenario, is that
once one violates Lorentz invariance the analysis of thresholds in both
scattering and decay processes becomes extremely subtle, with many new and
naively unexpected effects. In the current article we develop several extremely
general threshold theorems that depend only on the existence of some energy
momentum relation E(p), eschewing even assumptions of isotropy or monotonicity.
We shall argue that there are physically interesting situations where such a
level of generality is called for, and that existing (partial) results in the
literature make unnecessary technical assumptions. Even in this most general of
settings, we show that at threshold all final state particles move with the
same 3-velocity, while initial state particles must have 3-velocities
parallel/anti-parallel to the final state particles. In contrast the various
3-momenta can behave in a complicated and counter-intuitive manner.
Comment: V1: 32 pages, 6 figures, 3 tables. V2: 5 references added
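The "equal 3-velocities at threshold" statement can be summarized as a stationarity condition. The following is a hedged sketch of the standard Lagrange-multiplier argument, assuming differentiable dispersion relations $E_i(\vec p)$; it is not a quotation of the paper's proof:

```latex
% Threshold = minimum of the total final-state energy at fixed total momentum:
\min_{\{\vec p_i\}} \; \sum_i E_i(\vec p_i)
\quad \text{subject to} \quad \sum_i \vec p_i = \vec P_{\mathrm{tot}}
% Stationarity: the multiplier \vec\lambda is the common 3-velocity.
\;\Longrightarrow\;
\vec v_i \equiv \frac{\partial E_i}{\partial \vec p_i} = \vec\lambda
\quad \text{for every final-state particle } i .
```

Note that no isotropy or monotonicity of $E_i(\vec p)$ is needed for this condition, which is consistent with the level of generality the abstract claims.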
Anesthesia assessment based on ICA permutation entropy analysis of two-channel EEG signals
Inaccurate assessment may lead to inappropriate anesthetic dosage: under-dosage during surgery can cause intraoperative awareness, while over-dosage can cause prolonged recovery after surgery. Previous research and evidence show that assessing anesthetic levels with the help of electroencephalography (EEG) signals gives an overall better picture of the patient's anesthetic state. This paper presents a new method to assess the depth of anesthesia (DoA) using Independent Component Analysis (ICA) and permutation entropy analysis. ICA is performed on two-channel EEG to reduce the noise; wavelet and permutation entropy analyses are then applied to these channels to extract features. A linear regression model was used to build the new DoA index from the selected features. The new index performs well under low signal quality and was consistent in most of the cases where the Bispectral index (BIS) may fail to provide any valid value.
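Permutation entropy itself is a standard, easily implemented quantity: the Shannon entropy of the distribution of ordinal patterns (Bandt-Pompe patterns) in a time series. A minimal sketch in Python, independent of the paper's specific pipeline; the embedding order, delay, and normalization choices here are illustrative assumptions:

```python
import math

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    patterns of length `order` occurring in `signal`, divided by
    log(order!) so the result lies in [0, 1]."""
    counts = {}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = tuple(signal[i + j * delay] for j in range(order))
        # Ordinal pattern: the permutation that sorts the window.
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))
```

A strictly monotone signal produces a single ordinal pattern and hence entropy 0, while irregular signals approach 1; lower values are typically associated with deeper anesthesia in EEG-based DoA work.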
Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we take this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break, by a multiplicative factor, the
information-theoretic lower bound known for exact reconstruction of sparse
vectors via non-adaptive measurements.
Specifically, we give simple randomized constructions of non-adaptive
measurement schemes that allow efficient reconstruction of sparse vectors up
to a bounded number of false positives, even in the presence of false
positives and false negatives within the measurement outcomes. We show that,
information theoretically, none of these parameters can be substantially
improved without dramatically affecting the others. Furthermore, we obtain
several explicit constructions, in particular one matching the randomized
trade-off but using more measurements. We also obtain explicit constructions
that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be
sublinear in the vector length for sufficiently sparse vectors. The main tool
used in our construction is the list-decoding view of randomness condensers
and extractors.
Comment: Full version. A preliminary summary of this work appears (under the
same title) in proceedings of the 17th International Symposium on
Fundamentals of Computation Theory (FCT 2009).
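The disjunctive-measurement model is easy to simulate. Below is a minimal sketch using a random Bernoulli pooling design and the simple COMP-style decoder in the noiseless case; it illustrates the approximate-reconstruction theme (the decoder may report false positives but never misses a true positive), and is not the paper's noise-resilient construction. The parameters `n`, `d`, `m` are arbitrary illustrative choices.

```python
import random

def disjunctive_tests(matrix, x):
    """Outcome of each pooled test: the OR of the items included in it."""
    return [any(row[j] and x[j] for j in range(len(x))) for row in matrix]

def comp_decode(matrix, outcomes, n):
    """COMP decoding: any item appearing in a negative test is clean;
    everything else is declared a candidate positive (so the output may
    contain false positives, but no false negatives)."""
    candidates = set(range(n))
    for row, out in zip(matrix, outcomes):
        if not out:
            candidates -= {j for j in range(n) if row[j]}
    return candidates

random.seed(0)
n, d, m = 200, 3, 60
x = [0] * n
for j in random.sample(range(n), d):
    x[j] = 1
# Random Bernoulli pooling design with inclusion probability ~ 1/d.
matrix = [[random.random() < 1.0 / d for _ in range(n)] for _ in range(m)]
decoded = comp_decode(matrix, disjunctive_tests(matrix, x), n)
```

Adversarial noise would flip some outcomes, which is exactly where this naive decoder breaks down and the more careful constructions of the paper are needed.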
Yang-Mills instantons and dyons on homogeneous G_2-manifolds
We consider Lie G-valued Yang-Mills fields on the space R x G/H, where G/H is
a compact nearly Kähler six-dimensional homogeneous space, and the manifold R
x G/H carries a G_2-structure. After imposing a general G-invariance condition,
Yang-Mills theory with torsion on R x G/H is reduced to Newtonian mechanics of
a particle moving in R^6, R^4 or R^2 under the influence of an inverted
double-well-type potential for the cases G/H = SU(3)/U(1)xU(1),
Sp(2)/Sp(1)xU(1) or G_2/SU(3), respectively. We analyze all critical points and
present analytical and numerical kink- and bounce-type solutions, which yield
G-invariant instanton configurations on those cosets. Periodic solutions on S^1
x G/H and dyons on iR x G/H are also given.
Comment: 1+26 pages, 14 figures, 6 miniplots