Growing Perfect Decagonal Quasicrystals by Local Rules
A local growth algorithm for a decagonal quasicrystal is presented. We show
that a perfect Penrose tiling (PPT) layer can be grown on a decapod tiling
layer by a three-dimensional (3D) local growth rule. Once a PPT layer begins to
form on the upper layer, successive 2D PPT layers can be added on top, resulting
in a perfect decagonal quasicrystalline structure in bulk, with a point defect
only on the bottom surface layer. Our growth rule shows that an ideal
quasicrystal structure can be constructed by a local growth algorithm in 3D,
in contrast to the non-local information required for 2D PPT growth.
Comment: 4 pages, 2 figures
Practical purification scheme for decohered coherent-state superpositions via partial homodyne detection
We present a simple protocol to purify a coherent-state superposition that
has undergone a linear lossy channel. The scheme consists of only a single beam
splitter and a homodyne detector, and is thus experimentally feasible. In
practice, a superposition of coherent states is transformed into a classical
mixture of coherent states by linear loss, which is usually the dominant
decoherence mechanism in optical systems. We also address the possibility of
producing a larger-amplitude superposition state from decohered states, and
show that in most cases the decoherence of the states is amplified along with
the amplitude.
Comment: 8 pages, 10 figures
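The decoherence described above has a simple closed form: for an even cat state |α⟩ + |−α⟩ sent through a loss channel with transmissivity η, the off-diagonal (interference) terms are suppressed by exp(−2(1−η)|α|²), a standard result for linear loss. A minimal sketch of this scaling:

```python
import numpy as np

def cat_coherence(alpha, eta):
    # Suppression factor of the interference terms of a coherent-state
    # superposition |a> + |-a> after a linear loss channel with
    # transmissivity eta (standard result for linear loss; sketch only).
    return np.exp(-2.0 * (1.0 - eta) * abs(alpha) ** 2)

# A lossless channel preserves coherence; larger amplitudes decohere faster.
print(cat_coherence(2.0, 1.0))   # 1.0
print(cat_coherence(2.0, 0.9))   # exp(-0.8), roughly 0.45
```

This is why growing the amplitude of a decohered superposition tends to amplify its decoherence as well: the suppression is exponential in |α|².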
Singlet-doublet Higgs mixing and its implications on the Higgs mass in the PQ-NMSSM
We examine the implications of singlet-doublet Higgs mixing on the properties
of a Standard Model (SM)-like Higgs boson within the Peccei-Quinn invariant
extension of the NMSSM (PQ-NMSSM). The SM singlet added to the Higgs sector
connects the PQ and visible sectors through a PQ-invariant non-renormalizable
K\"ahler potential term, making the model free from the tadpole and domain-wall
problems. For the case that the lightest Higgs boson is dominated by the
singlet scalar, the Higgs mixing increases the mass of a SM-like Higgs boson
while reducing its signal rate at collider experiments compared to the SM case.
The Higgs mixing is also important in the region of parameter space where the
NMSSM contribution to the Higgs mass is small, but its size is limited by the
experimental constraints on the singlet-like Higgs boson and on the lightest
neutralino, which is composed mainly of the singlino, whose Majorana mass term
is forbidden by the PQ symmetry. Nonetheless, the Higgs mixing can increase the
SM-like Higgs boson mass by a few GeV or more even when the Higgs signal rate
is close to the SM prediction, and thus may be crucial for achieving a 125 GeV
Higgs mass, as hinted by the recent ATLAS and CMS data. Such an effect can
reduce the role of stop mixing.
Comment: 26 pages, 3 figures; published in JHEP
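The mass increase from singlet-doublet mixing is ordinary two-level repulsion: when the singlet-like eigenstate is the lighter one, diagonalizing the 2x2 CP-even mass-squared matrix pushes the SM-like eigenvalue above its unmixed diagonal entry. A sketch with purely illustrative numbers (the entries below are hypothetical, not PQ-NMSSM fits):

```python
import numpy as np

# Illustrative 2x2 CP-even mass-squared matrix in the (doublet, singlet)
# basis; all entries are hypothetical, chosen only to show the effect.
m2_hh = 115.0**2   # unmixed SM-like entry (GeV^2)
m2_ss = 90.0**2    # lighter singlet-like entry (GeV^2)
m2_hs = 40.0**2    # mixing entry (GeV^2)

M2 = np.array([[m2_hh, m2_hs],
               [m2_hs, m2_ss]])
light2, heavy2 = np.linalg.eigvalsh(M2)  # ascending order

# Level repulsion: the SM-like mass is lifted above its unmixed 115 GeV.
print(np.sqrt(heavy2))  # roughly 117 GeV
```

The same diagonalization also pushes the singlet-like eigenvalue down, which is why the experimental constraints on the light singlet-like state limit how large the mixing can be.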
Natural Islands for a 125 GeV Higgs in the scale-invariant NMSSM
We study whether a 125 GeV Standard Model-like Higgs boson can be
accommodated within the scale-invariant NMSSM in a way that is natural in all
respects, i.e., not only is the stop mass, and hence its loop contribution to
the Higgs mass, of natural size, but we also do not allow significant tuning of
the NMSSM parameters. We pursue as much as possible an analytic approach, which
gives clear insights into the various ways of accommodating such a Higgs mass,
while conducting complementary numerical analyses. We consider scenarios with
the singlet-like state being either heavier or lighter than the SM-like Higgs.
With small A-terms, we find that if the NMSSM is to remain perturbative up to
the GUT scale, a 125 GeV Higgs mass cannot be obtained, even if the NMSSM
parameters are tuned. If we allow some of the couplings to become
non-perturbative below the GUT scale, the non-tuned option implies that
the singlet self-coupling, kappa, is larger than the singlet-Higgs coupling,
lambda, which is itself of order 1. This leads to a Landau pole for these
couplings close to the weak scale, in particular below ~10^4 TeV. In both the
perturbative and non-perturbative NMSSM, allowing large A_lambda and A_kappa
gives "more room" to accommodate a 125 GeV Higgs, but a tuning of these A-terms
may be needed. In our analysis we also conduct a careful study of the
constraints on the parameter space from requiring global stability of the
desired vacuum fitting a 125 GeV Higgs, which is complementary to the existing
literature. In particular, as the singlet-Higgs coupling lambda increases,
vacuum stability becomes a more serious issue.
Comment: 34 pages, 4 figures, references added, minor corrections to text and
figures, version to be published in JHEP
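The tension above can be seen from the well-known tree-level bound m_h^2 <= m_Z^2 cos^2(2β) + λ^2 v^2 sin^2(2β): the extra λ term helps only at low tan β, and GUT-scale perturbativity caps λ near ~0.7. A numerical sketch of the bound (illustrative inputs only):

```python
import numpy as np

MZ = 91.19   # Z boson mass (GeV)
V = 174.0    # electroweak vev (GeV)

def mh_tree(lam, tan_beta):
    # Tree-level upper bound on the SM-like Higgs mass in the NMSSM
    # (standard formula): m_h^2 <= mZ^2 cos^2(2b) + lam^2 v^2 sin^2(2b).
    b = np.arctan(tan_beta)
    m2 = MZ**2 * np.cos(2 * b)**2 + lam**2 * V**2 * np.sin(2 * b)**2
    return np.sqrt(m2)

# Even lambda ~ 0.7 (near the GUT-perturbativity limit) at low tan(beta)
# leaves a gap to 125 GeV that loops and mixing must fill.
print(mh_tree(0.7, 2.0))  # roughly 112 GeV
```

This is only the tree-level ceiling; the paper's point is that closing the remaining gap without tuning forces either non-perturbative couplings or large A-terms.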
Asymmetric quantum channel for quantum teleportation
There are a few obstacles that bring about imperfect quantum teleportation
of a continuous-variable state, such as the unavailability of maximally
entangled two-mode squeezed states, inefficient detection, and imperfect
unitary transformation at the receiving station. We show that all these
obstacles can be understood as a combination of an {\it asymmetrically
decohered} quantum channel and perfect apparatuses for the other operations.
For the asymmetrically decohered quantum channel, we find some
counter-intuitive results: one is that teleportation does not necessarily get
better as the channel is initially squeezed more; another is that when one
branch of the quantum channel is unavoidably subject to imperfect operations,
blindly making the other branch as clean as possible may not yield the best
teleportation result. We find the optimum strategy to teleport an unknown field
for a given environment or for a given initial squeezing of the channel.
Comment: 4 pages, 1 figure
Prediction of lethal and synthetically lethal knock-outs in regulatory networks
The complex interactions involved in regulation of a cell's function are
captured by its interaction graph. More often than not, detailed knowledge
about enhancing or suppressive regulatory influences and cooperative effects is
lacking and merely the presence or absence of directed interactions is known.
Here we investigate to what extent such reduced information allows one to
forecast the effect of a knock-out or a combination of knock-outs. Specifically,
we ask how far the lethality of eliminating nodes may be predicted from their
network centrality, such as degree and betweenness, without knowing the
function of the system. The function is taken as the ability to reproduce a
fixed point under a
discrete Boolean dynamics. We investigate two types of stochastically generated
networks: fully random networks and structures grown with a mechanism of node
duplication and subsequent divergence of interactions. On all networks we find
that the out-degree is a good predictor of the lethality of a single node
knock-out. For knock-outs of node pairs, the fraction of successors shared
between the two knocked-out nodes (out-overlap) is a good predictor of
synthetic lethality. Out-degree and out-overlap are locally defined and
computationally simple centrality measures that provide a predictive power
close to the optimal predictor.
Comment: published version, 10 pages, 6 figures, 2 tables; supplement at
http://www.bioinf.uni-leipzig.de/publications/supplements/11-01
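The out-overlap predictor for pair knock-outs is locally computable. A minimal sketch, assuming a Jaccard-style normalization (shared successors over the union; the paper's exact normalization may differ):

```python
def out_overlap(successors, u, v):
    # Fraction of successors shared by nodes u and v in a directed
    # interaction graph. Jaccard-style normalization is an assumption
    # here; the original measure may normalize differently.
    su, sv = set(successors[u]), set(successors[v])
    union = su | sv
    return len(su & sv) / len(union) if union else 0.0

# Toy regulatory graph: node -> set of regulated targets.
net = {"a": {"x", "y"}, "b": {"y", "z"}, "c": {"q"}}
print(out_overlap(net, "a", "b"))  # 1/3: one shared target out of three
print(out_overlap(net, "a", "c"))  # 0.0: no shared targets
```

Because it only inspects the immediate successors of the two knocked-out nodes, the measure stays cheap even on large interaction graphs.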
Edge overload breakdown in evolving networks
We investigate growing networks based on Barabasi and Albert's algorithm for
generating scale-free networks, but with edges sensitive to overload breakdown.
The load is defined through edge betweenness centrality. We focus on the
situation where the average number of connections per vertex, like the number
of vertices, increases linearly in time. After an initial stage of growth, the
network undergoes avalanching breakdowns to a fragmented state from which it
never recovers. This breakdown is much less violent if the growth is by random
rather than preferential attachment (the latter defining the Barabasi and
Albert model). We briefly discuss the case where the average number of
connections per vertex is constant; in this case no breakdown avalanches occur.
Implications for the growth of real-world communication networks are discussed.
Comment: To appear in Phys. Rev.
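Edge betweenness as a load measure can be illustrated by brute force on a small graph (a production code would use Brandes' algorithm; this sketch just enumerates shortest paths):

```python
from collections import deque, defaultdict
from itertools import combinations

def _shortest_paths(adj, s, t):
    # BFS recording shortest-path predecessors, then backtracking.
    dist, preds = {s: 0}, defaultdict(list)
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
            if dist[w] == dist[u] + 1:
                preds[w].append(u)
    if t not in dist:
        return []
    paths, stack = [], [[t]]
    while stack:
        p = stack.pop()
        if p[-1] == s:
            paths.append(p[::-1])
        else:
            stack.extend(p + [u] for u in preds[p[-1]])
    return paths

def edge_load(adj):
    # Edge betweenness: for every vertex pair, each edge receives the
    # fraction of shortest paths between the pair that cross it.
    load = defaultdict(float)
    for s, t in combinations(adj, 2):
        paths = _shortest_paths(adj, s, t)
        for p in paths:
            for u, v in zip(p, p[1:]):
                load[frozenset((u, v))] += 1.0 / len(paths)
    return dict(load)

# Toy graph: a chain a-b-c; both edges carry load from two vertex pairs.
chain = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(edge_load(chain))
```

In the overload model, an edge breaks when its load exceeds a fixed capacity, and the load is then redistributed over the surviving edges, which is what drives the avalanches.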
The acceleration and storage of radioactive ions for a neutrino factory
The term beta-beam has been coined for the production of a pure beam of
electron neutrinos or their antiparticles through the decay of radioactive ions
circulating in a storage ring. This concept requires radioactive ions to be
accelerated to a Lorentz gamma of 150 for 6He and 60 for 18Ne. The neutrino
source itself consists of a storage ring for this energy range, with long
straight sections in line with the experiment(s). Such a decay ring does not
exist at CERN today, nor does a high-intensity proton source for the production
of the radioactive ions. Nevertheless, the existing CERN accelerator
infrastructure could be reused, which would still represent an important saving
for a beta-beam facility. This paper outlines the first study; some of the
more speculative ideas will need further investigation.
Comment: Accepted for publication in the proceedings of Nufact02, London, 2002
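The role of the Lorentz gamma is to boost the decay neutrinos forward in energy: for emission along the beam direction, E_lab is approximately 2 γ E_cm. A back-of-envelope sketch (the ~1.9 MeV mean centre-of-mass neutrino energy for 6He used below is an illustrative assumption, not a figure from this study):

```python
# Forward-boosted neutrino energy from beta decay in flight (sketch):
# E_lab ~ 2 * gamma * E_cm for emission along the boost direction.
GAMMA_HE6, GAMMA_NE18 = 150, 60   # Lorentz gammas quoted above
E_CM_HE6_MEV = 1.9                # illustrative mean CM energy (assumption)

e_lab_mev = 2 * GAMMA_HE6 * E_CM_HE6_MEV
print(e_lab_mev)  # about 570 MeV: the beam probes the few-hundred-MeV range
```

This is why the two gammas quoted above fix the neutrino energy range that the decay ring's straight sections deliver to the experiments.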
Automatic eduction and statistical analysis of coherent structures in the wall region of a confined plane
This paper describes a vortex detection algorithm used to expose and statistically characterize the
coherent flow patterns observable in the velocity vector fields measured by Particle Image
Velocimetry (PIV) in the impingement region of air curtains. The philosophy and the architecture of
this algorithm are presented. Its strengths and weaknesses are discussed. The results of a
parametric analysis, performed to assess the variability of the algorithm's response to the three
user-specified parameters of the eduction scheme, are reviewed. The technique is illustrated in the
case of a plane turbulent impinging twin-jet with an opening ratio of 10. The corresponding jet
Reynolds number, based on the initial mean flow velocity U0 and the jet width e, is 14000. The
results of a statistical analysis of the size, shape, spatial distribution and energetic content of the
coherent eddy structures detected in the impingement region of this test flow are provided.
Although many questions remain open, new insights into the way these structures might form,
organize and evolve are given. Relevant results provide an original picture of the plane turbulent
impinging jet.
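A common building block for such vortex-eduction schemes is a pointwise identification criterion evaluated on the PIV velocity field. The sketch below uses the swirling-strength (lambda_ci) criterion as a stand-in; the paper's own scheme and its three user-specified parameters differ in detail:

```python
import numpy as np

def swirling_strength(u, v, dx=1.0, dy=1.0):
    # lambda_ci vortex criterion (one common choice; an assumption here,
    # not the paper's exact eduction scheme): the imaginary part of the
    # eigenvalues of the 2D velocity-gradient tensor, which is positive
    # wherever the local streamlines swirl.
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    tr_half = 0.5 * (dudx + dvdy)      # half the trace (mean strain)
    det = dudx * dvdy - dudy * dvdx    # determinant of the gradient tensor
    disc = det - tr_half**2            # > 0 means complex eigenvalues
    return np.sqrt(np.clip(disc, 0.0, None))

# Solid-body rotation u = -y, v = x has lambda_ci = 1 everywhere.
y, x = np.mgrid[0:8, 0:8].astype(float)
print(swirling_strength(-y, x).mean())  # 1.0
```

Thresholding such a field and clustering the connected regions is one way the size, shape and spatial distribution of the educed structures can then be measured.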
Finding and evaluating community structure in networks
We propose and study a set of algorithms for discovering community structure
in networks -- natural divisions of network nodes into densely connected
subgroups. Our algorithms all share two definitive features: first, they
involve iterative removal of edges from the network to split it into
communities, the edges removed being identified using one of a number of
possible "betweenness" measures, and second, these measures are, crucially,
recalculated after each removal. We also propose a measure for the strength of
the community structure found by our algorithms, which gives us an objective
metric for choosing the number of communities into which a network should be
divided. We demonstrate that our algorithms are highly effective at discovering
community structure in both computer-generated and real-world network data, and
show how they can be used to shed light on the sometimes dauntingly complex
structure of networked systems.
Comment: 16 pages, 13 figures
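The proposed strength measure is the modularity Q = sum_i (e_ii - a_i^2), where e_ii is the fraction of edges inside community i and a_i the fraction of edge ends attached to it. A minimal sketch for an undirected graph without self-loops:

```python
def modularity(adj, communities):
    # Modularity Q = sum_i (e_ii - a_i^2): e_ii is the fraction of
    # edges falling entirely inside community i, a_i the fraction of
    # edge ends attached to vertices of community i.
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    m = len(edges)
    q = 0.0
    for com in communities:
        inside = sum(1 for e in edges if e <= com)
        ends = sum(len(adj[n]) for n in com)   # sum of degrees
        q += inside / m - (ends / (2.0 * m)) ** 2
    return q

# Two triangles joined by a single edge: the natural split scores well.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
     "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"}}
print(modularity(g, [{"a", "b", "c"}, {"d", "e", "f"}]))  # 5/14
```

Running the edge-removal algorithm and keeping the partition that maximizes Q is how the number of communities is chosen objectively.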