6,165 research outputs found
A quantum model of dark energy
We propose a quantum model of dark energy. The proposed candidate for dark
energy is the gluon field; as is well known, gluons are elementary particles.
We assume that gluons may not be completely massless but have tiny masses, so
the gluon field can provide a non-zero energy-momentum tensor. This model
corresponds to Einstein's cosmological constant, which is one of the generally
accepted models for dark energy. Besides the gluon field, we also discuss the
properties of the electroweak boson field and compare our results with
previously known results.
Comment: 4 pages
Quantum gravity and mass of gauge field: a four-dimensional unified quantum theory
We present in detail a four-dimensional unified quantum theory. In this
theory, we identify three classes of parameters, coordinate-momentum, spin and
gauge, as all and the only fundamental parameters needed to describe quantum
fields. The coordinate-momentum is formulated by general relativity in
four-dimensional space-time. The theory satisfies the general covariance
condition, and the generally covariant derivative operator is given. In a
unified and combined description, the matter fields, gravity field and gauge
fields satisfy the Dirac equation, Einstein equation and Yang-Mills equation in
operator form. In the framework of our theory, we mainly realize the following
aims: (1) The gravity field is described by a quantum theory; the graviton is
massless, with spin 2. (2) The mass problem of gauge theory is solved: mass
arises naturally from the gauge space, so the Higgs mechanism is not
necessary. (3) Color confinement of quarks is explained. (4) Parity violation
in weak interactions is obtained. (5) Gravity will cause CPT violation. (6) A
dark energy solution of the quantum theory is presented; it corresponds to
Einstein's cosmological constant. We propose that the candidate for dark energy
should be the gluon, one of the elementary particles.
Comment: 86 pages, v2: typos corrected
Adaptive Policies for Scheduling with Reconfiguration Delay: An End-to-End Solution for All-Optical Data Centers
All-optical switching networks have been considered a promising candidate for
next-generation data center networks thanks to their scalability in data
bandwidth and their power efficiency. However, the bufferless nature and the
nonzero reconfiguration delay of optical switches remain great challenges in
deploying all-optical networks. This paper considers end-to-end scheduling
for all-optical data center networks with no in-network buffer and nonzero
reconfiguration delay. A framework is proposed to deal with the nonzero
reconfiguration delay. The proposed approach constructs an adaptive variant of
any given scheduling policy. It is shown that if a scheduling policy guarantees
its schedules to have schedule weights close to that of the MaxWeight schedule
(and is thus throughput optimal in the zero-reconfiguration-delay regime), then
the throughput optimality is inherited by its adaptive variant (in any nonzero
reconfiguration delay regime). As a corollary, a class of adaptive variants of
the well-known MaxWeight policy is shown to achieve throughput optimality
without prior knowledge of the traffic load. Furthermore, through numerical
simulations, the simplest such policy, namely the Adaptive MaxWeight (AMW), is
shown to exhibit better delay performance than all prior work.
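For reference, the MaxWeight schedule that anchors the throughput-optimality argument picks, in each time slot, the input-output matching with the largest total queue backlog. A minimal brute-force sketch (exponential in the port count, for illustration only; the queue matrix `Q` is a made-up example, not data from the paper):

```python
from itertools import permutations

def maxweight_schedule(Q):
    """Return the input-output matching (a permutation sigma) that maximizes
    the total queue weight sum_i Q[i][sigma(i)] for an N x N switch."""
    n = len(Q)
    best, best_w = None, -1
    for perm in permutations(range(n)):
        w = sum(Q[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best, best_w

# Toy 3x3 virtual-output-queue lengths.
Q = [[4, 1, 0],
     [2, 5, 1],
     [0, 2, 3]]
perm, weight = maxweight_schedule(Q)
print(perm, weight)  # -> (0, 1, 2) 12: the diagonal matching has the most backlog
```

An adaptive variant in the paper's sense would hold a matching across slots rather than recompute it every slot, amortizing the reconfiguration delay; the brute-force search above is only the per-slot weight-maximization step.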
Direct Measure of Quantum Correlation
The quantumness of correlations, known as quantum correlation, is usually
measured by quantum discord. So far, the various quantum discords can be
roughly understood as indirect measures based on the discrepancy of two
quantities. We present a direct measure of quantum correlation by revealing the
difference between the structures of classically and quantum correlated states.
Our measure explicitly includes the contributions of the inseparability and
local non-orthogonality of the eigenvectors of a density matrix. Besides its
relatively easy computability, our measure can provide a unified understanding
of the present versions of quantum correlation.
The witness of sudden change of geometric quantum correlation
In this paper, we give a necessary and sufficient condition (witness) for the
sudden change of geometric quantum discord, based on the mathematical
definition of the discontinuity of a function. Using the witness, we can find
various sudden changes of quantum correlation in both the Markovian and the
non-Markovian cases. In particular, we can accurately locate the critical
points of the sudden changes even when they are not obvious in the graphical
representation. In addition, one also finds that the sudden change of quantum
correlation, like the frozen quantum correlation, strongly depends on the
choice of the quantum correlation measure.
Comment: 14 pages, 6 figures. To appear in Quantum Information and Computation
Quantum Dissonance Is Rejected in an Overlap Measurement Scheme
The overlap measurement scheme evaluates the overlap of two input quantum
states by measuring only an introduced auxiliary qubit, irrespective of the
complexity of the two input states. We find the counterintuitive phenomenon
that no quantum dissonance can be found, even though the auxiliary qubit might
be entangled with, classically correlated with, or even uncorrelated with the
two input states, depending on the types of input states. In principle, this
provides an opposite but supplementary example to the remarkable algorithm of
deterministic quantum computation with one qubit, in which no entanglement is
present. Finally, we consider a simple overlap measurement model to demonstrate
the continuous change (including a potential sudden death of quantum discord)
as the input states go from entangled to product states, by adjusting only a
few simple initial parameters.
Comment: 5 pages, 3 figures. To appear in PR
Detecting a physical difference between the CDM halos in simulation and in nature
Numerical simulation is an important tool for understanding the process
of structure formation in the universe. However, many simulation results for
cold dark matter (CDM) halos on small scales are inconsistent with
observations: the central density profile is too cuspy and there are too many
substructures. Here we point out that these two problems may be connected with
a hitherto unrecognized bias in simulation halos. Although CDM halos in nature
and in simulation are both virialized systems of collisionless CDM particles,
gravitational encounters cannot be neglected in the simulation halos because
they contain far fewer particles. We demonstrate this with two numerical
experiments, showing that there is a difference on the microscopic scale
between natural and simulation halos. The simulation halo is more akin to a
globular cluster, where gravitational encounters are known to lead to such
drastic phenomena as core collapse. This artificial core-collapse process
appears to link the two problems together in the bottom-up scenario of
structure formation in the CDM universe. The discovery of this bias also has
implications for the applicability of the Jeans Theorem in Galactic Dynamics.
Comment: 5 pages, 4 figures. Submitted to ApJ Letters. Comments and suggestions
welcome
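The claim that encounters matter in simulated halos can be made quantitative with the textbook two-body relaxation estimate t_relax / t_cross ~ N / (8 ln N), which is not stated in the abstract but underlies arguments of this kind:

```python
import math

def relax_over_cross(N):
    """Two-body relaxation time in units of the crossing time,
    t_relax / t_cross ~ N / (8 ln N): the standard order-of-magnitude
    estimate for a self-gravitating system of N equal-mass particles."""
    return N / (8.0 * math.log(N))

# A simulated halo with ~1e6 particles relaxes after only ~9000 crossing
# times, while a real CDM halo, whose particle number is astronomically
# larger, is effectively collisionless over the age of the universe.
print(relax_over_cross(1e6))
print(relax_over_cross(1e60))
```

The huge gap between the two numbers is the microscopic-scale difference the abstract points to: the same encounter physics that drives core collapse in globular clusters (where N is also modest) operates in the simulation but not in nature.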
K-sets+: a Linear-time Clustering Algorithm for Data Points with a Sparse Similarity Measure
In this paper, we first propose a new iterative algorithm, called the K-sets+
algorithm, for clustering data points in a semi-metric space, where the
distance measure does not necessarily satisfy the triangle inequality. We show
that the K-sets+ algorithm converges in a finite number of iterations and
retains the same performance guarantee as the K-sets algorithm for clustering
data points in a metric space. We then extend the applicability of the K-sets+
algorithm from data points in a semi-metric space to data points that only have
a symmetric similarity measure. Such an extension leads to a great reduction in
computational complexity. In particular, for an n x n similarity matrix with m
nonzero elements, the computational complexity of the K-sets+ algorithm is
O((Kn + m)I), where I is the number of iterations. The memory complexity needed
to achieve that computational complexity is O(Kn + m). As such, both the
computational complexity and the memory complexity are linear in n when the
n x n similarity matrix is sparse, i.e., m = O(n). We also conduct various
experiments showing the effectiveness of the K-sets+ algorithm, using a
synthetic dataset from the stochastic block model and a real network from the
WonderNetwork website.
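The O((Kn + m)I) scaling comes from touching each nonzero similarity once per iteration plus O(Kn) per-cluster bookkeeping. A hypothetical sketch of such a sparse pass (an illustrative majority-similarity assignment rule, not the paper's actual K-sets+ update):

```python
from collections import defaultdict
import random

def sparse_similarity_clustering(sim, n, K, iters=20, seed=0):
    """Sketch of a sparse clustering pass: sim is a dict {(i, j): s} holding
    the m nonzero entries of a symmetric similarity measure. Each iteration
    scans every nonzero entry once plus O(Kn) score bookkeeping, i.e.
    O(Kn + m) work, matching the per-iteration cost quoted in the abstract."""
    rng = random.Random(seed)
    label = [rng.randrange(K) for _ in range(n)]
    nbrs = defaultdict(list)                    # adjacency over nonzeros: O(m) memory
    for (i, j), s in sim.items():
        nbrs[i].append((j, s))
        nbrs[j].append((i, s))
    for _ in range(iters):
        changed = False
        for i in range(n):
            score = [0.0] * K                   # O(K) per point
            for j, s in nbrs[i]:                # O(m) total per iteration
                score[label[j]] += s
            best = max(range(K), key=lambda k: score[k])
            if best != label[i]:
                label[i], changed = best, True
        if not changed:                         # finite convergence in practice
            break
    return label

# Two obvious cliques {0,1,2} and {3,4,5}, similarity 1 inside each.
sim = {(0, 1): 1, (0, 2): 1, (1, 2): 1, (3, 4): 1, (3, 5): 1, (4, 5): 1}
labels = sparse_similarity_clustering(sim, n=6, K=2)
print(labels)  # each clique ends up internally uniform
```

Only the nonzero entries are ever stored or scanned, which is why both time and memory become linear in n when m = O(n).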
Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering
Graph convolutional networks (GCNs) are powerful tools for graph-structured
data. However, they have recently been shown to be vulnerable to topological
attacks. Despite substantial efforts to search for new architectures, it
remains a challenge to improve performance in both benign and adversarial
settings simultaneously. In this paper, we re-examine the fundamental
building block of the GCN---the Laplacian operator---and highlight some basic
flaws in the spatial and spectral domains. As an alternative, we propose an
operator based on graph powering, and prove that it enjoys the desirable
property of "spectral separation." Based on this operator, we propose a robust
learning paradigm in which the network is trained on a family of "smoothed"
graphs that span a spatial and spectral range for generalizability. We also use
the new operator in place of the classical Laplacian to construct an
architecture with improved spectral robustness, expressivity and
interpretability. The enhanced performance and robustness are demonstrated in
extensive experiments.
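Graph powering, the operation behind the proposed operator, connects two vertices whenever their distance in the original graph is at most k. A minimal sketch of that operation (the paper's robust operator adds its own normalization and spectral treatment, which are not reproduced here):

```python
import numpy as np

def graph_power(A, k):
    """k-th graph power: vertices i and j are adjacent in the powered graph
    iff their shortest-path distance in the original graph is at most k.
    Computed by thresholding powers of (A + I), since (A + I)^k counts
    walks of length <= k."""
    n = A.shape[0]
    B = A + np.eye(n, dtype=A.dtype)
    M = np.linalg.matrix_power(B, k)
    P = (M > 0).astype(int)
    np.fill_diagonal(P, 0)          # keep it a simple graph, no self-loops
    return P

# Path graph 0-1-2-3: squaring adds the distance-2 edges (0,2) and (1,3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(graph_power(A, 2))
```

Densifying the graph this way spreads local perturbations over many edges, which is intuitively why a powered operator can be more robust to a few adversarial edge flips than the plain Laplacian.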
Quasireplicas and universal lengths of microbial genomes
Statistical analysis of the distributions of occurrence frequencies of short
words in 108 complete microbial genomes reveals the existence of a set of
universal "root-sequence lengths" shared by all microbial genomes. These
lengths and their universality give powerful clues to the way microbial genomes
grew. We show that the observed genomic properties are explained by a model of
genome growth in which primitive genomes grew mainly by maximally stochastic
duplications of short segments, from an initial length of about 200
nucleotides (nt) to the length of about one million nt typical of microbial
genomes. The relevance of this result to the nature of simultaneous random
growth and information acquisition by genomes, to the so-called RNA world in
which life evolved before the rise of proteins and enzymes, and to several
other topics is discussed.
Comment: 4 pages, 3 figures
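The duplication-driven growth model can be illustrated with a toy simulation; the segment-length cap, insertion rule, and (small) target length below are illustrative assumptions for the sketch, not the paper's fitted parameters:

```python
import random

def grow_genome(n0=200, target=20000, max_seg=25, seed=1):
    """Toy growth by stochastic segment duplication: start from a random
    n0-nt sequence and repeatedly copy a random short segment (length
    <= max_seg) to a random position until the target length is reached.
    No point mutations are included; the model is duplication-driven."""
    rng = random.Random(seed)
    g = [rng.choice("ACGT") for _ in range(n0)]
    while len(g) < target:
        L = rng.randint(1, max_seg)
        i = rng.randrange(len(g) - L + 1)   # pick a random short segment
        seg = g[i:i + L]
        j = rng.randrange(len(g) + 1)       # pick a random insertion point
        g[j:j] = seg                        # insert the duplicated copy
    return "".join(g)

genome = grow_genome()
print(len(genome), genome[:30])
```

Because every new base is a copy of an existing one, short-word frequency statistics of the grown sequence retain a memory of the short initial sequence, which is the mechanism behind the universal root-sequence lengths the abstract reports.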