Sharp Bounds for the Signless Laplacian Spectral Radius in Terms of Clique Number
In this paper, we present sharp upper and lower bounds for the signless
Laplacian spectral radius of graphs in terms of clique number. Moreover, the
extremal graphs which attain the upper and lower bounds are characterized. In
addition, these results disprove two conjectures on the signless Laplacian
spectral radius in [P. Hansen and C. Lucas, Bounds and conjectures for the
signless Laplacian index of graphs, Linear Algebra Appl., 432 (2010) 3319-3336].
Comment: 15 pages, 1 figure; Linear Algebra and its Applications, 201
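As a numerical companion (an illustration only, not the paper's bounds), the signless Laplacian Q = D + A and its spectral radius are direct to compute; for the complete graph K_n, whose clique number is n, the radius equals 2(n-1). The function name and the example graph below are ours:

```python
import numpy as np

def signless_laplacian_radius(adj):
    """Spectral radius (largest eigenvalue) of Q = D + A for an undirected graph."""
    A = np.asarray(adj, dtype=float)
    D = np.diag(A.sum(axis=1))         # diagonal degree matrix
    Q = D + A                          # signless Laplacian
    return max(np.linalg.eigvalsh(Q))  # Q is symmetric, so eigvalsh applies

# Complete graph K_4: clique number 4, so the radius is 2(4-1) = 6
K4 = np.ones((4, 4)) - np.eye(4)
print(signless_laplacian_radius(K4))  # ≈ 6.0
```

`eigvalsh` is used rather than `eig` because Q is real symmetric, which guarantees real eigenvalues and a faster, more stable computation.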
Some Issues in a Gauge Model of Unparticles
We address in a recent gauge model of unparticles the issues that are
important for consistency of a gauge theory, i.e., unitarity and Ward identity
of physical amplitudes. We find that non-integrable singularities arise in
physical quantities like cross section and decay rate from gauge interactions
of unparticles. We also show that Ward identity is violated due to the lack of
a dispersion relation for charged unparticles although the Ward-Takahashi
identity for general Green functions is incorporated in the model. A previous
observation that the unparticle's (with scaling dimension d) contribution to
the gauge boson self-energy is a factor (2-d) of the particle's has been
extended to the Green function of triple gauge bosons. This (2-d) rule may be
generally true for any n-point Green functions of gauge bosons. This implies that
the model would be trivial even as one that mimics certain dynamical effects on
gauge bosons in which unparticles serve as an interpolating field.
Comment: v1: 16 pages, 3 figures. v2: some clarifications made and presentation
improved, calculation and conclusion not modified; refs added and updated.
Version to appear in EPJ
Data-Collection for the Sloan Digital Sky Survey: a Network-Flow Heuristic
The goal of the Sloan Digital Sky Survey is ``to map in detail one-quarter of
the entire sky, determining the positions and absolute brightnesses of more
than 100 million celestial objects''. The survey will be performed by taking
``snapshots'' through a large telescope. Each snapshot can capture up to 600
objects from a small circle of the sky. This paper describes the design and
implementation of the algorithm that is being used to determine the snapshots
so as to minimize their number. The problem is NP-hard in general; the
algorithm described is a heuristic, based on Lagrangian relaxation and
min-cost network flow. It gets within 5-15% of a naive lower bound, whereas
using a ``uniform'' cover only gets within 25-35%.
Comment: proceedings version appeared in ACM-SIAM Symposium on Discrete
Algorithms (1998)
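The paper's actual heuristic couples Lagrangian relaxation with min-cost network flow; as a much-simplified stand-in, here is a greedy one-dimensional analogue of the snapshot-cover problem, together with the naive capacity-only lower bound that such heuristics are compared against. The function name and toy data are ours, not the authors':

```python
import math

def greedy_snapshots(positions, width=1.0, cap=600):
    """Greedy 1-D analogue of the snapshot-cover problem: each snapshot is an
    interval of fixed width capturing at most `cap` objects."""
    pts = sorted(positions)
    snapshots, i, n = 0, 0, len(pts)
    while i < n:
        left = pts[i]
        j = i
        # absorb objects while they fit both the interval and the capacity
        while j < n and pts[j] <= left + width and j - i < cap:
            j += 1
        snapshots += 1
        i = j
    return snapshots

pts = [0.1, 0.2, 0.9, 2.5, 2.6, 5.0]
naive_lower_bound = math.ceil(len(pts) / 600)  # capacity bound only
print(greedy_snapshots(pts), naive_lower_bound)
```

The gap between the greedy answer and the naive bound mirrors the 5-15% vs 25-35% comparison in the abstract: a better algorithm closes that gap without changing the lower bound.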
Unparticle Physics and A_{FB}^b on the Z pole
An attempt has been made to address the 3\sigma anomaly of the
forward-backward asymmetry of b quark in LEP data via an unparticle sector. For
most of the parameter space, except for certain particular regions, the anomaly
could not be plausibly explained away when constraints from other LEP
observables are taken into account.
Comment: Version to appear in Phys. Lett. B. 13 pages, 5 figures
Tunneling into a two-dimensional electron system in a strong magnetic field
We investigate the properties of the one-electron Green's function in an
interacting two-dimensional electron system in a strong magnetic field, which
describes an electron tunneling into such a system. From finite-size
diagonalization, we find that its spectral weight is suppressed near zero
energy, reaches a maximum at an energy of about , and
decays exponentially at higher energies. We propose a theoretical model to
account for the low-energy behavior. For the case of Coulomb interactions
between the electrons, at even-denominator filling factors such as ,
we predict that the spectral weight varies as , for
Casting Light on Dark Matter
The prospects for detecting a candidate supersymmetric dark matter particle
at the LHC are reviewed, and compared with the prospects for direct and
indirect searches for astrophysical dark matter. The discussion is based on a
frequentist analysis of the preferred regions of the minimal supersymmetric
extension of the Standard Model with universal soft supersymmetry breaking (the
CMSSM). LHC searches may have good chances to observe supersymmetry in the near
future - and so may direct searches for astrophysical dark matter particles,
whereas indirect searches may require greater sensitivity, at least within the
CMSSM.
Comment: 16 pages, 13 figures, contribution to the proceedings of the LEAP
2011 Conference
Non-Redundant Spectral Dimensionality Reduction
Spectral dimensionality reduction algorithms are widely used in numerous
domains, including for recognition, segmentation, tracking and visualization.
However, despite their popularity, these algorithms suffer from a major
limitation known as the "repeated Eigen-directions" phenomenon. That is, many
of the embedding coordinates they produce typically capture the same direction
along the data manifold. This leads to redundant and inefficient
representations that do not reveal the true intrinsic dimensionality of the
data. In this paper, we propose a general method for avoiding redundancy in
spectral algorithms. Our approach relies on replacing the orthogonality
constraints underlying those methods by unpredictability constraints.
Specifically, we require that each embedding coordinate be unpredictable (in
the statistical sense) from all previous ones. We prove that these constraints
necessarily prevent redundancy, and provide a simple technique to incorporate
them into existing methods. As we illustrate on challenging high-dimensional
scenarios, our approach produces significantly more informative and compact
representations, which improve visualization and classification tasks.
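The unpredictability constraint can be probed numerically: regress each embedding coordinate on all previous ones and report the explained variance. Below is a sketch using a linear R² proxy; the paper's notion of unpredictability is statistical and more general than linear regression, and the function names and synthetic data are ours:

```python
import numpy as np

def redundancy_scores(Y):
    """For each embedding coordinate k >= 1, the fraction of its variance
    linearly predictable from coordinates 0..k-1 (an R^2 value). Scores near 1
    flag a repeated eigen-direction; unpredictability constraints would drive
    these toward 0."""
    Y = Y - Y.mean(axis=0)
    scores = []
    for k in range(1, Y.shape[1]):
        X, y = Y[:, :k], Y[:, k]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        scores.append(1 - resid.var() / y.var())
    return scores

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 500)
# second coordinate is a function of the first (redundant); third is independent
Y = np.column_stack([t, t**2, rng.normal(size=500)])
print(redundancy_scores(Y))
```

On this toy manifold the second coordinate scores close to 1 (it repeats the first direction) while the third scores near 0, which is the behavior the unpredictability constraints are designed to enforce by construction.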
An Evolutionary Learning Approach for Adaptive Negotiation Agents
Developing effective and efficient negotiation mechanisms for real-world applications such as e-Business is challenging since negotiations in such a context are characterised by combinatorially complex negotiation spaces, tough deadlines, very limited information about the opponents, and volatile negotiator preferences. Accordingly, practical negotiation systems should be empowered by effective learning mechanisms to acquire dynamic domain knowledge from the possibly changing negotiation contexts. This paper illustrates our adaptive negotiation agents, which are underpinned by robust evolutionary learning mechanisms to deal with complex and dynamic negotiation contexts. Our experimental results show that GA-based adaptive negotiation agents outperform a theoretically optimal negotiation mechanism which guarantees Pareto optimality. Our research work opens the door to the development of practical negotiation systems for real-world applications.
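As a minimal illustration of the GA-based strategy adaptation the abstract describes (a toy sketch, not the authors' agent architecture: the seller-utility model, acceptance probability, and all parameter values are hypothetical), a small GA can evolve an offer level against an uncertain opponent:

```python
import random

def fitness(offer, reserve=0.3, pressure=0.8):
    """Hypothetical seller utility: higher offers pay more but risk rejection.
    Acceptance chance falls linearly as the offer grows; offers below the
    seller's reserve price earn nothing."""
    accept_prob = max(0.0, 1.0 - pressure * offer)
    return offer * accept_prob if offer >= reserve else 0.0

def evolve(pop_size=30, generations=60, mut=0.05, seed=1):
    """Simple GA: truncation selection plus Gaussian mutation on a real gene."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                      # keep the best half
        pop = elite + [min(1.0, max(0.0, rng.choice(elite) + rng.gauss(0, mut)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
print(round(best, 3))  # analytic optimum of x*(1 - 0.8x) is x = 0.625
```

The GA recovers the analytic optimum of this utility without being told its form, which is the point of evolutionary learning in negotiation settings where the opponent model is unknown or drifting.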
Level-Based Analysis of Genetic Algorithms and Other Search Processes
The fitness-level technique is a simple and old way to derive upper bounds for the expected runtime of simple elitist evolutionary algorithms (EAs). Recently, the technique has been adapted to deduce the runtime of algorithms with non-elitist populations and unary variation operators [2,8]. In this paper, we show that the restriction to unary variation operators can be removed. This gives rise to a much more general analytical tool which is applicable to a wide range of search processes. As introductory examples, we provide simple runtime analyses of many variants of the Genetic Algorithm on well-known benchmark functions, such as OneMax, LeadingOnes, and the sorting problem.
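The flavor of such runtime results is easy to reproduce empirically; for instance, the classic e·n·ln n fitness-level upper bound for the (1+1) EA on OneMax can be checked against simulation. This is a sketch with illustrative parameters, not the paper's level-based machinery:

```python
import math
import random

def onemax_runtime(n, seed=0):
    """Evaluations until the (1+1) EA reaches the all-ones string on OneMax:
    flip each bit independently with probability 1/n, accept if not worse."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while sum(x) < n:
        y = [b ^ (rng.random() < 1 / n) for b in x]  # standard bit mutation
        evals += 1
        if sum(y) >= sum(x):                         # elitist acceptance
            x = y
    return evals

n = 50
runs = [onemax_runtime(n, seed=s) for s in range(10)]
avg = sum(runs) / len(runs)
print(avg, math.e * n * math.log(n))  # empirical mean vs the e*n*ln(n) bound
```

The empirical mean lands in the vicinity of the theoretical bound, which is exactly what the fitness-level argument predicts: each improvement from i ones needs on average at most about e·n/(n-i) trials, and summing over levels gives e·n·ln n.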
Flower pollination algorithm: a novel approach for multiobjective optimization
Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance of further parametric studies and theoretical analysis is highlighted and discussed.
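For readers unfamiliar with the base algorithm, here is a minimal single-objective FPA sketch, not the article's multiobjective extension. A heavy-tailed Cauchy draw stands in for the usual Lévy-flight step, and all parameter values are illustrative:

```python
import math
import random

def fpa_minimize(f, dim=2, n=20, iters=200, p=0.8, seed=3):
    """Minimal single-objective flower pollination sketch (greedy acceptance).
    With probability p a flower jumps toward the current best (global
    pollination); otherwise it mixes two random flowers (local pollination)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=f)
    for _ in range(iters):
        for i, x in enumerate(pop):
            if rng.random() < p:  # global pollination: heavy-tailed step to best
                cand = [xi + 0.1 * math.tan(math.pi * (rng.random() - 0.5)) * (bi - xi)
                        for xi, bi in zip(x, best)]
            else:                 # local pollination: mix two random flowers
                a, b = rng.choice(pop), rng.choice(pop)
                eps = rng.random()
                cand = [xi + eps * (ai - bi) for xi, ai, bi in zip(x, a, b)]
            if f(cand) < f(x):    # keep the better flower
                pop[i] = cand
        best = min(pop + [best], key=f)
    return best

sphere = lambda v: sum(t * t for t in v)
best = fpa_minimize(sphere)
print(sphere(best))
```

The multiobjective version in the article replaces the single best flower with a Pareto-based ranking; the pollination moves themselves are the same two operators shown above.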