Optimization flow control -- I: Basic algorithm and convergence
We propose an optimization approach to flow control where the objective is to maximize the aggregate source utility over their transmission rates. We view network links and sources as processors of a distributed computation system to solve the dual problem using a gradient projection algorithm. In this system, sources select transmission rates that maximize their own benefits, utility minus bandwidth cost, and network links adjust bandwidth prices to coordinate the sources' decisions. We allow feedback delays to be different, substantial, and time varying, and links and sources to update at different times and with different frequencies. We provide asynchronous distributed algorithms and prove their convergence in a static environment. We present measurements obtained from a preliminary prototype to illustrate the convergence of the algorithm in a slowly time-varying environment. We discuss its fairness property.
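The abstract describes the algorithm concretely: each source solves a local profit-maximization problem given the price of its path, and the links run projected gradient ascent on the dual prices. Below is a minimal synchronous Python sketch of this price-based decomposition under illustrative assumptions (weighted log utilities, a hypothetical two-link, three-source network, a hand-picked step size); the paper itself treats the asynchronous, delayed-feedback case.

    import numpy as np

    # Hypothetical topology: R[l, s] = 1 if source s routes through link l.
    R = np.array([[1, 1, 0],
                  [0, 1, 1]])           # 2 links, 3 sources
    c = np.array([1.0, 2.0])            # link capacities
    w = np.array([1.0, 2.0, 1.0])       # utility weights, U_s(x) = w_s * log(x)
    p = np.ones(2)                      # initial link prices (dual variables)
    gamma = 0.05                        # gradient step size

    for _ in range(2000):
        q = np.maximum(R.T @ p, 1e-9)   # path price seen by each source
        # Source update: x_s = argmax_x  w_s*log(x) - q_s*x  =>  x_s = w_s / q_s
        x = w / q
        # Link update: projected gradient step on the dual; prices stay nonnegative
        p = np.maximum(p + gamma * (R @ x - c), 0.0)

    print("rates:", x.round(3), "prices:", p.round(3))

At the fixed point, each bottleneck link prices itself so that the demand through it matches its capacity, which is exactly the coordination between sources and links that the abstract describes.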
Presymmetry beyond the Standard Model
We go beyond the Standard Model guided by presymmetry, the discrete
electroweak quark-lepton symmetry hidden by topological effects which explain
quark fractional charges as in condensed matter physics. Partners of the
particles of the Standard Model and the discrete symmetry associated with this
partnership appear as manifestations of a residual presymmetry and its
extension from matter to forces. This duplication of the spectrum of the
Standard Model preserves spin and is nondegenerate at about the TeV scale.
Comment: 6 pages, 11 figures. To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072
Integrating heterogeneous distributed COTS discrete-event simulation packages: An emerging standards-based approach
This paper reports on the progress made toward the emergence of standards to support the integration of heterogeneous discrete-event simulations (DESs) created in specialist support tools called commercial-off-the-shelf (COTS) discrete-event simulation packages (CSPs). The general standard for heterogeneous integration in this area has been developed from research in distributed simulation and is the IEEE 1516 standard, the High Level Architecture (HLA). However, the specific needs of heterogeneous CSP integration require that the HLA be augmented by additional complementary standards. These are the suite of CSP interoperability (CSPI) standards being developed under the Simulation Interoperability Standards Organization (SISO, http://www.sisostds.org) by the CSPI Product Development Group (CSPI-PDG). The suite consists of several interoperability reference models (IRMs) that outline different integration needs of CSPI, interoperability frameworks (IFs) that define the HLA-based solution to each IRM, appropriate data exchange representations to specify the data exchanged in an IF, and benchmarks termed CSP emulators (CSPEs). This paper contributes to the development of the Type I IF that is intended to represent the HLA-based solution to the problem outlined by the Type I IRM (asynchronous entity passing) by developing the entity transfer specification (ETS) data exchange representation. The use of the ETS in an illustrative case study implemented using a prototype CSPE is shown. This case study also allows us to highlight the importance of event granularity and lookahead in the performance and development of the Type I IF, and to discuss possible methods to automate the capture of appropriate values of lookahead.
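The ETS itself is specified in the CSPI-PDG documents; purely as an illustration of the kind of data exchange representation involved, a Type I (asynchronous entity passing) transfer record might carry an entity identifier, a timestamp, and the source and destination models. The Python sketch below is hypothetical; its field names are invented, not taken from the standard.

    from dataclasses import dataclass, field

    # Hypothetical entity-transfer record for Type I (asynchronous entity
    # passing); illustrative only, not the ETS wire format.
    @dataclass(frozen=True)
    class EntityTransfer:
        entity_id: str            # identity of the entity being passed
        entity_type: str          # model-specific class, e.g. "part"
        timestamp: float          # simulation time of the transfer
        source_model: str         # federate the entity leaves
        dest_model: str           # federate the entity enters
        attributes: dict = field(default_factory=dict)  # agreed payload

    msg = EntityTransfer("e-17", "part", 42.0, "ModelA", "ModelB",
                         {"priority": 1})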
Complete solution of the XXZ-model on finite rings. Dynamical structure factors at zero temperature.
The finite size effects of the dynamical structure factors in the XXZ-model are studied in the Euclidean time $\tau$-representation. Away from the critical momentum, finite size effects turn out to be small except in the large-$\tau$ limit. The large finite size effects at the critical momentum signal the emergence of infrared singularities in the spectral $\omega$-representation of the dynamical structure factors.
Comment: PostScript file with 12 pages + 11 figures, uuencoded compressed
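For orientation, and using standard conventions rather than necessarily the paper's own, the two representations named in the abstract are related at zero temperature by a Laplace transform,

    S(p,\tau) = \int_0^{\infty} \frac{d\omega}{2\pi}\, e^{-\tau\omega}\, S(p,\omega),

so infrared (small-$\omega$) singularities in the spectral representation translate into slow decay of $S(p,\tau)$ at large $\tau$, which is where the abstract locates the large finite size effects.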
Entropy on the von Neumann lattice and its evaluation
Based on the recently introduced averaging procedure in phase space, a new
type of entropy is defined on the von Neumann lattice. This quantity can be
interpreted as a measure of uncertainty associated with simultaneous
measurement of the position and momentum observables in the discrete subset of
the phase space. Evaluating this entropy for a class of coherent states, it is shown that it takes a stationary value for the ground state, modulo a unit cell of the lattice, within this class. This value for the ground state depends on the ratio of the position lattice spacing to the momentum lattice spacing. It is found that its minimum is realized for the perfect square lattice, i.e., in the absence of squeezing. Numerical evaluation of this minimum gives 1.386....
Comment: 14 pages, no figures; J. Phys. A, in press
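A numeric aside, not stated in the abstract: the quoted value coincides with $2\ln 2 = \ln 4 \approx 1.3863$, as a quick check confirms, though the abstract itself gives no closed form.

    import math
    print(2 * math.log(2))   # 1.3862943611198906, matching the quoted 1.386...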
Phenomenology of the Littlest Higgs with T-Parity
Little Higgs models offer an interesting approach to weakly coupled
electroweak symmetry breaking without fine tuning. The original little Higgs
models were plagued by strong constraints from electroweak precision data which
required a fine tuning to be reintroduced. An economical solution to this
problem is to introduce a discrete symmetry (analogous to R-parity of SUSY)
called T-parity. T-parity not only eliminates most constraints from electroweak
precision data, but it also leads to a promising dark matter candidate. In this
paper we investigate the dark matter candidate in the littlest Higgs model with
T-parity. We find bounds on the symmetry breaking scale f as a function of the
Higgs mass by calculating the relic density. We begin the study of the LHC
phenomenology of the littlest Higgs model with T-parity. We find that the model
offers an interesting collider signature that has a generic missing energy
signal which could "fake" SUSY at the LHC. We also investigate the properties
of the heavy partner of the top quark which is common to all littlest Higgs
models, and how its properties are modified with the introduction of T-parity.
We include an appendix with a list of Feynman rules specific to the littlest
Higgs with T-parity to facilitate further study.
Comment: 32 pages, 8 figures; dark matter bounds revised; comphep model files made publicly available at http://www.lns.cornell.edu/public/theory/tparity
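For context, relic-density bounds of this kind are usually obtained by confronting the computed abundance with the observed dark matter density; the textbook freeze-out estimate (a generic relation, not the paper's full calculation) is

    \Omega_{\rm DM} h^2 \approx \frac{3 \times 10^{-27}\ {\rm cm^3\,s^{-1}}}{\langle \sigma v \rangle},

so the bound on $f$ as a function of the Higgs mass traces how the symmetry breaking scale controls the candidate's mass and annihilation cross section.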
Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental
work in the literature reports results on much smaller graphs, and the ones for
the Hyperlink graph use distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that the running times of our implementations
outperform existing state-of-the-art implementations on the largest real-world
graphs. For many of the problems that we consider, this is the first time they
have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS).
Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018
NICMOS Imaging of the Nuclei of Arp 220
We report high resolution imaging of the ultraluminous infrared galaxy Arp
220 at 1.1, 1.6, and 2.22 microns with NICMOS on the HST. The
diffraction-limited images at 0.1--0.2 arcsecond resolution clearly resolve
both nuclei of the merging galaxy system and reveal for the first time a number
of luminous star clusters in the circumnuclear envelope. The morphologies of
both nuclei are strongly affected by dust obscuration, even at 2.2 microns: the primary nucleus (west) presents a crescent shape, concave to the south, and the secondary (eastern) nucleus is bifurcated by a dust lane, with the southern
component being very reddened. In the western nucleus, the morphology of the
2.2 micron emission is most likely the result of obscuration by an opaque disk
embedded within the nuclear star cluster. The morphology of the central
starburst-cluster in the western nucleus is consistent with either a
circumnuclear ring of star formation or a spherical cluster with the bottom
half obscured by the embedded dust disk. Comparison of cm-wave radio continuum
maps with the near-infrared images suggests that the radio nuclei lie in the
dust disk on the west and near the highly reddened southern component of the
eastern complex. The radio nuclei are separated by 0.98 arcseconds
(corresponding to 364 pc at 77 Mpc) and the half-widths of the infrared nuclei
are approximately 0.2-0.5 arcseconds. At least 8 unresolved infrared sources
-- probably globular clusters -- are also seen in the circumnuclear envelope at
radii of 2-7 arcseconds. Their near-infrared colors do not significantly constrain their ages.
Comment: LaTeX, 15 pages with 1 gif figure and 5 postscript figures. ApJL accepted
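The quoted projected separation is a small-angle conversion of the measured angular separation at the adopted distance; a quick check of the numbers in the abstract:

    import math
    theta = 0.98 * math.pi / (180 * 3600)   # 0.98 arcsec in radians
    d_pc = 77e6                             # 77 Mpc expressed in parsecs
    print(theta * d_pc)                     # ~366 pc, consistent with the
                                            # quoted 364 pc up to rounding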
MIPS: The Multiband Imaging Photometer for SIRTF
The Multiband Imaging Photometer for SIRTF (MIPS) is designed to approach as closely as possible the fundamental sensitivity and angular resolution limits for SIRTF over the 3 to 700μm spectral region. It will use high performance photoconductive detectors from 3 to 200μm with integrating JFET amplifiers. From 200 to 700μm, the MIPS will use a bolometer cooled by an adiabatic demagnetization refrigerator. Over much of its operating range, the MIPS will make possible observations at and beyond the conventional Rayleigh diffraction limit of angular resolution.
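The Rayleigh criterion referred to here is $\theta \approx 1.22\,\lambda/D$. As an illustration only, assuming a 0.85 m primary for SIRTF (the aperture is not stated in the abstract) at a wavelength of 160μm:

    import math
    wavelength = 160e-6                   # observing wavelength in meters
    aperture = 0.85                       # assumed primary diameter in meters
    theta = 1.22 * wavelength / aperture  # Rayleigh limit in radians
    print(theta * 206265)                 # ~47 arcseconds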
The twin paradox and Mach's principle
The problem of absolute motion in the context of the twin paradox is
discussed. It is shown that the various versions of the clock paradox feature some aspects which Mach might have appreciated. However, the ultimate cause of the behavior of the clocks must be attributed to the autonomous status of spacetime, thereby showing the relational program advocated by Mach to be impracticable.
Comment: Latex2e, 11 pages, 6 figures, 33 references, no tables. Accepted for publication in The European Physical Journal PLUS (EPJ PLUS)