Fast Structuring of Radio Networks for Multi-Message Communications
We introduce collision free layerings as a powerful way to structure radio
networks. These layerings can replace hard-to-compute BFS-trees in many
contexts while having an efficient randomized distributed construction. We
demonstrate their versatility by using them to provide near optimal distributed
algorithms for several multi-message communication primitives.
Designing efficient communication primitives for radio networks has a rich
history that began 25 years ago when Bar-Yehuda et al. introduced fast
randomized algorithms for broadcasting and for constructing BFS-trees. Their
BFS-tree construction time was O(D log^2 n) rounds, where D is the network
diameter and n is the number of nodes. Since then, the complexity of a
broadcast has been resolved to be Θ(D log(n/D) + log^2 n) rounds. On the other
hand, BFS-trees have been used as a crucial building
block for many communication primitives and their construction time remained a
bottleneck for these primitives.
We introduce collision free layerings that can be used in place of BFS-trees
and we give a randomized construction of these layerings that runs in nearly
broadcast time, that is, w.h.p. in O(D log(n/D) + log^{2+ε} n) rounds for any
constant ε > 0. We then use these
layerings to obtain: (1) A randomized algorithm for gathering k messages
running w.h.p. in O(D + k + log^{2+ε} n) rounds. (2) A randomized k-message
broadcast algorithm running w.h.p. in O(D + k log n + log^{2+ε} n) rounds. These
algorithms are optimal up to the small difference in the additive
poly-logarithmic term between log^2 n and log^{2+ε} n. Moreover, they imply the
first optimal O(n log n)-round randomized gossip algorithm.
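The randomized transmission primitive underlying this line of work, the classical "Decay" protocol of Bar-Yehuda et al., is easy to simulate. The sketch below is an illustrative toy, not the layering construction itself; the graph, the number of sub-rounds, and the single-source setup are invented for demonstration. A node receives a message in a sub-round only when exactly one of its neighbors transmits (simultaneous transmissions collide).

```python
import random

def decay_phase(graph, informed, sub_rounds, rng=random):
    """One phase of the classical 'Decay' primitive: every informed node
    starts transmitting, and after each sub-round each transmitter keeps
    going with probability 1/2.  A node hears the message in a sub-round
    iff exactly one of its neighbors transmits (collisions are lost)."""
    transmitters = set(informed)
    heard = set()
    for _ in range(sub_rounds):
        for v in graph:
            if v not in informed and v not in heard:
                if sum(u in transmitters for u in graph[v]) == 1:
                    heard.add(v)  # unique transmitter: no collision
        transmitters = {u for u in transmitters if rng.random() < 0.5}
    return informed | heard

# Single-source broadcast on a small ring network.
random.seed(7)
graph = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
informed, phases = {0}, 0
while len(informed) < len(graph) and phases < 200:
    informed = decay_phase(graph, informed, sub_rounds=4)
    phases += 1
```

On the ring the last node to be informed sits between two informed neighbors, so it only hears once random decay leaves exactly one of them transmitting, which is precisely the collision-breaking role of the protocol.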
Wiener Reconstruction of Large-Scale Structure from Peculiar Velocities
We present an alternative, Bayesian method for large-scale reconstruction
from observed peculiar velocity data. The method stresses a rigorous treatment
of the random errors and it allows extrapolation into poorly sampled regions in
real space or in k-space. A likelihood analysis is used to determine the
fluctuation power spectrum, followed by a Wiener Filter (WF) analysis to obtain
the minimum-variance mean fields of velocity and mass density. Constrained
Realizations (CR) are then used to sample the statistical scatter about the WF
mean field. The WF/CR method is applied as a demonstration to the Mark III data
with 1200 km/s, 900 km/s, and 500 km/s resolutions. The main reconstructed
structures are consistent with those extracted by the POTENT method. A
comparison with the structures in the distribution of IRAS 1.2Jy galaxies
yields a general agreement. The reconstructed velocity field is decomposed into
its divergent and tidal components relative to a cube of +/-8000 km/s centered
on the Local Group. The divergent component is very similar to the velocity
field predicted from the distribution of IRAS galaxies. The tidal component is
dominated by a bulk flow of 194 +/- 32 km/s towards the general direction of
the Shapley concentration, and it also indicates a significant quadrupole.
Comment: 28 pages and 8 GIF figures, Latex (aasms4.sty), submitted to ApJ.
Postscript version of the figures can be obtained by anonymous ftp from:
ftp://alf.huji.ac.il/pub/saleem
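The WF/CR machinery can be illustrated in a few lines of numpy. This is a toy 1-D Gaussian sketch of the general formulas (the Wiener mean field s_WF = S(S+N)^{-1}d and the Hoffman-Ribak constrained realization), not the Mark III pipeline; the covariance kernel and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D Gaussian field with prior covariance S and noise covariance N.
n = 100
x = np.arange(n)
S = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 5.0**2)  # signal covariance
N = 0.3**2 * np.eye(n)                                      # diagonal noise

s_true = rng.multivariate_normal(np.zeros(n), S)
d = s_true + rng.multivariate_normal(np.zeros(n), N)        # noisy "data"

# Wiener filter: minimum-variance mean field  s_WF = S (S + N)^{-1} d
W = S @ np.linalg.inv(S + N)
s_wf = W @ d

# Hoffman-Ribak constrained realization: sample the scatter about the WF
# mean by correcting a random prior realization with its own mock data.
s_tilde = rng.multivariate_normal(np.zeros(n), S)
d_tilde = s_tilde + rng.multivariate_normal(np.zeros(n), N)
s_cr = s_tilde + W @ (d - d_tilde)
```

The Wiener estimate shrinks toward zero in poorly constrained regions, while the constrained realization restores realistic small-scale power there, which is exactly the division of labor described above.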
Molecular Dynamics Simulations of Weak Detonations
Detonation of a three-dimensional reactive non-isotropic molecular crystal is
modeled using molecular dynamics simulations. The detonation process is
initiated by an impulse, followed by the creation of a stable fast reactive
shock wave. The terminal shock velocity is independent of the initiation
conditions. Further analysis shows supersonic propagation decoupled from the
dynamics of the decomposed material left behind the shock front. The dependence
of the shock velocity on crystal nonlinear compressibility resembles solitary
behavior. These properties categorize the phenomena as a weak detonation. The
dependence of the detonation wave on microscopic potential parameters was
investigated. An increase in detonation velocity with the reaction
exothermicity reaching a saturation value is observed. In all other respects
the model crystal exhibits typical properties of a molecular crystal.
Comment: 38 pages, 20 figures. Submitted to Physical Review
From Local Velocities to Microwave Background
The mass density field as extracted from peculiar velocities in our
cosmological neighborhood is mapped back in time to the CMB in two ways. First,
the density power spectrum P(k) is translated into a temperature angular
power spectrum of sub-degree resolution (C_l) and compared to observations.
Second, the local density field is translated into a temperature map in a patch
on the last-scattering surface of a distant observer. A likelihood analysis of
the Mark III peculiar velocity data has constrained the range of parameters
within the family of COBE-normalized CDM models (Zaroubi et al 1996),
favoring a slight tilt in the initial spectrum. The corresponding range
of C_l's is plotted against current observations, indicating that the CMB
data can tighten the constraints further: only models with ``small'' tilt
and ``high'' baryonic content could survive
the two data sets simultaneously. The local mass density field that has been
recovered from the velocities via a Wiener method is convolved with a Boltzmann
calculation to recover temperature maps as viewed from
different directions. The extent of the CMB patch and the amplitude of
fluctuations depend on the choice of cosmological parameters, e.g., the local
100 h^-1 Mpc sphere corresponds to CMB patches of different angular extent for
Omega between 1 and 0. The phases of the temperature map are correlated with
those of the density field, contrary to the contribution of the Sachs-Wolfe
effect alone. This correlation suggests the possibility of an inverse
reconstruction of the underlying density field from CMB data with interesting
theoretical implications.
Comment: 16 pages, 6 figures. Submitted to Ap.
Report of the panel on plate motion and deformation, section 2
Given here is a panel report on the goals and objectives, requirements and recommendations for the investigation of plate motion and deformation. The goals are to refine our knowledge of plate motions, study regional and local deformation, and contribute to the solution of important societal problems. The requirements include basic space-positioning measurements, the use of global and regional data sets obtained with space-based techniques, topographic and geoid data to help characterize the internal processes that shape the planet, gravity data to study the density structure at depth and help determine the driving mechanisms for plate tectonics, and satellite images to map lithology, structure and morphology. The most important recommendation of the panel is for the implementation of a world-wide space-geodetic fiducial network to provide a systematic and uniform measure of global strain.
Universal mean moment rate profiles of earthquake ruptures
Earthquake phenomenology exhibits a number of power law distributions
including the Gutenberg-Richter frequency-size statistics and the Omori law for
aftershock decay rates. In search for a basic model that renders correct
predictions on long spatio-temporal scales, we discuss results associated with
a heterogeneous fault with long range stress-transfer interactions. To better
understand earthquake dynamics we focus on faults with Gutenberg-Richter like
earthquake statistics and develop two universal scaling functions as a stronger
test of the theory against observations than mere scaling exponents that have
large error bars. Universal shape profiles contain crucial information on the
underlying dynamics in a variety of systems. As in magnetic systems, we find
that our analysis for earthquakes provides a good overall agreement between
theory and observations, but with a potential discrepancy in one particular
universal scaling function for moment-rates. The results reveal interesting
connections between the physics of vastly different systems with avalanche
noise.
Comment: 13 pages, 5 figures
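As a toy illustration of what a universal mean-shape analysis involves (not the paper's heterogeneous-fault model), one can rescale synthetic moment-rate profiles of different durations onto a single curve. The parabolic t(T - t) form used here is the standard mean-field avalanche shape, and the noise model is invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_profile(T, npts=50):
    """Toy moment-rate profile of duration T: a parabola t(T - t)
    (the mean-field avalanche shape) with multiplicative noise."""
    t = np.linspace(0.0, T, npts)
    shape = t * (T - t)
    return t, shape * rng.uniform(0.9, 1.1, size=npts)

# Data collapse: rescale time by the duration T and amplitude by the
# profile maximum, then average events of very different sizes.
collapsed = []
for T in (1.0, 2.0, 5.0, 10.0):
    t, v = noisy_profile(T)
    collapsed.append(v / v.max())
mean_shape = np.mean(collapsed, axis=0)
```

If the dynamics are universal, profiles from events of all durations fall on the same curve after this rescaling, which is a far stronger test than comparing scaling exponents alone.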
Greedy D-Approximation Algorithm for Covering with Arbitrary Constraints and Submodular Cost
This paper describes a simple greedy D-approximation algorithm for any
covering problem whose objective function is submodular and non-decreasing, and
whose feasible region can be expressed as the intersection of arbitrary (closed
upwards) covering constraints, each of which constrains at most D variables of
the problem. (A simple example is Vertex Cover, with D = 2.) The algorithm
generalizes previous approximation algorithms for fundamental covering problems
and online paging and caching problems.
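For intuition, here is a minimal sketch of the D = 2 special case (weighted vertex cover) in the price-paying, local-ratio style that such greedy covering algorithms generalize; the instance and costs are invented for illustration. Each payment is charged against at most D = 2 vertices that can end up in the cover, which is where the D-approximation factor comes from.

```python
def vertex_cover_2approx(edges, cost):
    """Greedy 2-approximation for weighted vertex cover (the D = 2 case):
    for each uncovered edge, pay the smaller residual cost of its two
    endpoints against both of them; a vertex whose residual cost reaches
    zero enters the cover."""
    residual = dict(cost)
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue  # constraint already satisfied
        pay = min(residual[u], residual[v])
        residual[u] -= pay
        residual[v] -= pay
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover

# Path a - b - c - d with vertex costs; the greedy picks {a, c}.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
cost = {"a": 1, "b": 3, "c": 1, "d": 2}
cover = vertex_cover_2approx(edges, cost)
```

On this instance the greedy happens to return an optimal cover of cost 2; in general its cost is at most twice the optimum.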
The Efficacy of IDegLira (Insulin Degludec/Liraglutide Combination) in Adults with Type 2 Diabetes Inadequately Controlled with a GLP-1 Receptor Agonist and Oral Therapy: DUAL III Randomized Clinical Trial
Conductance distribution between Hall plateaus
Mesoscopic fluctuations of two-port conductance and four-port resistance
between Hall plateaus are studied within a realistic model for a
two-dimensional electron gas in a perpendicular magnetic field and a smooth
disordered potential. The two-port conductance distribution is concave
over part of the interval between adjacent plateau values and is nearly flat
over the rest. These
characteristics are consistent with recent observations. The distribution is
found to be sharply peaked near the end-points of the interval. The
distribution functions for the three independent resistances in a four-port
Hall bar geometry are, on the other hand, characterized by a central peak and a
relatively large width.
Comment: 11 pages, 5 ps figures, submitted to Phys. Rev.
LP-based Covering Games with Low Price of Anarchy
We present a new class of vertex cover and set cover games. The price of
anarchy bounds match the best known constant factor approximation guarantees
for the centralized optimization problems for linear and also for submodular
costs -- in contrast to all previously studied covering games, where the price
of anarchy cannot be bounded by a constant (e.g. [6, 7, 11, 5, 2]). In
particular, we describe a vertex cover game with a price of anarchy of 2. The
rules of the games capture the structure of the linear programming relaxations
of the underlying optimization problems, and our bounds are established by
analyzing these relaxations. Furthermore, for linear costs we exhibit linear
time best response dynamics that converge to these almost optimal Nash
equilibria. These dynamics mimic the classical greedy approximation algorithm
of Bar-Yehuda and Even [3].
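As a toy illustration of the LP-relaxation structure such games build on (not the games themselves), the triangle already exhibits the factor-2 integrality gap of the vertex cover relaxation. Since the relaxation always has a half-integral optimal solution (Nemhauser-Trotter), a brute force over x_i in {0, 1/2, 1} finds the true LP optimum; the instance is invented for illustration.

```python
import itertools

# Vertex cover on a triangle: LP relaxation vs. best integral cover.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

def feasible(x):
    """Covering constraints of the LP: x_u + x_v >= 1 for every edge."""
    return all(x[u] + x[v] >= 1 for u, v in edges)

# Half-integrality makes this enumeration exact for the LP optimum.
lp_opt = min(sum(x) for x in itertools.product((0, 0.5, 1), repeat=n)
             if feasible(x))
ilp_opt = min(sum(x) for x in itertools.product((0, 1), repeat=n)
              if feasible(x))
# lp_opt = 1.5 (all x_i = 1/2) while ilp_opt = 2: the gap stays within
# the factor-2 guarantee that the price of anarchy matches.
```

The ratio ilp_opt / lp_opt = 4/3 here, and it never exceeds 2 for vertex cover, which is the same constant the game-theoretic bound achieves.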