Min-Max Theorems for Packing and Covering Odd (u,v)-trails
We investigate the problem of packing and covering odd $(u,v)$-trails in a
graph. A $(u,v)$-trail is a $(u,v)$-walk that is allowed to have repeated
vertices but no repeated edges. We call a trail odd if the number of edges in
the trail is odd. Let $\nu(u,v)$ denote the maximum number of edge-disjoint odd
$(u,v)$-trails, and $\tau(u,v)$ denote the minimum size of an edge-set that
intersects every odd $(u,v)$-trail.
We prove that $\tau(u,v) \le 2\nu(u,v) + 1$. Our result is tight---there are
examples showing that $\tau(u,v) = 2\nu(u,v) + 1$---and substantially improves upon
the bound obtained in [Churchley et al. 2016] for $\tau(u,v)$.
Our proof also yields a polynomial-time algorithm for finding a cover and a
collection of trails satisfying the above bounds.
Our proof is simple and has two main ingredients. We show that (loosely
speaking) the problem can be reduced to the problem of packing and covering odd
$(uv,uv)$-trails losing a factor of 2 (either in the number of trails found, or
the size of the cover). Complementing this, we show that the
odd-$(uv,uv)$-trail packing and covering problems can be tackled by exploiting
a powerful min-max result of [Chudnovsky et al. 2006] for packing
vertex-disjoint nonzero $A$-paths in group-labeled graphs.
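In the notation above, the packing-covering relationship described in this abstract can be summarized as follows (a restatement for reference, not quoted from the paper; the left inequality is the trivial weak-duality direction):

```latex
% Trivial weak duality (each trail in an edge-disjoint packing contains at
% least one cover edge of its own) together with the bound proved here,
% which is tight:
\nu(u,v) \;\le\; \tau(u,v) \;\le\; 2\,\nu(u,v) + 1
```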
High performance communication subsystem for clustering standard high-volume servers using Gigabit Ethernet
This paper presents an efficient communication subsystem, DP-II, for clustering standard high-volume (SHV) servers using Gigabit Ethernet. DP-II employs several lightweight messaging mechanisms to achieve low-latency and high-bandwidth communication. Tests show an 18.32 µs single-trip latency and 72.8 MB/s bandwidth on a Gigabit Ethernet network connecting two Dell PowerEdge 6300 Quad Xeon SMP servers running Linux. To improve programmability, DP-II is built on a concise yet powerful abstract communication model, the Directed Point Model, which can be conveniently used to depict the inter-process communication pattern of a parallel task in a cluster environment. In addition, the DP-II API preserves the syntax and semantics of traditional UNIX I/O operations, which makes it easy to use.
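The abstract notes that the DP-II API keeps the syntax and semantics of traditional UNIX I/O. As a purely hypothetical illustration of that interface style (plain TCP sockets in Python, with invented names; this is not the DP-II API), an endpoint exposing open/read/write/close-shaped calls might look like:

```python
import socket

class Endpoint:
    """Hypothetical messaging endpoint with UNIX I/O-shaped calls
    (open/read/write/close). This is plain TCP in Python, NOT the DP-II
    API; it only illustrates the style of interface the abstract describes."""

    def __init__(self, host, port, listen=False):
        if listen:                                    # "open" a receiving end
            srv = socket.create_server((host, port))
            self.sock, _ = srv.accept()
            srv.close()
        else:                                         # "open" a sending end
            self.sock = socket.create_connection((host, port))

    def write(self, data: bytes) -> int:              # mirrors os.write()
        self.sock.sendall(data)
        return len(data)

    def read(self, n: int) -> bytes:                  # mirrors os.read()
        return self.sock.recv(n)

    def close(self) -> None:
        self.sock.close()
```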
Simultaneous temperature and refractive index measurements using a 3° slanted multimode fiber Bragg grating
Minimal Length Uncertainty Principle and the Transplanckian Problem of Black Hole Physics
The minimal length uncertainty principle of Kempf, Mangano and Mann (KMM), as
derived from a mutilated quantum commutator between coordinate and momentum, is
applied to describe the modes and wave packets of Hawking particles evaporated
from a black hole. The transplanckian problem is successfully confronted in
that the Hawking particle no longer hugs the horizon at arbitrarily close
distances. Rather, the mode of given Schwarzschild frequency deviates from
the conventional trajectory when the coordinate reaches a critical value, expressed in units of the non-local distance legislated
into the uncertainty relation. Wave packets straddle the horizon and spread out
to fill the whole non-local region. The charge carried by the packet (in the
sense of the amount of "stuff" carried by the Klein-Gordon field) is not
conserved in the non-local region and rapidly decreases to zero as time
decreases. Read in the forward temporal direction, the non-local region thus
is the seat of production of the Hawking particle and its partner. The KMM
model was inspired by string theory for which the mutilated commutator has been
proposed to describe an effective theory of high momentum scattering of zero
mass modes. It is here interpreted in terms of dissipation of
the Hawking particle into a reservoir of other modes (of as yet unknown
origin). On this basis it is conjectured that the Bekenstein-Hawking entropy
finds its origin in the fluctuations of fields extending over the non-local
region.
Comment: 12 pages (LaTeX), 1 figure
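For reference, the "mutilated" commutator of Kempf, Mangano and Mann referred to above is usually quoted in the following standard form (this is the textbook KMM expression, not reproduced from this abstract; β is the deformation parameter that sets the non-local distance):

```latex
% Standard KMM deformed commutator and the resulting generalized
% uncertainty relation with a minimal length (beta > 0; the minimum
% of Delta x is attained at <p> = 0).
[\hat{x}, \hat{p}] = i\hbar \left(1 + \beta \hat{p}^{2}\right)
\quad\Longrightarrow\quad
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta (\Delta p)^{2} + \beta \langle \hat{p} \rangle^{2}\right),
\qquad \Delta x_{\min} = \hbar\sqrt{\beta}.
```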
Chinese herbal medicine in the treatment of acute upper respiratory tract infection: a randomised, double blind, placebo-controlled clinical trial.
Inbuilt Mechanisms for Overcoming Functional Problems Inherent in Hepatic Microlobular Structure
This paper is funded by an MRC/EPSRC Discipline Bridging Initiative Grant (G0502256-77947) to W. Wan
Explosion Mechanisms of Core-Collapse Supernovae
Supernova theory, numerical and analytic, has made remarkable progress in the
past decade. This progress was made possible by more sophisticated simulation
tools, especially for neutrino transport, improved microphysics, and deeper
insights into the role of hydrodynamic instabilities. Violent, large-scale
nonradial mass motions are generic in supernova cores. The neutrino-heating
mechanism, aided by nonradial flows, drives explosions, albeit low-energy ones,
of ONeMg-core and some Fe-core progenitors. The characteristics of the neutrino
emission from new-born neutron stars were revised, new features of the
gravitational-wave signals were discovered, our notion of supernova
nucleosynthesis was shattered, and our understanding of pulsar kicks and
explosion asymmetries was significantly improved. But simulations also suggest
that neutrino-powered explosions might not explain the most energetic
supernovae and hypernovae, which seem to demand magnetorotational driving. Now
that modeling is being advanced from two to three dimensions, more realism, new
perspectives, and hopefully answers to long-standing questions are coming into
reach.
Comment: 35 pages, 11 figures (29 eps files; high-quality versions can be obtained upon request); accepted by Annual Review of Nuclear and Particle Science
Temperature-insensitive interferometer using a highly birefringent photonic crystal fiber loop mirror
Run Generation Revisited: What Goes Up May or May Not Come Down
In this paper, we revisit the classic problem of run generation. Run
generation is the first phase of external-memory sorting, where the objective
is to scan through the data, reorder elements using a small buffer of size M,
and output runs (contiguously sorted chunks of elements) that are as long as
possible.
We develop algorithms for minimizing the total number of runs (or
equivalently, maximizing the average run length) when the runs are allowed to
be sorted or reverse sorted. We study the problem in the online setting, both
with and without resource augmentation, and in the offline setting.
(1) We analyze alternating-up-down replacement selection (runs alternate
between sorted and reverse sorted), which was studied by Knuth as far back as
1963. We show that this simple policy is asymptotically optimal. Specifically,
we show that alternating-up-down replacement selection is 2-competitive and no
deterministic online algorithm can perform better (a brief sketch of the policy appears after this list).
(2) We give online algorithms having smaller competitive ratios with resource
augmentation. Specifically, we exhibit a deterministic algorithm that, when
given a buffer of size 4M, is able to match or beat any optimal algorithm
having a buffer of size M . Furthermore, we present a randomized online
algorithm which is 7/4-competitive when given a buffer twice that of the
optimal.
(3) We demonstrate that performance can also be improved with a small amount
of foresight. We give an algorithm, which is 3/2-competitive, with
foreknowledge of the next 3M elements of the input stream. For the extreme case
where all future elements are known, we design a PTAS for computing the optimal
strategy a run generation algorithm must follow.
(4) Finally, we present algorithms tailored for nearly sorted inputs which
are guaranteed to have optimal solutions with sufficiently long runs.
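As a concrete illustration of the policy in item (1), here is a minimal Python sketch of alternating-up-down replacement selection with a buffer of M elements. The function name and structure are invented for illustration (the paper gives no code); it only shows how runs alternate direction while elements that cannot extend the current run are deferred to the next one.

```python
import heapq

def alternating_updown_runs(stream, M):
    """Minimal sketch of alternating-up-down replacement selection with a
    buffer of M elements. Runs alternate between ascending and descending;
    an element that cannot extend the current run is deferred to the next.
    Names and structure are illustrative, not taken from the paper."""
    it = iter(stream)
    ascending = True                       # direction of the run being emitted
    key = lambda x, up: x if up else -x    # min-heap for "up", max-heap for "down"
    cur, pend = [], []                     # heaps: current run vs. next run

    # Fill the buffer with up to M elements.
    for x in it:
        heapq.heappush(cur, key(x, ascending))
        if len(cur) == M:
            break

    runs, run = [], []
    while cur or pend:
        if not cur:                        # current run exhausted: flip direction
            runs.append(run)
            run = []
            ascending = not ascending
            cur, pend = pend, []
        last = heapq.heappop(cur)
        last = last if ascending else -last
        run.append(last)
        x = next(it, None)                 # refill the buffer, if input remains
        if x is not None:
            extends = (x >= last) if ascending else (x <= last)
            target_up = ascending if extends else not ascending
            heapq.heappush(cur if extends else pend, key(x, target_up))
    if run:
        runs.append(run)
    return runs
```

For example, alternating_updown_runs([5, 1, 4, 2, 8, 3, 7], 2) yields [[1, 4, 5, 8], [3, 2], [7]]: an ascending run, a reverse-sorted run, and a final ascending run.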
Adaptive live VM migration over a WAN: modeling and implementation
Recent advances in virtualization technology enable high mobility of virtual machines and resource provisioning at the data-center level. To streamline the migration process, various migration strategies have been proposed for VM live migration over a local-area network (LAN). The most common solution uses memory pre-copying and assumes the storage is shared on the LAN. When applied to a wide-area network (WAN), VM live migration algorithms need a new design philosophy to address the challenges of long latency, limited bandwidth, unstable network conditions and the movement of storage. This paper proposes a three-phase fractional hybrid pre-copy and post-copy solution for both memory and storage to achieve highly adaptive migration over a WAN. In this hybrid solution, we selectively migrate an important fraction of memory and storage in the pre-copy and freeze-and-copy phases, while the rest (the non-critical data set) is migrated during post-copying. We propose a new metric called performance restoration agility, which considers both the downtime and the VM speed degradation during the post-copy phase, to evaluate the migration process. We also develop a profiling framework and a novel probabilistic prediction model to adaptively find a predictably optimal combination of the memory and storage fractions to migrate. This model-based hybrid solution is implemented on Xen and evaluated in an emulated WAN environment. Experimental results show that our solution adapts better than the alternatives across various applications over a WAN, while retaining the responsiveness of post-copy algorithms.
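The abstract describes the prediction model only at a high level. As a toy illustration of how one might score candidate memory/storage fractions under invented assumptions (a single bandwidth figure, a linear dirty-rate model, a made-up degradation weight; none of these come from the paper), consider:

```python
def choose_fractions(mem_size, disk_size, bandwidth, dirty_rate, degradation,
                     candidates=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """Toy illustration (not the paper's model) of picking the memory and
    storage fractions to pre-copy: score each candidate pair by estimated
    downtime plus a post-copy degradation penalty and keep the best pair.
    Sizes and dirty_rate are in bytes and bytes/second; degradation is a
    made-up dimensionless weight."""
    best = None
    for fm in candidates:                  # memory fraction to pre-copy
        for fd in candidates:              # storage fraction to pre-copy
            # Time to pre-copy the chosen fractions while the VM keeps running.
            precopy = (fm * mem_size + fd * disk_size) / bandwidth
            # Data dirtied meanwhile must be re-sent during the frozen phase.
            downtime = min(dirty_rate * precopy, fm * mem_size) / bandwidth
            # The remainder is pulled on demand after resume, slowing the VM.
            penalty = degradation * ((1 - fm) * mem_size +
                                     (1 - fd) * disk_size) / bandwidth
            score = downtime + penalty     # lower is better
            if best is None or score < best[0]:
                best = (score, fm, fd)
    return best[1], best[2]
```

Calling, say, choose_fractions(8e9, 40e9, bandwidth=50e6, dirty_rate=20e6, degradation=0.3) returns one (memory, storage) fraction pair under these made-up numbers; the paper instead drives this choice with its profiling framework and probabilistic prediction model.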
