Parallel Exhaustive Search without Coordination
We analyze parallel algorithms in the context of exhaustive search over
totally ordered sets. Imagine an infinite list of "boxes", with a "treasure"
hidden in one of them, where the boxes' order reflects the importance of
finding the treasure in a given box. At each time step, a search protocol
executed by a searcher has the ability to peek into one box, and see whether
the treasure is present or not. By equally dividing the workload between them, k searchers can find the treasure k times faster than one searcher.
However, this straightforward strategy is very sensitive to failures (e.g.,
crashes of processors), and overcoming this issue seems to require a large
amount of communication. We therefore address the question of designing
parallel search algorithms maximizing their speed-up and maintaining high
levels of robustness, while minimizing the amount of resources for
coordination. Based on the observation that algorithms that avoid communication
are inherently robust, we analyze the best running time performance of
non-coordinating algorithms. Specifically, we devise non-coordinating
algorithms that achieve a speed-up of 9/8 for two searchers, a speed-up of 4/3 for three searchers, and in general, a speed-up of (k/4)(1 + 1/k)^2 for any k >= 1 searchers. Thus, asymptotically, the speed-up is only four times
times worse compared to the case of full-coordination, and our algorithms are
surprisingly simple and hence applicable. Moreover, these bounds are tight in a
strong sense as no non-coordinating search algorithm can achieve better
speed-ups. Overall, we highlight that, in faulty contexts in which coordination
between the searchers is technically difficult to implement, intrusive with
respect to privacy, and/or costly in terms of resources, it might well be worth giving up on coordination and simply running our non-coordinating exhaustive search algorithms.
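To make the model concrete, here is a small Python sketch; it is not the paper's near-optimal protocol, only a contrast between the full-coordination baseline (searchers split the boxes round-robin) and the crudest non-coordinating strategy, in which each of the k searchers independently scans the first n boxes in a uniformly random order. The parameters n, p (treasure position) and k are invented for the example.

import random

def coordinated_time(p, k):
    # Round-robin split: searcher (p - 1) % k opens box p at step ceil(p / k),
    # the full-coordination baseline.
    return -(-p // k)  # ceiling division

def noncoordinating_time(p, k, n, trials=10_000):
    # Each searcher scans a uniformly random permutation of the n boxes, so
    # its hit time for box p is uniform on {1, ..., n}; the treasure is found
    # when the first of the k searchers reaches it.
    total = 0
    for _ in range(trials):
        total += min(random.randint(1, n) for _ in range(k))
    return total / trials

n, p, k = 1000, 500, 4
print("coordinated:", coordinated_time(p, k))               # ~ p / k
print("non-coordinating:", noncoordinating_time(p, k, n))   # ~ n / (k + 1)

Even this naive strategy needs no communication and degrades gracefully when searchers crash, which is exactly the robustness the paper trades a constant factor of speed-up for.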
Distributed Deterministic Broadcasting in Uniform-Power Ad Hoc Wireless Networks
The development of many futuristic technologies, such as MANETs, VANETs, iThings, and nano-devices, depends on efficient distributed communication protocols in multi-hop ad hoc networks. The vast majority of research in this area focuses on designing heuristic protocols and analyzing their performance by simulations on networks that are generated randomly or obtained from practical measurements of some (usually small-size) wireless networks. Moreover, such works often assume access to truly random sources, which is not always reasonable in the case of wireless devices. In this work we use a formal framework to study the problem of broadcasting and its time complexity in any two-dimensional Euclidean
wireless network with uniform transmission powers. For the analysis, we
consider two popular models of ad hoc networks based on the
Signal-to-Interference-and-Noise Ratio (SINR): one with opportunistic links,
and the other with randomly disturbed SINR. In the former model, we show that
one of our algorithms accomplishes broadcasting in O(D log^2 n) rounds, where n is the number of nodes and D is the diameter of the network. If nodes know a priori the granularity g of the network, i.e., the inverse of the maximum transmission range over the minimum distance between any two stations, a modification of this algorithm accomplishes broadcasting in O(D log g) rounds.
Finally, we modify both algorithms to make them efficient in the latter model
with randomly disturbed SINR, at the cost of only a logarithmic slowdown. Ours are the first distributed deterministic solutions for the broadcast task that are provably efficient and scalable under the two models.
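For reference, both models are built on the standard SINR constraint. In the usual notation (the symbols below are the literature's defaults, not fixed by this abstract), a station v decodes a transmission from u, sent with the uniform power P, exactly when

\mathrm{SINR}(u,v,\mathcal{T}) = \frac{P \cdot d(u,v)^{-\alpha}}{N + \sum_{w \in \mathcal{T} \setminus \{u\}} P \cdot d(w,v)^{-\alpha}} \geq \beta,

where \mathcal{T} is the set of stations transmitting concurrently with u, d is the Euclidean distance, \alpha is the path-loss exponent (typically assumed greater than 2), N is the ambient noise, and \beta \geq 1 is the decoding threshold. The two variants differ in how links whose SINR sits close to \beta are treated.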
Interval Selection in the Streaming Model
A set of intervals is independent when the intervals are pairwise disjoint.
In the interval selection problem we are given a set of intervals
and we want to find an independent subset of intervals of largest cardinality.
Let \alpha(I) denote the cardinality of an optimal solution. We discuss the estimation of \alpha(I) in the streaming model, where we only have one-time, sequential access to the input intervals, the endpoints of the intervals lie in {1, ..., n}, and the amount of memory is constrained.
For intervals of different sizes, we provide an algorithm in the data stream model that computes an estimate \hat\alpha of \alpha(I) that, with probability at least 2/3, satisfies (1/2)(1-\eps)\alpha(I) \leq \hat\alpha \leq \alpha(I). For same-length intervals, we provide another algorithm in the data stream model that computes an estimate \hat\alpha of \alpha(I) that, with probability at least 2/3, satisfies (2/3)(1-\eps)\alpha(I) \leq \hat\alpha \leq \alpha(I). The space used by our algorithms is bounded by a polynomial in 1/\eps and \log n. We also show that no better estimations can be achieved using o(n) bits of storage.
We also develop new, approximate solutions to the interval selection problem, where we want to report a feasible solution, that use O(\alpha(I)) space. Our algorithms for the interval selection problem match the optimal results by Emek, Halldórsson and Rosén [Space-Constrained Interval Selection, ICALP 2012], but are much simpler.
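For the same-length case, a folklore one-pass baseline already achieves a 2-approximation in O(\alpha(I)) space and is a useful reference point for the algorithms above; the Python sketch below is this baseline, not the paper's algorithm, and its sample input is invented.

def stream_select_same_length(intervals):
    # One-pass greedy: keep an arriving interval iff it is disjoint from
    # everything kept so far.  The kept set is a maximal independent set,
    # and a same-length interval can meet at most two disjoint intervals
    # of that length, so |OPT| <= 2 * |kept|.  Memory is O(alpha(I)).
    kept = []
    for lo, hi in intervals:
        if all(hi < a or b < lo for a, b in kept):  # closed intervals
            kept.append((lo, hi))
    return kept

print(stream_select_same_length([(0, 1), (0.5, 1.5), (2, 3), (4, 5)]))
# -> [(0, 1), (2, 3), (4, 5)]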
Hazardous waste management problem: The case for incineration
We define the hazardous waste management problem as the combined decisions of selecting the disposal method, siting the disposal plants, and deciding on the waste flow structure. The hazardous waste management problem has additional requirements depending on the selected disposal method. In this paper we focus on incineration, for which the main additional requirement is to satisfy the air pollution standards imposed by governmental regulations. We propose a cost-based mathematical model that also incorporates the satisfaction of air pollution standards, using the Gaussian plume equation to measure air pollution concentrations at population centers. A large-scale implementation of the proposed model within Turkey is provided.
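The Gaussian plume equation itself is not reproduced in the abstract; in its standard form, the concentration at downwind distance x, crosswind offset y and height z, due to a source with emission rate Q at effective stack height H under wind speed u, is

C(x, y, z) = \frac{Q}{2\pi u \sigma_y(x) \sigma_z(x)} \exp\left(-\frac{y^2}{2\sigma_y^2}\right) \left[\exp\left(-\frac{(z-H)^2}{2\sigma_z^2}\right) + \exp\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right],

where \sigma_y and \sigma_z are empirical dispersion coefficients that grow with x and depend on atmospheric stability, and the second bracketed term accounts for reflection at the ground. How the model calibrates these coefficients is not detailed in the abstract.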
Truthful Facility Assignment with Resource Augmentation: An Exact Analysis of Serial Dictatorship
We study the truthful facility assignment problem, where a set of agents with
private most-preferred points on a metric space are assigned to facilities that
lie on the metric space, under capacity constraints on the facilities. The goal
is to produce such an assignment that minimizes the social cost, i.e., the
total distance between the most-preferred points of the agents and their
corresponding facilities in the assignment, under the constraint of
truthfulness, which ensures that agents do not misreport their most-preferred
points.
We propose a resource augmentation framework, where a truthful mechanism is
evaluated by its worst-case performance on an instance with enhanced facility
capacities against the optimal mechanism on the same instance with the original
capacities. We study a very well-known mechanism, Serial Dictatorship, and
provide an exact analysis of its performance. Although Serial Dictatorship is a
purely combinatorial mechanism, our analysis uses linear programming; a linear
program expresses its greedy nature as well as the structure of the input, and
finds the input instance that forces the mechanism to exhibit its worst-case performance. Bounding the objective of the linear program using duality arguments allows us to compute tight bounds on the approximation ratio. Among other results, we prove that Serial Dictatorship has approximation ratio g/(g-2) when the capacities are multiplied by any integer g >= 3. Our
results suggest that even a limited augmentation of the resources can have
wondrous effects on the performance of the mechanism and, in particular, the
approximation ratio goes to 1 as the augmentation factor becomes large. We
complement our results with bounds on the approximation ratio of Random Serial
Dictatorship, the randomized version of Serial Dictatorship, when there is no
resource augmentation.
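Serial Dictatorship itself fits in a few lines. The Python sketch below fixes a line metric and invented inputs for concreteness; the paper's setting is a general metric space with an arbitrary but fixed dictator order.

def serial_dictatorship(agents, facilities, capacities):
    # agents: reported most-preferred points, listed in the dictator order.
    # Each agent in turn takes a nearest facility with a free slot.
    # Truthfulness: an agent's report only determines its own pick among
    # the slots its predecessors left over, never the earlier picks.
    caps = list(capacities)
    assignment = []
    for x in agents:
        j = min((j for j in range(len(facilities)) if caps[j] > 0),
                key=lambda j: abs(x - facilities[j]))
        caps[j] -= 1
        assignment.append(j)
    return assignment

# One slot per facility: the first agent grabs the facility at 0.0,
# forcing the second (at 0.4) to travel to the facility at 1.0.
print(serial_dictatorship([0.1, 0.4], [0.0, 1.0], [1, 1]))  # -> [0, 1]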
Online Makespan Minimization with Parallel Schedules
In online makespan minimization a sequence \sigma of jobs has to be scheduled on m identical parallel machines so as to minimize the
maximum completion time of any job. We investigate the problem with an
essentially new model of resource augmentation. Here, an online algorithm is
allowed to build several schedules in parallel while processing \sigma. At
the end of the scheduling process the best schedule is selected. This model can
be viewed as providing an online algorithm with extra space, which is invested
to maintain multiple solutions. The setting is of particular interest in
parallel processing environments where each processor can maintain a single solution or a small set of solutions.
We develop a (4/3+\eps)-competitive algorithm, for any 0<\eps\leq 1, that uses (1/\eps)^{O(\log (1/\eps))} schedules. We also give a (1+\eps)-competitive algorithm, for any 0<\eps\leq 1, that builds a polynomial number, namely (m/\eps)^{O(\log (1/\eps) / \eps)}, of schedules. This value depends on m but is independent of the input \sigma. The performance guarantees are nearly best possible. We show that any algorithm that achieves a competitiveness smaller than 4/3 must construct \Omega(m) schedules. Our
algorithms make use of novel guessing schemes that (1) predict the optimum
makespan of a job sequence \sigma to within a factor of 1+\eps and (2) guess the job processing times and their frequencies in \sigma. In (2) we
have to sparsify the universe of all guesses so as to reduce the number of
schedules to a constant.
The competitive ratios achieved using parallel schedules are considerably
smaller than those in the standard problem without resource augmentation.
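A minimal Python sketch of the mechanic the results build on: keep one schedule per guess of the optimum makespan taken from a geometric grid, fill each schedule greedily subject to a threshold tied to its guess, and select the best schedule at the end. This is not the paper's algorithm; in particular, the grid below is derived from offline lower bounds for brevity, whereas a genuine online algorithm refines its guesses on the fly, and the threshold constant 4/3 is purely illustrative.

import math

def schedule_with_guess(jobs, m, guess, c=4/3):
    # Greedy "best fit below c * guess": place each job on the fullest
    # machine whose load stays within the threshold; if none qualifies,
    # the guess was too low and we fall back to the least-loaded machine.
    load = [0.0] * m
    for p in jobs:
        ok = [i for i in range(m) if load[i] + p <= c * guess]
        i = max(ok, key=lambda i: load[i]) if ok else \
            min(range(m), key=lambda i: load[i])
        load[i] += p
    return max(load)

def best_parallel_schedule(jobs, m, eps=0.1):
    # OPT lies in [lb, 2 * lb] for the standard lower bound lb, so a
    # geometric grid of about log(2) / log(1 + eps) guesses covers it
    # to within a factor of 1 + eps.
    lb = max(max(jobs), sum(jobs) / m)
    steps = int(math.log(2) / math.log(1 + eps)) + 1
    return min(schedule_with_guess(jobs, m, lb * (1 + eps) ** t)
               for t in range(steps + 1))

print(best_parallel_schedule([3, 3, 2, 2, 2], m=2))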
Bragg spectroscopy of a superfluid Bose-Hubbard gas
Bragg spectroscopy is used to measure excitations of a trapped,
quantum-degenerate gas of 87Rb atoms in a 3-dimensional optical lattice. The
measurements are carried out over a range of optical lattice depths in the
superfluid phase of the Bose-Hubbard model. For fixed wavevector, the resonant
frequency of the excitation is found to decrease with increasing lattice depth.
A numerical calculation of the resonant frequencies based on Bogoliubov theory
shows a less steep rate of decrease than the measurements.
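For context, the Bogoliubov prediction referenced here is presumably built on the standard dispersion relation for a weakly interacting condensate (the abstract does not give the lattice-adapted form used in the calculation):

\hbar \omega(q) = \sqrt{\epsilon_q^0 \left(\epsilon_q^0 + 2 g n\right)}, \qquad \epsilon_q^0 = \frac{\hbar^2 q^2}{2m},

where n is the condensate density and g the interaction strength. In an optical lattice, \epsilon_q^0 is commonly replaced by an effective-mass or band-structure expression; since the effective mass grows with lattice depth, this is one qualitative route to a resonant frequency that decreases as the lattice deepens.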
Pathway Commons, a web resource for biological pathway data
Pathway Commons (http://www.pathwaycommons.org) is a collection of publicly available pathway data from multiple organisms. Pathway Commons provides a web-based interface that enables biologists to browse and search a comprehensive collection of pathways from multiple sources represented in a common language, a download site that provides integrated bulk sets of pathway information in standard or convenient formats and a web service that software developers can use to conveniently query and access all data. Database providers can share their pathway data via a common repository. Pathways include biochemical reactions, complex assembly, transport and catalysis events and physical interactions involving proteins, DNA, RNA, small molecules and complexes. Pathway Commons aims to collect and integrate all public pathway data available in standard formats. Pathway Commons currently contains data from nine databases with over 1400 pathways and 687 000 interactions and will be continually expanded and updated.
A Match in Time Saves Nine: Deterministic Online Matching With Delays
We consider the problem of online Min-cost Perfect Matching with Delays
(MPMD) introduced by Emek et al. (STOC 2016). In this problem, an even number
of requests appear in a metric space at different times and the goal of an
online algorithm is to match them in pairs. In contrast to traditional online
matching problems, in MPMD all requests appear online and an algorithm can
match any pair of requests, but such a decision may be delayed (e.g., to find a
better match). The cost is the sum of matching distances and the introduced
delays.
We present the first deterministic online algorithm for this problem. Its
competitive ratio is O(m^{\log_2 5.5}) = O(m^{2.46}), where m is the number of requests. This is polynomial in the number of metric space points if
all requests are given at different points. In particular, the bound does not
depend on other parameters of the metric, such as its aspect ratio. Unlike
previous (randomized) solutions for the MPMD problem, our algorithm does not
need to know the metric space in advance.
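To illustrate the trade-off the algorithm manages, here is a toy time-driven heuristic in Python; it is not the paper's algorithm, whose deterministic charging scheme is considerably more involved. The rule: match a pending pair as soon as the waiting cost the two requests have accumulated pays for the distance between them. The line metric, time step and sample requests are invented for the example.

def mpmd_greedy(requests, dt=0.01, horizon=100.0):
    # requests: (arrival_time, point) pairs; points live on the real line.
    pending, matching, t = [], [], 0.0
    requests = sorted(requests)
    while t <= horizon and (requests or pending):
        while requests and requests[0][0] <= t:
            pending.append(requests.pop(0))
        # Among pending pairs whose accumulated waiting cost covers their
        # distance, match the closest one; otherwise let time advance.
        best = None
        for i in range(len(pending)):
            for j in range(i + 1, len(pending)):
                (ti, xi), (tj, xj) = pending[i], pending[j]
                dist = abs(xi - xj)
                if (t - ti) + (t - tj) >= dist and (best is None or dist < best[0]):
                    best = (dist, i, j)
        if best:
            _, i, j = best
            matching.append((pending[i][1], pending[j][1]))
            pending = [r for k, r in enumerate(pending) if k not in (i, j)]
        else:
            t += dt
    return matching

print(mpmd_greedy([(0.0, 0.0), (0.0, 1.0), (0.5, 0.9), (0.5, 0.05)]))
# two requests wait until better partners arrive, then pair up cheaply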
Spray-on organic/inorganic films: a general method for the formation of functional nano- to microscale coatings.