
    Parallel Exhaustive Search without Coordination

    We analyze parallel algorithms in the context of exhaustive search over totally ordered sets. Imagine an infinite list of "boxes", with a "treasure" hidden in one of them, where the boxes' order reflects the importance of finding the treasure in a given box. At each time step, a search protocol executed by a searcher can peek into one box and see whether the treasure is present or not. By dividing the workload equally between them, $k$ searchers can find the treasure $k$ times faster than one searcher. However, this straightforward strategy is very sensitive to failures (e.g., processor crashes), and overcoming this issue seems to require a large amount of communication. We therefore address the question of designing parallel search algorithms that maximize their speed-up and maintain high levels of robustness, while minimizing the amount of resources spent on coordination. Based on the observation that algorithms avoiding communication are inherently robust, we analyze the best possible running time of non-coordinating algorithms. Specifically, we devise non-coordinating algorithms that achieve a speed-up of $9/8$ for two searchers, a speed-up of $4/3$ for three searchers, and, in general, a speed-up of $\frac{k}{4}(1+1/k)^2$ for any $k \geq 1$ searchers. Thus, asymptotically, the speed-up is only four times worse than in the case of full coordination, and our algorithms are surprisingly simple and hence applicable. Moreover, these bounds are tight in a strong sense: no non-coordinating search algorithm can achieve better speed-ups. Overall, we highlight that, in faulty contexts in which coordination between the searchers is technically difficult to implement, intrusive with respect to privacy, and/or costly in terms of resources, it may well be worth giving up on coordination and simply running our non-coordinating exhaustive search algorithms.
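
    As a quick sanity check on the speed-up formula, the small cases can be evaluated directly. The minimal Python sketch below (function name is illustrative, not from the paper) confirms that $\frac{k}{4}(1+1/k)^2$ yields $9/8$ for $k=2$ and $4/3$ for $k=3$, and that the ratio to the full-coordination speed-up $k$ tends to $1/4$.

```python
from fractions import Fraction

def noncoordinating_speedup(k: int) -> Fraction:
    """Speed-up k/4 * (1 + 1/k)^2 achieved by k non-coordinating searchers."""
    return Fraction(k, 4) * (1 + Fraction(1, k)) ** 2

assert noncoordinating_speedup(2) == Fraction(9, 8)  # two searchers
assert noncoordinating_speedup(3) == Fraction(4, 3)  # three searchers

# Ratio to the full-coordination speed-up k tends to 1/4,
# i.e., non-coordination is asymptotically only four times slower.
for k in (2, 3, 10, 100, 1000):
    print(k, float(noncoordinating_speedup(k) / k))
```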

    Distributed Deterministic Broadcasting in Uniform-Power Ad Hoc Wireless Networks

    The development of many futuristic technologies, such as MANET, VANET, iThings and nano-devices, depends on efficient distributed communication protocols in multi-hop ad hoc networks. The vast majority of research in this area focuses on designing heuristic protocols and analyzing their performance by simulations on networks generated randomly or obtained from practical measurements of some (usually small-size) wireless networks. Moreover, such work often assumes access to truly random sources, which is often not reasonable in the case of wireless devices. In this work we use a formal framework to study the problem of broadcasting and its time complexity in any two-dimensional Euclidean wireless network with uniform transmission powers. For the analysis, we consider two popular models of ad hoc networks based on the Signal-to-Interference-and-Noise Ratio (SINR): one with opportunistic links, and the other with randomly disturbed SINR. In the former model, we show that one of our algorithms accomplishes broadcasting in $O(D\log^2 n)$ rounds, where $n$ is the number of nodes and $D$ is the diameter of the network. If nodes know a priori the granularity $g$ of the network, i.e., the inverse of the maximum transmission range over the minimum distance between any two stations, a modification of this algorithm accomplishes broadcasting in $O(D\log g)$ rounds. Finally, we modify both algorithms to make them efficient in the latter model with randomly disturbed SINR, with only logarithmic loss in performance. Ours are the first distributed deterministic solutions for the broadcast task that are provably efficient and well-scalable under the two models.
    Comment: arXiv admin note: substantial text overlap with arXiv:1207.673
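
    For context, a message from a sender is received in the SINR model when the signal-to-interference-and-noise ratio at the receiver exceeds a threshold $\beta$. The following Python sketch of the standard uniform-power reception test is an illustration of the model, not code from the paper; the parameter values (power P, path-loss exponent alpha, noise, threshold beta) are assumptions.

```python
import math

def sinr_received(sender, receiver, concurrent_senders,
                  P=1.0, alpha=2.5, noise=1e-9, beta=1.0):
    """Standard uniform-power SINR reception test.

    sender/receiver: (x, y) positions; concurrent_senders: other
    transmitting stations whose signals count as interference.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    signal = P / dist(sender, receiver) ** alpha
    interference = sum(P / dist(s, receiver) ** alpha
                       for s in concurrent_senders if s != sender)
    return signal / (noise + interference) >= beta

# Example: a single far-away interferer does not block reception.
print(sinr_received((0, 0), (1, 0), [(10, 0)]))  # -> True
```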

    Interval Selection in the Streaming Model

    A set of intervals is independent when the intervals are pairwise disjoint. In the interval selection problem we are given a set $\mathbb{I}$ of intervals and we want to find an independent subset of intervals of largest cardinality. Let $\alpha(\mathbb{I})$ denote the cardinality of an optimal solution. We discuss the estimation of $\alpha(\mathbb{I})$ in the streaming model, where we only have one-time, sequential access to the input intervals, the endpoints of the intervals lie in $\{1, \ldots, n\}$, and the amount of memory is constrained. For intervals of different sizes, we provide an algorithm in the data stream model that computes an estimate $\hat\alpha$ of $\alpha(\mathbb{I})$ that, with probability at least $2/3$, satisfies $\tfrac{1}{2}(1-\varepsilon)\alpha(\mathbb{I}) \le \hat\alpha \le \alpha(\mathbb{I})$. For same-length intervals, we provide another algorithm in the data stream model that computes an estimate $\hat\alpha$ of $\alpha(\mathbb{I})$ that, with probability at least $2/3$, satisfies $\tfrac{2}{3}(1-\varepsilon)\alpha(\mathbb{I}) \le \hat\alpha \le \alpha(\mathbb{I})$. The space used by our algorithms is bounded by a polynomial in $\varepsilon^{-1}$ and $\log n$. We also show that no better estimates can be achieved using $o(n)$ bits of storage. Finally, we develop new, approximate solutions to the interval selection problem, where we want to report a feasible solution, that use $O(\alpha(\mathbb{I}))$ space. Our algorithms for the interval selection problem match the optimal results by Emek, Halldórsson and Rosén [Space-Constrained Interval Selection, ICALP 2012], but are much simpler.
    Comment: Minor correction
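
    For reference, in the offline setting $\alpha(\mathbb{I})$ can be computed exactly by the classical greedy scan over right endpoints; the streaming algorithms above approximate this quantity under the stated space constraints. The sketch below shows only the offline greedy (it is not the paper's streaming algorithm).

```python
def alpha(intervals):
    """Exact offline optimum of interval selection: scanning intervals in
    order of right endpoint, greedily take each interval whose start lies
    strictly after the endpoint of the last chosen interval."""
    count, last_end = 0, float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start > last_end:  # disjoint from all chosen intervals
            count += 1
            last_end = end
    return count

print(alpha([(1, 4), (2, 3), (5, 8), (6, 7)]))  # -> 2, e.g. (2,3) and (6,7)
```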

    Hazardous waste management problem: The case for incineration

    We define the hazardous waste management problem as the combined decisions of selecting the disposal method, siting the disposal plants, and deciding on the waste flow structure. The problem has additional requirements depending on the selected disposal method. In this paper we focus on incineration, for which the main additional requirement is to satisfy the air pollution standards imposed by governmental restrictions. We propose a cost-based mathematical model in which the satisfaction of air pollution standards is also incorporated, using the Gaussian plume equation to measure air pollution concentrations at population centers. A large-scale implementation of the proposed model within Turkey is provided.
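
    For context, the standard Gaussian plume model estimates the pollutant concentration downwind of a point source such as an incinerator stack. The sketch below is an illustration of that textbook formula under assumed parameter values (emission rate Q, wind speed u, dispersion coefficients sigma_y and sigma_z, stack height H), not the paper's calibrated model.

```python
import math

def plume_concentration(y, z, Q, u, sigma_y, sigma_z, H):
    """Textbook Gaussian plume concentration (g/m^3) at crosswind offset y
    and height z, with ground reflection. Q: emission rate (g/s),
    u: wind speed (m/s), sigma_y/sigma_z: dispersion coefficients (m)
    evaluated at the receptor's downwind distance, H: stack height (m)."""
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Ground-level (z=0) concentration directly downwind (y=0) of a 50 m stack.
print(plume_concentration(y=0, z=0, Q=100.0, u=5.0,
                          sigma_y=80.0, sigma_z=40.0, H=50.0))
```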

    Truthful Facility Assignment with Resource Augmentation: An Exact Analysis of Serial Dictatorship

    We study the truthful facility assignment problem, where a set of agents with private most-preferred points on a metric space are assigned to facilities that lie on the metric space, under capacity constraints on the facilities. The goal is to produce an assignment that minimizes the social cost, i.e., the total distance between the most-preferred points of the agents and their corresponding facilities in the assignment, under the constraint of truthfulness, which ensures that agents do not misreport their most-preferred points. We propose a resource augmentation framework, where a truthful mechanism is evaluated by its worst-case performance on an instance with enhanced facility capacities against the optimal mechanism on the same instance with the original capacities. We study a very well-known mechanism, Serial Dictatorship, and provide an exact analysis of its performance. Although Serial Dictatorship is a purely combinatorial mechanism, our analysis uses linear programming; a linear program expresses its greedy nature as well as the structure of the input, and finds the input instance that forces the mechanism to exhibit its worst-case performance. Bounding the objective of the linear program using duality arguments allows us to compute tight bounds on the approximation ratio. Among other results, we prove that Serial Dictatorship has approximation ratio $g/(g-2)$ when the capacities are multiplied by any integer $g \geq 3$. Our results suggest that even a limited augmentation of the resources can have wondrous effects on the performance of the mechanism; in particular, the approximation ratio goes to 1 as the augmentation factor becomes large. We complement our results with bounds on the approximation ratio of Random Serial Dictatorship, the randomized version of Serial Dictatorship, when there is no resource augmentation.
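
    Serial Dictatorship itself is simple to state: agents are processed in a fixed order, and each is assigned to the nearest facility that still has spare capacity. The Python sketch below illustrates this greedy rule under assumed data structures (a distance matrix and a capacity vector); it is an illustration, not the paper's code.

```python
def serial_dictatorship(dist, capacity):
    """dist[i][j]: distance from agent i's most-preferred point to
    facility j; capacity[j]: number of slots at facility j.
    Agents are processed in index order (the 'dictatorship' order);
    total capacity is assumed to cover all agents."""
    remaining = list(capacity)
    assignment = []
    for row in dist:
        # Nearest facility that still has spare capacity.
        j = min((j for j in range(len(row)) if remaining[j] > 0),
                key=lambda j: row[j])
        remaining[j] -= 1
        assignment.append(j)
    return assignment

# Two facilities with one slot each: agent 0 grabs the close facility,
# forcing agent 1 to travel to the farther one.
print(serial_dictatorship([[1, 5], [1, 4]], [1, 1]))  # -> [0, 1]
```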

    Online Makespan Minimization with Parallel Schedules

    In online makespan minimization a sequence of jobs $\sigma = J_1, \ldots, J_n$ has to be scheduled on $m$ identical parallel machines so as to minimize the maximum completion time of any job. We investigate the problem with an essentially new model of resource augmentation. Here, an online algorithm is allowed to build several schedules in parallel while processing $\sigma$. At the end of the scheduling process the best schedule is selected. This model can be viewed as providing an online algorithm with extra space, which is invested to maintain multiple solutions. The setting is of particular interest in parallel processing environments where each processor can maintain a single solution or a small set of solutions. We develop a $(4/3+\varepsilon)$-competitive algorithm, for any $0 < \varepsilon \leq 1$, that uses $(1/\varepsilon)^{O(\log(1/\varepsilon))}$ schedules. We also give a $(1+\varepsilon)$-competitive algorithm, for any $0 < \varepsilon \leq 1$, that builds $(m/\varepsilon)^{O(\log(1/\varepsilon)/\varepsilon)}$ schedules, a number that is polynomial in $m$ for fixed $\varepsilon$ and independent of the input $\sigma$. The performance guarantees are nearly best possible: we show that any algorithm that achieves a competitiveness smaller than $4/3$ must construct $\Omega(m)$ schedules. Our algorithms make use of novel guessing schemes that (1) predict the optimum makespan of a job sequence $\sigma$ to within a factor of $1+\varepsilon$ and (2) guess the job processing times and their frequencies in $\sigma$. In (2) we have to sparsify the universe of all guesses so as to reduce the number of schedules to a constant. The competitive ratios achieved using parallel schedules are considerably smaller than those in the standard problem without resource augmentation.
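
    The resource-augmentation idea can be illustrated with a toy version: keep one candidate schedule per guessed optimum makespan, feed every incoming job to each candidate under its own placement rule, and output the best schedule at the end. The sketch below is a simplified illustration of this "guess the optimum" scheme, not the paper's algorithm; the threshold rule and guess values are assumptions.

```python
def parallel_schedules(jobs, m, guesses):
    """Toy parallel-schedules scheduler: one candidate schedule per guessed
    optimum makespan T. Each candidate places a job on the first machine
    whose load stays within 4/3 * T, falling back to the least-loaded
    machine; the makespan of the best candidate is returned."""
    loads = {T: [0.0] * m for T in guesses}
    for p in jobs:
        for T, machines in loads.items():
            for i in range(m):
                if machines[i] + p <= 4 * T / 3:
                    machines[i] += p
                    break
            else:  # no machine fits the target: use the least-loaded one
                machines[machines.index(min(machines))] += p
    best_T = min(loads, key=lambda T: max(loads[T]))
    return max(loads[best_T])

jobs = [3, 3, 2, 2, 2]  # total 12 on m=2 machines, optimum makespan 6
print(parallel_schedules(jobs, m=2, guesses=[4, 6, 9]))  # -> 7.0
```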

    Bragg spectroscopy of a superfluid Bose-Hubbard gas

    Bragg spectroscopy is used to measure excitations of a trapped, quantum-degenerate gas of $^{87}$Rb atoms in a three-dimensional optical lattice. The measurements are carried out over a range of optical lattice depths in the superfluid phase of the Bose-Hubbard model. For fixed wavevector, the resonant frequency of the excitation is found to decrease with increasing lattice depth. A numerical calculation of the resonant frequencies based on Bogoliubov theory shows a less steep rate of decrease than the measurements.
    Comment: 11 pages, 4 figures
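
    For background, Bogoliubov theory for a weakly interacting condensate gives the well-known excitation spectrum $\hbar\omega(k) = \sqrt{\epsilon_k(\epsilon_k + 2gn)}$ with $\epsilon_k = \hbar^2 k^2 / 2m$. The paper's lattice calculation is more involved, but the free-space form, evaluated below in Python with assumed values of the coupling $g$ and density $n$, conveys the shape of the dispersion probed by Bragg spectroscopy.

```python
import math

HBAR = 1.054571817e-34  # J s

def bogoliubov_frequency(k, mass, g, density):
    """Free-space Bogoliubov dispersion: omega(k) in rad/s for wavevector
    k (1/m), atomic mass (kg), interaction strength g (J m^3) and
    condensate density n (1/m^3)."""
    eps_k = HBAR**2 * k**2 / (2 * mass)
    return math.sqrt(eps_k * (eps_k + 2 * g * density)) / HBAR

# Illustrative numbers for 87Rb (mass ~1.44e-25 kg); g and n are assumed.
m_rb = 1.44e-25  # kg
g = 5e-51        # J m^3, assumed coupling
n = 1e20         # atoms per m^3, assumed density
for k in (1e6, 5e6, 1e7):
    print(k, bogoliubov_frequency(k, m_rb, g, n))
```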

    Pathway Commons, a web resource for biological pathway data

    Pathway Commons (http://www.pathwaycommons.org) is a collection of publicly available pathway data from multiple organisms. Pathway Commons provides a web-based interface that enables biologists to browse and search a comprehensive collection of pathways from multiple sources represented in a common language, a download site that provides integrated bulk sets of pathway information in standard or convenient formats, and a web service that software developers can use to conveniently query and access all data. Database providers can share their pathway data via a common repository. Pathways include biochemical reactions, complex assembly, transport and catalysis events, and physical interactions involving proteins, DNA, RNA, small molecules and complexes. Pathway Commons aims to collect and integrate all public pathway data available in standard formats. Pathway Commons currently contains data from nine databases with over 1,400 pathways and 687,000 interactions and will be continually expanded and updated.

    A Match in Time Saves Nine: Deterministic Online Matching With Delays

    We consider the problem of online Min-cost Perfect Matching with Delays (MPMD) introduced by Emek et al. (STOC 2016). In this problem, an even number of requests appear in a metric space at different times, and the goal of an online algorithm is to match them in pairs. In contrast to traditional online matching problems, in MPMD all requests appear online and an algorithm can match any pair of requests, but such a decision may be delayed (e.g., to find a better match). The cost is the sum of matching distances and the introduced delays. We present the first deterministic online algorithm for this problem. Its competitive ratio is $O(m^{\log_2 5.5}) = O(m^{2.46})$, where $2m$ is the number of requests. This is polynomial in the number of metric space points if all requests are given at different points. In particular, the bound does not depend on other parameters of the metric, such as its aspect ratio. Unlike previous (randomized) solutions for the MPMD problem, our algorithm does not need to know the metric space in advance.
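
    The cost model is easy to state concretely: if requests $u$ and $v$ arriving at times $t_u, t_v$ are matched at time $t \geq \max(t_u, t_v)$, the pair contributes $d(u,v) + (t - t_u) + (t - t_v)$. The Python sketch below evaluates this objective for a fixed matched schedule on the line metric; it is a toy illustration of the cost model, not the paper's algorithm.

```python
def mpmd_cost(requests, matching):
    """Cost of a matched schedule in MPMD on the line metric.

    requests: dict id -> (arrival_time, position)
    matching: list of (id_a, id_b, match_time) pairs with
              match_time >= both arrival times.
    """
    total = 0.0
    for a, b, t in matching:
        (ta, xa), (tb, xb) = requests[a], requests[b]
        assert t >= max(ta, tb), "cannot match before both requests arrive"
        distance = abs(xa - xb)      # connection cost
        delay = (t - ta) + (t - tb)  # accumulated waiting cost
        total += distance + delay
    return total

reqs = {0: (0.0, 0.0), 1: (1.0, 3.0), 2: (2.0, 0.5), 3: (4.0, 3.5)}
# Match the two nearby pairs as soon as both endpoints are present.
print(mpmd_cost(reqs, [(0, 2, 2.0), (1, 3, 4.0)]))  # -> 6.0
```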