
    Fast Routing Table Construction Using Small Messages

    We describe a distributed randomized algorithm that computes approximate distances and routes approximating shortest paths. Let n denote the number of nodes in the graph, and let HD denote its hop diameter, i.e., the diameter of the graph when all edges are taken to have unit weight. Given 0 < eps <= 1/2, our algorithm runs in weak-O(n^(1/2 + eps) + HD) communication rounds using messages of O(log n) bits, and guarantees a stretch of O(eps^(-1) log eps^(-1)) with high probability. This is the first distributed algorithm approximating weighted shortest paths that uses small messages and runs in weak-o(n) time (in graphs where HD is in weak-o(n)). The time complexity nearly matches the weak-Omega(sqrt(n) + HD) lower bounds in the small-messages model that hold both for stateless routing (where routing decisions do not depend on the traversed path) and for approximating the weighted diameter. Our scheme replaces the original node identifiers with labels of size O(log eps^(-1) log n). We show that no algorithm that keeps the original identifiers and runs for weak-o(n) rounds can achieve a polylogarithmic approximation ratio. Variations of our techniques yield a number of fast distributed approximation algorithms for related problems using small messages. Specifically, we present algorithms that run in weak-O(n^(1/2 + eps) + HD) rounds for a given 0 < eps <= 1/2 and solve, with high probability, the following problems:
    - O(eps^(-1))-approximation for Generalized Steiner Forest (the running time in this case has an additive weak-O(t^(1 + 2 eps)) term, where t is the number of terminals);
    - O(eps^(-2))-approximation of weighted distances, using node labels of size O(eps^(-1) log n) and weak-O(n^(eps)) bits of memory per node;
    - O(eps^(-1))-approximation of the weighted diameter;
    - O(eps^(-3))-approximate shortest paths using the labels 1, ..., n.
    Comment: 40 pages, 2 figures, extended abstract submitted to STOC'1
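    To make the time/stretch tradeoff concrete, the stated bounds can be instantiated for specific values of eps. This is only a worked reading of the abstract's formulas, writing its weak-O as the usual soft-O notation Õ that hides polylogarithmic factors (an interpretation we assume); the particular choices eps = 1/4 and eps = 1/log n are our own illustration, not additional results from the paper.

```latex
% Round complexity and stretch as stated in the abstract,
% instantiated for two sample values of eps (requires amsmath).
\[
  \text{rounds} \;=\; \tilde{O}\!\left(n^{1/2+\varepsilon} + \mathrm{HD}\right),
  \qquad
  \text{stretch} \;=\; O\!\left(\varepsilon^{-1}\log\varepsilon^{-1}\right),
  \qquad 0 < \varepsilon \le \tfrac{1}{2}.
\]
\[
  \varepsilon = \tfrac{1}{4}:\;
  \tilde{O}\!\left(n^{3/4} + \mathrm{HD}\right)\text{ rounds, } O(1)\text{ stretch};
  \qquad
  \varepsilon = \tfrac{1}{\log n}:\;
  \tilde{O}\!\left(\sqrt{n} + \mathrm{HD}\right)\text{ rounds }
  \bigl(\text{since } n^{1/\log n} = O(1)\bigr),\;
  O(\log n \,\log\log n)\text{ stretch.}
\]
```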

    Semi-Supervised Sound Source Localization Based on Manifold Regularization

    Conventional speaker localization algorithms, based merely on the received microphone signals, are often sensitive to adverse conditions such as high reverberation or a low signal-to-noise ratio (SNR). In some scenarios, e.g., in meeting rooms or cars, it can be assumed that the source position is confined to a predefined area and that the acoustic parameters of the environment are approximately fixed. Such scenarios give rise to the assumption that the acoustic samples from the region of interest have a distinct geometrical structure. In this paper, we show that the high-dimensional acoustic samples indeed lie on a low-dimensional manifold and can be embedded into a low-dimensional space. Motivated by this result, we propose a semi-supervised source localization algorithm that recovers the inverse mapping between the acoustic samples and their corresponding locations. The idea is to use an optimization framework based on manifold regularization, which imposes smoothness constraints on possible solutions with respect to the manifold. The proposed algorithm, termed Manifold Regularization for Localization (MRL), is implemented in an adaptive manner: the initialization is carried out with only a few labelled samples, each attached to its source location, and the system is then gradually adapted as new unlabelled samples (with unknown source locations) are received. Experimental results show superior localization performance compared with a recently presented algorithm based on a manifold-learning approach and with the generalized cross-correlation (GCC) algorithm as a baseline.
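    The abstract does not spell out the MRL optimization, but its underlying building block, semi-supervised regression with a graph-Laplacian (manifold) smoothness penalty, can be sketched. The following is a minimal batch sketch under our own assumptions (a Gaussian k-nearest-neighbour affinity graph over generic acoustic feature vectors, an unnormalised Laplacian, and hypothetical function and parameter names); it is not the authors' adaptive MRL implementation.

```python
import numpy as np

def knn_affinity(X, sigma=1.0, k=10):
    # Gaussian affinity over acoustic feature vectors, sparsified to the
    # k strongest edges per node and symmetrised.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    weakest = np.argsort(W, axis=1)[:, :-k]   # indices of all but the k largest
    for i, row in enumerate(weakest):
        W[i, row] = 0.0
    return np.maximum(W, W.T)

def laplacian_regression(X, y_labeled, n_labeled, lam=1.0, sigma=1.0):
    """Semi-supervised localization via manifold (graph-Laplacian) regularization.

    Minimises  sum_{i < n_labeled} ||f_i - y_i||^2  +  lam * tr(f^T L f),
    whose closed-form solution is  (S + lam * L) f = S y,
    where S selects the labelled samples.
    """
    n = X.shape[0]
    W = knn_affinity(X, sigma=sigma)
    L = np.diag(W.sum(axis=1)) - W                      # unnormalised graph Laplacian
    S = np.zeros((n, n))
    S[np.arange(n_labeled), np.arange(n_labeled)] = 1.0  # label-selection matrix
    y = np.zeros((n, y_labeled.shape[1]))
    y[:n_labeled] = y_labeled                            # known source positions
    f = np.linalg.solve(S + lam * L + 1e-9 * np.eye(n), S @ y)
    return f                                             # position estimates for all samples
```

    In the paper's adaptive setting the estimate would be updated as each new unlabelled sample arrives; here everything is solved in a single batch for clarity.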

    A vector quantization approach to universal noiseless coding and quantization

    A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first-stage code can be regarded as a vector quantizer that “quantizes” the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2) n^(-1) log n when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^(-1)) when the universe of sources is countable, and as O(n^(-1 + eps)) when the universe of sources is infinite-dimensional, under appropriate conditions.
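    As a concrete, if simplified, illustration of the two-stage design loop described above, here is a sketch of a generalized-Lloyd-style iteration for a small collection of fixed-rate scalar codebooks. The induced rate term is omitted from the first-stage cost, the data are scalar samples rather than image blocks, and all names and parameters are our own; this is not the variable-rate design applied to the medical-image source in the paper.

```python
import numpy as np

def quantize(block, codebook):
    """Second stage: nearest-codeword (fixed-rate) quantization of each sample."""
    idx = np.argmin((block[:, None] - codebook[None, :]) ** 2, axis=1)
    distortion = np.sum((block - codebook[idx]) ** 2)
    return idx, distortion

def two_stage_lloyd(blocks, codebooks, iters=20):
    """Generalized-Lloyd-style design of a two-stage code (illustrative sketch).

    First stage: assign each length-n block to the codebook with the smallest
    induced distortion (the first stage acts as a vector quantizer over block
    codes).  Update step: run a Lloyd centroid update for each codebook on the
    blocks assigned to it.  The induced rate term is omitted for brevity.
    """
    assign = [0] * len(blocks)
    for _ in range(iters):
        # First-stage assignment: pick the best codebook for every block.
        assign = [int(np.argmin([quantize(b, cb)[1] for cb in codebooks]))
                  for b in blocks]
        # Codebook update: Lloyd centroid step within each first-stage cell.
        for j, cb in enumerate(codebooks):
            members = [blocks[i] for i, a in enumerate(assign) if a == j]
            if not members:
                continue
            data = np.concatenate(members)
            idx, _ = quantize(data, cb)
            for c in range(len(cb)):
                if np.any(idx == c):
                    cb[c] = data[idx == c].mean()
    return codebooks, assign

# Toy usage: blocks drawn from two regimes, one 8-codeword codebook per regime.
rng = np.random.default_rng(0)
blocks = [rng.normal(0, 1, 64) for _ in range(50)] + [rng.normal(5, 1, 64) for _ in range(50)]
codebooks = [rng.normal(0, 1, 8), rng.normal(5, 1, 8)]
codebooks, assign = two_stage_lloyd(blocks, codebooks)
```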