
    A strategy for searching with different access costs

    Abstract: Let us consider an ordered set of keys A = {a_1 < ⋯ < a_n}, where the probability of searching a_i is 1/n, for i = 1, …, n. If the cost of testing each key is similar, then standard binary search is the strategy with minimum expected access cost. However, if the cost of testing a_i is c_i, for i = 1, …, n, then standard binary search is not necessarily the best strategy. In this paper, we prove that the expected access cost of an optimal search strategy is bounded above by 4C ln(n+1)/n, where C = ∑_{i=1}^{n} c_i. Furthermore, we show that this upper bound is asymptotically tight up to constant factors. The proof of this upper bound is constructive and yields a 4 ln(n+1)-approximation algorithm for constructing near-optimal search strategies. This algorithm runs in O(n^2) time and requires O(n) space, which can be useful in practice, since the best known exact algorithm for this problem runs in O(n^3) time and requires O(n^2) space.
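
    To make the objective concrete, here is a minimal sketch (illustrative only, not the paper's approximation algorithm): a search strategy is modelled as a binary decision tree over the key indices, a lookup of a_i pays the cost of every key tested on its root-to-node path, and the expected cost under uniform probabilities is compared against the 4C ln(n+1)/n bound. The tree representation and function names are assumptions made for the example.

```python
import math

def standard_bst(lo, hi):
    """Balanced 'standard binary search' tree over key indices lo..hi (inclusive)."""
    if lo > hi:
        return None
    mid = (lo + hi) // 2
    return (mid, standard_bst(lo, mid - 1), standard_bst(mid + 1, hi))

def expected_cost(tree, costs):
    """Expected access cost under uniform search probabilities 1/n."""
    n = len(costs)
    total = 0.0

    def walk(node, path_cost):
        nonlocal total
        if node is None:
            return
        i, left, right = node
        here = path_cost + costs[i]   # key a_i is tested at this node
        total += here                 # contribution of a search that targets a_i
        walk(left, here)
        walk(right, here)

    walk(tree, 0.0)
    return total / n

if __name__ == "__main__":
    costs = [1, 8, 1, 1, 8, 1, 1]             # non-uniform testing costs c_1..c_n
    n, C = len(costs), sum(costs)
    tree = standard_bst(0, n - 1)
    print("standard binary search:", expected_cost(tree, costs))
    print("upper bound 4C ln(n+1)/n:", 4 * C * math.log(n + 1) / n)
```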

    A randomized competitive algorithm for evaluating priced AND/OR trees

    Abstract: Recently, Charikar et al. investigated the problem of evaluating AND/OR trees with non-uniform costs on their leaves from the perspective of competitive analysis. For an AND/OR tree T they presented a μ(T)-competitive deterministic polynomial-time algorithm, where μ(T) is the number of leaves that must be read, in the worst case, in order to determine the value of T. Furthermore, they proved that μ(T) is a lower bound on the deterministic competitiveness, which establishes the optimality of their algorithm. The power of randomization in this context has remained an open question. Here, we take a step towards solving this problem by presenting a (5/6)μ(T)-competitive randomized polynomial-time algorithm. This contrasts with the best known lower bound of μ(T)/2.
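
    The following sketch illustrates the cost model only; it implements neither Charikar et al.'s deterministic algorithm nor the randomized algorithm discussed above. It evaluates a priced AND/OR tree with a naive left-to-right short-circuit strategy, computes the cheapest certificate an omniscient algorithm would have paid for, and reports the resulting ratio on that instance. The tree encoding and function names are assumptions made for the example.

```python
# A leaf is ("LEAF", value, cost); internal nodes are ("AND", children) / ("OR", children).

def naive_eval(node):
    """Short-circuit evaluation, returning (value, total cost of leaves read)."""
    kind = node[0]
    if kind == "LEAF":
        return node[1], node[2]
    value_stop = (kind == "OR")          # OR stops at the first True, AND at the first False
    spent = 0
    for child in node[1]:
        v, c = naive_eval(child)
        spent += c
        if v == value_stop:
            return value_stop, spent
    return not value_stop, spent

def cheapest_proof(node):
    """Minimum total leaf cost that certifies the tree's value (offline optimum)."""
    kind = node[0]
    if kind == "LEAF":
        return node[1], node[2]
    results = [cheapest_proof(child) for child in node[1]]
    value = any(v for v, _ in results) if kind == "OR" else all(v for v, _ in results)
    if (kind == "OR") == value:
        # a single child of the deciding value suffices: pick the cheapest such child
        return value, min(c for v, c in results if v == value)
    # otherwise every child must be certified
    return value, sum(c for _, c in results)

if __name__ == "__main__":
    tree = ("AND", [("OR", [("LEAF", False, 5), ("LEAF", True, 1)]),
                    ("LEAF", True, 2)])
    v1, paid = naive_eval(tree)
    v2, opt = cheapest_proof(tree)
    assert v1 == v2
    print(f"value={v1}, naive cost={paid}, cheapest proof={opt}, ratio={paid/opt:.2f}")
```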

    On the star decomposition of a graph: Hardness results and approximation for the max–min optimization problem

    We study the problem of decomposing a graph into stars so that the minimum size of a star in the decomposition is as large as possible. Problems of graph decomposition have been actively investigated since the 1970s. The question we consider here also combines the structure of a facility location problem (choosing the centres of the stars) with a max–min fairness optimization criterion that has recently received attention in resource allocation problems, e.g., the Santa Claus problem. We focus on computational and algorithmic questions: we show that the problem is hard even for planar graphs of maximum degree at most four, and already for decompositions into stars of size at least three. We tightly characterize the boundary between efficiently solvable instances and hard ones: relaxing either of the conditions in our hardness result (the minimum size of the stars or the degree of the input graph) makes the problem polynomially solvable. Our complexity result also implies APX-hardness of the problem, ruling out any approximation guarantee better than 2/3. We complement this inapproximability result with a 1/2-approximation algorithm. Finally, we give a polynomial-time algorithm for trees. A nice property of our algorithms is that they can all be implemented to run in time linear in the size of the input graph.
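
    As an illustration of the max–min objective, the sketch below checks a candidate decomposition and reports the size of its smallest star. It assumes, purely for the example, that a star decomposition partitions the vertex set into stars (a chosen centre plus leaves adjacent to it) and that the size of a star is its number of vertices; the paper's exact model is not restated in the abstract.

```python
def min_star_size(adj, stars):
    """adj: dict vertex -> set of neighbours; stars: list of (centre, [leaves])."""
    covered = set()
    for centre, leaves in stars:
        for leaf in leaves:
            if leaf not in adj[centre]:
                raise ValueError(f"{leaf} is not adjacent to centre {centre}")
        part = {centre, *leaves}
        if part & covered:
            raise ValueError("stars must be vertex-disjoint")
        covered |= part
    if covered != set(adj):
        raise ValueError("decomposition must cover every vertex")
    return min(1 + len(leaves) for _, leaves in stars)   # objective to maximise

if __name__ == "__main__":
    # A path on six vertices: 0-1-2-3-4-5
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    print(min_star_size(adj, [(1, [0, 2]), (4, [3, 5])]))   # -> 3
```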

    Information Theoretical Clustering Is Hard to Approximate

    An impurity measure I : R^d_+ → R_+ is a function that assigns to a d-dimensional vector v a non-negative value I(v) so that the more homogeneous v is, with respect to the values of its coordinates, the larger its impurity. A well-known example of an impurity measure is the entropy impurity. We study the problem of clustering based on the entropy impurity measure. Let V be a collection of n d-dimensional vectors with non-negative components. Given V and an impurity measure I, the goal is to find a partition P of V into k groups V_1, …, V_k so as to minimize the sum of the impurities of the groups in P, i.e., I(P) = ∑_{i=1}^{k} I(∑_{v ∈ V_i} v). Impurity minimization has been widely used as a quality assessment measure in probability distribution clustering (KL-divergence) as well as in categorical clustering. However, in contrast to the case of metric-based clustering, the current knowledge of impurity-measure-based clustering in terms of approximation and inapproximability results is very limited. Here, we contribute to changing this scenario by proving that the problem of finding a clustering that minimizes the entropy impurity measure is APX-hard, i.e., there exists a constant ε > 0 such that no polynomial-time algorithm can guarantee a (1+ε)-approximation under the standard complexity hypothesis P ≠ NP. The inapproximability holds even when all vectors have the same ℓ1 norm. This result provides theoretical limitations on the computational efficiency achievable in the quantization of discrete memoryless channels, a problem that has recently attracted significant attention in the signal processing community. In addition, it also solves a question that remained open in previous work on this topic [Chaudhuri and McGregor, COLT 08; Ackermann et al., ECCC 11].
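
    To make the objective concrete, the sketch below computes I(P) for a given partition, taking the entropy impurity of a non-negative vector u in its standard form I(u) = ∑_j u_j ln(‖u‖_1 / u_j), i.e., ‖u‖_1 times the entropy of the normalised vector; since the abstract does not restate the paper's exact definition, treat this choice, and all names below, as assumptions made for the example.

```python
import math

def entropy_impurity(u):
    """Entropy impurity of a non-negative vector: sum_j u_j * ln(||u||_1 / u_j)."""
    total = sum(u)
    return sum(x * math.log(total / x) for x in u if x > 0)

def clustering_cost(groups):
    """groups: list of groups, each a list of non-negative d-dimensional vectors."""
    cost = 0.0
    for group in groups:
        d = len(group[0])
        merged = [sum(v[j] for v in group) for j in range(d)]  # sum_{v in V_i} v
        cost += entropy_impurity(merged)
    return cost

if __name__ == "__main__":
    V = [[9, 1], [8, 2], [1, 9], [2, 8]]         # four 2-dimensional vectors
    mixed = [[V[0], V[2]], [V[1], V[3]]]          # each group mixes dissimilar vectors
    separated = [[V[0], V[1]], [V[2], V[3]]]      # each group keeps similar vectors together
    print(clustering_cost(mixed))      # higher impurity
    print(clustering_cost(separated))  # lower impurity
```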