
    Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk)

    Algorithmic research strives to develop fast algorithms for fundamental problems. Despite its many successes, however, many problems still do not have very efficient algorithms. For years researchers have explained the hardness of key problems by proving NP-hardness, using polynomial-time reductions to base the hardness of key problems on the famous conjecture P != NP. For problems that already have polynomial-time algorithms, however, it does not seem that one can show any sort of hardness based on P != NP. Nevertheless, we would like to provide evidence that a problem A whose O(n^k) running time has not been improved in decades in fact requires n^{k-o(1)} time, thus explaining the lack of progress on the problem. Such unconditional time lower bounds seem very difficult to obtain, unfortunately. Recent work has concentrated on an approach mimicking NP-hardness: (1) select a few key problems that are conjectured to require T(n) time to solve, and (2) use special, fine-grained reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of the key problems. In this abstract we outline the approach, give some examples of hardness results based on the Strong Exponential Time Hypothesis, and present an overview of some of the recent work on the topic.
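
    As a concrete reference point for this approach, the sketch below gives the naive quadratic algorithm for Orthogonal Vectors, one of the key problems in this line of work: given two sets of n Boolean vectors of dimension d, decide whether some pair is orthogonal. This is an illustrative Python baseline (the function name and input format are our own), not code from the talk; the fine-grained conjecture is that no O(n^{2-eps} poly(d)) algorithm exists, which would follow from SETH.

        # Naive Orthogonal Vectors (OV): decide whether some a in A and
        # b in B share no coordinate where both are 1. Runs in O(n^2 * d)
        # time; under SETH, no O(n^{2-eps} * poly(d)) algorithm exists.
        def orthogonal_pair_exists(A, B):
            for a in A:
                for b in B:
                    if all(x * y == 0 for x, y in zip(a, b)):
                        return True
            return False

        # Example: (1, 0, 1) and (0, 1, 0) are orthogonal.
        assert orthogonal_pair_exists([(1, 0, 1)], [(0, 1, 0), (1, 1, 1)])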

    Sixth Biennial Report: August 2001 - May 2003


    The Power and Limitations of Uniform Samples in Testing Properties of Figures

    We investigate testing of properties of 2-dimensional figures that consist of a black object on a white background. Given a parameter epsilon in (0,1/2), a tester for a specified property has to accept with probability at least 2/3 if the input figure satisfies the property and reject with probability at least 2/3 if it does not. In general, property testers can query the color of any point in the input figure. We study the power of testers that get access only to uniform samples from the input figure. We show that for the property of being a half-plane, uniform testers are as powerful as general testers: they require only O(1/epsilon) samples. In contrast, we prove that convexity can be tested with O(1/epsilon) queries by testers that can make queries of their choice, while uniform testers for this property require Omega(1/epsilon^{5/4}) samples. Previously, the fastest known tester for convexity needed Theta(1/epsilon^{4/3}) queries.
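
    To make the sampling model concrete, here is a hedged sketch (not the paper's algorithm) of the natural uniform-sample half-plane tester: draw O(1/epsilon) uniform samples and accept iff the sampled black and white points are consistent with some half-plane, i.e., weakly linearly separable. The constant samples_per_eps and the figure-as-predicate interface are illustrative assumptions.

        import random

        def weakly_separable(black, white):
            # Is there a line with all black points in one closed
            # half-plane and all white points in the other? Brute force
            # over candidate lines through pairs of sample points:
            # O(s^3) for s samples, fine for s = O(1/epsilon).
            if not black or not white:
                return True
            pts = black + white
            for i in range(len(pts)):
                for j in range(i + 1, len(pts)):
                    if pts[i] == pts[j]:
                        continue
                    (x1, y1), (x2, y2) = pts[i], pts[j]
                    def side(p):
                        return (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
                    b = [side(p) for p in black]
                    w = [side(p) for p in white]
                    if (min(b) >= 0 and max(w) <= 0) or (max(b) <= 0 and min(w) >= 0):
                        return True
            return False

        def uniform_halfplane_tester(figure, epsilon, samples_per_eps=10):
            # figure(x, y) -> True iff the point is black; samples_per_eps
            # is a hypothetical constant, not taken from the paper.
            s = int(samples_per_eps / epsilon)
            pts = [(random.random(), random.random()) for _ in range(s)]
            black = [p for p in pts if figure(*p)]
            white = [p for p in pts if not figure(*p)]
            return weakly_separable(black, white)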

    On Problems Equivalent to (min,+)-Convolution

    In recent years, significant progress has been made in explaining the apparent hardness of improving over naive solutions for many fundamental polynomially solvable problems. This has come in the form of conditional lower bounds -- reductions from problems assumed to be hard, including 3SUM, All-Pairs Shortest Paths, SAT, and Orthogonal Vectors. In the (min,+)-convolution problem, the goal is to compute a sequence c, where c[k] = min_i a[i]+b[k-i], given sequences a and b. This can easily be done in O(n^2) time, but no O(n^{2-eps}) algorithm is known for any eps > 0. In this paper we undertake a systematic study of the (min,+)-convolution problem as a hardness assumption. As a first step, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. (min,+)-convolution has been used as a building block in algorithms for many problems, notably problems in stringology. It has also already appeared as an ad hoc hardness assumption. We investigate some of these connections and provide new reductions and other results.
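
    The O(n^2) baseline follows directly from the definition above; a minimal Python sketch (ours, not the paper's), assuming a and b have equal length n:

        def min_plus_convolution(a, b):
            # Naive (min,+)-convolution: c[k] = min over i of a[i] + b[k-i],
            # with a and b indexed 0..n-1 and c of length 2n-1. O(n^2) time;
            # no O(n^{2-eps}) algorithm is known.
            n = len(a)
            return [min(a[i] + b[k - i] for i in range(n) if 0 <= k - i < n)
                    for k in range(2 * n - 1)]

        # Example: for a = [0, 2, 5] and b = [0, 1, 4],
        # c[2] = min(0+4, 2+1, 5+0) = 3.
        assert min_plus_convolution([0, 2, 5], [0, 1, 4])[2] == 3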

    Improved Explicit Data Structures in the Bit-Probe Model Using Error-Correcting Codes

    We consider the bit-probe complexity of the set membership problem: represent an n-element subset S of an m-element universe as a succinct bit vector so that membership queries of the form "Is x in S?" can be answered using at most t probes into the bit vector. Let s(m,n,t) (resp. s_N(m,n,t)) denote the minimum number of bits of storage needed when the probes are adaptive (resp. non-adaptive). Lewenstein, Munro, Nicholson, and Raman (ESA 2014) obtain fully-explicit schemes showing that s(m,n,t) = O((2^t-1) m^{1/(t - min{2 ceil(log n), n - 3/2})}) for n >= 2 and t >= ceil(log n) + 1. In this work, we improve this bound when the number of probes is allowed to be superlinear in n, i.e., when t = Omega(n log n) and n >= 2: we design fully-explicit schemes showing that s(m,n,t) = O((2^t-1) m^{1/(t - (n-1)/2^{t/(2(n-1))})}), which for constant n is asymptotically (in the exponent of m) close to the non-explicit upper bound on s(m,n,t) derived by Radhakrishnan, Shah, and Shannigrahi (ESA 2010). In the non-adaptive setting, Garg and Radhakrishnan (STACS 2017) showed that for some large constant n_0 and all n >= n_0, s_N(m,n,3) >= sqrt(mn). We improve this result by showing that the same lower bound holds even for storing sets of size 2, i.e., s_N(m,2,3) = Omega(sqrt(m)).
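
    To ground the parameters s and t, here is a hedged baseline (ours, for intuition only): the trivial characteristic-bit-vector scheme answers every query with t = 1 probe but uses s = m bits of storage; the schemes studied above spend more probes to bring the storage down to a small power of m.

        def build_char_vector(m, S):
            # Trivial membership scheme: store the characteristic bit
            # vector of S (s = m bits); each query costs t = 1 probe.
            bits = [0] * m
            for x in S:
                bits[x] = 1
            return bits

        def member(bits, x):
            # A single probe answers "Is x in S?".
            return bits[x] == 1

        store = build_char_vector(10, {2, 3, 7})
        assert member(store, 3) and not member(store, 5)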