    On r-Simple k-Path

    An r-simple k-path is a path of length k in a graph that passes through each vertex at most r times. The r-SIMPLE k-PATH problem, given a graph G as input, asks whether there exists an r-simple k-path in G. We first show that this problem is NP-complete. We then show that there is a graph G that contains an r-simple k-path but no simple path of length greater than 4 log k / log r; this, in a sense, motivates the problem, especially when the goal is to find a short path that visits many vertices while bounding the number of visits to each vertex. We then give a randomized algorithm with one-sided error that runs in time poly(n) * 2^{O(k log r / r)} and solves r-SIMPLE k-PATH on a graph with n vertices. We also show that a randomized algorithm with running time poly(n) * 2^{(c/2)k/r} with c < 1 would give a randomized algorithm with running time poly(n) * 2^{cn} for the Hamiltonian path problem in a directed graph, an outstanding open problem. So, in a sense, our algorithm is optimal up to an O(log r) factor.
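
    As a small illustration of the definition above (not the paper's algorithm), the following Python sketch checks whether a given walk in a graph is an r-simple k-path; the adjacency-set graph representation and the function name is_r_simple_k_path are assumptions made for the example.

        from collections import Counter

        def is_r_simple_k_path(adj, walk, r, k):
            """Check whether `walk` (a list of vertices) is an r-simple k-path:
            it has length k (i.e. k edges), consecutive vertices are adjacent in
            `adj`, and no vertex is visited more than r times."""
            if len(walk) != k + 1:                 # a walk of length k uses k+1 (not necessarily distinct) vertices
                return False
            if any(v not in adj.get(u, set()) for u, v in zip(walk, walk[1:])):
                return False                       # some consecutive pair is not an edge
            return max(Counter(walk).values()) <= r

        # Toy usage on a directed triangle: the walk 0 -> 1 -> 2 -> 0 -> 1 is a 2-simple 4-path.
        adj = {0: {1}, 1: {2}, 2: {0}}
        print(is_r_simple_k_path(adj, [0, 1, 2, 0, 1], r=2, k=4))   # True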

    Symmetries in algebraic Property Testing

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 94-100). Modern computational tasks often involve large amounts of data, and efficiency is a very desirable feature of such algorithms. Local algorithms are especially attractive, since they can imply global properties by only inspecting a small window into the data. In Property Testing, a local algorithm should perform the task of distinguishing objects satisfying a given property from objects that require many modifications in order to satisfy the property. A special place in Property Testing is held by algebraic properties: they were among the first properties to be tested, and have been heavily used in the PCP and LTC literature. We focus on conditions under which algebraic properties are testable, following the general goal of providing a more unified treatment of these properties. In particular, we explore the notion of symmetry in relation to testing, a direction initiated by Kaufman and Sudan. We investigate the interplay between local testing, symmetry and dual structure in linear codes, by showing both positive and negative results. On the negative side, we exhibit a counterexample to a conjecture proposed by Alon, Kaufman, Krivelevich, Litsyn, and Ron aimed at providing general sufficient conditions for testing. We show that a single codeword of small weight in the dual family, together with the property of being invariant under a 2-transitive group of permutations, does not necessarily imply testability. On the positive side, we exhibit a large class of codes whose duals possess a strong structural property ('the single orbit property'): namely, they can be specified by a single codeword of small weight and the group of invariances of the code. Hence we show that sparsity and invariance under the affine group of permutations are sufficient conditions for a notion of very structured testing. These findings also reveal a new characterization of the extensively studied BCH codes. As a by-product, we obtain a more explicit description of structured tests for the special family of BCH codes of design distance 5. by Elena Grigorescu. Ph.D.
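
    To make the 'single orbit property' concrete, here is a small Python sketch (an illustration, not a construction from the thesis) that generates the orbit of one small-weight word under the affine permutations x -> a*x + b of a prime field F_p; the field size and the seed word are arbitrary choices for the example.

        def affine_orbit(word, p):
            """Return the set of images of a length-p word (indexed by the elements
            of F_p) under all affine permutations x -> a*x + b with a != 0.  A dual
            code with the single orbit property is spanned by such an orbit of a
            single small-weight codeword."""
            orbit = set()
            for a in range(1, p):
                for b in range(p):
                    # coordinate x of the permuted word takes the value at a*x + b
                    orbit.add(tuple(word[(a * x + b) % p] for x in range(p)))
            return orbit

        # Toy usage over F_7 with a weight-3 seed word supported on {0, 1, 3}.
        seed = tuple(1 if i in (0, 1, 3) else 0 for i in range(7))
        print(len(affine_orbit(seed, 7)))   # number of distinct words generated from the single seed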

    On Higher-Order Fourier Analysis over Non-Prime Fields

    The celebrated Weil bound for character sums says that for any low-degree polynomial P and any additive character chi, either chi(P) is a constant function or it is distributed close to uniform. The goal of higher-order Fourier analysis is to understand the connection between the algebraic and analytic properties of polynomials (and functions in general) at a more detailed level. For instance, what is the tradeoff between the equidistribution of chi(P) and its "structure"? Previously, most of the work in this area was over fields of prime order. We extend the tools of higher-order Fourier analysis to analyze functions over general finite fields. Let K be a field extension of a prime finite field F_p. Our technical results are: 1. If P: K^n -> K is a polynomial of degree at most d whose bias under some non-trivial additive character chi exceeds |K|^{-s} for some s > 0, then P is a function of O_{d, s}(1) many non-classical polynomials of weight degree < d. The definition of non-classical polynomials over non-prime fields is one of the contributions of this work. 2. Suppose K and F are of bounded order, and let H be an affine subspace of K^n. If P: K^n -> K is a polynomial of degree d that is sufficiently regular, then (P(x): x in H) is distributed almost as uniformly as possible, subject to the constraints imposed by the degree of P. Such a theorem was previously known for H an affine subspace over a prime field. The tools of higher-order Fourier analysis have found use in different areas of computer science, including list decoding, algorithmic decomposition and testing. Using our new results, we revisit some of these areas. (i) For any fixed finite field K, we show that the list decoding radius of the generalized Reed-Muller code over K equals the minimum distance of the code. (ii) For any fixed finite field K, we give a polynomial time algorithm to decide whether a given polynomial P: K^n -> K can be decomposed as a particular composition of lower-degree polynomials. (iii) For any fixed finite field K, we prove that all locally characterized affine-invariant properties of functions f: K^n -> K are testable with one-sided error.
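
    To illustrate the equidistribution quantity discussed above, the following brute-force Python sketch (illustrative only; small n and a prime field are assumed for simplicity) computes the bias E[chi(P(x))] of a polynomial over F_p^n with the standard character chi(a) = exp(2*pi*i*a/p).

        import cmath
        from itertools import product

        def bias(P, p, n):
            """Brute-force bias E_x[chi(P(x))] over F_p^n, with chi(a) = e^{2*pi*i*a/p}.
            P is any Python function taking an n-tuple of residues mod p."""
            total = 0j
            for x in product(range(p), repeat=n):
                total += cmath.exp(2j * cmath.pi * P(x) / p)
            return total / p ** n

        # Toy comparison over F_3^4: a lower-rank quadratic has noticeably larger bias.
        print(abs(bias(lambda x: (x[0] * x[1]) % 3, 3, 4)))                 # ~0.333
        print(abs(bias(lambda x: (x[0] * x[1] + x[2] * x[3]) % 3, 3, 4)))   # ~0.111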

    New higher-order basis functions for curvilinear finite elements

    The first contribution is a fast calculation method for tetrahedral finite element matrices which is applicable to curvilinear geometries and inhomogeneous material properties. The element matrices are obtained at a low computational cost via scaled additions of universal matrices. The proposed technique is more efficient than competing approaches and provides well-defined lower and upper bounds for the required number of matrices. In the case of tetrahedral H(div) elements, a new set of basis functions is proposed for the mixed-order Nédélec space. The distinguishing feature of these functions is a high level of orthogonality, which applies to arbitrary straight-sided tetrahedra. The resulting condition numbers are significantly lower than those of competing bases. The remaining contributions concern hexahedral elements, where a new mixed-order serendipity element is proposed for H(curl)-conforming functions. It allows the construction of a single set of hierarchical basis functions that can also be used to span various other finite element spaces. Therefore, it is possible to use different finite element spaces within the same mesh while maintaining conformity. In the curvilinear case, a special yet versatile way of mesh refinement is proposed, along with serendipity basis functions for the interpolation of the geometry. The main advantage of the proposed methods is the resulting algebraic rate of convergence in the H(curl) norm with the least possible number of unknowns.
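
    As a rough illustration of the universal-matrix idea mentioned above (a generic sketch, not the construction developed in the thesis), an element matrix for a straight-sided element can be assembled as a scaled sum of precomputed, geometry-independent matrices; the placeholder matrices and coefficients below are stand-ins invented for the example.

        import numpy as np

        def assemble_element_matrix(universal, coeffs):
            """Scaled addition of universal matrices: A_e = sum_k c_k * U_k.
            `universal` is a (K, N, N) stack of precomputed, geometry-independent
            matrices; `coeffs` holds the K geometry- and material-dependent scalars."""
            return np.tensordot(coeffs, universal, axes=1)

        # Toy usage with 6 placeholder universal matrices for 10 basis functions.
        rng = np.random.default_rng(0)
        U = rng.standard_normal((6, 10, 10))   # stand-ins for the precomputed matrices
        c = rng.standard_normal(6)             # stand-ins for the metric/material coefficients
        print(assemble_element_matrix(U, c).shape)   # (10, 10)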

    Computational Methods for Computer Vision: Minimal Solvers and Convex Relaxations

    Robust fitting of geometric models is a core problem in computer vision. The most common approach is to use a hypothesize-and-test framework, such as RANSAC. In these frameworks, the model is estimated from as few measurements as possible, which minimizes the risk of selecting corrupted measurements. These estimation problems are called minimal problems, and they can often be formulated as systems of polynomial equations. In this thesis we present new methods for building so-called minimal solvers or polynomial solvers, which are specialized code for solving such systems. On several minimal problems we improve on the state of the art with respect to both numerical stability and execution time. In many computer vision problems low-rank matrices naturally occur. The rank can serve as a measure of model complexity, and typically a low rank is desired. Optimization problems containing rank penalties or constraints are in general difficult. Recently, convex relaxations, such as the nuclear norm, have been used to make these problems tractable. In this thesis we present new convex relaxations for rank-based optimization which avoid drawbacks of previous approaches and provide tighter relaxations. We evaluate our methods on a number of real and synthetic datasets and show state-of-the-art results.
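
    As a concrete illustration of the hypothesize-and-test idea (a generic 2D line-fitting example, not one of the minimal solvers developed in the thesis), the sketch below runs a basic RANSAC loop in which every hypothesis is fit from a minimal sample of two points; the threshold and iteration count are arbitrary assumptions.

        import numpy as np

        def ransac_line(points, iters=200, thresh=0.05, seed=0):
            """Fit a 2D line a*x + b*y + c = 0 to `points` (an (N, 2) array) with RANSAC.
            Each hypothesis uses the minimal sample size for a line: two points."""
            rng = np.random.default_rng(seed)
            best_line, best_inliers = None, 0
            for _ in range(iters):
                p, q = points[rng.choice(len(points), size=2, replace=False)]
                a, b = q[1] - p[1], p[0] - q[0]         # normal to the direction q - p
                norm = np.hypot(a, b)
                if norm == 0:                           # degenerate sample, skip
                    continue
                a, b = a / norm, b / norm
                c = -(a * p[0] + b * p[1])
                inliers = np.sum(np.abs(points @ np.array([a, b]) + c) < thresh)
                if inliers > best_inliers:
                    best_line, best_inliers = (a, b, c), inliers
            return best_line, best_inliers

        # Toy usage: noisy points on y = x plus a few gross outliers (corrupted measurements).
        rng = np.random.default_rng(1)
        x = rng.uniform(0, 1, 100)
        pts = np.column_stack([x, x + rng.normal(0, 0.01, 100)])
        pts[:5] = rng.uniform(0, 1, (5, 2))
        print(ransac_line(pts))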

    Every locally characterized affine-invariant property is testable

    Let F = F_p for any fixed prime p >= 2. An affine-invariant property is a property of functions on F^n that is closed under taking affine transformations of the domain. We prove that all affine-invariant properties having local characterizations are testable. In fact, we show a proximity-oblivious test for any such property P, meaning that there is a test that, given an input function f, makes a constant number of queries to f, always accepts if f satisfies P, and rejects with positive probability if the distance between f and P is nonzero. More generally, we show that any affine-invariant property that is closed under taking restrictions to subspaces and has bounded complexity is testable. We also prove that any property that can be described as decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. For example, whether a function is a product of two degree-d polynomials, whether a function splits into a product of d linear polynomials, and whether a function has low rank are all degree-structural properties and are therefore locally characterized. Our results depend on a new Gowers inverse theorem by Tao and Ziegler for low-characteristic fields that decomposes any polynomial with large Gowers norm into a function of low-degree non-classical polynomials. We establish a new equidistribution result for high-rank non-classical polynomials that drives the proofs of both the testability results and the local characterization of degree-structural properties.
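
    For a concrete, classical instance of a proximity-oblivious test for a locally characterized affine-invariant property, consider testing whether f: F_2^n -> F_2 has degree at most 1. The Python sketch below (an illustration, not the construction of this paper) queries f at four correlated points per round and accepts a round iff the local constraint f(x) + f(y) + f(z) + f(x+y+z) = 0 holds.

        import random

        def affine_test(f, n, rounds=50):
            """Repeated proximity-oblivious test for 'degree <= 1' over F_2.
            One round queries f on x, y, z and x^y^z and checks that the four
            values XOR to 0; functions of degree <= 1 always pass a round."""
            def one_round():
                x, y, z = (random.getrandbits(n) for _ in range(3))
                return (f(x) ^ f(y) ^ f(z) ^ f(x ^ y ^ z)) == 0
            return all(one_round() for _ in range(rounds))

        # Toy usage with n = 8: an affine function passes, a quadratic is rejected with high probability.
        affine = lambda x: (bin(x & 0b10110010).count("1") + 1) % 2   # <a, x> + 1 over F_2
        quad = lambda x: (x & 1) & ((x >> 1) & 1)                     # x_0 * x_1
        print(affine_test(affine, 8))   # True
        print(affine_test(quad, 8))     # almost certainly False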

    Property Testing with Online Adversaries

    The online manipulation-resilient testing model, proposed by Kalemaj, Raskhodnikova and Varma (ITCS 2022 and Theory of Computing 2023), studies property testing in situations where access to the input degrades continuously and adversarially. Specifically, after each query made by the tester is answered, the adversary can intervene and either erase or corrupt t data points. In this work, we investigate a more nuanced version of the online model in order to overcome old and new impossibility results for the original model. We start by presenting an optimal tester for linearity and a lower bound for low-degree testing of Boolean functions in the original model. We overcome the lower bound by allowing batch queries, where the tester gets a group of queries answered between manipulations of the data. Our batch size is small enough that the function values for a single batch on their own give no information about whether the function is of low degree. Finally, to overcome the impossibility results of Kalemaj et al. for sortedness and the Lipschitz property of sequences, we extend the model to include t < 1, i.e., adversaries that make fewer than one erasure per query. For sortedness, we characterize the rate of erasures for which online testing can be performed, exhibiting a sharp transition from optimal query complexity to impossibility of testability (with any number of queries). Our online tester works for a general class of local properties of sequences. One feature of our results is that we obtain new (and in some cases, simpler) optimal algorithms for several properties in the standard property testing model. Comment: To be published in the 15th Innovations in Theoretical Computer Science Conference (ITCS 2024).
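
    To make the access model concrete, here is a small Python sketch (a toy illustration, not the testers or adversaries analyzed in the paper) of an oracle wrapper in which, after every answered query, an adversary erases t further stored function values; the random-erasure strategy used here is a placeholder for a worst-case adversary.

        import random

        class OnlineErasureOracle:
            """Oracle access that degrades online: after each answered query,
            the adversary erases t additional points (here chosen at random,
            as a stand-in for a worst-case strategy)."""
            def __init__(self, values, t, seed=0):
                self.values = dict(values)       # point -> function value
                self.erased = set()
                self.t = t
                self.rng = random.Random(seed)

            def query(self, x):
                answer = None if x in self.erased else self.values[x]
                alive = [p for p in self.values if p not in self.erased]
                for p in self.rng.sample(alive, min(self.t, len(alive))):
                    self.erased.add(p)           # the adversary's t erasures per query
                return answer

        # Toy usage: parity on 3 bits with t = 2 erasures per query; later queries increasingly return None.
        f = {x: bin(x).count("1") % 2 for x in range(8)}
        oracle = OnlineErasureOracle(f, t=2)
        print([oracle.query(x) for x in range(8)])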
    • 

    corecore