
    Separation dimension of bounded degree graphs

    The 'separation dimension' of a graph $G$ is the smallest natural number $k$ for which the vertices of $G$ can be embedded in $\mathbb{R}^k$ such that any pair of disjoint edges in $G$ can be separated by a hyperplane normal to one of the axes. Equivalently, it is the smallest possible cardinality of a family $\mathcal{F}$ of total orders of the vertices of $G$ such that for any two disjoint edges of $G$, there exists at least one total order in $\mathcal{F}$ in which all the vertices in one edge precede those in the other. In general, the maximum separation dimension of a graph on $n$ vertices is $\Theta(\log n)$. In this article, we focus on bounded degree graphs and show that the separation dimension of a graph with maximum degree $d$ is at most $2^{9\log^{\star} d} d$. We also demonstrate that the above bound is nearly tight by showing that, for every $d$, almost all $d$-regular graphs have separation dimension at least $\lceil d/2 \rceil$.
    Comment: One result proved in this paper is also present in arXiv:1212.675
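    To make the total-order formulation concrete, here is a small Python sketch (ours, not from the paper) that tests whether a candidate family of vertex orders separates every pair of disjoint edges. The 4-cycle example at the end shows that one order is not enough but two are, so its separation dimension is 2.

```python
from itertools import combinations

def separates(order, e, f):
    """True if the total order places every vertex of one edge
    before every vertex of the other."""
    pos = {v: i for i, v in enumerate(order)}
    return (max(pos[v] for v in e) < min(pos[v] for v in f)
            or max(pos[v] for v in f) < min(pos[v] for v in e))

def is_separating_family(orders, edges):
    """True if every pair of vertex-disjoint edges is separated by
    at least one total order in the family."""
    return all(
        any(separates(o, e, f) for o in orders)
        for e, f in combinations(edges, 2)
        if set(e).isdisjoint(f)
    )

# The 4-cycle: a single order fails on one of the two disjoint
# edge pairs, while two orders suffice.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_separating_family([[0, 1, 2, 3]], c4))                # False
print(is_separating_family([[0, 1, 2, 3], [1, 2, 3, 0]], c4))  # True
```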

    Fast Robust PCA on Graphs

    Mining useful clusters from high-dimensional data has received significant attention from the computer vision and pattern recognition community in recent years. Linear and non-linear dimensionality reduction have played an important role in overcoming the curse of dimensionality. However, such methods often suffer from three problems: high computational complexity (usually associated with nuclear norm minimization), non-convexity (for matrix factorization methods), and susceptibility to gross corruptions in the data. In this paper we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high-dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features, by designing a convex optimization problem. The resulting algorithm is fast, efficient, and scalable for huge datasets, with O(n log n) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the choice of model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model recovers approximate low-rank representations with a bounded error for clusterable data.
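    The core idea, a single objective combining a data-fidelity term with two graph-smoothness penalties, can be sketched in a few lines of NumPy. To be clear about what is ours: the paper's model uses an l1 fidelity term (the source of robustness to gross corruption) and a fast solver achieving the stated O(n log n) complexity, whereas this sketch substitutes a Frobenius fit and plain gradient descent for brevity; the name `graph_smooth_pca`, the step size, and the path-graph Laplacians below are illustrative choices, not the authors' method.

```python
import numpy as np

def graph_smooth_pca(X, L_samples, L_features, g1=1.0, g2=1.0,
                     lr=1e-4, n_iter=1000):
    """Minimize  ||X - U||_F^2 + g1*tr(U L_s U^T) + g2*tr(U^T L_f U)
    by gradient descent.  X is (features x samples); L_samples and
    L_features are graph Laplacians on the sample and feature graphs."""
    U = X.copy()
    for _ in range(n_iter):
        grad = (2 * (U - X)
                + 2 * g1 * U @ L_samples
                + 2 * g2 * L_features @ U)
        U -= lr * grad
    return U

def path_laplacian(n):
    """Laplacian of a path graph -- a stand-in for the kNN graphs
    one would build on real samples and features."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

# Toy usage: smooth a random (features x samples) matrix.
X = np.random.randn(20, 50)
U = graph_smooth_pca(X, path_laplacian(50), path_laplacian(20))
```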

    On the number of types in sparse graphs

    We prove that for every class of graphs $\mathcal{C}$ which is nowhere dense, as defined by Nesetril and Ossona de Mendez, and for every first-order formula $\phi(\bar x,\bar y)$, whenever one draws a graph $G\in \mathcal{C}$ and a subset of its nodes $A$, the number of subsets of $A^{|\bar y|}$ which are of the form $\{\bar v\in A^{|\bar y|} \colon G\models\phi(\bar u,\bar v)\}$ for some valuation $\bar u$ of $\bar x$ in $G$ is bounded by $\mathcal{O}(|A|^{|\bar x|+\epsilon})$, for every $\epsilon>0$. This provides optimal bounds on the VC-density of first-order definable set systems in nowhere dense graph classes. We also give two new proofs of upper bounds on quantities in nowhere dense classes which are relevant for their logical treatment. Firstly, we provide a new proof of the fact that nowhere dense classes are uniformly quasi-wide, implying explicit, polynomial upper bounds on the functions relating the two notions. Secondly, we give a new combinatorial proof of the result of Adler and Adler stating that every nowhere dense class of graphs is stable. In contrast to the previous proofs of the above results, our proofs are completely finitistic and constructive, and yield explicit and computable upper bounds on quantities related to uniform quasi-wideness (margins) and stability (ladder indices).
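    For intuition, the simplest instance of the counted quantity takes $\phi(x,y)$ to be the adjacency relation with $|\bar x| = |\bar y| = 1$: one counts the distinct neighbourhood traces that vertices of $G$ leave on $A$. A toy Python illustration (ours, not from the paper):

```python
def count_adjacency_types(adj, A):
    """Count distinct subsets of A of the form {v in A : u adjacent
    to v} over all vertices u.  `adj` maps a vertex to its
    neighbour set."""
    return len({frozenset(adj[u] & A) for u in adj})

# 5-cycle with A = {0, 1, 2}: the realized traces are
# {1}, {0,2}, {2}, {0}, i.e. 4 types -- in line with the
# O(|A|^{1+eps}) bound for |x| = 1.
c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(count_adjacency_types(c5, {0, 1, 2}))  # 4
```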

    Syntactic Separation of Subset Satisfiability Problems

    Variants of the Exponential Time Hypothesis (ETH) have been used to derive lower bounds on the time complexity of certain problems, so that the hardness results match long-standing algorithmic results. In this paper, we consider a syntactically defined class of problems and give conditions for when problems in this class require strongly exponential time to approximate to within a factor of $(1-\epsilon)$ for some constant $\epsilon > 0$, assuming the Gap Exponential Time Hypothesis (Gap-ETH), versus when they admit a PTAS. Our class includes a rich set of problems from additive combinatorics, computational geometry, and graph theory. Our hardness results also match the best known algorithmic results for these problems.
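    For scale, the running times at stake are those of the exhaustive baseline: a problem asking for a best subset of n items admits a trivial 2^n-time scan, and strongly exponential lower bounds of this kind say that no algorithm improves substantially on that scan. A generic sketch of the baseline, with a hypothetical scoring problem that is not one of the paper's:

```python
from itertools import combinations

def best_subset(items, score):
    """Exhaustive baseline: scan all 2^n subsets of `items` and
    return one maximizing `score`, together with its value."""
    best = max(
        (s for r in range(len(items) + 1)
           for s in combinations(items, r)),
        key=score,
    )
    return best, score(best)

# Hypothetical example: pick numbers whose sum is as large as
# possible without exceeding a budget.
nums, budget = [3, 7, 2, 8], 12
print(best_subset(nums, lambda s: sum(s) if sum(s) <= budget else -1))
# ((3, 7, 2), 12)
```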