
    Towards a better approximation for sparsest cut?

    We give a new $(1+\epsilon)$-approximation for the sparsest cut problem on graphs where small sets expand significantly more than the sparsest cut (sets of size $n/r$ expand by a factor $\sqrt{\log n\log r}$ larger, for some small $r$; this condition holds for many natural graph families). We give two different algorithms. One involves Guruswami-Sinop rounding on the level-$r$ Lasserre relaxation. The other is combinatorial and involves a new notion called {\em Small Set Expander Flows} (inspired by the {\em expander flows} of ARV), which we show exist in the input graph. Both algorithms run in time $2^{O(r)}\,\mathrm{poly}(n)$. We also show similar approximation algorithms for graphs of genus $g$ with an analogous local expansion condition. This is the first algorithm we know of that achieves a $(1+\epsilon)$-approximation on such a general family of graphs.
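    As a concrete reference point (an illustrative sketch only, not the paper's Lasserre-rounding or expander-flow algorithms), the snippet below computes the uniform sparsest-cut objective and the edge expansion of a candidate set by brute force on a toy graph; the graph and all helper names are assumptions made for illustration.

```python
# Sketch: sparsest-cut objective and edge expansion on a tiny undirected graph.
# Brute force only; real algorithms (e.g. the ones in the abstract) avoid this.
from itertools import combinations

def cut_edges(edges, S):
    # Number of edges with exactly one endpoint in S.
    return sum(1 for u, v in edges if (u in S) != (v in S))

def sparsity(edges, n, S):
    # Uniform sparsest-cut objective: |E(S, V\S)| / (|S| * |V\S|).
    return cut_edges(edges, S) / (len(S) * (n - len(S)))

def expansion(edges, n, S):
    # Edge expansion of S: |E(S, V\S)| / min(|S|, |V\S|).
    return cut_edges(edges, S) / min(len(S), n - len(S))

if __name__ == "__main__":
    # Toy graph: two triangles joined by a single bridge edge (vertices 0..5).
    n = 6
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    # Search over subsets of size at most n // 2 (feasible only for tiny graphs).
    candidates = (frozenset(S) for k in range(1, n // 2 + 1)
                  for S in combinations(range(n), k))
    best = min(candidates, key=lambda S: sparsity(edges, n, S))
    print(sorted(best), sparsity(edges, n, best), expansion(edges, n, best))
```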

    On non-linear network embedding methods

    As a linear method, spectral clustering is the only network embedding algorithm that offers both a provably fast computation and an advanced theoretical understanding. The accuracy of spectral clustering depends on the Cheeger ratio, defined as the ratio between the graph conductance and the 2nd smallest eigenvalue of its normalized Laplacian. In several graph families whose Cheeger ratio reaches its upper bound of $\Theta(n)$, spectral clustering provably performs poorly. Moreover, recent non-linear network embedding methods have surpassed spectral clustering, achieving state-of-the-art performance with little to no theoretical understanding to back them. The dissertation includes work that: (1) extends the theory of spectral clustering in order to address its weakness and provide ground for a theoretical understanding of existing non-linear network embedding methods; (2) provides non-linear extensions of spectral clustering with theoretical guarantees, e.g., via different spectral modification algorithms; (3) demonstrates the potential of this approach on different types and sizes of graphs from industrial applications; and (4) makes a theory-informed use of artificial networks.
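    For concreteness (an illustrative sketch, not the dissertation's methods), the snippet below runs plain spectral partitioning on a toy adjacency matrix: it builds the normalized Laplacian, takes the second eigenvector, splits on its sign, and reports the conductance of the resulting cut together with its ratio to the second eigenvalue, i.e., the Cheeger ratio discussed above. The example graph and all helper names are assumptions made for illustration.

```python
# Sketch: spectral partitioning and the conductance / lambda_2 ratio.
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2}; assumes no isolated vertices.
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def conductance(A, S):
    # phi(S) = cut(S, V\S) / min(vol(S), vol(V\S)).
    mask = np.zeros(len(A), dtype=bool)
    mask[list(S)] = True
    cut = A[mask][:, ~mask].sum()
    return cut / min(A[mask].sum(), A[~mask].sum())

if __name__ == "__main__":
    # Toy graph: two triangles joined by a single bridge edge.
    A = np.zeros((6, 6))
    for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        A[u, v] = A[v, u] = 1.0
    eigvals, eigvecs = np.linalg.eigh(normalized_laplacian(A))
    lam2, fiedler = eigvals[1], eigvecs[:, 1]      # 2nd smallest eigenpair
    S = set(np.flatnonzero(fiedler < 0))           # sign split on the eigenvector
    phi = conductance(A, S)
    print(f"lambda_2 = {lam2:.3f}, conductance = {phi:.3f}, ratio = {phi / lam2:.3f}")
```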

    Pythagorean powers of hypercubes

    For $n\in \mathbb{N}$ consider the $n$-dimensional hypercube as equal to the vector space $\mathbb{F}_2^n$, where $\mathbb{F}_2$ is the field of size two. Endow $\mathbb{F}_2^n$ with the Hamming metric, i.e., with the metric induced by the $\ell_1^n$ norm when one identifies $\mathbb{F}_2^n$ with $\{0,1\}^n\subseteq \mathbb{R}^n$. Denote by $\ell_2^n(\mathbb{F}_2^n)$ the $n$-fold Pythagorean product of $\mathbb{F}_2^n$, i.e., the space of all $x=(x_1,\ldots,x_n)\in \prod_{j=1}^n \mathbb{F}_2^n$, equipped with the metric
    \[
    \forall\, x,y\in \prod_{j=1}^n \mathbb{F}_2^n,\qquad d_{\ell_2^n(\mathbb{F}_2^n)}(x,y)= \sqrt{\|x_1-y_1\|_1^2+\ldots+\|x_n-y_n\|_1^2}.
    \]
    It is shown here that the bi-Lipschitz distortion of any embedding of $\ell_2^n(\mathbb{F}_2^n)$ into $L_1$ is at least a constant multiple of $\sqrt{n}$. This is achieved through the following new bi-Lipschitz invariant, which is a metric version of (a slight variant of) a linear inequality of Kwapień and Schütt (1989). Letting $\{e_{jk}\}_{j,k\in \{1,\ldots,n\}}$ denote the standard basis of the space of all $n$ by $n$ matrices $M_n(\mathbb{F}_2)$, say that a metric space $(X,d_X)$ is a KS space if there exists $C=C(X)>0$ such that for every $n\in 2\mathbb{N}$, every mapping $f:M_n(\mathbb{F}_2)\to X$ satisfies
    \[
    \frac{1}{n}\sum_{j=1}^n\mathbb{E}\left[d_X\Big(f\Big(x+\sum_{k=1}^n e_{jk}\Big),f(x)\Big)\right]\le C\, \mathbb{E}\left[d_X\Big(f\Big(x+\sum_{j=1}^n e_{jk_j}\Big),f(x)\Big)\right],
    \]
    where the expectations above are with respect to $x\in M_n(\mathbb{F}_2)$ and $k=(k_1,\ldots,k_n)\in \{1,\ldots,n\}^n$ chosen uniformly at random. It is shown here that $L_1$ is a KS space (with $C= 2e^2/(e^2-1)$, which is best possible), implying the above nonembeddability statement. Links to the Ribe program are discussed, as well as related open problems.
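    For reference, the distortion appearing in the lower bound above is the standard bi-Lipschitz distortion; the block below just restates that general definition (it is not specific to this paper).

```latex
% Standard bi-Lipschitz distortion, stated for reference: a map f from a metric
% space (X, d_X) into L_1 has distortion at most D if there is a scaling s > 0 with
\[
  s\, d_X(x,y) \;\le\; \|f(x)-f(y)\|_{L_1} \;\le\; D\, s\, d_X(x,y)
  \qquad \text{for every } x, y \in X .
\]
% The theorem above says that for X = \ell_2^n(\mathbb{F}_2^n) every such f has
% D at least a constant multiple of \sqrt{n}.
```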

    Algorithms for partitioning well-clustered graphs
