
Summary: In this lecture we see two nonembeddability results for ℓ1. The first result introduces an example of an ℓ2² metric which does not embed with distortion 16/15 − ε into ℓ1. The second example shows that the edit distance on the hypercube {0, 1}ⁿ does not embed into ℓ1 with distortion better than Ω(log n). The proof uses the celebrated inequality of KKL.

1 Tensoring the cube

In this section we use tensoring of the cube to construct an ℓ2² metric which is not ℓ1 [HMM06]. There is an ℓ2² metric space due to Khot and Vishnoi [KV05] which requires distortion Ω(log log n) to be embedded into ℓ1, but the proof of that theorem is very complicated (see [KR06] for the Ω(log log n) bound).

For two vectors u ∈ Rⁿ and v ∈ Rᵐ, their tensor product u ⊗ v is the vector in Rⁿᵐ whose coordinates are indexed by ordered pairs (i, j) ∈ [n] × [m], and which takes the value uᵢvⱼ on coordinate (i, j). For example:

    (1, 2) ⊗ (1, 2, 3) = (1, 2, 3, 2, 4, 6).

The tensor product behaves nicely with respect to inner products: for u, u′ ∈ Rⁿ and v, v′ ∈ Rᵐ,

    ⟨u ⊗ v, u′ ⊗ v′⟩ = ⟨u, u′⟩⟨v, v′⟩.    (1)

To prove (1), note that

    ⟨u ⊗ v, u′ ⊗ v′⟩ = Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ uᵢvⱼu′ᵢv′ⱼ = (Σᵢ₌₁ⁿ uᵢu′ᵢ)(Σⱼ₌₁ᵐ vⱼv′ⱼ) = ⟨u, u′⟩⟨v, v′⟩.

Consider the hypercube {−1, 1}ⁿ and the mapping f: u ↦ u ⊗ u. Note that f maps the vertices of {−1, 1}ⁿ to vertices of the larger hypercube {−1, 1}ⁿ² (why? each coordinate of u ⊗ u is a product uᵢuⱼ = ±1).
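The tensor-product facts above are easy to check numerically. The sketch below (a minimal illustration; the helper name `tensor` is my own, not from the notes) flattens NumPy's outer product to realize u ⊗ v, then verifies the worked example, the inner-product identity (1), and the observation that f(u) = u ⊗ u maps the ±1 hypercube into a larger ±1 hypercube.

```python
import numpy as np

def tensor(u, v):
    # u ⊗ v: the outer product flattened, so coordinate (i, j) holds u_i * v_j
    return np.outer(u, v).ravel()

# The worked example from the text: (1, 2) ⊗ (1, 2, 3) = (1, 2, 3, 2, 4, 6)
assert tensor([1, 2], [1, 2, 3]).tolist() == [1, 2, 3, 2, 4, 6]

# Identity (1): <u ⊗ v, u' ⊗ v'> = <u, u'> <v, v'>, on random test vectors
rng = np.random.default_rng(0)
u, up = rng.standard_normal(4), rng.standard_normal(4)
v, vp = rng.standard_normal(3), rng.standard_normal(3)
lhs = tensor(u, v) @ tensor(up, vp)
rhs = (u @ up) * (v @ vp)
assert np.isclose(lhs, rhs)

# f(u) = u ⊗ u sends {−1, 1}^n into {−1, 1}^{n^2}: every coordinate is u_i u_j = ±1
w = rng.choice([-1, 1], size=5)
assert set(tensor(w, w).tolist()) <= {-1, 1}
assert tensor(w, w).size == w.size ** 2
```

The flattening order (row-major here) is immaterial for the identity: any fixed bijection between pairs (i, j) and coordinates of Rⁿᵐ gives the same inner products.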

Year: 2009

OAI identifier:
oai:CiteSeerX.psu:10.1.1.135.1901
