
    Unsupervised Generative Adversarial Cross-modal Hashing

    Cross-modal hashing aims to map heterogeneous multimedia data into a common Hamming space, enabling fast and flexible retrieval across different modalities. Unsupervised cross-modal hashing is more flexible and widely applicable than supervised methods, since no intensive labeling work is involved. However, existing unsupervised methods learn hashing functions by preserving inter- and intra-modality correlations, while ignoring the underlying manifold structure across different modalities, which is extremely helpful for capturing meaningful nearest neighbors of different modalities in cross-modal retrieval. To address this problem, in this paper we propose an Unsupervised Generative Adversarial Cross-modal Hashing approach (UGACH), which makes full use of the GAN's ability for unsupervised representation learning to exploit the underlying manifold structure of cross-modal data. The main contributions can be summarized as follows: (1) We propose a generative adversarial network to model cross-modal hashing in an unsupervised fashion. In the proposed UGACH, given data of one modality, the generative model tries to fit the distribution over the manifold structure and selects informative data of another modality to challenge the discriminative model. The discriminative model learns to distinguish the generated data from true positive data sampled from the correlation graph to achieve better retrieval accuracy. These two models are trained adversarially to improve each other and promote hashing function learning. (2) We propose a correlation graph based approach to capture the underlying manifold structure across different modalities, so that data of different modalities but within the same manifold can have smaller Hamming distance, promoting retrieval accuracy. Extensive experiments against 6 state-of-the-art methods verify the effectiveness of our proposed approach.
    Comment: 8 pages, accepted by the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018
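    The common Hamming space described in this abstract can be illustrated with a minimal sketch (not code from the paper; the toy codes below are made up for illustration): once image and text features are binarized into {-1, +1} codes, cross-modal retrieval reduces to ranking by Hamming distance, which follows from a single integer matrix product.

```python
import numpy as np

def hamming_dist(a, b):
    # Hamming distance between two binary code matrices with entries in
    # {-1, +1}: for k-bit codes, dot = k - 2 * hamming, so
    # hamming = (k - dot) / 2.  a is (n, k), b is (m, k).
    k = a.shape[1]
    return (k - a @ b.T) // 2

# Toy example: 4-bit codes for 3 images and 2 texts, obtained by
# sign-binarizing (hypothetical) real-valued hash outputs.
img_codes = np.sign(np.array([[ 0.3, -1.2,  0.7, -0.4],
                              [-0.9,  0.1, -0.5,  0.8],
                              [ 1.1,  0.6, -0.2, -0.7]])).astype(int)
txt_codes = np.sign(np.array([[ 0.5, -0.8,  0.9, -0.1],
                              [-1.0,  0.2, -0.3,  0.6]])).astype(int)

d = hamming_dist(img_codes, txt_codes)        # shape (3, 2)
# For each text query, rank images by ascending Hamming distance.
ranking = np.argsort(d.T, axis=1)
```

    A good cross-modal hashing function makes semantically matching pairs (here, image 0 and text 0) collide at distance 0 while pushing unrelated pairs apart.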

    Regular Tessellation Link Complements

    By a regular tessellation, we mean any hyperbolic 3-manifold tessellated by ideal Platonic solids such that the symmetry group acts transitively on oriented flags. A regular tessellation has an invariant we call the cusp modulus. For small cusp modulus, we classify all regular tessellations. For large cusp modulus, we prove that a regular tessellation has to be of infinite volume if its fundamental group is generated by peripheral curves only. This shows that there are at least 19 and at most 21 link complements that are regular tessellations (computer experiments suggest that at least one of the two remaining cases likely fails to be a link complement, but so far we have no proof). In particular, we complete the classification of all principal congruence link complements given by Baker and Reid for the cases of discriminant D=-3 and D=-4. Here we only describe the manifolds arising as link complements; a future publication, "Regular Tessellation Links", will give explicit pictures of these links.
    Comment: 35 pages, 19 figures, 4 tables; version 2: minor changes; fixed title in arXiv's metadata; version 3: addresses referee's comments, in particular a rewrite of the discussion section; includes ancillary file

    Optimizing the double description method for normal surface enumeration

    Many key algorithms in 3-manifold topology involve the enumeration of normal surfaces, which is based upon the double description method for finding the vertices of a convex polytope. Typically we are only interested in a small subset of these vertices, which opens the way for substantial optimization. Here we give an account of the vertex enumeration problem as it applies to normal surfaces, and present new optimizations that yield strong improvements in both running time and memory consumption. The resulting algorithms are tested using the freely available software package Regina.
    Comment: 27 pages, 12 figures; v2: removed the 3^n bound from Section 3.3, fixed the projective equation in Lemma 4.4, clarified "most triangulations" in the introduction to Section 5; v3: replaced -ise with -ize for Mathematics of Computation (note that this changes the title of the paper)
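    The incremental step at the heart of the double description method can be sketched as follows (an illustrative toy, not Regina's optimized implementation; the adjacency test that keeps the ray set irredundant in higher dimensions is omitted, and the "matching equation" below is invented for the example):

```python
import numpy as np

def intersect_hyperplane(rays, a):
    """One double description step: intersect the cone spanned by `rays`
    with the hyperplane {x : a.x = 0}.  Rays are split by the sign of
    a.x; rays on the hyperplane survive, and each positive/negative pair
    is combined into a new ray lying exactly on the hyperplane."""
    a = np.asarray(a)
    pos  = [r for r in rays if a @ r > 0]
    neg  = [r for r in rays if a @ r < 0]
    zero = [r for r in rays if a @ r == 0]
    # (a.p) q - (a.q) p has a-dot zero and nonnegative coefficients,
    # so it stays inside the current cone.
    new = [(a @ p) * np.asarray(q) - (a @ q) * np.asarray(p)
           for p in pos for q in neg]
    return zero + new

# Toy run: start from the rays of the nonnegative orthant of R^3 (the
# starting cone in normal surface enumeration) and impose one linear
# constraint, x0 = x1 + x2, written as a.x = 0.
rays = [np.array(e) for e in np.eye(3, dtype=int)]
for eq in [np.array([1, -1, -1])]:
    rays = intersect_hyperplane(rays, eq)
# Surviving extreme rays: (1, 1, 0) and (1, 0, 1).
```

    The optimizations discussed in the paper are aimed precisely at this step: pruning pair combinations early, since the number of intermediate rays, not the final vertex count, usually dominates running time and memory.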

    Simplicial Nonlinear Principal Component Analysis

    We present a new manifold learning algorithm that takes as input a set of data points lying on or near a lower-dimensional manifold, possibly with noise, and outputs a simplicial complex that fits the data and the manifold. We have implemented the algorithm in the case where the input data can be triangulated. We provide triangulations of data sets that fall on the surface of a torus, a sphere, a swiss roll, and a creased sheet embedded in a fifty-dimensional space. We also discuss the theoretical justification of our algorithm.
    Comment: 21 pages, 6 figures
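    The idea of fitting a simplicial complex to samples from a manifold can be illustrated with a crude nearest-neighbour stand-in (this is not the authors' algorithm, only a sketch of the input/output shape): sample noisy points on a circle, a 1-manifold, and join each point to its two nearest neighbours, producing a 1-dimensional simplicial complex that approximates the circle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples on the unit circle, a 1-dimensional manifold in R^2.
t = np.sort(rng.uniform(0, 2 * np.pi, 40))
pts = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.01, size=(40, 2))

# Build a 1-complex: connect each point to its 2 nearest neighbours
# (index 0 of the argsort is the point itself, so skip it).
edges = set()
for i, p in enumerate(pts):
    d = np.linalg.norm(pts - p, axis=1)
    for j in np.argsort(d)[1:3]:
        edges.add(tuple(sorted((i, int(j)))))
```

    Every vertex ends up in at least two edges, so the complex closes up into (approximately) a cycle tracing the circle; the paper's algorithm produces higher-dimensional complexes for surfaces such as the torus and swiss roll.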