
    Learning Aligned Cross-Modal Representations from Weakly Aligned Data

    People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation that is not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality. Comment: Conference paper at CVPR 201
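    To make the idea of a modality-agnostic shared representation concrete, here is a minimal sketch of one possible regularization scheme: two modality-specific encoders feed a shared classifier, and an extra penalty pulls the shared features of paired samples together. The encoders, dimensions, and the MSE alignment term are illustrative assumptions, not the paper's actual architecture or loss.

```python
# Illustrative sketch only (not the paper's exact method): modality-specific
# encoders share a classifier head, and an alignment penalty encourages the
# shared representation of paired (weakly aligned) samples to match.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalNet(nn.Module):
    def __init__(self, dim=512, num_classes=10):
        super().__init__()
        # Hypothetical modality-specific encoders producing same-size features.
        self.enc_rgb = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        self.enc_sketch = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        self.classifier = nn.Linear(dim, num_classes)  # shared across modalities

    def forward(self, x, modality):
        feat = self.enc_rgb(x) if modality == "rgb" else self.enc_sketch(x)
        return feat, self.classifier(feat)

def loss_fn(model, x_rgb, x_sketch, labels, align_weight=0.1):
    f_rgb, logits_rgb = model(x_rgb, "rgb")
    f_sk, logits_sk = model(x_sketch, "sketch")
    # Standard classification loss on both modalities.
    cls = F.cross_entropy(logits_rgb, labels) + F.cross_entropy(logits_sk, labels)
    # Alignment term: pull shared features of paired samples together so the
    # representation becomes agnostic of the input modality.
    align = F.mse_loss(f_rgb, f_sk)
    return cls + align_weight * align
```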

    Systematics of Aligned Axions

    We describe a novel technique that renders theories of $N$ axions tractable, and more generally can be used to efficiently analyze a large class of periodic potentials of arbitrary dimension. Such potentials are complex energy landscapes with a number of local minima that scales as $\sqrt{N!}$, and so for large $N$ appear to be analytically and numerically intractable. Our method is based on uncovering a set of approximate symmetries that exist in addition to the $N$ periods. These approximate symmetries, which are exponentially close to exact, allow us to locate the minima very efficiently and accurately and to analyze other characteristics of the potential. We apply our framework to evaluate the diameters of flat regions suitable for slow-roll inflation, which unifies, corrects and extends several forms of "axion alignment" previously observed in the literature. We find that in a broad class of random theories, the potential is smooth over diameters enhanced by $N^{3/2}$ compared to the typical scale of the potential. A Mathematica implementation of our framework is available online. Comment: 68 pages, 17 figures
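    As a rough illustration of the kind of landscape being analyzed, the sketch below evaluates a generic random $N$-axion potential of the standard form $V(\theta) = \sum_i \Lambda_i^4 \, (1 - \cos(Q_i \cdot \theta))$. The charge matrix, scales, and sizes are made up for illustration; the paper's approximate-symmetry technique for locating minima is not reproduced here.

```python
# Illustrative sketch only: a random N-axion potential of the generic form
# V(theta) = sum_i Lambda_i^4 * (1 - cos(Q_i . theta)), evaluated numerically.
# The integer charge matrix Q and scales Lambda are made-up inputs; the paper's
# symmetry-based method for finding minima is not shown here.
import numpy as np

rng = np.random.default_rng(0)
N = 5                                   # number of axions
P = 8                                   # number of cosine terms
Q = rng.integers(-3, 4, size=(P, N))    # integer charges of each term
Lam4 = rng.uniform(0.5, 1.0, size=P)    # energy scales Lambda_i^4

def V(theta):
    """Potential at a point theta in the N-dimensional field space."""
    return np.sum(Lam4 * (1.0 - np.cos(Q @ theta)))

# Crude evaluation at a random point; brute-force searches scale badly because
# the number of local minima can grow like sqrt(N!).
theta0 = rng.uniform(0.0, 2.0 * np.pi, size=N)
print(V(theta0))
```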

    Aligned Drawings of Planar Graphs

    Let $G$ be a graph that is topologically embedded in the plane and let $\mathcal{A}$ be an arrangement of pseudolines intersecting the drawing of $G$. An aligned drawing of $G$ and $\mathcal{A}$ is a planar polyline drawing $\Gamma$ of $G$ together with an arrangement $A$ of lines such that $\Gamma$ and $A$ are homeomorphic to $G$ and $\mathcal{A}$. We show that if $\mathcal{A}$ is stretchable and every edge $e$ either lies entirely on a pseudoline or has at most one intersection with $\mathcal{A}$, then $G$ and $\mathcal{A}$ have a straight-line aligned drawing. In order to prove this result, we strengthen a result of Da Lozzo et al. and prove that a planar graph $G$ and a single pseudoline $\mathcal{L}$ have an aligned drawing with a prescribed convex drawing of the outer face. We also study the less restrictive version of the alignment problem with respect to one line, where only a set of vertices is given and we need to determine whether they can be collinear. We show that the problem is NP-complete but fixed-parameter tractable. Comment: Preliminary work appeared in the Proceedings of the 25th International Symposium on Graph Drawing and Network Visualization (GD 2017)

    Trans-gram, Fast Cross-lingual Word-embeddings

    We introduce Trans-gram, a simple and computationally efficient method to simultaneously learn and align word embeddings for a variety of languages, using only monolingual data and a smaller set of sentence-aligned data. We use our new method to compute aligned word embeddings for twenty-one languages using English as a pivot language. We show that some linguistic features are aligned across languages for which we do not have aligned data, even though those properties do not exist in the pivot language. We also achieve state-of-the-art results on standard cross-lingual text classification and word translation tasks. Comment: EMNLP 201
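    A small sketch of the cross-lingual skip-gram idea that this kind of method builds on: with no word alignments inside a sentence pair, every source word can be trained to predict every word of the aligned target sentence, alongside the usual monolingual context window. The helper functions and toy sentences below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of cross-lingual skip-gram pair generation (not the
# Trans-gram authors' code): each source word predicts every target word of
# the aligned sentence, in addition to the monolingual skip-gram window.
from itertools import product

def monolingual_pairs(sentence, window=2):
    """Standard skip-gram (center, context) pairs within one sentence."""
    pairs = []
    for i, center in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if j != i:
                pairs.append((center, sentence[j]))
    return pairs

def crosslingual_pairs(src_sentence, tgt_sentence):
    """Cross-lingual pairs: each source word predicts every target word."""
    return list(product(src_sentence, tgt_sentence))

en = ["the", "cat", "sleeps"]
fr = ["le", "chat", "dort"]
training_pairs = monolingual_pairs(en) + monolingual_pairs(fr) + crosslingual_pairs(en, fr)
# These pairs would then be fed to an ordinary skip-gram objective with
# negative sampling, with English serving as the pivot between languages.
print(training_pairs[:5])
```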

    Capacity Regions and Sum-Rate Capacities of Vector Gaussian Interference Channels

    The capacity regions of vector, or multiple-input multiple-output, Gaussian interference channels are established for very strong interference and aligned strong interference. Furthermore, the sum-rate capacities are established for Z interference, noisy interference, and mixed (aligned weak/intermediate and aligned strong) interference. These results generalize known results for scalar Gaussian interference channels. Comment: 33 pages, 1 figure, submitted to IEEE Trans. on Information Theory
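    For orientation, the classical scalar baseline that such vector results generalize is the very-strong-interference theorem for the two-user Gaussian interference channel (due to Carleial). The sketch below states it in a standard scalar notation, which is an assumption here and not taken from the paper.

```latex
% Classical scalar baseline (standard form assumed, not the paper's notation):
% y_1 = x_1 + a x_2 + z_1,  y_2 = b x_1 + x_2 + z_2,  z_i ~ N(0,1), powers P_1, P_2.
% Under very strong interference each receiver can decode and subtract the
% interfering codeword first, so interference costs nothing:
\[
  a^2 \ge 1 + P_1 \ \text{and}\ b^2 \ge 1 + P_2
  \;\Longrightarrow\;
  \mathcal{C} = \Bigl\{ (R_1, R_2) : R_1 \le \tfrac{1}{2}\log_2(1+P_1),\;
                                     R_2 \le \tfrac{1}{2}\log_2(1+P_2) \Bigr\}.
\]
```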