    Small gaps between products of two primes

    Let $q_n$ denote the $n^{th}$ number that is a product of exactly two distinct primes. We prove that $\liminf_{n\to \infty} (q_{n+1}-q_n) \le 6$. This sharpens an earlier result of the authors (arXiv:math.NT/0506067), which had 26 in place of 6. More generally, we prove that if $\nu$ is any positive integer, then $\liminf_{n\to \infty} (q_{n+\nu}-q_n) \le C(\nu) = \nu e^{\nu-\gamma} (1+o(1))$. We also prove several other results on the representation of numbers with exactly two prime factors by linear forms.
    Comment: 11N25 (primary), 11N36 (secondary)
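
    As a concrete illustration of the quantity in the theorem (my own example, not part of the paper), the sketch below enumerates the first few $q_n$ and their consecutive gaps; the helper name is_E2 is a placeholder.

```python
# Illustrative only: list numbers that are products of exactly two distinct
# primes (the q_n of the abstract, often called E_2 numbers) and their gaps.

def is_E2(m: int) -> bool:
    """True if m = p*q with p and q distinct primes."""
    count = 0
    d = 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:   # repeated prime factor -> not a product of distinct primes
                return False
            count += 1
        else:
            d += 1
    if m > 1:
        count += 1
    return count == 2

q = [m for m in range(2, 40) if is_E2(m)]
gaps = [b - a for a, b in zip(q, q[1:])]
print(q)     # [6, 10, 14, 15, 21, 22, 26, 33, 34, 35, 38, 39]
print(gaps)  # gaps of 1 appear already (14, 15 and 33, 34, 35); the theorem bounds the liminf of q_{n+1} - q_n
```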

    Unusual structural tuning of magnetism in cuprate perovskites

    Understanding the structural underpinnings of magnetism is of great fundamental and practical interest. Se_{1-x}Te_{x}CuO_{3} alloys are model systems for the study of this question, as composition-induced structural changes control their magnetic interactions. Our work reveals that this structural tuning is associated with the position of the supposedly dummy Se and Te atoms relative to the super-exchange (SE) Cu--O--Cu paths, and not with the SE angles as previously thought. We use density functional theory, tight-binding, and exact diagonalization methods to unveil the cause of this surprising effect and to hint at new ways of engineering magnetic interactions in solids.
    Comment: 4 pages, with 4 postscript figures embedded. Uses REVTEX4 and graphicx macros
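
    The super-exchange coupling at issue here can be illustrated at the toy-model level by exact diagonalization of a two-site, half-filled Hubbard model, where an antiferromagnetic coupling J ≈ 4t²/U emerges from virtual hopping. The sketch below is a generic textbook illustration under that assumption, not the paper's DFT/tight-binding/exact-diagonalization calculation; t and U are the usual hopping and on-site-repulsion parameters.

```python
# Toy superexchange: exact diagonalization of a two-site half-filled Hubbard
# model in the S_z = 0 sector.
# Basis: |up, dn>, |dn, up>, |updn, 0>, |0, updn>  (last two doubly occupied).
import numpy as np

def singlet_triplet_gap(t: float, U: float) -> float:
    H = np.array([
        [0.0, 0.0,  -t,  -t],
        [0.0, 0.0,   t,   t],
        [ -t,   t,   U, 0.0],
        [ -t,   t, 0.0,   U],
    ])
    E = np.linalg.eigvalsh(H)      # eigenvalues in ascending order
    # The S_z = 0 triplet sits at E = 0; the singlet ground state lies below it.
    return 0.0 - E[0]

t, U = 1.0, 10.0
print(singlet_triplet_gap(t, U))   # ~0.385
print(4 * t**2 / U)                # perturbative superexchange J = 4t^2/U = 0.4
```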

    Self-Supervised Intrinsic Image Decomposition

    Intrinsic decomposition from a single image is a highly challenging task, due to its inherent ambiguity and the scarcity of training data. In contrast to traditional fully supervised learning approaches, in this paper we propose learning intrinsic image decomposition by explaining the input image. Our model, the Rendered Intrinsics Network (RIN), joins together an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions given a single image, with a recombination function, a learned shading model used to recompose the original input based on the intrinsic image predictions. Our network can then use unsupervised reconstruction error as an additional signal to improve its intermediate representations. This allows large-scale unlabeled data to be useful during training, and also enables transferring learned knowledge to images of unseen object categories, lighting conditions, and shapes. Extensive experiments demonstrate that our method performs well on both intrinsic image decomposition and knowledge transfer.
    Comment: NIPS 2017 camera-ready version, project page: http://rin.csail.mit.edu
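
    The training signal described above (reconstructing the input from its own predicted intrinsic components) can be sketched in a few lines. The toy code below is not the RIN architecture: it collapses the paper's shape and lighting predictions plus learned shading model into a single directly predicted shading map, and all module sizes and names are placeholders.

```python
# Minimal sketch of reconstruction-as-supervision for intrinsic decomposition.
# NOT the paper's RIN; networks, shapes, and the shading shortcut are placeholders.
import torch
import torch.nn as nn

class TinyDecomposer(nn.Module):
    """Predicts a reflectance map and a (grayscale) shading map from an image."""
    def __init__(self):
        super().__init__()
        self.reflectance = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                         nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
        self.shading = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, img):
        return self.reflectance(img), self.shading(img)

def recombine(reflectance, shading):
    # Lambertian-style recombination: image ~ reflectance * shading
    return reflectance * shading

model = TinyDecomposer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(4, 3, 32, 32)        # stand-in for an unlabeled image batch
reflectance, shading = model(images)
reconstruction = recombine(reflectance, shading)

# Unsupervised signal: reconstruction error against the input image itself.
loss = nn.functional.mse_loss(reconstruction, images)
loss.backward()
opt.step()
print(float(loss))
```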