
    Sum of Two Squares - Pair Correlation and Distribution in Short Intervals

    In this work we show that, based on a conjecture for the pair correlation of integers representable as sums of two squares, first suggested by Connors and Keating and reformulated here, the second moment of the distribution of the number of representable integers in short intervals is consistent with a Poissonian distribution, where "short" means of length comparable to the mean spacing between sums of two squares. In addition, we present a method for producing such conjectures through calculations in prime power residue rings and describe how these conjectures, as well as the result stated above, may be generalized to other binary quadratic forms. While producing these pair correlation conjectures we arrive at a surprising result regarding Mertens' formula for primes in arithmetic progressions, and in order to test the validity of the conjectures we present numerical computations which support our approach.
    Comment: 3 figures
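    As a rough numerical illustration of the second-moment statistic discussed above (not the authors' computation; the cutoff N, the window, and the interval length lam are arbitrary choices here), one can sieve the integers representable as a sum of two squares and compare the empirical second moment of interval counts with the Poisson value, since a Poisson count of mean lam has second moment lam + lam^2:

```python
import math

# Sieve the integers in [1, N] that are representable as a^2 + b^2.
N = 10**6
is_rep = bytearray(N + 1)
for a in range(math.isqrt(N) + 1):
    a2 = a * a
    for b in range(a, math.isqrt(N - a2) + 1):
        is_rep[a2 + b * b] = 1
is_rep[0] = 0  # ignore zero

# Work in a window where the density of representable integers is
# roughly constant (it decays slowly, like 1/sqrt(log n)).
lo, hi = N // 2, N
mean_gap = (hi - lo) / sum(is_rep[lo:hi])

# Count representable integers in disjoint intervals of length about
# lam * mean_gap and compare with the Poisson second moment.
lam = 2.0
H = max(1, round(lam * mean_gap))
lam_eff = H / mean_gap  # interval length actually used, in mean gaps
counts = [sum(is_rep[x:x + H]) for x in range(lo, hi - H, H)]

mean = sum(counts) / len(counts)
m2 = sum(c * c for c in counts) / len(counts)
print(f"mean count    : {mean:.3f} (target {lam_eff:.3f})")
print(f"second moment : {m2:.3f}")
print(f"Poisson value : {lam_eff + lam_eff ** 2:.3f} (lam + lam^2)")
```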

    Search for methylamine in high mass hot cores

    We aim to detect methylamine, CH3NH2, in a variety of hot cores and use it as a test for the importance of photon-induced chemistry in ice mantles and of the mobility of radicals. Specifically, CH3NH2 cannot be formed from atom addition to CO, whereas other NH2-containing molecules such as formamide, NH2CHO, can. Submillimeter spectra of several massive hot core regions were taken with the James Clerk Maxwell Telescope. Abundances are determined with the rotational diagram method where possible. Methylamine is not detected, giving upper limit column densities between 1.9 and 6.4 × 10^16 cm^-2 for source sizes corresponding to the 100 K envelope radius. Combined with previously obtained JCMT data analyzed in the same way, abundance ratios of CH3NH2, NH2CHO, and CH3CN with respect to each other and to CH3OH are determined. These ratios are compared with Sagittarius B2 observations, where all species are detected, and with hot core models. The observed ratios suggest that both methylamine and formamide are overproduced by up to an order of magnitude in hot core models, while acetonitrile is underproduced. The proposed chemical schemes leading to these molecules are discussed, and reactions that need further laboratory studies are identified. The upper limits obtained in this paper can be used to guide future observations, especially with ALMA.
    Comment: 14 pages, 5 figures, accepted for publication in A&A
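    For context on the rotational diagram method mentioned above, a minimal sketch follows (the line list and the partition function value are hypothetical placeholders, not JCMT data; real values would come from a spectroscopic catalog). Under LTE and optically thin emission, ln(N_u/g_u) is linear in E_u/k, so a straight-line fit yields the rotation temperature and total column density:

```python
import numpy as np

# Rotational diagram (LTE, optically thin): for each line,
#   N_u = 8*pi*k*nu^2 / (h*c^3*A_ul) * W,  W = integrated intensity,
# and ln(N_u/g_u) = ln(N_tot/Q(T_rot)) - E_u/(k*T_rot), so a linear fit
# of ln(N_u/g_u) against E_u/k gives T_rot (slope) and N_tot (intercept).

k_B = 1.380649e-16    # erg/K (CGS units throughout)
h   = 6.62607015e-27  # erg s
c   = 2.99792458e10   # cm/s

# Hypothetical line list: (freq [GHz], A_ul [1/s], g_u, E_u/k [K], W [K km/s])
lines = [
    (338.409, 2.5e-4, 25,  65.0, 1.20),
    (338.722, 2.3e-4, 23,  90.0, 0.85),
    (341.416, 2.0e-4, 27, 130.0, 0.45),
]

x, y = [], []
for nu_ghz, A_ul, g_u, E_u, W in lines:
    nu = nu_ghz * 1e9                    # Hz
    W_cgs = W * 1e5                      # K km/s -> K cm/s
    N_u = 8 * np.pi * k_B * nu**2 * W_cgs / (h * c**3 * A_ul)
    x.append(E_u)
    y.append(np.log(N_u / g_u))

slope, intercept = np.polyfit(x, y, 1)
T_rot = -1.0 / slope
Q_Trot = 1000.0  # partition function at T_rot (placeholder catalog value)
N_tot = np.exp(intercept) * Q_Trot
print(f"T_rot ~ {T_rot:.0f} K, N_tot ~ {N_tot:.2e} cm^-2")
```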

    Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders

    Generative models that learn disentangled representations for different factors of variation in an image can be very useful for targeted data augmentation. By sampling from the disentangled latent subspace of interest, we can efficiently generate new data necessary for a particular task. Learning disentangled representations is a challenging problem, especially when certain factors of variation are difficult to label. In this paper, we introduce a novel architecture that disentangles the latent space into two complementary subspaces using only weak supervision in the form of pairwise similarity labels. Inspired by the recent success of cycle-consistent adversarial architectures, we use cycle-consistency in a variational auto-encoder framework. Our non-adversarial approach is in contrast to recent works that combine adversarial training with auto-encoders to disentangle representations. We show compelling results of disentangled latent subspaces on three datasets and compare with recent works that leverage adversarial training.
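    A minimal sketch of the idea, assuming a PyTorch-style setup (layer sizes, loss weights, and the exact form of the cycle losses here are illustrative choices, not the paper's architecture): the latent code splits into a "specified" part covered by the pairwise similarity labels and an "unspecified" VAE part, with a cycle pass discouraging leakage between the two.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CCVAE(nn.Module):
    """Latent code split into a 'specified' part s (the factor covered by
    pairwise similarity labels) and an 'unspecified' VAE part z."""
    def __init__(self, dim_x=784, dim_s=16, dim_z=16, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_x, hidden), nn.ReLU())
        self.to_s = nn.Linear(hidden, dim_s)
        self.to_mu = nn.Linear(hidden, dim_z)
        self.to_logvar = nn.Linear(hidden, dim_z)
        self.dec = nn.Sequential(nn.Linear(dim_s + dim_z, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim_x), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        return self.to_s(h), self.to_mu(h), self.to_logvar(h)

    def decode(self, s, z):
        return self.dec(torch.cat([s, z], dim=-1))

def pair_loss(model, x1, x2):
    """x1 and x2 carry the same pairwise similarity label (same specified
    factor), which is the only supervision used."""
    s1, mu1, lv1 = model.encode(x1)
    s2, _, _ = model.encode(x2)
    z1 = mu1 + torch.randn_like(mu1) * (0.5 * lv1).exp()

    # Swap within the pair: decoding x1's unspecified code with x2's
    # specified code should still reconstruct x1.
    recon = model.decode(s2, z1)
    l_rec = F.binary_cross_entropy(recon, x1)
    l_kl = -0.5 * torch.mean(1 + lv1 - mu1.pow(2) - lv1.exp())

    # Cycle pass: decode with a prior sample of z, re-encode, and require
    # the unspecified code back; this discourages s from leaking into z.
    z_prior = torch.randn_like(z1)
    _, mu_cyc, _ = model.encode(model.decode(s1.detach(), z_prior))
    l_cyc = F.mse_loss(mu_cyc, z_prior)

    return l_rec + 0.1 * l_kl + l_cyc

model = CCVAE()
x1, x2 = torch.rand(8, 784), torch.rand(8, 784)  # stand-in 'similar' pair
pair_loss(model, x1, x2).backward()
```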

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
    Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs

    The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer's output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.
    Comment: The first two authors contributed equally to this work
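    A toy instance of the recipe described above (the disc "scene", the Gaussian likelihood, and the Metropolis proposals are stand-ins, not the authors' system): a deterministic renderer sits inside a stochastic model, and the tolerance of the likelihood is itself a latent variable handled by general-purpose inference.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32

def render(scene):
    """Deterministic graphics backbone: draw a filled disc."""
    cx, cy, r = scene
    ys, xs = np.mgrid[0:H, 0:W]
    return ((xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2).astype(float)

def log_likelihood(scene, sigma, image):
    """Stochastic likelihood linking the rendering to the data; the
    tolerance sigma is itself inferred, letting a crude renderer
    explain a noisy image."""
    resid = image - render(scene)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2 - resid.size * np.log(sigma)

def metropolis(image, steps=5000):
    """General-purpose inference: random-walk Metropolis over the scene
    and the tolerance (flat priors, for brevity)."""
    scene, sigma = np.array([W / 2, H / 2, 5.0]), 0.5
    lp = log_likelihood(scene, sigma, image)
    for _ in range(steps):
        cand_scene = scene + rng.normal(0, 0.5, 3)
        cand_sigma = max(1e-3, abs(sigma + rng.normal(0, 0.05)))
        lp_new = log_likelihood(cand_scene, cand_sigma, image)
        if np.log(rng.random()) < lp_new - lp:
            scene, sigma, lp = cand_scene, cand_sigma, lp_new
    return scene, sigma

# Synthetic 'observed' image: a disc plus noise the model must tolerate.
truth = np.array([20.0, 12.0, 6.0])
observed = render(truth) + rng.normal(0, 0.3, (H, W))
est_scene, est_sigma = metropolis(observed)
print("scene:", np.round(est_scene, 1), " tolerance:", round(est_sigma, 2))
```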

    Research in interactive scene analysis

    An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
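    A simplified sketch in the spirit of the Brice and Fennema paradigm referenced above (the thresholds and the merge criterion are illustrative, not the values or heuristics used in ISIS): pixels first join into atomic regions across near-equal neighbors, then adjacent regions merge when most of their shared boundary is weak.

```python
import numpy as np

def segment(img, atom_thresh=4, weak_thresh=10, merge_frac=0.6):
    """Two-phase region analysis: atomic regions, then boundary-strength
    merging. Thresholds are illustrative placeholders."""
    h, w = img.shape
    parent = list(range(h * w))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(a, b):
        parent[find(a)] = find(b)

    def diff(i, j):
        return abs(int(img.flat[i]) - int(img.flat[j]))

    # Phase 1: atomic regions from near-identical 4-neighbors.
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w and diff(i, i + 1) <= atom_thresh:
                union(i, i + 1)
            if y + 1 < h and diff(i, i + w) <= atom_thresh:
                union(i, i + w)

    # Phase 2: repeatedly merge region pairs whose common boundary is
    # mostly weak (low contrast), until no merge applies.
    changed = True
    while changed:
        changed = False
        total, weak = {}, {}
        for y in range(h):
            for x in range(w):
                i = y * w + x
                nbrs = ([i + 1] if x + 1 < w else []) + ([i + w] if y + 1 < h else [])
                for j in nbrs:
                    a, b = find(i), find(j)
                    if a != b:
                        key = (min(a, b), max(a, b))
                        total[key] = total.get(key, 0) + 1
                        weak[key] = weak.get(key, 0) + (diff(i, j) <= weak_thresh)
        for (a, b), t in total.items():
            if weak[(a, b)] / t >= merge_frac and find(a) != find(b):
                union(a, b)
                changed = True

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

# Tiny demo: two intensity plateaus with mild texture -> two regions.
img = np.zeros((20, 20), dtype=np.uint8)
img[:, 10:] = 100
img += (np.arange(20, dtype=np.uint8) % 3)[None, :]
print(np.unique(segment(img)).size, "regions")
```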

    Self-Supervised Intrinsic Image Decomposition

    Intrinsic decomposition from a single image is a highly challenging task, due to its inherent ambiguity and the scarcity of training data. In contrast to traditional fully supervised learning approaches, in this paper we propose learning intrinsic image decomposition by explaining the input image. Our model, the Rendered Intrinsics Network (RIN), joins together an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions given a single image, with a recombination function, a learned shading model used to recompose the original input based off of intrinsic image predictions. Our network can then use unsupervised reconstruction error as an additional signal to improve its intermediate representations. This allows large-scale unlabeled data to be useful during training, and also enables transferring learned knowledge to images of unseen object categories, lighting conditions, and shapes. Extensive experiments demonstrate that our method performs well on both intrinsic image decomposition and knowledge transfer.
    Comment: NIPS 2017 camera-ready version, project page: http://rin.csail.mit.edu
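    A minimal sketch of the decomposition-plus-recombination structure described above, assuming a PyTorch setup (the layer shapes and the four-dimensional light code are arbitrary choices, not the paper's architecture): intrinsic predictions feed a learned shading model, and reconstruction error provides the self-supervised signal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv(ci, co):
    return nn.Sequential(nn.Conv2d(ci, co, 3, padding=1), nn.ReLU())

class RINSketch(nn.Module):
    """Decompose into reflectance, normals, and a global light code,
    then recombine via a learned shading model."""
    def __init__(self):
        super().__init__()
        self.backbone = conv(3, 32)
        self.reflectance = nn.Conv2d(32, 3, 3, padding=1)
        self.normals = nn.Conv2d(32, 3, 3, padding=1)
        self.lighting = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, 4))
        self.shader = nn.Sequential(conv(3 + 4, 32),
                                    nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        h = self.backbone(x)
        refl = torch.sigmoid(self.reflectance(h))
        norm = torch.tanh(self.normals(h))
        light = self.lighting(h)
        # Learned shading: normals plus the broadcast light code yield a
        # one-channel shading map; recomposition is reflectance * shading.
        lmap = light[:, :, None, None].expand(-1, -1, *x.shape[2:])
        shading = torch.sigmoid(self.shader(torch.cat([norm, lmap], dim=1)))
        return refl * shading, (refl, norm, light)

# Self-supervised signal: reconstruction error on unlabeled images
# backpropagates through the intermediate intrinsic predictions.
model = RINSketch()
x = torch.rand(2, 3, 64, 64)
recon, _ = model(x)
F.mse_loss(recon, x).backward()
```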

    Concepts in a Probabilistic Language of Thought

    Note: The book chapter is reprinted courtesy of The MIT Press, from the forthcoming edited collection "The Conceptual Mind: New Directions in the Study of Concepts" edited by Eric Margolis and Stephen Laurence, print date Spring 2015.
    Knowledge organizes our understanding of the world, determining what we expect given what we have already seen. Our predictive representations have two key properties: they are productive, and they are graded. Productive generalization is possible because our knowledge decomposes into concepts—elements of knowledge that are combined and recombined to describe particular situations. Gradedness is the observable effect of accounting for uncertainty—our knowledge encodes degrees of belief that lead to graded probabilistic predictions. To put this a different way, concepts form a combinatorial system that enables description of many different situations; each such situation specifies a distribution over what we expect to see in the world, given what we have seen. We may think of this system as a probabilistic language of thought (PLoT) in which representations are built from language-like composition of concepts and the content of those representations is a probability distribution on world states. The purpose of this chapter is to formalize these ideas in computational terms, to illustrate key properties of the PLoT approach with a concrete example, and to draw connections with other views of conceptual structure.
    This work was supported by ONR awards N00014-09-1-0124 and N00014-13-1-0788, by a John S. McDonnell Foundation Scholar Award, and by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
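    In the spirit of the chapter's Church-style examples, here is a rough Python transliteration of a tug-of-war-style model (the names, numbers, and rejection-sampling query are illustrative): concepts are composable stochastic functions, and conditioning yields graded predictions while composition provides productivity.

```python
import random

def make_world():
    """One sampled 'world': strength is a persistent property of a person,
    while laziness is resampled on every pull."""
    strength = {}
    def get_strength(person):
        if person not in strength:
            strength[person] = random.gauss(0, 1)
        return strength[person]
    def pulling(person):
        lazy = random.random() < 0.3
        return get_strength(person) / 2 if lazy else get_strength(person)
    def beats(team1, team2):
        return sum(map(pulling, team1)) > sum(map(pulling, team2))
    return get_strength, beats

# Graded query by rejection sampling: how strong is bob, given that bob
# and alice beat tom and jim? New queries about new team compositions
# need no new machinery -- that is the productivity of the system.
samples = []
for _ in range(20000):
    get_strength, beats = make_world()
    if beats(["bob", "alice"], ["tom", "jim"]):
        samples.append(get_strength("bob"))

print("E[strength(bob) | evidence] ~", round(sum(samples) / len(samples), 2))
```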