    Cosmological Density and Power Spectrum from Peculiar Velocities: Nonlinear Corrections and PCA

    We allow for nonlinear effects in the likelihood analysis of galaxy peculiar velocities, and obtain ~35% lower values for the cosmological density parameter Om and the amplitude of mass-density fluctuations. The power spectrum in the linear regime is assumed to be a flat LCDM model (h=0.65, n=1, COBE-normalized) with only Om as a free parameter. Since the likelihood is driven by the nonlinear regime, we "break" the power spectrum at k_b=0.2 h/Mpc and fit a power law at k>k_b. This allows independent matching of the nonlinear behavior and an unbiased fit in the linear regime. The analysis assumes Gaussian fluctuations and errors, and a linear relation between velocity and density. Tests using proper mock catalogs demonstrate a reduced bias and a better fit. For the Mark3 and SFI data we find Om=0.32+-0.06 and 0.37+-0.09 respectively, with sigma_8*Om^0.6 = 0.49+-0.06 and 0.63+-0.08, in agreement with constraints from other data. The quoted 90% errors include cosmic variance. The improvement in likelihood due to the nonlinear correction is very significant for Mark3 and moderately so for SFI. When allowing deviations from LCDM, we find an indication of a wiggle in the power spectrum: an excess near k=0.05 and a deficiency at k=0.1 (a "cold flow"). This may be related to the wiggle seen in the power spectrum from redshift surveys and to the second peak in the CMB anisotropy. A chi^2 test applied to modes of a Principal Component Analysis (PCA) shows that the nonlinear procedure improves the goodness of fit and reduces a spatial gradient of concern in the linear analysis. The PCA allows addressing spatial features of the data and fine-tuning the theoretical and error models; it shows that the models used are appropriate for the cosmological parameter estimation performed. We also address the potential for optimal data compression using PCA.
    Comment: 18 pages, LaTeX, uses emulateapj.sty, ApJ in press (August 10, 2001); improvements to text and figures, updated references
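
    The "broken" spectrum is the key device here: LCDM below the break, a free power law above it, matched at k_b so the nonlinear tail can be fit without biasing the linear regime. A minimal sketch of this parameterization (the linear form below is an illustrative stand-in, not the paper's transfer function):

        # Sketch of the broken power-spectrum parameterization described in
        # the abstract. Only the break at k_b = 0.2 h/Mpc and the power-law
        # tail follow the text; p_linear is a placeholder shape.
        import numpy as np

        K_BREAK = 0.2  # h/Mpc, break scale quoted above

        def p_linear(k, om=0.3, amp=1.0):
            # Placeholder for the flat LCDM linear spectrum
            # (h=0.65, n=1, COBE-normalized) with Om as the free parameter.
            return amp * k / (1.0 + (k / (0.1 * om)) ** 2) ** 2

        def p_broken(k, om=0.3, amp=1.0, slope=-1.5):
            # LCDM below k_b; a power law above, matched at k_b so the
            # nonlinear regime is fit independently of the linear one.
            k = np.asarray(k, dtype=float)
            p_b = p_linear(K_BREAK, om, amp)
            return np.where(k < K_BREAK,
                            p_linear(k, om, amp),
                            p_b * (k / K_BREAK) ** slope)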

    Estimating Depth from RGB and Sparse Sensing

    We present a deep model that can accurately produce dense depth maps given an RGB image with known depth at a very sparse set of pixels. The model works simultaneously for both indoor and outdoor scenes and produces state-of-the-art dense depth maps at nearly real-time speeds on both the NYUv2 and KITTI datasets. We surpass the state of the art for monocular depth estimation even with depth values for only 1 out of every ~10000 image pixels, and we outperform other sparse-to-dense depth methods at all sparsity levels. With depth values for 1/256 of the image pixels, we achieve a mean absolute error of less than 1% of actual depth on indoor scenes, comparable to the performance of consumer-grade depth sensor hardware. Our experiments demonstrate that it would indeed be possible to efficiently transform sparse depth measurements, obtained using e.g. lower-power depth sensors or SLAM systems, into high-quality dense depth maps.
    Comment: European Conference on Computer Vision (ECCV) 2018. Updated to camera-ready version with additional experiments.
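
    As a rough illustration of the protocol the abstract describes (depth known at a fixed fraction of pixels, scored by mean absolute relative error), consider the following sketch; the sampling scheme and metric details are assumptions, not the authors' exact code:

        import numpy as np

        def sample_sparse_depth(depth, fraction=1.0 / 256, seed=0):
            # Keep ground-truth depth at a random sparse subset of pixels,
            # zero elsewhere; returns the sparse map and its validity mask.
            rng = np.random.default_rng(seed)
            mask = rng.random(depth.shape) < fraction
            return depth * mask, mask

        def mean_abs_rel_error(pred, gt):
            # Mean |pred - gt| / gt over valid pixels; the abstract reports
            # under 1% of actual depth indoors at 1/256 sparsity.
            valid = gt > 0
            return float(np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid]))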

    SG-VAE: Scene Grammar Variational Autoencoder to Generate New Indoor Scenes

    Deep generative models have been used in recent years to learn coherent latent representations in order to synthesize high-quality images. In this work, we propose a neural network to learn a generative model for sampling consistent indoor scene layouts. Our method learns the co-occurrences, and appearance parameters such as shape and pose, for different object categories through a grammar-based auto-encoder, resulting in a compact and accurate representation for scene layouts. In contrast to existing grammar-based methods with a user-specified grammar, we construct the grammar automatically by extracting a set of production rules through reasoning about object co-occurrences in the training data. The extracted grammar is able to represent a scene by an augmented parse tree. The proposed auto-encoder encodes these parse trees to a latent code and decodes the latent code to a parse tree, thereby ensuring that the generated scene is always valid. We experimentally demonstrate that the proposed auto-encoder learns not only to generate valid scenes (i.e. the arrangements and appearances of objects) but also coherent latent representations, where nearby latent samples decode to similar scene outputs. The obtained generative model is applicable to several computer vision tasks, such as 3D pose and layout estimation from RGB-D data.
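
    To make the automatic grammar construction concrete, here is a toy sketch of extracting production-rule candidates from object co-occurrence counts; the support threshold and pairwise rule format are illustrative assumptions, and the paper's procedure is richer (it also handles shape and pose parameters):

        from collections import Counter
        from itertools import combinations

        def extract_rules(scenes, min_support=0.2):
            # scenes: list of sets of object categories per training scene.
            # Category pairs that co-occur in enough scenes become
            # candidates for grammar production rules.
            pair_counts = Counter()
            for objs in scenes:
                pair_counts.update(combinations(sorted(objs), 2))
            n = len(scenes)
            return [pair for pair, c in pair_counts.items()
                    if c / n >= min_support]

        rules = extract_rules([{"bed", "nightstand", "lamp"},
                               {"bed", "nightstand"},
                               {"desk", "chair"}])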

    Phylogeny and classification of novel diversity in Sainouroidea (Cercozoa, Rhizaria) sheds light on a highly diverse and divergent clade

    Sainouroidea is a molecularly diverse clade of cercozoan flagellates and amoebae in the eukaryotic supergroup Rhizaria. Previous 18S rDNA environmental sequencing of globally collected fecal and soil samples revealed great diversity and high sequence divergence within the Sainouroidea. However, only a very limited amount of this diversity has been observed or described. The two described genera of amoebae in this clade are Guttulinopsis, which displays aggregative multicellularity, and Rosculus, which does not. Although the identity of Guttulinopsis is straightforward owing to the multicellular fruiting bodies it forms, the same is not true for Rosculus, and the actual identity of the original isolate is unclear. Here we isolated amoebae with morphologies like those of Guttulinopsis and Rosculus from many environments and analyzed them using 18S rDNA sequencing, light microscopy, and transmission electron microscopy. We define a molecular species concept for Sainouroidea that resulted in the description of 4 novel genera and 12 novel species of naked amoebae. Aggregative fruiting is restricted to the genus Guttulinopsis, but beyond this there is little morphological variation among these taxa. Taken together, simple identification of these amoebae is problematic and potentially unresolvable without an 18S rDNA sequence.
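
    A threshold-style grouping of 18S rDNA sequences is one simple way to picture a molecular species concept; the 97% identity cutoff and the greedy clustering below are illustrative assumptions, not the delimitation actually used in the paper:

        def identity(a, b):
            # Fraction of matching positions between two aligned sequences.
            assert len(a) == len(b), "sequences must be aligned"
            return sum(x == y for x, y in zip(a, b)) / len(a)

        def cluster_species(seqs, threshold=0.97):
            # Greedy single-linkage grouping: join a sequence to the first
            # cluster whose representative it matches above the threshold.
            clusters = []
            for s in seqs:
                for cl in clusters:
                    if identity(s, cl[0]) >= threshold:
                        cl.append(s)
                        break
                else:
                    clusters.append([s])
            return clusters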

    Unfolding an Indoor Origami World

    In this work, we present a method for single-view reasoning about 3D surfaces and their relationships. We propose the use of mid-level constraints for 3D scene understanding in the form of convex and concave edges and introduce a generic framework capable of incorporating these and other constraints. Our method takes a variety of cues and uses them to infer a consistent interpretation of the scene. We demonstrate improvements over the state of the art and produce interpretations of the scene that link large planar surfaces.
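
    One way to picture such a constraint framework: unary cue scores per surface hypothesis plus pairwise terms from convex/concave edge labels, maximized over candidate interpretations. All names and terms below are illustrative, not the paper's formulation:

        def interpretation_score(surfaces, edges, unary, pairwise):
            # surfaces: candidate planar-surface hypotheses.
            # edges: (i, j, label) tuples, label in {"convex", "concave"}.
            # Higher score = interpretation more consistent with the cues.
            score = sum(unary(s) for s in surfaces)
            for i, j, label in edges:
                score += pairwise(surfaces[i], surfaces[j], label)
            return score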

    Mass equidistribution of Hilbert modular eigenforms

    Let F be a totally real number field, and let f traverse a sequence of non-dihedral holomorphic eigencuspforms on GL(2)/F of weight (k_1,...,k_n), trivial central character, and full level. We show that the mass of f equidistributes on the Hilbert modular variety as max(k_1,...,k_n) tends to infinity. Our result affirmatively answers a natural analogue of a conjecture of Rudnick and Sarnak (1994). Our proof generalizes the argument of Holowinsky-Soundararajan (2008), who established the case F = Q. The essential difficulty in doing so is to adapt Holowinsky's bounds for the Weyl periods of the equidistribution problem, in terms of manageable shifted convolution sums of Fourier coefficients, to the case of a number field with a nontrivial unit group.
    Comment: 40 pages; typos corrected, nearly accepted for
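
    Concretely, mass equidistribution asserts that the L^2-mass of f converges weakly to the uniform measure. In a standard formulation (notation and normalization assumed here, not quoted from the paper), for every nice test function g on the Hilbert modular variety X:

        % |f(z)|^2 d\mu(z) denotes the weight-normalized mass measure of f
        \frac{\int_X g(z)\,|f(z)|^2\,d\mu(z)}{\int_X |f(z)|^2\,d\mu(z)}
        \;\longrightarrow\;
        \frac{1}{\mu(X)}\int_X g(z)\,d\mu(z)
        \qquad\text{as }\max(k_1,\dots,k_n)\to\infty.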

    OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas

    Recent work on depth estimation has so far focused only on projective images, ignoring 360° content, which is now increasingly and more easily produced. We show that monocular depth estimation models trained on traditional images produce sub-optimal results on omnidirectional images, showcasing the need for training directly on 360° datasets, which, however, are hard to acquire. In this work, we circumvent the challenges associated with acquiring high-quality 360° datasets with ground-truth depth annotations by re-using recently released large-scale 3D datasets and re-purposing them to 360° via rendering. This dataset, which is considerably larger than similar projective datasets, is publicly offered to the community to enable future research in this direction. We use this dataset to learn, in an end-to-end fashion, the task of depth estimation from 360° images. We show promising results on our synthesized data as well as on unseen realistic images.
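
    The rendering step relies on the equirectangular mapping between panorama pixels and ray directions on the unit sphere. A small sketch of that geometry (one common convention; the axis choices are assumptions):

        import numpy as np

        def equirect_rays(height, width):
            # Per-pixel unit ray directions for an equirectangular image:
            # columns map to longitude, rows to latitude.
            v, u = np.meshgrid(np.arange(height), np.arange(width),
                               indexing="ij")
            lon = (u + 0.5) / width * 2 * np.pi - np.pi      # [-pi, pi)
            lat = np.pi / 2 - (v + 0.5) / height * np.pi     # [pi/2, -pi/2]
            x = np.cos(lat) * np.sin(lon)
            y = np.sin(lat)
            z = np.cos(lat) * np.cos(lon)
            return np.stack([x, y, z], axis=-1)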

    A finite model of two-dimensional ideal hydrodynamics

    A finite-dimensional su(N) Lie algebra equation is discussed that, in the infinite-N limit (giving the area-preserving diffeomorphism group), tends to the two-dimensional, inviscid vorticity equation on the torus. The equation is numerically integrated for various values of N, and the time evolution of an (interpolated) stream function is compared with that obtained from a simple mode truncation of the continuum equation. The time-averaged vorticity moments and correlation functions are compared with canonical ensemble averages.
    Comment: 25 pages, 7 figures (not included). MUTP/92/1
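
    The construction at work is usually credited to Zeitlin's sine-bracket truncation: replace the mode-coupling factor n x m in the spectral vorticity equation by (1/eps)*sin(eps*(n x m)) with eps = 2*pi/N, which closes the dynamics on su(N) and recovers the Euler coupling as N grows. A direct (and deliberately naive, O(N^4)) sketch, with sign conventions assumed rather than taken from the paper:

        import numpy as np

        def vorticity_rhs(omega, N):
            # d(omega_m)/dt = sum_n (1/eps) sin(eps*(n x m)) psi_n omega_{m-n}
            # with psi_n = -omega_n/|n|^2 and indices on the discrete torus.
            eps = 2 * np.pi / N
            out = np.zeros_like(omega)
            for m1 in range(N):
                for m2 in range(N):
                    acc = 0.0
                    for n1 in range(N):
                        for n2 in range(N):
                            k2 = min(n1, N - n1) ** 2 + min(n2, N - n2) ** 2
                            if k2 == 0:
                                continue  # skip the zero mode
                            cross = n1 * m2 - n2 * m1
                            psi = -omega[n1, n2] / k2
                            acc += (np.sin(eps * cross) / eps) * psi * \
                                   omega[(m1 - n1) % N, (m2 - n2) % N]
                    out[m1, m2] = acc
            return out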

    Linked Data Supported Content Analysis for Sociology

    Philology and hermeneutics, the analysis and interpretation of natural language text in written historical sources, are the predecessors of modern content analysis and date back to antiquity. In the empirical social sciences, especially in sociology, content analysis provides valuable insights into the social structures and cultural norms of the present and past. With the ever-growing amount of text on the web to analyze, numerous computer-assisted text analysis techniques and tools have been developed in sociological research. However, existing methods often lack sufficient standardization. As a consequence, sociological text analysis is lacking in transparency, reproducibility, and data re-usability. The goal of this paper is to show how Linked Data principles and Entity Linking techniques can be used to structure, publish, and analyze natural language text for sociological research and thereby tackle these shortcomings. This is demonstrated on the use case of constitutional text documents of the Netherlands from 1884 to 2016, which represent an important contribution to the European cultural heritage. Finally, the generated data is made available and re-usable as Linked Data, not only for sociologists but also for all other researchers in the digital humanities domain interested in the development of constitutions in the Netherlands.
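
    A minimal sketch of publishing entity-linking output as Linked Data with rdflib; the vocabulary (DBpedia resources plus a made-up annotation property) and the document IRI are illustrative assumptions, not the paper's actual schema:

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDFS

        DBR = Namespace("http://dbpedia.org/resource/")
        EX = Namespace("http://example.org/annotation/")  # hypothetical

        g = Graph()
        doc = URIRef("http://example.org/constitution/NL/1884")  # hypothetical
        g.add((doc, RDFS.label,
               Literal("Constitution of the Netherlands, 1884")))
        # Link a mention detected in the text to its DBpedia entity:
        g.add((doc, EX.mentions, DBR["States_General_of_the_Netherlands"]))
        print(g.serialize(format="turtle"))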