
    Power spectrum for the small-scale Universe

    The first objects to arise in a cold dark matter universe present a daunting challenge for models of structure formation. In the ultra small-scale limit, CDM structures form nearly simultaneously across a wide range of scales. Hierarchical clustering no longer provides a guiding principle for theoretical analyses, and the computation time required to carry out credible simulations becomes prohibitively high. To gain insight into this problem, we perform high-resolution (N = 720^3 to 1584^3) simulations of an Einstein-de Sitter cosmology where the initial power spectrum is P(k) ∝ k^n, with -2.5 ≤ n ≤ -1. Self-similar scaling is established for n = -1 and n = -2 more convincingly than in previous, lower-resolution simulations, and, for the first time, self-similar scaling is established for an n = -2.25 simulation. However, finite box-size effects induce departures from self-similar scaling in our n = -2.5 simulation. We compare our results with the predictions for the power spectrum from (one-loop) perturbation theory and demonstrate that the renormalization group approach suggested by McDonald improves perturbation theory's ability to predict the power spectrum in the quasilinear regime. In the nonlinear regime, our power spectra differ significantly from the widely used fitting formulae of Peacock & Dodds and Smith et al., and a new fitting formula is presented. Implications of our results for the stable clustering hypothesis vs. halo model debate are discussed. Our power spectra are inconsistent with predictions of the stable clustering hypothesis in the high-k limit and lend credence to the halo model. Nevertheless, the fitting formula advocated in this paper is purely empirical and not derived from a specific formulation of the halo model. Comment: 30 pages including 10 figures; accepted for publication in MNRAS.
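    For reference, a brief sketch (not taken from the paper) of the standard self-similar scaling relation for a scale-free spectrum that the abstract invokes. In an Einstein-de Sitter background the linear growth factor is $D(a) \propto a$, so the dimensionless linear power is
    \[
    \Delta^2_{\rm lin}(k,a) \equiv \frac{k^3 P_{\rm lin}(k,a)}{2\pi^2} \propto a^2\,k^{\,n+3}.
    \]
    Defining the nonlinear scale by $\Delta^2_{\rm lin}(k_{\rm NL},a) = 1$ gives $k_{\rm NL}(a) \propto a^{-2/(n+3)}$, and self-similar scaling means that the nonlinear power depends on $k$ and $a$ only through this ratio,
    \[
    \Delta^2(k,a) = f\!\left(k/k_{\rm NL}(a)\right),
    \]
    which is the behaviour the simulations test for each value of $n$.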

    Repeated patterns in tree genetic programming

    We extend our analysis of repetitive patterns found in genetic programming genomes to tree-based GP. As in linear GP, repetitive patterns are present in large numbers. Size fair crossover limits bloat in automatic programming, preventing the evolution of recurring motifs. We examine these complex properties in detail: e.g., using depth v. size Catalan binary tree shape plots, subgraph and subtree matching, information entropy, syntactic and semantic fitness correlations, and diffuse introns. We relate this emergent phenomenon to considerations about building blocks in GP and how GP works.

    The sum of edge lengths in random linear arrangements

    Spatial networks are networks whose nodes are located in a space equipped with a metric. Typically, the space is two-dimensional and, until recently, the metric considered was usually the Euclidean distance. In spatial networks, the cost of a link depends on the edge length, i.e. the distance between the nodes that define the edge. Hypothesizing that there is pressure to reduce the length of the edges of a network requires a null model, e.g., a random layout of the vertices of the network. Here we investigate the properties of the distribution of the sum of edge lengths in random linear arrangements of vertices, a setting that has many applications in different fields. A random linear arrangement is an ordering of the nodes of a network in which all possible orderings are equally likely. The distance between two vertices is one plus the number of intermediate vertices in the ordering. Compact formulae for the 1st and 2nd moments about zero, as well as the variance, of the sum of edge lengths are obtained for arbitrary graphs and trees. We also analyze the evolution of that variance in Erdős-Rényi graphs and its scaling in uniformly random trees. Various developments and applications for future research are suggested.
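    As a minimal illustration of the null model (a hypothetical sketch, not the paper's code): under a uniformly random linear arrangement of n vertices, a single edge has expected length (n + 1)/3, so a graph with m edges has expected sum of edge lengths m(n + 1)/3. The example graph and function names below are illustrative assumptions.

    # Monte Carlo check of the expected sum of edge lengths under a uniformly
    # random linear arrangement, against the closed form m * (n + 1) / 3.
    import random

    def arrangement_length(edges, n):
        """Sum of edge lengths for one uniformly random ordering of n vertices."""
        pos = list(range(n))
        random.shuffle(pos)          # pos[v] = position of vertex v in the ordering
        return sum(abs(pos[u] - pos[v]) for u, v in edges)

    def expected_length(edges, n):
        """Each edge contributes (n + 1) / 3 on average."""
        return len(edges) * (n + 1) / 3

    if __name__ == "__main__":
        n = 6
        edges = [(i, i + 1) for i in range(n - 1)]   # toy example: a path graph
        trials = 100_000
        estimate = sum(arrangement_length(edges, n) for _ in range(trials)) / trials
        print(f"Monte Carlo: {estimate:.3f}   closed form: {expected_length(edges, n):.3f}")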

    Halo merger tree comparison: impact on galaxy formation models

    We examine the effect of using different halo finders and merger tree building algorithms on galaxy properties predicted using the GALFORM semi-analytical model run on a high-resolution, large-volume dark matter simulation. The halo finders/tree builders HBT, ROCKSTAR, SUBFIND, and VELOCIRAPTOR differ in their definitions of halo mass, in whether only spatial or phase-space information is used, and in how they distinguish satellite and main haloes; all of these features have some impact on the model galaxies, even after the trees are post-processed and homogenized by GALFORM. The stellar mass function is insensitive to the halo and merger tree finder adopted. However, we find that the number of central and satellite galaxies in GALFORM does depend slightly on the halo finder/tree builder. The number of galaxies without resolved subhaloes depends strongly on the tree builder, with VELOCIRAPTOR, a phase-space finder, showing the largest population of such galaxies. The distributions of stellar masses, cold and hot gas masses, and star formation rates agree well between different halo finders/tree builders. However, because VELOCIRAPTOR has more early progenitor haloes, with these trees GALFORM produces slightly higher star formation rate densities at high redshift, smaller galaxy sizes, and larger stellar masses for the spheroid component. Since in all cases these differences are small, we conclude that, when all of the trees are processed so that the main progenitor mass increases monotonically, the predicted GALFORM galaxy populations are stable and consistent for these four halo finders/tree builders.

    The Vadalog System: Datalog-based Reasoning for Knowledge Graphs

    Over the past years, there has been a resurgence of Datalog-based systems in the database community as well as in industry. In this context, it has been recognized that to handle the complex knowledge-based scenarios encountered today, such as reasoning over large knowledge graphs, Datalog has to be extended with features such as existential quantification. Yet, Datalog-based reasoning in the presence of existential quantification is in general undecidable. Many efforts have been made to define decidable fragments. Warded Datalog+/- is a very promising one, as it captures PTIME complexity while allowing ontological reasoning. Yet, so far, no implementation of Warded Datalog+/- has been available. In this paper we present the Vadalog system, a Datalog-based system for performing complex logic reasoning tasks, such as those required in advanced knowledge graphs. The Vadalog system is Oxford's contribution to the VADA research programme, a joint effort of the universities of Oxford, Manchester and Edinburgh and around 20 industrial partners. As the main contribution of this paper, we illustrate the first implementation of Warded Datalog+/-, a high-performance Datalog+/- system utilizing an aggressive termination control strategy. We also provide a comprehensive experimental evaluation. Comment: Extended version of VLDB paper <https://doi.org/10.14778/3213880.3213888>.
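    To make the reasoning task concrete, here is a hypothetical sketch (not Vadalog code) of naive bottom-up evaluation of plain Datalog in Python, computing reachability over a knowledge-graph edge relation; the rule names and example facts are assumptions for illustration.

    # Rules:  reach(X, Y) :- edge(X, Y).
    #         reach(X, Z) :- reach(X, Y), edge(Y, Z).
    def naive_eval(edges):
        facts = set(edges)                 # first rule seeds reach with edge
        changed = True
        while changed:                     # iterate the second rule to a fixpoint
            changed = False
            derived = {(x, z) for (x, y) in facts for (y2, z) in edges if y == y2}
            if not derived <= facts:
                facts |= derived
                changed = True
        return facts

    edges = {("a", "b"), ("b", "c"), ("c", "d")}
    print(sorted(naive_eval(edges)))       # all reach(X, Y) facts

    Plain Datalog like this always reaches a fixpoint over a finite database; once rules may introduce new existentially quantified values, naive evaluation need not terminate, which is the problem that wardedness and the aggressive termination control in the Vadalog system address.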

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated by performing a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning the cortical surface is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
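    For orientation only, a minimal sketch of the plain EM update for a two-class 1-D Gaussian mixture of voxel intensities, which is the generic starting point that the dissertation's neonatal method extends with explicit partial-volume correction; the function, parameters, and toy intensities are hypothetical and not the dissertation's implementation.

    import numpy as np

    def em_gmm(x, n_iter=50):
        # crude initialisation from intensity percentiles
        mu = np.array([np.percentile(x, 25), np.percentile(x, 75)], dtype=float)
        sigma = np.array([x.std(), x.std()])
        pi = np.array([0.5, 0.5])
        for _ in range(n_iter):
            # E-step: posterior responsibility of each class for each voxel
            lik = np.stack([pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
                            for k in range(2)])
            resp = lik / lik.sum(axis=0)
            # M-step: re-estimate weights, means, and variances
            nk = resp.sum(axis=1)
            pi = nk / len(x)
            mu = (resp * x).sum(axis=1) / nk
            sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        return pi, mu, sigma, resp.argmax(axis=0)   # hard labels from the posterior

    # toy intensities mimicking two tissue classes
    x = np.concatenate([np.random.normal(60, 8, 5000), np.random.normal(110, 10, 5000)])
    print(em_gmm(x)[1])   # estimated class means, roughly [60, 110]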

    A simple stochastic model for the evolution of protein lengths

    We analyse a simple discrete-time stochastic process for the theoretical modelling of the evolution of protein lengths. At every step of the process, a new protein is produced as a modification of one of the proteins already existing, and its length is assumed to be a random variable which depends only on the length of the originating protein. Thus a Random Recursive Tree (RRT) is produced over the natural numbers. If (quasi) scale invariance is assumed, the length distribution in a single history tends to a lognormal form with a specific signature of the deviations from exact Gaussianity. Comparison with the very large SIMAP protein database shows good agreement. Comment: 12 pages, 4 figures.
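    As a hypothetical illustration (the exact update rule is not specified in the abstract): a scale-invariant branching process in which each new protein copies a uniformly chosen existing one and multiplies its length by a random factor. Log-lengths then accumulate additive increments along the tree, so the length distribution drifts towards a lognormal shape. The function name and parameters below are assumptions.

    import math
    import random

    def simulate(n_proteins=50_000, start_length=300.0, sigma=0.15):
        lengths = [start_length]
        for _ in range(n_proteins - 1):
            parent = random.choice(lengths)               # pick an existing protein
            factor = math.exp(random.gauss(0.0, sigma))   # scale-invariant modification
            lengths.append(parent * factor)
        return lengths

    lengths = simulate()
    logs = [math.log(l) for l in lengths]
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / len(logs)
    print(f"mean log-length {mean:.2f}, std {var ** 0.5:.2f}")   # roughly normal in log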