
    Lossy network correlated data gathering with high-resolution coding

    Sensor networks measuring correlated data are considered, where the task is to gather data from the network nodes to a sink. A specific scenario is addressed in which data at the nodes are lossy coded at high resolution, and the information measured by the nodes has to be reconstructed at the sink within both total and individual distortion bounds. The first problem considered is to find the optimal transmission structure and the rate-distortion allocations across the spatially distributed nodes so as to minimize the total power consumption cost of the network, assuming fixed node positions. The optimal transmission structure is the shortest path tree, and in the high-resolution regime the rate and distortion allocation problems separate: first the distortion allocation is found as a function of the transmission structure, and then, for a given distortion allocation, the rate allocation is computed. The second problem addressed is the case when the node positions can be chosen, by finding the optimal node placement for two different objectives, namely total power minimization and network lifetime maximization. Finally, a node placement solution that provides a tradeoff between the two metrics is proposed.
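The separation described in this abstract can be sketched in a few lines: build the shortest path tree to the sink, then allocate distortion in proportion to each node's path cost. The proportionality follows from a Lagrangian calculation — minimizing sum_i w_i R_i with the high-resolution rate R_i = 0.5*log2(sigma^2/D_i) under sum_i D_i = D_total gives D_i proportional to w_i. The network topology and all numeric values below are hypothetical toy inputs, not from the paper.

```python
import heapq
import math

def shortest_path_tree(n, edges, sink):
    """Dijkstra from the sink; returns each node's total cost of its
    shortest path to the sink (the optimal transmission structure)."""
    adj = {i: [] for i in range(n)}
    for (u, v), w in edges.items():
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = {i: math.inf for i in range(n)}
    dist[sink] = 0.0
    pq = [(0.0, sink)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Toy 4-node network (node 0 is the sink); edge weights are transmit
# power costs per bit (hypothetical values).
edges = {(0, 1): 1.0, (1, 2): 2.0, (0, 3): 4.0, (2, 3): 1.0}
w = shortest_path_tree(4, edges, sink=0)

# Distortion allocation: D_i proportional to the path cost w_i, so the
# costliest nodes code most coarsely; rates then follow from D_i.
D_total, sigma2 = 0.4, 1.0
sources = [1, 2, 3]
wsum = sum(w[i] for i in sources)
D = {i: D_total * w[i] / wsum for i in sources}
R = {i: 0.5 * math.log2(sigma2 / D[i]) for i in sources}
```

Note how the two stages decouple exactly as the abstract states: the distortion split depends only on the tree, and the rates then follow node by node.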

    Gaussian field theories, random Cantor sets and multifractality

    The computation of multifractal scaling properties associated with a critical field theory involves non-local operators and remains an open problem using conventional techniques of field theory. We propose a new description of Gaussian field theories in terms of random Cantor sets and show how universal multifractal scaling exponents can be calculated. We use this approach to characterize the multifractal critical wave function of Dirac fermions interacting with a random vector potential in two spatial dimensions. We show that the multifractal scaling exponents are self-averaging. Comment: Extensive modifications of previous version; exact results replace numerical calculations
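The exponents in this abstract come from a field-theoretic calculation, but the notion of a multifractal scaling exponent itself is easy to illustrate. The sketch below — a deterministic binomial cascade, not the paper's random-Cantor-set construction — computes the mass exponent tau(q) from partition sums over dyadic boxes and checks it against the known closed form for this toy measure.

```python
import math

def binomial_measure(n, p=0.3):
    """Measure on the 2**n dyadic intervals produced by n levels of a
    binomial multiplicative cascade with weights p and 1-p."""
    mu = [1.0]
    for _ in range(n):
        mu = [m * f for m in mu for f in (p, 1.0 - p)]
    return mu

def tau(q, mu, n):
    """Mass exponent tau(q): partition sum Z(q) = sum_i mu_i**q over
    boxes of size 2**-n, with Z(q) ~ (2**-n)**tau(q)."""
    Z = sum(m ** q for m in mu)
    return math.log2(Z) / (-n)

n, p, q = 12, 0.3, 2.0
mu = binomial_measure(n, p)
numeric = tau(q, mu, n)
exact = -math.log2(p ** q + (1.0 - p) ** q)  # closed form for this cascade
```

For this cascade Z(q) = (p^q + (1-p)^q)^n exactly, so the box-counting estimate matches the closed form at any depth; tau(1) = 0 reflects normalization of the measure.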

    Real Option Valuation of a Portfolio of Oil Projects

    Various methodologies exist for valuing companies and their projects. We address the problem of valuing a portfolio of projects within companies that have infrequent, large and volatile cash flows. Examples of this type of company exist in oil exploration and development, and we use this example to illustrate our analysis throughout the thesis. The theoretical interest in this problem lies in modeling the sources of risk in the projects and their different interactions within each project. Initially we look at the advantages of real options analysis and compare this approach with more traditional valuation methods, highlighting the strengths and weaknesses of each approach in the light of the thesis problem. We give the background to the stages in an oil exploration and development project and identify the main common sources of risk, for example commodity prices. We discuss the appropriate representation for oil prices; in short, do oil prices behave more like equities or more like interest rates? The appropriate representation is used to model the oil price as a source of risk. A real option valuation model based on market uncertainty (in the form of oil price risk) and geological uncertainty (reserve volume uncertainty) is presented and tested for two different oil projects. Finally, a methodology to measure the inter-relationship between oil price and other sources of risk, such as interest rates, is proposed using copula methods.
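On the equities-versus-interest-rates question the abstract raises, a common mean-reverting choice for commodities is the Schwartz one-factor model, in which the log price follows an Ornstein-Uhlenbeck process (by contrast, an equity-like model would use geometric Brownian motion). A minimal path simulator under that assumption, with hypothetical parameter values:

```python
import math
import random

def simulate_ou_log_price(s0, mu, kappa, sigma, dt, steps, rng):
    """One path of a mean-reverting (Schwartz one-factor) log price:
    dx = kappa*(mu - x)*dt + sigma*dW, with spot S = exp(x)."""
    x = math.log(s0)
    path = [s0]
    for _ in range(steps):
        x += kappa * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(math.exp(x))
    return path

rng = random.Random(42)
# Hypothetical parameters: long-run level 50, reversion speed 2/yr,
# 30% volatility, weekly steps over five years.
path = simulate_ou_log_price(s0=80.0, mu=math.log(50.0), kappa=2.0,
                             sigma=0.3, dt=1.0 / 52, steps=260, rng=rng)
```

Under this model, shocks decay at rate kappa instead of persisting forever, which is the practical difference from an equity-style random walk when valuing long-dated oil projects.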

    Slingshot: cell lineage and pseudotime inference for single-cell transcriptomics.

    Background: Single-cell transcriptomics allows researchers to investigate complex communities of heterogeneous cells. It can be applied to stem cells and their descendants in order to chart the progression from multipotent progenitors to fully differentiated cells. While a variety of statistical and computational methods have been proposed for inferring cell lineages, the problem of accurately characterizing multiple branching lineages remains difficult to solve.
    Results: We introduce Slingshot, a novel method for inferring cell lineages and pseudotimes from single-cell gene expression data. In previously published datasets, Slingshot correctly identifies the biological signal for one to three branching trajectories. Additionally, our simulation study shows that Slingshot infers more accurate pseudotimes than other leading methods.
    Conclusions: Slingshot is a uniquely robust and flexible tool which combines the highly stable techniques necessary for noisy single-cell data with the ability to identify multiple trajectories. Accurate lineage inference is a critical step in the identification of dynamic temporal gene expression.
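Slingshot's published pipeline first fits a minimum spanning tree over cluster centers to get the lineage topology, then assigns pseudotimes with simultaneous principal curves. The sketch below reproduces the MST step and substitutes a crude piecewise-linear projection for the principal-curve stage; the 2-D cluster centers and cell coordinates are hypothetical toy data.

```python
import math

def mst_edges(centers):
    """Prim's algorithm: minimum spanning tree over cluster centers
    (the topology-finding stage), using plain Euclidean distances."""
    n = len(centers)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: math.dist(centers[e[0]], centers[e[1]]))
        edges.append((u, v))
        in_tree.add(v)
    return edges

def pseudotime(cell, path):
    """Arc-length position of a cell's orthogonal projection onto a
    piecewise-linear lineage path (a stand-in for principal curves)."""
    best_d, best_t, offset = None, 0.0, 0.0
    for a, b in zip(path, path[1:]):
        seg = math.dist(a, b)
        t = ((cell[0] - a[0]) * (b[0] - a[0])
             + (cell[1] - a[1]) * (b[1] - a[1])) / seg ** 2
        t = max(0.0, min(1.0, t))
        proj = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        d = math.dist(cell, proj)
        if best_d is None or d < best_d:
            best_d, best_t = d, offset + t * seg
        offset += seg
    return best_t

# Toy centers: a root, a branch point, and two tips; one lineage is the
# path root -> branch -> upper tip.
centers = [(0, 0), (2, 0), (4, 1), (4, -1)]
tree = mst_edges(centers)
t_early = pseudotime((0.5, 0.1), [(0, 0), (2, 0), (4, 1)])
t_late = pseudotime((3.5, 0.8), [(0, 0), (2, 0), (4, 1)])
```

Cells near the root project to small arc lengths and cells near a tip to large ones, which is the ordering a pseudotime is meant to recover.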

    Fundamental Discreteness Limitations of Cosmological N-Body Clustering Simulations

    We explore some of the effects that discreteness and two-body scattering may have on N-body simulations with ``realistic'' cosmological initial conditions. We use an identical subset of particles from the initial conditions for a 128^3 Particle-Mesh (PM) calculation as the initial conditions for a variety of P^3M and Tree code runs. We investigate the effect of mass resolution (the mean interparticle separation), since most ``high resolution'' codes only have high resolution in the gravitational force. Phase-insensitive two-point statistics, such as the power spectrum (autocorrelation), are somewhat affected by these variations, but phase-sensitive statistics show greater differences. Results converge at the mean interparticle separation scale of the lowest mass-resolution code. As more particles are added, but the force resolution is held constant, the P^3M and Tree runs agree more and more strongly with each other and with the PM run which had the same initial conditions. This shows that high particle density is necessary for correct time evolution, since many different results cannot all be correct. However, they do not converge to a PM run whose initial conditions continued the fluctuations to smaller scales. Our results show that ignoring these small-scale fluctuations is a major source of error on comoving scales corresponding to the missing wavelengths. This can be resolved by putting in a high particle density. Since the codes never agree well on scales below the mean comoving interparticle separation, we find little justification for quantitative predictions on this scale. Some measures vary by 50%, but others can be off by a factor of three or more. Our results suggest possible problems with the density of galaxy halos, formation of early-generation objects such as QSO absorber clouds, etc. Comment: Revised version to be published in Astrophysical Journal. One figure changed; expanded discussion, more information on code parameters. Latex, 44 pages, including 19 figures. Higher resolution versions of Figures 10-15 available at: ftp://kusmos.phsx.ukans.edu/preprints/nbod
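The mass-versus-force-resolution distinction in this abstract starts at the PM method's first step: particle mass is assigned to a grid, and the grid can be finer than the mean interparticle separation. A 1-D cloud-in-cell sketch (toy particle count and grid size, not the paper's setup):

```python
import math

def cic_density(positions, n_cells, box=1.0):
    """1-D cloud-in-cell (CIC) mass assignment on a periodic grid: each
    unit-mass particle is shared between its two nearest cell centres,
    the first step of a Particle-Mesh force calculation."""
    rho = [0.0] * n_cells
    dx = box / n_cells
    for x in positions:
        s = (x % box) / dx - 0.5     # position in cell units, cell-centred
        i = math.floor(s)
        frac = s - i
        rho[i % n_cells] += (1.0 - frac) / dx
        rho[(i + 1) % n_cells] += frac / dx
    return rho

# 8 particles on a 16-cell grid: the mean interparticle separation
# (box/8) is twice the cell size, so the mass resolution is coarser
# than the force (grid) resolution -- the mismatch discussed above.
pts = [k / 8 + 0.01 for k in range(8)]
rho = cic_density(pts, 16)
```

Total mass is conserved by construction (the two CIC weights sum to one per particle), while structure on scales below the particle spacing simply is not represented, whatever the grid can resolve.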

    From linear to non-linear scales: analytical and numerical predictions for the weak lensing convergence

    Weak lensing convergence can be used directly to map and probe the dark mass distribution in the universe. Building on earlier studies, we recall how the statistics of the convergence field are related to the statistics of the underlying mass distribution, in particular to the many-body density correlations. We describe two model-independent approximations which provide two simple methods to compute the probability distribution function, pdf, of the convergence. We apply one of these to the case where the density field can be described by a log-normal pdf. Next, we discuss two hierarchical models for the high-order correlations which allow one to perform exact calculations and evaluate the previous approximations in such specific cases. Finally, we apply these methods to a very simple model for the evolution of the density field from linear to highly non-linear scales. Comparisons with results from numerical simulations, based on a number of different realizations, show excellent agreement with our theoretical predictions. We have probed various angular scales in the numerical work and considered sources at 14 different redshifts in each of two different cosmological scenarios, an open cosmology and a flat cosmology with non-zero cosmological constant. Our simulation technique employs computations of the full 3-d shear matrices along the line of sight from the source redshift to the observer and is complementary to more popular ray-tracing algorithms. Our results therefore provide a valuable cross-check for such complementary simulation techniques, as well as for our simple analytical model, from the linear to the highly non-linear regime. Comment: 20 pages, final version published in MNRAS
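A widely used concrete form of the log-normal approximation mentioned in this abstract models y = 1 + kappa/|kappa_min| as log-normal, so the convergence is bounded below by the empty-beam value kappa_min; this is an illustrative stand-in, not necessarily the paper's exact construction, and the parameter values are hypothetical. The sketch checks normalization and the zero-mean constraint by brute-force quadrature.

```python
import math

def lognormal_convergence_pdf(kappa, kappa_min, sigma):
    """Shifted log-normal model for the weak-lensing convergence:
    y = 1 + kappa/|kappa_min| is log-normal, so kappa >= kappa_min
    (the empty-beam value); mu is fixed so that <kappa> = 0."""
    y = 1.0 + kappa / abs(kappa_min)
    if y <= 0.0:
        return 0.0
    mu = -0.5 * sigma ** 2           # makes <y> = 1, hence <kappa> = 0
    norm = 1.0 / (y * sigma * math.sqrt(2.0 * math.pi) * abs(kappa_min))
    return norm * math.exp(-(math.log(y) - mu) ** 2 / (2.0 * sigma ** 2))

# Hypothetical values: empty-beam convergence -0.05, log-variance 0.5.
kmin, sig = -0.05, 0.5
# Normalisation and mean by midpoint quadrature over the support.
dk = 1e-4
ks = [kmin + (i + 0.5) * dk for i in range(20000)]
total = sum(lognormal_convergence_pdf(k, kmin, sig) * dk for k in ks)
mean = sum(k * lognormal_convergence_pdf(k, kmin, sig) * dk for k in ks)
```

The hard lower bound and the skewed upper tail are the two qualitative features that make such a pdf a better non-linear model than a Gaussian of the same variance.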