
    PDE-Foam - a probability-density estimation method using self-adapting phase-space binning

    Probability Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. In this paper, we present a modification of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space into a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multi-dimensional phase space, minimising the variance of the signal and background densities inside the cells. The implementation of the binning algorithm, PDE-Foam, is based on the MC event-generation package Foam. We present performance results for representative examples (toy models) and discuss the dependence of the obtained results on the choice of parameters. The new PDE-Foam shows improved classification capability for small training samples and reduced classification time compared to the original PDE method based on range searching. Comment: 19 pages, 11 figures; replaced with revised version accepted for publication in NIM A and corrected typos in description of Fig. 7 and
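
    The cell-division idea above can be illustrated with a short sketch. The following is a minimal, hypothetical variance-driven binning in the spirit of the abstract, not the actual Foam/TMVA implementation: starting from one hyper-rectangle, the cell whose events show the largest spread is repeatedly split at the median of its widest coordinate until a preset number of cells is reached.

        import numpy as np

        def adaptive_bins(events, n_cells):
            """Greedily split hyper-rectangles to reduce within-cell spread (toy sketch)."""
            dim = events.shape[1]
            bounds = np.array([[events[:, d].min(), events[:, d].max()]
                               for d in range(dim)])
            cells = [(bounds, np.arange(len(events)))]
            while len(cells) < n_cells:
                # choose the cell whose events have the largest total variance
                i = max(range(len(cells)),
                        key=lambda k: events[cells[k][1]].var(axis=0).sum()
                        if len(cells[k][1]) > 1 else 0.0)
                box, idx = cells.pop(i)
                pts = events[idx]
                d = int(np.argmax(pts.var(axis=0)))   # split along the widest coordinate
                cut = float(np.median(pts[:, d]))     # split position: the median
                left, right = box.copy(), box.copy()
                left[d, 1] = cut
                right[d, 0] = cut
                cells.append((left, idx[pts[:, d] <= cut]))
                cells.append((right, idx[pts[:, d] > cut]))
            return cells

        # toy example: a 2-D Gaussian "signal" sample divided into 16 cells
        rng = np.random.default_rng(0)
        cells = adaptive_bins(rng.normal(size=(1000, 2)), 16)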

    A statistical physics perspective on criticality in financial markets

    Stock markets are complex systems exhibiting collective phenomena and particular features such as synchronization, fluctuations distributed as power-laws, non-random structures and similarity to neural networks. Such specific properties suggest that markets operate at a very special point. Financial markets are believed to be critical by analogy to physical systems, but little statistically founded evidence has been given. Through a data-based methodology and comparison to simulations inspired by the statistical physics of complex systems, we show that the Dow Jones and related index sets are not rigorously critical. However, financial systems are closer to criticality in the neighborhood of crashes. Comment: 23 pages, 19 figures
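
    As one concrete, generic way to quantify the power-law fluctuations mentioned above (an illustration only, not the methodology of the paper), the Hill estimator gives the tail exponent alpha of the return distribution, P(|r| > x) ~ x^(-alpha):

        import numpy as np

        def hill_estimator(returns, k=100):
            # tail exponent estimated from the k largest absolute returns
            x = np.sort(np.abs(returns))[::-1]          # descending order
            return k / np.sum(np.log(x[:k] / x[k]))     # threshold: (k+1)-th largest

        rng = np.random.default_rng(1)
        r = rng.standard_t(df=3, size=10_000)           # toy heavy-tailed "returns"
        print(f"estimated tail exponent: {hill_estimator(r):.2f}")   # roughly 3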

    Rejuvenating Power Spectra II: the Gaussianized galaxy density field

    We find that, even in the presence of discreteness noise, a Gaussianizing transform (producing a more-Gaussian one-point distribution) reduces nonlinearities in the power spectra of cosmological matter and galaxy density fields, in many cases drastically. Although Gaussianization does increase the effective shot noise, it also increases the power spectrum's fidelity to the linear power spectrum on scales where the shot noise is negligible. Gaussianizing also increases the Fisher information in the power spectrum in all cases and at all resolutions, although the gains are smaller in redshift space than in real space. We also find that the gain in cumulative Fisher information from Gaussianizing peaks at a particular grid resolution that depends on the sampling level. Comment: Slight changes to match version accepted to ApJ. 7 pages, 8 figures
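
    A Gaussianizing transform of this kind can be sketched as a rank-order remapping of the one-point distribution (one common construction; the paper's exact transform may differ): each density value is replaced by the standard-normal quantile of its rank.

        import numpy as np
        from scipy.stats import norm

        def gaussianize(field):
            # replace each value by the Gaussian quantile of its rank
            flat = field.ravel()
            ranks = np.argsort(np.argsort(flat))            # 0 .. N-1
            quantiles = (ranks + 0.5) / flat.size           # keep away from 0 and 1
            return norm.ppf(quantiles).reshape(field.shape)

        # toy lognormal "density" field on a 64^3 grid
        rng = np.random.default_rng(2)
        delta = np.exp(rng.normal(size=(64, 64, 64))) - 1.0
        delta_g = gaussianize(delta)    # one-point PDF is now (nearly) Gaussian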

    Effects of the Generation Size and Overlap on Throughput and Complexity in Randomized Linear Network Coding

    To reduce computational complexity and delay in randomized network coded content distribution, and for some other practical reasons, coding is not performed simultaneously over all content blocks, but over much smaller, possibly overlapping subsets of these blocks, known as generations. A penalty of this strategy is throughput reduction. To analyze the throughput loss, we model coding over generations with random generation scheduling as a coupon collector's brotherhood problem. This model enables us to derive the expected number of coded packets needed for successful decoding of the entire content as well as the probability of decoding failure (the latter only when generations do not overlap) and further, to quantify the tradeoff between computational complexity and throughput. Interestingly, with a moderate increase in the generation size, throughput quickly approaches link capacity. Overlaps between generations can further improve throughput substantially for relatively small generation sizes. Comment: To appear in IEEE Transactions on Information Theory Special Issue: Facets of Coding Theory: From Algorithms to Networks, Feb 201
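
    The coupon-collector's-brotherhood view of random generation scheduling lends itself to a quick Monte-Carlo check. The sketch below (an illustration under simplifying assumptions, not the paper's analysis) treats non-overlapping generations of size g, draws each coded packet from a uniformly random generation, assumes every packet is innovative (i.e. a large field size), and counts packets until every generation has collected g of them.

        import numpy as np

        def packets_until_decoding(n_generations, g, rng):
            received = np.zeros(n_generations, dtype=int)
            sent = 0
            while (received < g).any():
                received[rng.integers(n_generations)] += 1
                sent += 1
            return sent

        rng = np.random.default_rng(3)
        trials = [packets_until_decoding(n_generations=32, g=16, rng=rng)
                  for _ in range(200)]
        # overhead relative to the 32 * 16 = 512 packets strictly required
        print(f"mean overhead: {np.mean(trials) / 512 - 1:.1%}")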

    Using neutral cline decay to estimate contemporary dispersal: a generic tool and its application to a major crop pathogen

    Dispersal is a key parameter of adaptation, invasion and persistence. Yet standard population genetics inference methods hardly distinguish it from drift, and many species cannot be studied by direct mark-recapture methods. Here, we introduce a method using rates of change in cline shapes for neutral markers to estimate contemporary dispersal. We apply it to the devastating banana pest Mycosphaerella fijiensis, a wind-dispersed fungus for which a secondary contact zone had previously been detected using landscape genetics tools. By tracking the spatio-temporal frequency change of 15 microsatellite markers, we find that σ, the standard deviation of parent–offspring dispersal distances, is 1.2 km/generation^(1/2). The analysis is further shown to be robust to a large range of dispersal kernels. We conclude that combining landscape genetics approaches to detect breaks in allelic frequencies with analyses of changes in neutral genetic clines offers a powerful way to obtain ecologically relevant estimates of dispersal in many species.
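
    The core relation behind cline-decay estimates of dispersal can be sketched as follows (a simplified illustration under a diffusion approximation, with hypothetical numbers, not the authors' inference machinery): a neutral cline formed at secondary contact widens as w(t) = σ·sqrt(2πt), where w is the inverse of the maximum allele-frequency slope, so σ follows from the change in width between two sampling dates.

        import numpy as np

        def sigma_from_cline_decay(w1, w2, generations_elapsed):
            # w1, w2: widths (km) of the same neutral cline at two sampling dates
            # separated by the given number of generations, with w2 > w1
            return np.sqrt((w2**2 - w1**2) / (2.0 * np.pi * generations_elapsed))

        # hypothetical widths and time gap, for illustration only
        print(sigma_from_cline_decay(w1=8.0, w2=12.0, generations_elapsed=10))
        # about 1.1 km per sqrt(generation)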

    Intelligent sampling for the measurement of structured surfaces

    Uniform sampling in metrology has known drawbacks, such as coherent spectral aliasing and a lack of efficiency in terms of measuring time and data storage. The need for intelligent sampling strategies has been highlighted in recent years, particularly where the measurement of structured surfaces is concerned. Most of the present research on intelligent sampling has focused on dimensional metrology using coordinate-measuring machines, with little reported in the area of surface metrology. In the research reported here, potential intelligent sampling strategies for surface topography measurement of structured surfaces are investigated using numerical simulation and experimental verification. The methods include the jittered uniform method, low-discrepancy pattern sampling and several adaptive methods which originate from computer graphics, coordinate metrology and previous research by the authors. By combining the use of advanced reconstruction methods and feature-based characterization techniques, the measurement performance of the sampling methods is studied using case studies. The advantages, stability and feasibility of these techniques for practical measurements are discussed.
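
    Two of the non-adaptive patterns named above can be generated in a few lines (generic constructions over a unit measurement area, not the authors' specific implementations): a jittered-uniform pattern, which places one random point in each cell of a regular grid, and a Halton low-discrepancy pattern built from van der Corput sequences.

        import numpy as np

        def jittered_uniform(n, rng):
            # one random point inside each cell of an n x n grid over [0, 1)^2
            i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            x = (i + rng.random((n, n))) / n
            y = (j + rng.random((n, n))) / n
            return np.column_stack([x.ravel(), y.ravel()])

        def van_der_corput(n_points, base):
            # one coordinate of a Halton low-discrepancy pattern
            out = np.zeros(n_points)
            for k in range(1, n_points + 1):
                f, kk, v = 1.0, k, 0.0
                while kk > 0:
                    f /= base
                    v += f * (kk % base)
                    kk //= base
                out[k - 1] = v
            return out

        rng = np.random.default_rng(4)
        jitter_pts = jittered_uniform(16, rng)                         # 256 points
        halton_pts = np.column_stack([van_der_corput(256, 2),
                                      van_der_corput(256, 3)])         # 256 points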