
    Slow Convergence in Bootstrap Percolation

    In the bootstrap percolation model, sites in an L by L square are initially infected independently with probability p. At subsequent steps, a healthy site becomes infected if it has at least 2 infected neighbours. As (L,p) -> (infinity,0), the probability that the entire square is eventually infected is known to undergo a phase transition in the parameter p log L, occurring asymptotically at lambda = pi^2/18. We prove that the discrepancy between the critical parameter and its limit lambda is at least Omega((log L)^(-1/2)). In contrast, the critical window has width only Theta((log L)^(-1)). For the so-called modified model, we prove rigorous explicit bounds which imply, for example, that the relative discrepancy is at least 1% even when L = 10^3000. Our results shed some light on the observed differences between simulations and rigorous asymptotics. Comment: 22 pages, 3 figures
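The cellular dynamics described above are straightforward to simulate directly. Below is a minimal Monte Carlo sketch (our own illustration, not the paper's code; the function name and the choice of open boundary conditions are ours):

```python
import numpy as np

def bootstrap_percolates(L, p, rng):
    """Run 2-neighbour bootstrap percolation on an L x L square.

    Sites are initially infected independently with probability p;
    a healthy site becomes infected once it has >= 2 infected
    nearest neighbours.  Returns True if the whole square ends up
    infected."""
    infected = rng.random((L, L)) < p
    while True:
        # count infected nearest neighbours of every site
        nbrs = np.zeros((L, L), dtype=int)
        nbrs[1:, :] += infected[:-1, :]
        nbrs[:-1, :] += infected[1:, :]
        nbrs[:, 1:] += infected[:, :-1]
        nbrs[:, :-1] += infected[:, 1:]
        new = infected | (nbrs >= 2)
        if np.array_equal(new, infected):   # fixed point reached
            return bool(infected.all())
        infected = new
```

Averaging this indicator over many random initial configurations, for a range of p at fixed L, is exactly the kind of simulation whose apparent critical value converges only very slowly toward lambda = pi^2/18.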

    Wavelet Analysis and Denoising: New Tools for Economists

    This paper surveys the techniques of wavelet analysis and the associated methods of denoising. The Discrete Wavelet Transform and its undecimated version, the Maximal Overlap Discrete Wavelet Transform, are described. The methods of wavelet analysis can be used to show how the frequency content of the data varies with time. This allows us to pinpoint in time such events as major structural breaks. The sparse nature of the wavelet representation also facilitates the process of noise reduction by nonlinear wavelet shrinkage, which can be used to reveal the underlying trends in economic data. An application of these techniques to UK real GDP (1873-2001) is described. The purpose of the analysis is to reveal the true structure of the data - including its local irregularities and abrupt changes - and the results are surprising. Keywords: Wavelets, Denoising, Structural breaks, Trend estimation
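As an illustration of the shrinkage step, here is a minimal sketch using only the Haar wavelet and the Donoho-Johnstone universal threshold (the function names, the choice of wavelet, and the threshold rule are ours; the paper's GDP analysis may use different wavelets and shrinkage rules):

```python
import numpy as np

def haar_dwt(x):
    # one level of the orthonormal Haar DWT
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    return s, d

def haar_idwt(s, d):
    # inverse of one Haar level
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def denoise(x, levels=3):
    """Multi-level Haar decomposition + soft wavelet shrinkage.

    Length of x must be divisible by 2**levels."""
    details = []
    s = np.asarray(x, dtype=float)
    for _ in range(levels):
        s, d = haar_dwt(s)
        details.append(d)                  # finest details first
    # noise level from the finest details, universal threshold
    sigma = np.median(np.abs(details[0])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(x)))
    details = [np.sign(d) * np.maximum(np.abs(d) - t, 0) for d in details]
    for d in reversed(details):            # rebuild, coarsest first
        s = haar_idwt(s, d)
    return s
```

Because most of a smooth trend lives in a few large coefficients while noise is spread thinly across all of them, zeroing the small detail coefficients removes noise while leaving local irregularities and abrupt changes largely intact.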

    Effects of local event-by-event conservation laws in ultrarelativistic heavy-ion collisions at particlization

    Many simulations of relativistic heavy-ion collisions involve switching from relativistic hydrodynamics to kinetic particle transport. This switching entails sampling particles from the distribution of energy, momentum, and conserved currents provided by hydrodynamics. Usually, this sampling ensures the conservation of these quantities only on average, i.e., the conserved quantities may actually fluctuate among the sampled particle configurations, and only their averages over many such configurations agree with their values from hydrodynamics. Here we apply a recently introduced method [D. Oliinychenko and V. Koch, Phys. Rev. Lett. 123, 182302 (2019)] to ensure conservation laws for each sampled configuration in spatially compact regions (patches) and study their effects: from the well-known (micro-)canonical suppression of means and variances to little-studied (micro-)canonical correlations and higher-order fluctuations. Most of these effects are sensitive to the patch size. Many of them do not disappear even in the thermodynamic limit, when the patch size goes to infinity. The developed method is essential for the particlization of stochastic hydrodynamics. It is useful for studying the chiral magnetic effect, small systems, and, in general, fluctuation and correlation observables.
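A toy illustration of the difference between conserving quantities on average and in every configuration: two particle species of charge +1 and -1, with the net charge forced to zero event by event via rejection (the names and the rejection scheme are our simplification; the method of Oliinychenko and Koch enforces energy, momentum, and all conserved currents in each patch):

```python
import numpy as np

def sample_grand_canonical(mean, n_events, rng):
    # multiplicities fluctuate freely; conservation holds only on average
    return rng.poisson(mean, size=n_events)

def sample_canonical_pairs(mean, n_events, rng, target_charge=0):
    """Keep only configurations whose net charge (n_plus - n_minus)
    equals target_charge exactly, and record n_plus for each."""
    kept = []
    while len(kept) < n_events:
        n_plus = rng.poisson(mean)
        n_minus = rng.poisson(mean)
        if n_plus - n_minus == target_charge:
            kept.append(n_plus)
    return np.array(kept)
```

Comparing the two samples shows the canonical suppression the abstract refers to: the event-by-event constraint narrows the multiplicity distribution relative to the unconstrained (grand-canonical) one.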

    An open-system approach for the characterization of spatio-temporal chaos

    We investigate the structure of the invariant measure of space-time chaos by adopting an "open-system" point of view. We consider large but finite windows of formally infinite one-dimensional lattices and quantify the effect of the interaction with the outer region by mapping the problem onto the dynamical characterization of localized perturbations. This latter task is performed by suitably generalizing the concept of Lyapunov spectrum to cope with perturbations that propagate outside the region under investigation. As a result, we are able to introduce a "volume"-propagation velocity, i.e. the velocity with which ensembles of localized perturbations tend to fill volumes in the neighbouring regions. Comment: Submitted to J. Stat. Phys.; 26 pages, 7 EPS figures included. Keywords: High-dimensional chaos; Fractals; Coupled map lattices; Numerical simulations of chaotic models
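The basic mechanism, a localized perturbation propagating into neighbouring regions, can be illustrated on a coupled map lattice, the model class named in the keywords (parameter values and function names below are our own choices, not the paper's setup):

```python
import numpy as np

def cml_step(x, eps=1/3, r=4.0):
    """One step of a diffusively coupled lattice of logistic maps."""
    f = r * x * (1 - x)                       # chaotic local map
    return (1 - eps) * f + (eps / 2) * (np.roll(f, 1) + np.roll(f, -1))

def perturbation_width(n_sites=201, n_steps=60, delta=1e-8, thresh=1e-6):
    """Perturb one site and count how many sites the perturbation
    has reached (above `thresh`) after n_steps."""
    rng = np.random.default_rng(2)
    x = rng.random(n_sites)
    for _ in range(100):                      # relax onto the attractor
        x = cml_step(x)
    y = x.copy()
    y[n_sites // 2] += delta                  # localized perturbation
    for _ in range(n_steps):
        x, y = cml_step(x), cml_step(y)
    return int(np.count_nonzero(np.abs(x - y) > thresh))
```

Tracking this count as a function of time (and averaging over ensembles of perturbations) gives a finite spreading velocity, which is the kind of quantity the "volume"-propagation velocity generalizes.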

    Image segmentation in the wavelet domain using N-cut framework

    We introduce a wavelet-domain image segmentation algorithm based on the Normalized Cut (NCut) framework in this thesis. By employing the NCut algorithm we solve the perceptual grouping problem of image segmentation, which aims at the extraction of the global impression of an image. We capitalize on the reduced set of data to be processed and on statistical features derived from the wavelet-transformed images to solve the graph partitioning more efficiently than before. Five orientation histograms are computed to evaluate similarity/dissimilarity measures of local structure. We use the filtering properties of the wavelet transform to capture edge information in the vertical, horizontal and diagonal orientations. This approach allows for direct processing of compressed data and results in a faster implementation of the NCut framework than in the spatial domain, while retaining decent segmentation quality on natural scene images.
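The graph-partitioning core of the NCut framework (Shi and Malik) fits in a few lines: threshold the second smallest generalized eigenvector of (D - W) y = lambda D y. In the thesis's setting the similarity matrix W would be built from the wavelet-domain orientation histograms; the toy W in the test below is ours:

```python
import numpy as np

def ncut_bipartition(W):
    """Two-way Normalized Cut on a symmetric similarity matrix W.

    Solves the generalized eigenproblem (D - W) y = lambda D y via the
    symmetrically normalized Laplacian and thresholds the second
    eigenvector at its median.  Returns a boolean label per node."""
    d = W.sum(axis=1)                        # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)       # eigenvalues ascending
    y = D_inv_sqrt @ vecs[:, 1]              # second smallest eigenvector
    return y > np.median(y)
```

Because the eigendecomposition dominates the cost, working on the reduced wavelet-domain data (fewer nodes, hence a smaller W) is precisely what makes the wavelet-domain variant faster than the spatial-domain one.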

    ECG Signal Compression Using Discrete Wavelet Transform


    On the critical exponents of random k-SAT

    There has been much recent interest in the satisfiability of random Boolean formulas. A random k-SAT formula is the conjunction of m random clauses, each of which is the disjunction of k literals (a variable or its negation). It is known that when the number of variables n is large, there is a sharp transition from satisfiability to unsatisfiability; in the case of 2-SAT this happens when m/n -> 1, while for 3-SAT the critical ratio is thought to be m/n ~ 4.2. The sharpness of this transition is characterized by a critical exponent, sometimes called \nu = \nu_k (the smaller the value of \nu, the sharper the transition). Experiments have suggested that \nu_3 = 1.5 +- 0.1, \nu_4 = 1.25 +- 0.05, \nu_5 = 1.1 +- 0.05, \nu_6 = 1.05 +- 0.05, and heuristics have suggested that \nu_k -> 1 as k -> infinity. We give here a simple proof that each of these exponents is at least 2 (provided the exponent is well defined). This result holds for each of the three standard ensembles of random k-SAT formulas: m clauses selected uniformly at random without replacement, m clauses selected uniformly at random with replacement, and each clause selected with probability p independently of the other clauses. We also obtain similar results for q-colorability and the appearance of a q-core in a random graph. Comment: 11 pages. v2 has revised introduction and updated references
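For small n the transition is easy to observe directly by brute force (a sketch with illustrative function names, not the paper's experimental setup; this uses the with-replacement clause ensemble):

```python
import itertools
import random

def random_ksat(n, m, k, rng):
    """m random clauses over n variables; each clause picks k distinct
    variables and negates each with probability 1/2.  Literals are
    signed integers: +v for x_v, -v for NOT x_v."""
    clauses = []
    for _ in range(m):
        vars_ = rng.sample(range(1, n + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

def is_satisfiable(n, clauses):
    # exhaustive search over all 2^n assignments -- small n only
    for bits in itertools.product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in c)
               for c in clauses):
            return True
    return False

def sat_fraction(n, ratio, trials=20, rng=None):
    """Empirical probability of satisfiability at clause density m/n = ratio."""
    rng = rng or random.Random(0)
    m = round(ratio * n)
    return sum(is_satisfiable(n, random_ksat(n, m, 3, rng))
               for _ in range(trials)) / trials
```

Plotting sat_fraction against the ratio m/n for several n is the standard experiment behind the quoted estimates of \nu_k: the curves cross near the critical ratio, and how quickly they steepen with n determines the exponent.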