668 research outputs found

    Limit theorems for random point measures generated by cooperative sequential adsorption

We consider a finite sequence of random points in a finite domain of a finite-dimensional Euclidean space. The points are sequentially allocated in the domain according to a model of cooperative sequential adsorption. The main peculiarity of the model is that the probability distribution of each point depends on the previously allocated points. We assume that this dependence vanishes as the concentration of points tends to infinity. Under this assumption, the law of large numbers, the central limit theorem and a Poisson approximation are proved for the generated sequence of random point measures.
Comment: 17 pages
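
To make the adsorption mechanism concrete, here is a minimal simulation sketch in the spirit of the model described above, assuming a simple attractive interaction in the unit square; the interaction kernel, the strength beta and the radius r are illustrative choices, not the paper's actual specification.

```python
# Toy cooperative-sequential-adsorption sampler: the (unnormalised) density of
# the next point is 1 + beta * (# of earlier points within radius r), so the
# influence of history weakens relative to the total as points accumulate.
import numpy as np

rng = np.random.default_rng(0)

def csa_sample(n, beta=5.0, r=0.05):
    points = np.empty((n, 2))
    for k in range(n):
        bound = 1.0 + beta * k  # crude upper bound for rejection sampling
        while True:
            x = rng.random(2)
            if k == 0:
                break
            near = np.sum(np.linalg.norm(points[:k] - x, axis=1) < r)
            if rng.random() * bound <= 1.0 + beta * near:
                break
        points[k] = x
    return points

pts = csa_sample(500)
print(pts.mean(axis=0))  # empirical centre of mass of the allocated points
```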

    Optimization Under Uncertainty Using the Generalized Inverse Distribution Function

A framework for robust optimization under uncertainty based on the generalized inverse distribution function (GIDF), also called the quantile function, is proposed here. Compared to more classical approaches that rely on statistical moments as deterministic attributes defining the objectives of the optimization process, the inverse cumulative distribution function allows the use of all the information available in the probabilistic domain. Furthermore, a quantile-based approach leads naturally to a multi-objective methodology that allows an a posteriori selection of the candidate design based on risk/opportunity criteria defined by the designer. Finally, the error in the estimation of the objectives due to the resolution of the GIDF is shown to be quantifiable.
Comment: 20 pages, 25 figures
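
As an illustration of the quantile-based formulation, the sketch below evaluates a toy design under a scalar uncertainty and returns several empirical quantiles of the performance metric as separate objectives; the objective f, the Gaussian uncertainty model and the quantile levels are assumptions for illustration, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(design, xi):
    # Hypothetical performance metric under an uncertain parameter xi.
    return (design - 1.0) ** 2 + 0.5 * xi * design

def quantile_objectives(design, n_samples=10_000, levels=(0.1, 0.5, 0.9)):
    xi = rng.normal(size=n_samples)      # assumed uncertainty model
    values = f(design, xi)
    # Empirical generalized inverse distribution function (quantile function).
    return np.quantile(values, levels)

for d in (0.5, 1.0, 1.5):
    q10, q50, q90 = quantile_objectives(d)
    print(f"design={d}: q10={q10:.3f} median={q50:.3f} q90={q90:.3f}")
```

Each quantile level can then serve as one objective in a multi-objective search, with low quantiles capturing opportunity and high quantiles capturing risk.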

    Model selection in High-Dimensions: A Quadratic-risk based approach

In this article we propose a general class of risk measures which can be used for data-based evaluation of parametric models. The loss function is defined as a generalized quadratic distance between the true density and the proposed model. These distances are characterized by a simple quadratic-form structure that is adaptable through the choice of a nonnegative definite kernel and a bandwidth parameter. Using asymptotic results for the quadratic distances, we build a quick-to-compute approximation of the risk function. Its derivation is analogous to the Akaike Information Criterion (AIC), but unlike AIC, the quadratic risk is a global comparison tool. The method does not require resampling, a great advantage when point estimators are expensive to compute. The method is illustrated on the problem of selecting the number of components in a mixture model, where it is shown that, with an appropriate kernel, the method is computationally straightforward in arbitrarily high data dimensions. In this same context it is shown that the method has clear advantages over AIC and BIC.
Comment: Updated with reviewer suggestions
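
The quadratic-distance idea can be sketched with a sample-based estimate: an MMD-style V-statistic with a Gaussian kernel, comparing the data against a sample drawn from the fitted model. The kernel, the bandwidth h and the Gaussian example are illustrative; the paper's risk criterion additionally includes an AIC-like penalty term not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss_kernel(x, y, h):
    # Gaussian kernel matrix between two 1-D samples.
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / h) ** 2)

def quadratic_distance(data, model_sample, h=0.5):
    # V-statistic estimate of the kernel quadratic distance.
    kxx = gauss_kernel(data, data, h).mean()
    kxy = gauss_kernel(data, model_sample, h).mean()
    kyy = gauss_kernel(model_sample, model_sample, h).mean()
    return kxx - 2.0 * kxy + kyy

data = rng.normal(0.0, 1.0, size=500)             # observed sample
fit = rng.normal(data.mean(), data.std(), 5000)   # sample from fitted N(mu, sigma)
print(quadratic_distance(data, fit))
```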

    Stochastic Flux-Freezing and Magnetic Dynamo

We argue that magnetic flux-conservation in turbulent plasmas at high magnetic Reynolds numbers neither holds in the conventional sense nor is entirely broken, but instead is valid in a novel statistical sense associated with the "spontaneous stochasticity" of Lagrangian particle trajectories. The latter phenomenon is due to the explosive separation of particles undergoing turbulent Richardson diffusion, which leads to a breakdown of Laplacian determinism for classical dynamics. We discuss empirical evidence for spontaneous stochasticity, including our own new numerical results. We then use a Lagrangian path-integral approach to establish stochastic flux-freezing for the resistive hydromagnetic equations and to argue, based on the properties of Richardson diffusion, that flux-conservation must remain stochastic at infinite magnetic Reynolds number. As an important application of these results we consider the kinematic fluctuation dynamo in non-helical, incompressible turbulence at unit magnetic Prandtl number. We present results on the Lagrangian dynamo mechanisms, obtained by a stochastic particle method, which demonstrate a strong similarity between the Pr = 1 and Pr = 0 dynamos. Stochasticity of field-line motion is an essential ingredient of both. We finally consider briefly some consequences for nonlinear MHD turbulence, dynamo and reconnection.
Comment: 29 pages, 10 figures
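
For intuition about Richardson diffusion and the loss of memory of the initial separation, here is a minimal one-dimensional sketch: a diffusion with the scale-dependent Richardson diffusivity D(r) = k r^(4/3), integrated by Euler-Maruyama with the Itô drift D'(r) implied by the Fokker-Planck form. The constant k, the time step and the initial separations are illustrative, not values from the simulations cited above.

```python
import numpy as np

rng = np.random.default_rng(3)

def richardson(r0, k=1.0, dt=1e-4, steps=20_000, n=2000):
    # dr = D'(r) dt + sqrt(2 D(r)) dW with D(r) = k r^(4/3), reflected at 0.
    r = np.full(n, r0)
    for _ in range(steps):
        D = k * r ** (4.0 / 3.0)
        drift = (4.0 / 3.0) * k * r ** (1.0 / 3.0)
        r = np.abs(r + drift * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n))
    return r

# The mean-square separation forgets the initial separation r0 ("spontaneous
# stochasticity"): both runs approach essentially the same value.
for r0 in (1e-6, 1e-3):
    print(f"r0={r0:.0e}: <r^2> = {np.mean(richardson(r0) ** 2):.3f}")
```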

    Pareto versus lognormal: a maximum entropy test

It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution in the last few percentiles. The distributions of many physical, natural, and social quantities (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.
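
A crude stand-in for such a tail comparison (a plain likelihood comparison, not the paper's maximum entropy statistic) is to fit a Pareto exponent above a threshold by maximum likelihood and compare tail log-likelihoods against a lognormal fitted to the whole sample; the spliced synthetic data and the top-5% threshold below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic data: lognormal body with a Pareto tail spliced on above the
# 95th percentile, mimicking the structure described in the abstract.
body = rng.lognormal(mean=0.0, sigma=1.0, size=9500)
xmin = np.quantile(body, 0.95)
tail = (rng.pareto(2.5, size=500) + 1.0) * xmin
x = np.concatenate([body[body <= xmin], tail])

xt = x[x > xmin]

# Power-law tail: Hill/MLE estimate of the exponent, tail log-likelihood
# (the -log(xmin) term rescales the density back to the original variable).
alpha = len(xt) / np.sum(np.log(xt / xmin))
ll_pareto = np.sum(stats.pareto.logpdf(xt / xmin, b=alpha) - np.log(xmin))

# Lognormal fitted to the full sample, likelihood renormalised to x > xmin.
s, loc, scale = stats.lognorm.fit(x, floc=0)
ll_lognorm = np.sum(stats.lognorm.logpdf(xt, s, loc, scale)
                    - stats.lognorm.logsf(xmin, s, loc, scale))

print(f"tail log-likelihood: Pareto={ll_pareto:.1f}, lognormal={ll_lognorm:.1f}")
```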

    Finite size effects and the order of a phase transition in fragmenting nuclear systems

We discuss the implications of finite size effects for determining the order of a phase transition which may occur in infinite systems. We introduce a specific model to which we apply different tests, aimed at characterising the smoothed transition observed in a finite system. We show that the microcanonical ensemble may be a useful framework for determining the nature of such transitions.
Comment: LaTeX, 5 pages, 5 figures; Fig. 1 changed
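
One standard finite-size diagnostic in this microcanonical setting is the caloric curve: a convex intruder in the entropy S(E) makes T(E) = (dS/dE)^(-1) backbend and the microcanonical heat capacity turn negative, a first-order-like signal in a finite system. The toy entropy below is illustrative, not the paper's specific model.

```python
import numpy as np

# Toy microcanonical entropy with a convex intruder (the Gaussian bump).
E = np.linspace(0.1, 10.0, 1000)
S = 4.0 * np.sqrt(E) + np.exp(-0.5 * (E - 5.0) ** 2)

T = 1.0 / np.gradient(S, E)     # microcanonical temperature
C = 1.0 / np.gradient(T, E)     # microcanonical heat capacity dE/dT

print("backbending caloric curve:", bool(np.any(np.diff(T) < 0)))
print("negative heat capacity region:", bool(np.any(C < 0)))
```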

    Minkowski distances and standardisation for clustering and classification of high dimensional data

There are many distance-based methods for classification and clustering, and for data with a high number of dimensions and a lower number of observations, processing distances is computationally advantageous compared to working with the raw data matrix. Euclidean distances are the default for continuous multivariate data, but there are alternatives. Here the so-called Minkowski distances, the $L_1$ (city block), $L_2$ (Euclidean), $L_3$, $L_4$, and maximum distances, are combined with different schemes of standardisation of the variables before aggregating them. The boxplot transformation is proposed, a new transformation method for a single variable that standardises the majority of observations but brings outliers closer to the main bulk of the data. Distances are compared in simulations for clustering by partitioning around medoids, complete and average linkage, and classification by nearest neighbours, on data with a low number of observations but high dimensionality. The $L_1$ distance and the boxplot transformation show good results.
Comment: Preliminary version; final version to be published by Springer, using Springer's svmult LaTeX style
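
A minimal sketch of the two ingredients discussed above: per-variable robust standardisation that tames outliers, and Minkowski ($L_p$) distances. The outlier compression used here (log shrinkage beyond the boxplot whiskers) is a simplified variant for illustration, not the paper's exact boxplot transformation.

```python
import numpy as np

def boxplot_transform(x):
    # Robust standardisation by median and IQR, then log-compress values
    # beyond the whiskers so outliers move closer to the bulk of the data.
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    z = (x - med) / (q3 - q1)
    lo, hi = -1.5, 1.5
    z = np.where(z > hi, hi + np.log1p(z - hi), z)
    z = np.where(z < lo, lo - np.log1p(lo - z), z)
    return z

def minkowski(a, b, p):
    if np.isinf(p):
        return np.max(np.abs(a - b))       # maximum (Chebyshev) distance
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

rng = np.random.default_rng(5)
X = rng.normal(size=(20, 1000))            # few observations, many dimensions
X[0, :5] += 50.0                           # plant a few gross outliers
Xt = np.apply_along_axis(boxplot_transform, 0, X)   # transform each variable
print(minkowski(Xt[0], Xt[1], 1), minkowski(Xt[0], Xt[1], np.inf))
```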

    Breakup Density in Spectator Fragmentation

Proton-proton correlations and correlations of protons, deuterons and tritons with alpha particles from spectator decays following 197Au + 197Au collisions at 1000 MeV per nucleon have been measured with two highly efficient detector hodoscopes. The constructed correlation functions, interpreted within the approximation of a simultaneous volume decay, indicate a moderate expansion and low breakup densities, similar to assumptions made in statistical multifragmentation models.
PACS numbers: 25.70.Pq, 21.65.+f, 25.70.Mn, 25.75.Gz
Comment: 11 pages, LaTeX with 3 included figures; also available from http://www-kp3.gsi.de/www/kp3/aladin_publications.htm
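
The core construction behind such measurements, the two-particle correlation function, can be sketched as the ratio of same-event to mixed-event pair distributions in relative momentum. The toy events below are illustrative placeholders; a real analysis includes detector acceptance and efficiency corrections.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "events": a handful of isotropic proton momenta (MeV/c) per event.
events = [rng.normal(0.0, 100.0, size=(rng.integers(5, 12), 3))
          for _ in range(500)]

def half_rel_momentum(p1, p2):
    return 0.5 * np.linalg.norm(p1 - p2)

same, mixed = [], []
for i, ev in enumerate(events):
    for a in range(len(ev)):
        for b in range(a + 1, len(ev)):
            same.append(half_rel_momentum(ev[a], ev[b]))
    # Event mixing: pairing particles from different events removes genuine
    # two-particle correlations and serves as the uncorrelated reference.
    for p1 in ev:
        for p2 in events[(i + 1) % len(events)]:
            mixed.append(half_rel_momentum(p1, p2))

bins = np.linspace(0.0, 200.0, 41)
num, _ = np.histogram(same, bins)
den, _ = np.histogram(mixed, bins)
C = (num / np.maximum(den, 1)) * (len(mixed) / len(same))
print(C[:5])  # flat near 1 here, since the toy events carry no correlations
```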

    Mass dependence of light nucleus production in ultrarelativistic heavy ion collisions

Light nuclei can be produced in the central reaction zone via coalescence in relativistic heavy ion collisions. E864 at BNL has measured the production of ten light nuclei with mass numbers from A = 1 to A = 7 at rapidity $y \simeq 1.9$ and $p_T/A \leq 300$ MeV/c. Data were taken with a Au beam at a momentum of 11.5A GeV/c on a Pb or Pt target with different experimental settings. The invariant yields show a striking exponential dependence on mass number, with a penalty factor of about 50 per additional nucleon. Detailed analysis reveals that the production may also depend on the spin factor of the nucleus and on the nuclear binding energy.
Comment: 6 pages, 3 figures; some changes to the text, references and figure lettering. To be published in PRL (13 Dec 1999)
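
The quoted penalty factor is just the slope of the yields on a log scale versus mass number; the sketch below recovers it from synthetic yields constructed to fall by a factor of about 50 per added nucleon (not the E864 data).

```python
import numpy as np

A = np.arange(1, 8)
penalty = 50.0
Y = 10.0 * penalty ** (-(A - 1))   # synthetic invariant yields, factor ~50 per nucleon

# Fit log Y(A) = intercept + slope * A; the penalty factor is exp(-slope).
slope, intercept = np.polyfit(A, np.log(Y), 1)
print(f"fitted penalty factor: {np.exp(-slope):.1f}")  # ~50
```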

    Tight Finite-Key Analysis for Quantum Cryptography

Despite enormous progress in both theoretical and experimental quantum cryptography, the security of most current implementations of quantum key distribution is still not established rigorously. One of the main problems is that the security of the final key is highly dependent on the number, M, of signals exchanged between the legitimate parties. While, in any practical implementation, M is limited by the available resources, existing security proofs are often only valid asymptotically, for unrealistically large values of M. Here, we demonstrate that this gap between theory and practice can be overcome using a recently developed proof technique based on the uncertainty relation for smooth entropies. Specifically, we consider a family of Bennett-Brassard 1984 quantum key distribution protocols and show that security against general attacks can be guaranteed already for moderate values of M.
Comment: 11 pages, 2 figures
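
To see how the key length depends on the block size M, here is a simplified finite-key estimate in the spirit of smooth-entropy analyses: the error rate acquires a statistical penalty shrinking as 1/sqrt(M), plus fixed costs for the security parameter. The constants and the error-correction efficiency are illustrative, not the paper's exact bound.

```python
import numpy as np

def h2(p):
    # Binary entropy function.
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def key_length(M, qber=0.02, eps=1e-10, f_ec=1.1):
    mu = np.sqrt(np.log(1.0 / eps) / (2.0 * M))   # finite-statistics penalty
    if qber + mu >= 0.5:
        return 0.0
    ell = (M * (1.0 - h2(qber + mu))       # privacy amplification term
           - f_ec * M * h2(qber)           # error-correction leakage
           - 4.0 * np.log2(1.0 / eps))     # fixed security-parameter cost
    return max(ell, 0.0)

for M in (10**3, 10**4, 10**5, 10**6):
    print(f"M={M:>8}: secret fraction = {key_length(M) / M:.3f}")
```

The secret fraction is heavily suppressed at small M and approaches its asymptotic value only as the statistical penalty vanishes, which is the gap between theory and practice the abstract refers to.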