
    Dissections, orientations, and trees, with applications to optimal mesh encoding and to random sampling

    We present a bijection between certain quadrangular dissections of a hexagon and unrooted binary trees, with interesting consequences for enumeration, mesh compression, and graph sampling. Our bijection yields an efficient uniform random sampler for 3-connected planar graphs, which turns out to be decisive for the quadratic complexity of the current best known uniform random sampler for labelled planar graphs [Fusy, Analysis of Algorithms 2005]. It also provides an encoding for the set $\mathcal{P}(n)$ of $n$-edge 3-connected planar graphs that matches the entropy bound $\frac{1}{n}\log_2|\mathcal{P}(n)| = 2 + o(1)$ bits per edge (bpe). This solves a theoretical problem recently raised in mesh compression, as these graphs abstract the combinatorial part of meshes with spherical topology. We also achieve the optimal parametric rate $\frac{1}{n}\log_2|\mathcal{P}(n,i,j)|$ bpe for graphs of $\mathcal{P}(n)$ with $i$ vertices and $j$ faces, matching in particular the optimal rate for triangulations. Our encoding relies on a linear-time algorithm that computes the orientation associated with the minimal Schnyder wood of a 3-connected planar map. This algorithm is of independent interest and is, for instance, a key ingredient in a recent straight-line drawing algorithm for 3-connected planar graphs [Bonichon et al., Graph Drawing 2005].
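    As a minimal worked step (hedged: the asymptotic count below, $|\mathcal{P}(n)| = 2^{(2+o(1))n}$, is simply a restatement of the entropy bound quoted above, not an additional result), the $2$ bpe figure is the information-theoretic floor for any uniquely decodable encoding of $\mathcal{P}(n)$:

        % Any code for P(n) needs at least log2 |P(n)| bits, so its rate per edge is
        \[
          \frac{1}{n}\log_2|\mathcal{P}(n)| \;=\; \frac{(2+o(1))\,n}{n} \;=\; 2 + o(1) \ \text{bits per edge},
        \]
        % hence an encoder achieving 2 + o(1) bpe is optimal up to lower-order terms.

    so matching this rate, as the encoding above does, is optimal up to lower-order terms.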

    Testing the isotropy of high energy cosmic rays using spherical needlets

    For many decades, ultrahigh energy charged particles of unknown origin that can be observed from the ground have been a puzzle for particle physicists and astrophysicists. As an attempt to discriminate among several possible production scenarios, astrophysicists try to test the statistical isotropy of the arrival directions of these cosmic rays. At the highest energies, the particles are expected to point toward their sources with good accuracy. However, the observations are so rare that testing the distribution of such samples of directional data on the sphere is nontrivial. In this paper, we choose a nonparametric framework that makes weak hypotheses on the alternative distributions and in turn allows the detection of various and possibly unexpected forms of anisotropy. We explore two particular procedures, both derived from fitting the empirical distribution with wavelet expansions of densities. We use the wavelet frame introduced in [SIAM J. Math. Anal. 38 (2006b) 574-594 (electronic)], the so-called needlets. The expansions are truncated at scale indices no larger than some $J^{\star}$, and the $L^p$ distances between those estimates and the null density are computed. One family of tests (called Multiple) is based on the idea of testing the distance from the null for each choice of $J = 1, \ldots, J^{\star}$, whereas the so-called PlugIn approach is based on the single full $J^{\star}$ expansion, but with thresholded wavelet coefficients. We describe the practical implementation of these two procedures and compare them to other methods in the literature. As alternatives to isotropy, we consider both very simple toy models and more realistic nonisotropic models based on physics-inspired simulations. The Monte Carlo study shows good performance of the Multiple test, even at moderate sample size, for a wide range of alternative hypotheses and for different choices of the parameter $J^{\star}$. On the 69 most energetic events published by the Pierre Auger Collaboration, the needlet-based procedures suggest statistical evidence for anisotropy: using several values for the parameters of the methods, our procedures yield $p$-values below 1%, albeit with uncontrolled multiplicity issues. The flexibility of this method, and the possibility of adapting it to a large variety of extensions of the problem, make it an interesting option for future investigation of the origin of ultrahigh energy cosmic rays.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/, http://dx.doi.org/10.1214/12-AOAS619) by the Institute of Mathematical Statistics (http://www.imstat.org).
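    To make the "Multiple" testing pattern concrete, here is a hedged Python sketch under stated simplifications of my own: true spherical needlets are replaced by a Gaussian-kernel affinity statistic at several concentrations (one per "scale"), each scale's null distribution is calibrated by Monte Carlo under uniformity, and the scale-wise p-values are Bonferroni-combined. All function names and parameter values are illustrative, not the paper's.

    # Hedged sketch of a Multiple-style isotropy test; a kernel statistic
    # stands in for the needlet expansion used in the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def uniform_sphere(n):
        """n i.i.d. directions, uniform on S^2 (shape (n, 3))."""
        v = rng.standard_normal((n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    def scale_statistic(x, kappa):
        """Mean pairwise heat-kernel-like affinity at concentration kappa.
        Under uniformity this concentrates around a constant; an excess
        signals clustering (anisotropy) at that angular scale."""
        g = x @ x.T                      # cosines of pairwise angles
        np.fill_diagonal(g, -np.inf)     # drop self-pairs (exp -> 0)
        return np.exp(kappa * (g - 1.0)).mean()

    def multiple_test(x, kappas=(2.0, 8.0, 32.0), n_mc=500):
        """Bonferroni-combined Monte Carlo p-value over all scales."""
        n = len(x)
        pvals = []
        for kappa in kappas:
            obs = scale_statistic(x, kappa)
            null = np.array([scale_statistic(uniform_sphere(n), kappa)
                             for _ in range(n_mc)])
            pvals.append((1 + np.sum(null >= obs)) / (1 + n_mc))
        return min(1.0, len(kappas) * min(pvals))

    # Toy usage: 69 events, half crowded into one octant to mimic anisotropy.
    iso = uniform_sphere(69)
    print("isotropic sample   p ~", multiple_test(iso))
    cap = uniform_sphere(69); cap[:35] = np.abs(cap[:35])
    print("anisotropic sample p ~", multiple_test(cap))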

    Changing-regime volatility: A fractionally integrated SETAR model

    This paper presents a 2-regime SETAR model with different long-memory processes in the two regimes. We briefly present the memory properties of this model and propose an estimation method. The process is applied to the absolute and squared returns of five stock indices, and a comparison with simple FARIMA models is made using several forecastability criteria. Our empirical results suggest that our model offers an interesting competing framework for describing the persistent dynamics of returns.
    Keywords: SETAR; Long-memory; Stock indices; Forecasting
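    As a hedged illustration of the kind of process described (my own simplified data-generating recipe, not the paper's exact specification): the regime is selected by a threshold on the lagged value, and each regime applies a truncated MA(infinity) filter of $(1-B)^{-d}$ with its own fractional order $d$.

    # Toy simulation of a 2-regime SETAR process with a different
    # fractional-integration order d per regime (simplified sketch).
    import numpy as np

    def farima_weights(d, K):
        """MA weights of (1-B)^(-d): psi_0 = 1, psi_k = psi_{k-1}*(k-1+d)/k."""
        psi = np.empty(K)
        psi[0] = 1.0
        for k in range(1, K):
            psi[k] = psi[k - 1] * (k - 1 + d) / k
        return psi

    def simulate_fi_setar(T=2000, d1=0.45, d2=0.1, r=0.0, K=200, seed=0):
        """Regime 1 (y[t-1] <= r): long memory, order d1.
           Regime 2 (y[t-1]  > r): short(er) memory, order d2."""
        rng = np.random.default_rng(seed)
        eps = rng.standard_normal(T + K)   # shocks, with K presample values
        psi1, psi2 = farima_weights(d1, K), farima_weights(d2, K)
        y, y_prev = np.zeros(T), 0.0
        for t in range(T):
            window = eps[t + 1 : t + K + 1][::-1]   # s_t, s_{t-1}, ..., s_{t-K+1}
            psi = psi1 if y_prev <= r else psi2
            y[t] = psi @ window
            y_prev = y[t]
        return y

    y = simulate_fi_setar()
    # Long-memory signature: slowly decaying sample autocorrelations.
    ac = [np.corrcoef(y[:-k], y[k:])[0, 1] for k in (1, 10, 50)]
    print("sample ACF at lags 1, 10, 50:", np.round(ac, 3))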

    A SETAR model with long-memory dynamics

    This paper presents a 2-regime SETAR model in which the process under examination is governed by a long-memory process in the first regime and a short-memory process in the second regime. Persistence properties are studied and methods for locating the threshold parameter are proposed. Such a process has useful applications to financial data, and it is applied here to stock indices and individual assets.
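    Since the abstract mentions methods for locating the threshold parameter, here is a hedged sketch of the generic conditional-least-squares recipe often used for SETAR models (a grid search over sample quantiles of the lagged series); the paper's own proposals may well differ.

    # Locate a SETAR threshold by grid search: for each candidate r, fit a
    # separate AR(1) by least squares in each regime and keep the r that
    # minimizes the pooled residual sum of squares.
    import numpy as np

    def setar_threshold(y, q_lo=0.15, q_hi=0.85, n_grid=50):
        x, z = y[:-1], y[1:]                 # lagged value and response
        best = (np.inf, None)
        for r in np.quantile(x, np.linspace(q_lo, q_hi, n_grid)):
            rss = 0.0
            for mask in (x <= r, x > r):
                if mask.sum() < 10:          # keep both regimes populated
                    rss = np.inf
                    break
                X = np.column_stack([np.ones(mask.sum()), x[mask]])
                beta, *_ = np.linalg.lstsq(X, z[mask], rcond=None)
                rss += np.sum((z[mask] - X @ beta) ** 2)
            if rss < best[0]:
                best = (rss, r)
        return best[1]

    # Toy check on a simulated SETAR(1) with true threshold 0.
    rng = np.random.default_rng(1)
    y = np.zeros(3000)
    for t in range(1, 3000):
        phi = 0.8 if y[t - 1] <= 0.0 else -0.3
        y[t] = phi * y[t - 1] + rng.standard_normal()
    print("estimated threshold:", round(setar_threshold(y), 3))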

    Modelling squared returns using a SETAR model with long-memory dynamics

    This paper presents a 2-regime SETAR model for volatility, with a long-memory process in the first regime and a short-memory process in the second regime. Persistence properties are studied and estimation methods are proposed. Such a process is applied to stock indices and individual asset prices.
    Keywords: SETAR; Long-memory; FARIMA models; Stock indices

    A project of the Centre International Blaise Pascal: the electronic edition of the Pensées

    The project of an electronic edition of Pascal's Pensées now dates back nearly six years. Commercial electronic editions are currently in fashion, intended either to offer photographs of copies unavailable for purchase, or to provide keyed-in texts that allow easy computer-assisted lexical searches. The defect of such undertakings is obviously that they amount to reproductions or transcriptions devoid of any reflection on the particular nature of the texts. This is in itself hardly surprising: undertaking the in-depth analytical work needed to create a dedicated computing tool demands very long lead times, a heavy financial and human commitment, and above all a thorough critical knowledge of the texts. In Clermont, however, we are fortunate to be able to bring a literary scholar and a computing analyst into cooperation.