
    Des modèles biologiques à l'amélioration des plantes (From biological models to plant breeding)


    Level-Based Analysis of the Population-Based Incremental Learning Algorithm

    The Population-Based Incremental Learning (PBIL) algorithm uses a convex combination of the current model and the empirical model to construct the next model, which is then sampled to generate offspring. The Univariate Marginal Distribution Algorithm (UMDA) is the special case of PBIL in which the current model is ignored. Dang and Lehre (GECCO 2015) showed that the UMDA can optimise LeadingOnes efficiently; whether the PBIL performs equally well remained open. Here, by applying the level-based theorem together with the Dvoretzky--Kiefer--Wolfowitz inequality, we show that the PBIL optimises the function LeadingOnes in expected time $\mathcal{O}(n\lambda \log \lambda + n^2)$ for a population size $\lambda = \Omega(\log n)$, which matches the bound for the UMDA. Finally, we show that the result carries over to BinVal, giving the first runtime result for the PBIL on the BinVal problem. Comment: To appear
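The model update described above can be sketched in a few lines of Python. This is an illustrative sketch only: the parameter names (`rho` for the smoothing rate, `mu` for the number of selected offspring) and the margin handling are generic choices, not taken from the paper; setting `rho = 1` recovers the UMDA special case.

```python
import random

def leading_ones(x):
    """Number of leading 1-bits, the LeadingOnes fitness."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def pbil(n=20, lam=50, mu=10, rho=0.5, max_gens=500):
    """Minimal PBIL sketch: sample lam offspring from the probability
    vector p (the model), select the mu best, and blend the empirical
    marginals of the selected individuals into p by a convex combination."""
    p = [0.5] * n
    for _ in range(max_gens):
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=leading_ones, reverse=True)
        best = pop[:mu]
        for i in range(n):
            freq = sum(x[i] for x in best) / mu       # empirical model
            p[i] = (1 - rho) * p[i] + rho * freq      # convex combination
            p[i] = min(max(p[i], 1 / n), 1 - 1 / n)   # keep margins
        if leading_ones(pop[0]) == n:
            return pop[0]
    return pop[0]
```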

    Aubry sets vs Mather sets in two degrees of freedom

    We study autonomous Tonelli Lagrangians on closed surfaces. We aim to clarify the relationship between the Aubry set and the Mather set, when the latter consists of periodic orbits which are not fixed points. Our main result says that in that case the Aubry set and the Mather set almost always coincide. Comment: Revised and expanded version. New proof of Lemma 2.3 (formerly Lemma 14)

    Polynomial growth of volume of balls for zero-entropy geodesic systems

    The aim of this paper is to state and prove polynomial analogues of the classical Manning inequality relating the topological entropy of a geodesic flow with the growth rate of the volume of balls in the universal covering. To this end we use two numerical conjugacy invariants, the strong polynomial entropy $h_{pol}$ and the weak polynomial entropy $h_{pol}^*$. Both are infinite when the topological entropy is positive, and they satisfy $h_{pol}^* \leq h_{pol}$. We first prove that the growth rate of the volume of balls is bounded above by means of the strong polynomial entropy, and we show that for the flat torus this inequality becomes an equality. We then study the explicit example of the torus of revolution, for which we can give an exact asymptotic equivalent of the growth rate of the volume of balls, which we relate to the weak polynomial entropy. Comment: 22 pages
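For context, a commonly used definition of the (strong) polynomial entropy replaces the exponential growth rate in the definition of topological entropy by a polynomial one. The following formula is the standard textbook form, not quoted from this paper, and the weak variant $h_{pol}^*$ modifies the covering notion; the paper should be consulted for the precise definitions it uses:

```latex
h_{pol}(\phi) \;=\; \lim_{\varepsilon \to 0}\;
\limsup_{t \to +\infty} \frac{\log G_\phi(t,\varepsilon)}{\log t},
```

where $G_\phi(t,\varepsilon)$ is the minimal cardinality of a cover of the phase space by balls of radius $\varepsilon$ in the dynamical (Bowen) metric $d_t(x,y) = \max_{0 \le s \le t} d(\phi^s x, \phi^s y)$. With this normalization, any flow of positive topological entropy has $h_{pol} = +\infty$, consistent with the statement in the abstract.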

    A linear CO chemistry parameterization in a chemistry-transport model: evaluation and application to data assimilation

    This paper presents an evaluation of a new linear parameterization, valid for the troposphere and the stratosphere, based on a first-order approximation of the carbon monoxide (CO) continuity equation. This linear scheme (hereinafter LINCO) has been implemented in the 3-D Chemical Transport Model (CTM) MOCAGE (MOdèle de Chimie Atmosphérique à Grande Échelle). First, one and a half years of LINCO simulation were compared to the output of a detailed chemical scheme. The mean differences between the two schemes are about ±25 ppbv (parts per billion by volume) or 15% in the troposphere and ±10 ppbv or 100% in the stratosphere. Second, LINCO was compared to diverse satellite observations covering the troposphere (Measurements Of Pollution In The Troposphere: MOPITT) and the stratosphere (Microwave Limb Sounder: MLS), and also to aircraft data (Measurements of ozone and water vapour by Airbus in-service aircraft: MOZAIC programme) mostly from the upper troposphere and lower stratosphere (UTLS). In the troposphere, the LINCO seasonal variations as well as the vertical and horizontal distributions are quite close to MOPITT CO observations. However, a bias of ~−40 ppbv is observed at 700 hPa between LINCO and MOPITT. In the stratosphere, MLS and LINCO present similar large-scale patterns, except over the poles, where the model underestimates the CO concentration. In the UTLS, LINCO presents small biases, less than 2%, compared to independent MOZAIC profiles. Third, we assimilated MOPITT CO using a variational 3D-FGAT (First Guess at Appropriate Time) method in conjunction with MOCAGE for a long run of one and a half years. The data assimilation greatly improves the vertical CO distribution in the troposphere from 700 to 350 hPa compared to independent MOZAIC profiles. At 146 hPa, the assimilated CO distribution is also improved compared to MLS observations, reducing the bias by up to a factor of 2 in the tropics.
    This study confirms that the linear scheme simulates the CO distribution in the troposphere and the lower stratosphere reasonably well. The low computing cost of the linear scheme therefore opens new perspectives for free runs and CO data assimilation runs at high resolution and over periods of several years.
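To illustrate what a first-order (linear) chemistry parameterization of this kind looks like, the sketch below linearizes a chemical tendency around a reference state, in the spirit of Cariolle-type linear schemes. The coefficient names (`a1`, `a2`, `a3`), the choice of predictors, and the explicit Euler step are illustrative assumptions, not the actual LINCO formulation.

```python
def linear_co_tendency(co, temp, a1, a2, a3, co_ref, temp_ref):
    """First-order (Taylor) linearization of the CO chemical tendency:
        d[CO]/dt ~= a1 + a2*([CO] - [CO]_ref) + a3*(T - T_ref)
    a1: net production minus loss at the reference state
    a2: sensitivity to the CO mixing ratio (a relaxation rate, a2 < 0)
    a3: sensitivity to temperature
    """
    return a1 + a2 * (co - co_ref) + a3 * (temp - temp_ref)

def step(co, temp, dt, **coeffs):
    """Advance the CO mixing ratio one explicit Euler step."""
    return co + dt * linear_co_tendency(co, temp, **coeffs)
```

With `a1 = a3 = 0` and `a2 < 0`, the scheme simply relaxes CO toward the reference state, which is the cheap behaviour that makes long free runs affordable.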

    Adaptive density estimation for stationary processes

    We propose an algorithm to estimate the common density $s$ of a stationary process $X_1,\dots,X_n$. We suppose that the process is either $\beta$- or $\tau$-mixing. We provide a model selection procedure based on a generalization of Mallows' $C_p$, and we prove oracle inequalities for the selected estimator under a few prior assumptions on the collection of models and on the mixing coefficients. We prove that our estimator is adaptive over a class of Besov spaces, namely, we prove that it achieves the same rates of convergence as in the i.i.d. framework
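A minimal sketch of the penalized least-squares idea behind Mallows-$C_p$ model selection, using histogram models on $[0,1]$: for each model (number of bins $m$), compute the empirical contrast $-\|\hat{s}_m\|^2$ plus a penalty proportional to the model dimension, and keep the minimizer. The penalty shape `kappa * m / n` is the classical histogram form; the constants and the mixing-coefficient adjustments of the paper are not reproduced here.

```python
def histogram_density(sample, m, lo=0.0, hi=1.0):
    """Histogram density estimator on m equal bins over [lo, hi]."""
    n = len(sample)
    width = (hi - lo) / m
    counts = [0] * m
    for x in sample:
        k = min(int((x - lo) / width), m - 1)
        counts[k] += 1
    return [c / (n * width) for c in counts]

def select_model(sample, max_m, kappa=2.0):
    """Penalized least-squares (Mallows-C_p style) model selection.
    For histograms the empirical least-squares contrast reduces to
    -||s_hat_m||^2, so the criterion is -||s_hat_m||^2 + kappa*m/n."""
    n = len(sample)
    best_m, best_crit = 1, float('inf')
    for m in range(1, max_m + 1):
        dens = histogram_density(sample, m)
        width = 1.0 / m
        norm2 = sum(d * d for d in dens) * width  # ||s_hat_m||^2
        crit = -norm2 + kappa * m / n
        if crit < best_crit:
            best_m, best_crit = m, crit
    return best_m
```

On data that is close to uniform, the criterion favours the coarsest model, since finer histograms buy no fit but pay a larger penalty.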

    Level set based eXtended finite element modelling of the response of fibrous networks under hygroscopic swelling

    Materials like paper, which consist of a network of natural fibres, undergo changes in geometrical and mechanical properties when exposed to variations in moisture. This behaviour is particularly important for understanding the hygro-mechanical response of sheets of paper in applications like digital printing. A two-dimensional microstructural model of a fibrous network is therefore developed to upscale the hygro-expansion of individual fibres, through their interaction, to the resulting overall expansion of the network. The fibres are modelled with rectangular shapes and are assumed to be perfectly bonded where they overlap. For realistic networks the number of bonds is large, and the network is geometrically so complex that discretizing it with conventional, geometry-conforming finite elements is cumbersome. The combination of a level-set and an XFEM formalism enables the use of regular, structured grids to model the complex microstructural geometry. In this approach, the fibres are described implicitly by a level-set function. To represent the fibre boundaries in the fibrous network, an XFEM discretization is used together with a Heaviside enrichment function. Numerical results demonstrate that the proposed approach successfully captures the hygro-expansive properties of the network with fewer degrees of freedom than classical FEM, while preserving the desired accuracy. Comment: 27 pages, 22 figures, 4 tables, J. Appl. Mech. June 19, 202
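The implicit geometry description can be sketched as follows, here for axis-aligned rectangular fibres for simplicity (the paper's fibres may be arbitrarily oriented). The level set is negative inside a fibre and positive outside, the network is the pointwise minimum over fibres, and the Heaviside function is the kind of enrichment an XFEM discretization uses to capture the fibre boundaries on a structured grid. Function and parameter names are illustrative.

```python
def rectangle_levelset(x, y, cx, cy, hw, hh):
    """Level-set of an axis-aligned rectangular fibre centred at
    (cx, cy) with half-width hw and half-height hh.
    Negative inside the fibre, positive outside, zero on the boundary."""
    dx = abs(x - cx) - hw
    dy = abs(y - cy) - hh
    return max(dx, dy)

def network_levelset(x, y, fibres):
    """Level set of the fibre network (union of fibres): the
    pointwise minimum over the individual fibre level sets."""
    return min(rectangle_levelset(x, y, *f) for f in fibres)

def heaviside(phi):
    """Heaviside enrichment: 1 inside the fibre domain, 0 outside."""
    return 1.0 if phi <= 0.0 else 0.0
```

Evaluating `network_levelset` at the nodes of a regular grid is all that is needed to locate the fibres, which is why no geometry-conforming mesh is required.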

    Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    Accurate and temporally resolved fields of free-troposphere ozone are of major importance to quantify the intercontinental transport of pollution and the ozone radiative forcing. We consider a global chemical transport model (MOdèle de Chimie Atmosphérique à Grande Échelle, MOCAGE) in combination with a linear ozone chemistry scheme to examine the impact of assimilating observations from the Microwave Limb Sounder (MLS) and the Infrared Atmospheric Sounding Interferometer (IASI). The assimilation of the two instruments is performed by means of a variational algorithm (4D-VAR) and constrains stratospheric and tropospheric ozone simultaneously. The analysis is first computed for the months of August and November 2008 and validated against ozonesonde measurements to check for observation and model biases. Furthermore, a longer analysis of 6 months (July–December 2008) showed that the combined assimilation of MLS and IASI is able to globally reduce the uncertainty (root mean square error, RMSE) of the modelled ozone columns from 30% to 15% in the upper troposphere/lower stratosphere (UTLS, 70–225 hPa). The assimilation of IASI tropospheric ozone observations (1000–225 hPa columns, TOC – tropospheric O<sub>3</sub> column) decreases the RMSE of the model from 40% to 20% in the tropics (30° S–30° N), whereas it is not effective at higher latitudes. The results are confirmed by comparison with additional ozone data sets: the Measurements of OZone and wAter vapour by aIrbus in-service airCraft (MOZAIC) data, the Ozone Monitoring Instrument (OMI) total ozone columns, and several high-altitude surface measurements. Finally, the analysis is found to be insensitive to the assimilation parameters. We conclude that the combination of a simplified ozone chemistry scheme with frequent satellite observations is a valuable tool for the long-term analysis of stratospheric and free-tropospheric ozone.
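To illustrate the variational idea in its simplest form (a 3D-Var-style cost, without the time handling of 4D-VAR or FGAT), the analysis minimizes a cost function that balances departures from the model background against departures from the observations. The diagonal error covariances, the index-list observation operator, and plain gradient descent are simplifying assumptions for this sketch, not the operational setup.

```python
def var_cost_and_grad(x, xb, y, b_var, r_var, H):
    """Variational cost J(x) and its gradient for diagonal background
    (b_var) and observation (r_var) error variances. H maps the state
    to observation space; here it is simply a list of observed indices."""
    n = len(x)
    J = 0.0
    grad = [0.0] * n
    for i in range(n):                 # background term
        d = x[i] - xb[i]
        J += 0.5 * d * d / b_var
        grad[i] += d / b_var
    for k, i in enumerate(H):          # observation term
        d = x[i] - y[k]
        J += 0.5 * d * d / r_var
        grad[i] += d / r_var
    return J, grad

def analyse(xb, y, H, b_var=1.0, r_var=0.25, steps=200, lr=0.1):
    """Minimise J by gradient descent, starting from the background xb."""
    x = list(xb)
    for _ in range(steps):
        _, g = var_cost_and_grad(x, xb, y, b_var, r_var, H)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

With equal background and observation variances, the analysis at an observed point lands halfway between background and observation, while unobserved points stay at the background value.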

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal $x^0 \in \mathbb{R}^n$ from a limited number of linear measurements. Given $m$ randomly selected samples of $Ux^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization recovers $x^0$ exactly when the number of measurements exceeds $$m \geq \mathrm{Const} \cdot \mu^2(U) \cdot S \cdot \log n,$$ where $S$ is the number of nonzero components in $x^0$, and $\mu$ is the largest entry in $U$ properly normalized: $\mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|$. The smaller $\mu$, the fewer samples needed. The result holds for "most" sparse signals $x^0$ supported on a fixed (but arbitrary) set $T$. Given $T$, if the sign of $x^0$ for each nonzero entry on $T$ and the observed values of $Ux^0$ are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
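In practice, $\ell_1$ minimization is often carried out with a proximal-gradient method. The sketch below uses iterative soft-thresholding (ISTA) for the closely related basis-pursuit-denoising problem $\min_x \tfrac12\|Ax - b\|^2 + \lambda\|x\|_1$; the paper itself analyses exact $\ell_1$ recovery, so this solver, the step-size bound, and the parameter values are illustrative assumptions.

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, b, lam=0.05, steps=2000):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    A is a list of m rows of length n; pure Python, small-scale only.
    The step size 1/L uses the squared Frobenius norm of A, a safe
    (if loose) upper bound on the largest eigenvalue of A^T A."""
    m, n = len(A), len(A[0])
    L = sum(a * a for row in A for a in row)
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(m)]                        # residual Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m))
             for j in range(n)]                        # gradient A^T r
        x = [soft(x[j] - g[j] / L, lam / L) for j in range(n)]
    return x
```

As `lam` shrinks toward zero, the solution approaches the exact basis-pursuit solution studied in the abstract.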