
    Progressive construction of a parametric reduced-order model for PDE-constrained optimization

    An adaptive approach to using reduced-order models as surrogates in PDE-constrained optimization is introduced that breaks the traditional offline-online framework of model order reduction. A sequence of optimization problems constrained by a given Reduced-Order Model (ROM) is defined, with the goal of converging to the solution of a given PDE-constrained optimization problem. For each reduced optimization problem, the constraining ROM is trained by sampling the High-Dimensional Model (HDM) at the solutions of some of the previous problems in the sequence. The reduced optimization problems are equipped with a nonlinear trust region based on a residual error indicator, which keeps the optimization trajectory in a region of the parameter space where the ROM is accurate. A technique for incorporating sensitivities into a Reduced-Order Basis (ROB) is also presented, along with a methodology for computing sensitivities of the reduced-order model that minimize the distance to the corresponding HDM sensitivities in a suitable norm. The proposed reduced optimization framework is applied to subsonic aerodynamic shape optimization and shown to reduce the number of queries to the HDM by a factor of 4-5, compared to solving the optimization problem using only the HDM, with errors in the optimal solution far below 0.1%.
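    To make the adaptive loop concrete, below is a minimal, self-contained sketch on a toy linear-parametric problem, not the authors' aerodynamic solver: the "HDM" solves (A0 + mu*A1) u = f for a single design parameter mu, the ROM is a Galerkin projection onto an orthonormalized basis of HDM snapshots, the full-order residual norm stands in for the residual error indicator, and a simple accept/retrain test replaces the full nonlinear trust region. All names, tolerances, and the quadratic tracking objective are illustrative assumptions.

```python
# Progressive ROM-constrained optimization on a toy problem (hypothetical
# stand-in for the paper's framework; see the hedges in the text above).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 200
A0 = np.diag(np.arange(1.0, n + 1.0))         # toy SPD "high-dimensional" operator
G = rng.standard_normal((n, n))
A1 = 0.01 * (G + G.T)                         # small symmetric parametric term
f = rng.standard_normal(n)
u_target = np.linalg.solve(A0 + 0.7 * A1, f)  # optimum of the toy objective: mu = 0.7

def hdm_solve(mu):
    """Full-order ("HDM") solve of (A0 + mu*A1) u = f."""
    return np.linalg.solve(A0 + mu * A1, f)

mu, snapshots, queries = 0.0, [hdm_solve(0.0)], 1
for outer in range(15):
    V, _ = np.linalg.qr(np.column_stack(snapshots))   # reduced-order basis (ROB)

    def rom_state(m):
        Ar = V.T @ (A0 + m * A1) @ V                  # Galerkin projection onto the ROB
        return V @ np.linalg.solve(Ar, V.T @ f)

    def indicator(m):
        """Residual error indicator: full-order residual of the ROM state."""
        return np.linalg.norm((A0 + m * A1) @ rom_state(m) - f)

    res = minimize_scalar(lambda m: np.sum((rom_state(m) - u_target) ** 2),
                          bounds=(0.0, 2.0), method="bounded",
                          options={"xatol": 1e-9})
    mu_new = res.x
    # Accept only where the indicator certifies ROM accuracy and the iterate
    # has stabilized; otherwise sample the HDM there and retrain the ROB.
    if indicator(mu_new) < 1e-6 and abs(mu_new - mu) < 1e-6:
        mu = mu_new
        break
    snapshots.append(hdm_solve(mu_new))
    queries += 1
    mu = mu_new

print(f"optimum mu ~ {mu:.4f} located with {queries} HDM queries")
```

    On this toy problem the loop typically stabilizes after a handful of HDM solves, with each snapshot enriching the basis exactly where the previous reduced optimum was found.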

    Non-parametric deprojection of NIKA SZ observations: Pressure distribution in the Planck-discovered cluster PSZ1 G045.85+57.71

    The determination of the thermodynamic properties of clusters of galaxies at intermediate and high redshift can bring new insights into the formation of large-scale structures. It is essential for a robust calibration of the mass-observable scaling relations and their scatter, which are key ingredients for precise cosmology using cluster statistics. Here we illustrate an application of high-resolution ($< 20$ arcsec) thermal Sunyaev-Zel'dovich (tSZ) observations by probing the intracluster medium (ICM) of the \planck-discovered galaxy cluster \psz\ at redshift $z = 0.61$, using tSZ data obtained with the NIKA camera, a dual-band (150 and 260 GHz) instrument operated at the IRAM 30-meter telescope. We jointly deproject NIKA and \planck\ data to extract the electronic pressure distribution from the cluster core ($R \sim 0.02\, R_{500}$) to its outskirts ($R \sim 3\, R_{500}$) non-parametrically, for the first time at intermediate redshift. The constraints on the resulting pressure profile allow us to reduce the relative uncertainty on the integrated Compton parameter by a factor of two compared to the \planck\ value. Combining the tSZ data and the deprojected electronic density profile from \xmm\ allows us to undertake a hydrostatic mass analysis, for which we study the impact of a spherical model assumption on the total mass estimate. We also investigate the radial temperature and entropy distributions. These data indicate that \psz\ is a massive ($M_{500} \sim 5.5 \times 10^{14}\, M_{\odot}$) cool-core cluster. This work is part of a pilot study aiming at optimizing the treatment of the NIKA2 tSZ large program dedicated to the follow-up of SZ-discovered clusters at intermediate and high redshifts. (abridged) Comment: 16 pages, 10 figures
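    For context, the hydrostatic analysis described above presumably rests on the standard ICM relations below (generic textbook formulas, not equations quoted from the paper): the deprojected tSZ electron pressure $P_e(r)$ combined with the X-ray electron density $n_e(r)$ yields the temperature and entropy profiles and, assuming spherical symmetry and hydrostatic equilibrium, the total mass:

```latex
k_B T(r) = \frac{P_e(r)}{n_e(r)}, \qquad
K(r) = \frac{k_B T(r)}{n_e(r)^{2/3}}, \qquad
M_{\rm HSE}(<r) = -\,\frac{r^{2}}{G\,\rho_{\rm gas}(r)}\,
                  \frac{\mathrm{d}P_{\rm gas}}{\mathrm{d}r},
```

    where $\rho_{\rm gas} \propto m_p\, n_e$ and $P_{\rm gas} \propto P_e$ through constant ionization factors for a fully ionized plasma.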

    Beyond first-order asymptotics for Cox regression

    To go beyond standard first-order asymptotics for Cox regression, we develop parametric bootstrap and second-order methods. In general, computation of $P$-values beyond first order requires more model specification than is required for the likelihood function. It is problematic to specify a censoring mechanism in enough detail to be taken seriously, and conditioning on the observed censoring does not appear to be a viable alternative. We circumvent this by employing a reference censoring model that matches the extent and timing of the observed censoring. Our primary proposal is a parametric bootstrap method that uses this reference censoring model to simulate inferential repetitions of the experiment. It is shown that the most important part of the improvement on first-order methods - the part pertaining to fitting nuisance parameters - is insensitive to the assumed censoring model. This is supported by numerical comparisons of our proposal to parametric bootstrap methods based on the usual random censoring models, which are far less attractive to implement. As an alternative to our primary proposal, we provide a second-order method that requires less computing effort while providing more insight into the nature of the improvement on first-order methods. The parametric bootstrap method is more transparent, however, and hence remains our primary proposal. Indications are that first-order partial likelihood methods are usually adequate in practice, so we are not advocating routine use of the proposed methods; it is nevertheless useful to see how best to check on first-order approximations, or to improve on them, when this is expressly desired. Comment: Published at http://dx.doi.org/10.3150/13-BEJ572 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
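    As an illustration of the primary proposal, here is a hedged sketch, not the authors' code, of a parametric bootstrap with a reference censoring model. For brevity it uses an exponential baseline hazard and an exponential reference censoring law whose rate is matched to the observed extent of censoring; the `lifelines` package supplies the Cox fits, and all data are synthetic.

```python
# Hedged sketch of a parametric bootstrap for Cox regression with a
# reference censoring model (not the authors' implementation).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
x = rng.standard_normal(n)
t_event = rng.exponential(1.0 / np.exp(0.5 * x))        # true beta = 0.5
t_cens = rng.exponential(2.0, size=n)
df = pd.DataFrame({"T": np.minimum(t_event, t_cens),
                   "E": (t_event <= t_cens).astype(int), "x": x})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
beta_hat = cph.params_["x"]

# Reference censoring model: exponential rate matched to the observed
# extent of censoring (censoring-hazard MLE, event/censoring roles swapped).
cens_rate = (1 - df["E"]).sum() / df["T"].sum()
# Exponential-baseline MLE for simulating event times under the fitted model.
lam0 = df["E"].sum() / (df["T"] * np.exp(beta_hat * x)).sum()

B, boot = 200, np.empty(200)
for b in range(B):                                      # inferential repetitions
    t_star = rng.exponential(1.0 / (lam0 * np.exp(beta_hat * x)))
    c_star = rng.exponential(1.0 / cens_rate, size=n)
    df_b = pd.DataFrame({"T": np.minimum(t_star, c_star),
                         "E": (t_star <= c_star).astype(int), "x": x})
    boot[b] = CoxPHFitter().fit(df_b, duration_col="T",
                                event_col="E").params_["x"]

# Basic (recentred) bootstrap p-value for H0: beta = 0.
p_val = np.mean(np.abs(boot - beta_hat) >= abs(beta_hat))
print(f"beta_hat = {beta_hat:.3f}, bootstrap p-value ~ {p_val:.3f}")
```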

    A Kolmogorov-Smirnov test for the molecular clock on Bayesian ensembles of phylogenies

    Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks, built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees together with the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict-clock case, the method consists in using the one-sample KS test to directly test whether the phylogeny is clock-like, in other words, whether it follows a Poisson law. The ECD is computed from the discretized branch lengths, and the parameter $\lambda$ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for auto-correlation and pseudo-replication in the ensemble of trees, we take advantage of thinning and effective sample size, two features provided by Bayesian MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures, and in this case we propose the use of the two-sample KS test with samples from two continuous branch-length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. In this second form, the test can also be applied to relaxed clock models. Using a statistically equivalent ensemble of phylogenies to obtain the branch-length ECD, instead of one consensus tree, considerably reduces the effects of small sample size and provides a gain of power. Comment: 14 pages, 9 figures, 8 tables. Minor revision, addition of a new example and a new title. Software: https://github.com/FernandoMarcon/PKS_Test.gi
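    A minimal sketch of both test forms using `scipy` (not the authors' PKS_Test software) follows; the branch-length samples are synthetic stand-ins for values harvested from a thinned Bayesian tree ensemble.

```python
# One-sample and two-sample KS tests for clock-likeness (toy illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# One-sample form: discretized branch lengths vs. a fitted Poisson law.
branch_counts = rng.poisson(lam=3.2, size=1000)       # clock-like toy ensemble
lam_hat = branch_counts.mean()                        # lambda = mean branch length
stat1, p1 = stats.kstest(branch_counts, stats.poisson(lam_hat).cdf)
print(f"one-sample KS: D = {stat1:.3f}, p ~ {p1:.3f}")
# Caveat: the asymptotic p-value is only approximate for discrete data with
# an estimated rate; a Monte Carlo calibration would be more faithful.

# Two-sample form: continuous branch lengths from clock-constrained vs.
# unconstrained ensembles (also applicable to relaxed clock models).
clock_sample = rng.gamma(2.0, 1.6, size=800)
free_sample = rng.gamma(2.0, 1.7, size=800)
stat2, p2 = stats.ks_2samp(clock_sample, free_sample)
print(f"two-sample KS: D = {stat2:.3f}, p ~ {p2:.3f}")
```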

    Semiparametric Inference and Lower Bounds for Real Elliptically Symmetric Distributions

    This paper has a twofold goal. The first aim is to provide a deeper understanding of the family of Real Elliptically Symmetric (RES) distributions by investigating their intrinsic semiparametric nature. The second aim is to derive a semiparametric lower bound for the estimation of the parametric component of the model. The RES distributions represent a semiparametric model in which the parametric part is given by the mean vector and the scatter matrix, while the non-parametric, infinite-dimensional part is represented by the density generator. Since, in practical applications, we are often interested only in the estimation of the parametric component, the density generator can be treated as a nuisance. The first part of the paper is dedicated to conveniently placing the RES distributions in the framework of semiparametric group models. In the second part of the paper, building on the mathematical tools previously introduced, the Constrained Semiparametric Cramér-Rao Bound (CSCRB) for the estimation of the mean vector and of the constrained scatter matrix of a RES-distributed random vector is introduced. The CSCRB provides a lower bound on the Mean Squared Error (MSE) of any robust $M$-estimator of the mean vector and scatter matrix when no a priori information on the density generator is available. A closed-form expression for the CSCRB is derived. Finally, in simulations, we assess the statistical efficiency of Tyler's and Huber's scatter matrix $M$-estimators with respect to the CSCRB. Comment: This paper has been accepted for publication in IEEE Transactions on Signal Processing
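    As a reference point for the estimators mentioned above, here is a minimal implementation of Tyler's fixed-point scatter $M$-estimator on heavy-tailed elliptical (Student-t) data; the CSCRB itself is not reproduced here, and the trace normalization stands in for the constraint that fixes the scale ambiguity.

```python
# Tyler's fixed-point M-estimator of scatter (standard algorithm; the bound
# against which its MSE would be compared is not computed here).
import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-9):
    """Tyler's M-estimator of scatter for centered data X of shape (n, p)."""
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        w = p / np.einsum("ij,jk,ik->i", X, inv, X)   # p / (x^T Sigma^{-1} x)
        sigma_new = (X * w[:, None]).T @ X / n
        sigma_new *= p / np.trace(sigma_new)          # fix the scale: trace = p
        if np.linalg.norm(sigma_new - sigma) < tol:
            return sigma_new
        sigma = sigma_new
    return sigma

rng = np.random.default_rng(3)
p, n = 4, 2000
A = rng.standard_normal((p, p))
true_scatter = A @ A.T
true_scatter *= p / np.trace(true_scatter)            # same trace-p constraint
# Student-t samples (3 dof): Gaussian draws scaled by an inverse-chi factor.
g = rng.standard_normal((n, p)) @ np.linalg.cholesky(true_scatter).T
X = g / np.sqrt(rng.chisquare(3, size=n) / 3)[:, None]

est = tyler_scatter(X)
mse = np.mean((est - true_scatter) ** 2)
print(f"entrywise MSE of Tyler's estimator: {mse:.4g}")
```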

    Short and long-term wind turbine power output prediction

    In the wind energy industry, it is of great importance to develop models that accurately forecast the power output of a wind turbine, as such predictions are used for wind farm location assessment, power pricing and bidding, monitoring, and preventive maintenance. As a first step, and following the guidelines of the existing literature, we use supervisory control and data acquisition (SCADA) data to model the wind turbine power curve (WTPC). We explore various parametric and non-parametric approaches for modeling the WTPC, such as parametric logistic functions and non-parametric piecewise linear, polynomial, or cubic spline interpolation functions. We demonstrate that all aforementioned classes of models are rich enough (with respect to their relative complexity) to accurately model the WTPC, as their mean squared error (MSE) is close to the MSE lower bound calculated from the historical data. We further enhance the accuracy of the proposed model by incorporating additional environmental factors that affect power output, such as ambient temperature and wind direction. However, when it comes to forecasting, all of the aforementioned models share an intrinsic limitation: they are unable to capture the inherent auto-correlation of the data. To overcome this, we show that adding a properly scaled ARMA modeling layer increases short-term prediction performance, while preserving the long-term prediction capability of the model.
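    A hedged sketch of this two-layer idea, not the paper's exact pipeline: a parametric logistic WTPC fitted to SCADA-like (wind speed, power) pairs, plus an ARMA layer on the curve residuals for short-term forecasting. It assumes `scipy` and `statsmodels`; all data are synthetic, and wind speed over the forecast horizon is taken as known (e.g., from a weather forecast).

```python
# Logistic power curve + ARMA residual layer (illustrative stand-in).
import numpy as np
from scipy.optimize import curve_fit
from statsmodels.tsa.arima.model import ARIMA

def logistic_wtpc(v, p_rated, k, v_mid):
    """Logistic power curve: ~0 at low wind, saturating at p_rated."""
    return p_rated / (1.0 + np.exp(-k * (v - v_mid)))

rng = np.random.default_rng(4)
n, h = 2000, 12                                   # sample size, forecast horizon
wind = np.empty(n); wind[0] = 8.0
eps = np.empty(n); eps[0] = 0.0
for t in range(1, n):                             # AR(1) wind and AR(1) residual noise
    wind[t] = 8.0 + 0.95 * (wind[t - 1] - 8.0) + 0.6 * rng.standard_normal()
    eps[t] = 0.9 * eps[t - 1] + 20.0 * rng.standard_normal()
power = logistic_wtpc(wind, 2000.0, 0.9, 9.0) + eps

train, test = slice(0, n - h), slice(n - h, n)
p_hat, _ = curve_fit(logistic_wtpc, wind[train], power[train],
                     p0=[2000.0, 0.5, 8.0])
resid_train = power[train] - logistic_wtpc(wind[train], *p_hat)

# ARMA(2,1) layer on the curve residuals captures their auto-correlation.
arma = ARIMA(resid_train, order=(2, 0, 1)).fit()
resid_fc = arma.forecast(steps=h)

base = logistic_wtpc(wind[test], *p_hat)          # curve-only prediction
mse_curve = np.mean((power[test] - base) ** 2)
mse_both = np.mean((power[test] - (base + resid_fc)) ** 2)
print(f"curve-only MSE: {mse_curve:.1f}, curve+ARMA MSE: {mse_both:.1f}")
```

    With auto-correlated residuals the ARMA correction typically lowers the short-horizon MSE, while at longer horizons the correction decays toward zero and the forecast falls back on the power curve alone, matching the short- versus long-term behavior described above.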