
    VERIFICATION OF LOCATION PROBLEM IN ECONOMIC RESEARCH

    In economic research, the problem of location in a single sample, or of estimating the difference in location between two samples, is commonly tested by experimental economists. The tests usually applied are the Wilcoxon test for the single-sample location problem and the Wilcoxon-Mann-Whitney test for the two-sample location problem. Unfortunately, those tests have disadvantages such as sensitivity to violations of their assumptions and weak efficiency. In this paper, some less well-known procedures that avoid those problems are presented. The methods considered are illustrated by an analysis of data from the real-estate market.
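
    The two tests named above are standard; a minimal sketch using scipy.stats on simulated data (the real-estate figures themselves are not reproduced in the abstract):

        # Illustrative use of the two classical location tests named in the abstract.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # One-sample location: Wilcoxon signed-rank test of H0: median = 0.
        deviations = rng.normal(loc=0.5, scale=1.0, size=30)   # simulated stand-in data
        stat, p_one = stats.wilcoxon(deviations)
        print(f"Wilcoxon signed-rank: p = {p_one:.3f}")

        # Two-sample location: Wilcoxon-Mann-Whitney test of equal locations.
        sample_a = rng.normal(loc=10.0, scale=2.0, size=40)
        sample_b = rng.normal(loc=11.0, scale=2.0, size=40)
        stat, p_two = stats.mannwhitneyu(sample_a, sample_b, alternative="two-sided")
        print(f"Wilcoxon-Mann-Whitney: p = {p_two:.3f}")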

    mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location

    This paper develops mfEGRA, a multifidelity active learning method that uses data-driven, adaptively refined surrogates for locating the failure boundary in reliability analysis. The work addresses the prohibitive cost of reliability analysis via Monte Carlo sampling of expensive-to-evaluate high-fidelity models by exploiting cheaper-to-evaluate approximations of the high-fidelity model. The method builds on Efficient Global Reliability Analysis (EGRA), a surrogate-based method that uses adaptive sampling to refine a Gaussian process surrogate for failure boundary location with a single-fidelity model. Our method introduces a two-stage adaptive sampling criterion that uses a multifidelity Gaussian process surrogate to leverage multiple information sources with different fidelities. It combines the expected feasibility criterion from EGRA with a one-step lookahead information gain to refine the surrogate around the failure boundary. The computational savings from mfEGRA depend on the discrepancy between the different models and on the relative cost of evaluating them compared to the high-fidelity model. We show that accurate estimation of reliability using mfEGRA yields computational savings of approximately 46% for an analytic multimodal test problem and 24% for a three-dimensional acoustic horn problem, compared to single-fidelity EGRA. We also show the effect of using a priori drawn Monte Carlo samples in the implementation for the acoustic horn problem, where mfEGRA yields computational savings of 45% for the three-dimensional case and 48% for a rarer-event four-dimensional case compared to single-fidelity EGRA.
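
    The abstract does not spell out the mfEGRA criterion itself, but the expected feasibility function it inherits from single-fidelity EGRA has a standard closed form. A minimal sketch, assuming a Gaussian process posterior mean mu and standard deviation sigma at candidate points (the lookahead information-gain term is not reproduced here):

        # Expected feasibility function (EFF) around the failure threshold z_bar;
        # EGRA samples next where EFF is largest, i.e. where the surrogate is both
        # close to the threshold and uncertain.
        import numpy as np
        from scipy.stats import norm

        def expected_feasibility(mu, sigma, z_bar=0.0):
            eps = 2.0 * sigma                        # common choice: eps = 2 * sigma(x)
            t = (z_bar - mu) / sigma
            tm = (z_bar - eps - mu) / sigma
            tp = (z_bar + eps - mu) / sigma
            return ((mu - z_bar) * (2 * norm.cdf(t) - norm.cdf(tm) - norm.cdf(tp))
                    - sigma * (2 * norm.pdf(t) - norm.pdf(tm) - norm.pdf(tp))
                    + eps * (norm.cdf(tp) - norm.cdf(tm)))

        # The adaptive loop evaluates EFF on candidates, refines the surrogate at
        # the maximizer, and stops once max EFF falls below a small tolerance.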

    Rank-based optimal tests of the adequacy of an elliptic VARMA model

    We derive optimal rank-based tests for the adequacy of a vector autoregressive-moving average (VARMA) model with elliptically contoured innovation density. These tests are based on the ranks of pseudo-Mahalanobis distances and on normed residuals computed from Tyler's [Ann. Statist. 15 (1987) 234-251] scatter matrix; they generalize the univariate signed-rank procedures proposed by Hallin and Puri [J. Multivariate Anal. 39 (1991) 1-29]. Two types of optimality properties are considered, both in the local and asymptotic sense, à la Le Cam: (a) (fixed-score procedures) local asymptotic minimaxity at selected radial densities, and (b) (estimated-score procedures) local asymptotic minimaxity uniform over a class F of radial densities. Contrary to their classical counterparts, based on cross-covariance matrices, these tests remain valid under arbitrary elliptically symmetric innovation densities, including those with infinite variance and heavy tails. We show that the AREs of our fixed-score procedures, with respect to traditional (Gaussian) methods, are the same as for the tests of randomness proposed in Hallin and Paindaveine [Bernoulli 8 (2002b) 787-815]. The multivariate serial extensions of the classical Chernoff-Savage and Hodges-Lehmann results obtained there thus also hold here; in particular, the van der Waerden versions of our tests are uniformly more powerful than those based on cross-covariances. As for our estimated-score procedures, they are fully adaptive and hence uniformly optimal over the class of innovation densities satisfying the required technical assumptions.

    Comment: Published at http://dx.doi.org/10.1214/009053604000000724 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
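
    Tyler's scatter matrix, underlying the pseudo-Mahalanobis distances above, is computed by a well-known fixed-point iteration; a minimal sketch (the function name and the heavy-tailed example data are illustrative, not taken from the paper):

        import numpy as np

        def tyler_scatter(X, tol=1e-8, max_iter=200):
            """Tyler's (1987) scatter matrix for centred observations X of shape
            (n, p), normalised so that trace(V) = p."""
            n, p = X.shape
            V = np.eye(p)
            for _ in range(max_iter):
                d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(V), X)  # x_i' V^{-1} x_i
                V_new = (p / n) * (X.T * (1.0 / d)) @ X
                V_new *= p / np.trace(V_new)                          # fix the scale
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new
                V = V_new
            return V

        rng = np.random.default_rng(1)
        residuals = rng.standard_t(df=2, size=(500, 3))   # infinite-variance example
        V = tyler_scatter(residuals)
        dist = np.einsum("ij,jk,ik->i", residuals, np.linalg.inv(V), residuals)
        ranks = dist.argsort().argsort() + 1              # ranks entering the tests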

    High-dimensional change-point detection with sparse alternatives

    We consider the problem of detecting a change in the mean of a sequence of Gaussian vectors. Under the alternative hypothesis, the change occurs only in some subset of the components of the vector. We propose a test for the presence of a change-point that is adaptive to the number of changing components. Under the assumption that the vector dimension tends to infinity and the length of the sequence grows more slowly than the dimension of the signal, we obtain the detection boundary for this problem and prove its rate optimality.
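
    As a simplified illustration of the kind of statistic involved (not the paper's exact adaptive test), one can compute per-coordinate CUSUM statistics at every candidate change point and combine them once for dense and once for sparse alternatives:

        import numpy as np

        def changepoint_scan(X):
            """Per-coordinate standardised mean-difference statistics for each
            candidate change point t; X has shape (n, p) with unit-variance noise."""
            n, _ = X.shape
            stats = []
            for t in range(1, n):
                diff = X[t:].mean(axis=0) - X[:t].mean(axis=0)
                stats.append(diff / np.sqrt(1.0 / t + 1.0 / (n - t)))
            return np.array(stats)                 # shape (n - 1, p)

        rng = np.random.default_rng(2)
        n, p, k = 200, 1000, 5                     # change in only k of p coordinates
        X = rng.standard_normal((n, p))
        X[n // 2:, :k] += 1.5                      # sparse mean shift after t = n/2
        Z = changepoint_scan(X)
        dense = (Z ** 2).sum(axis=1)               # suited to many small changes
        sparse = np.abs(Z).max(axis=1)             # suited to few large changes
        print("dense argmax:", dense.argmax() + 1, "sparse argmax:", sparse.argmax() + 1)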

    Pointwise adaptive estimation for robust and quantile regression

    A nonparametric procedure for robust regression estimation and for quantile regression is proposed which is completely data-driven and adapts locally to the regularity of the regression function. This is achieved by considering, at each point, M-estimators over different local neighbourhoods and by a local model-selection procedure based on sequential testing. Non-asymptotic risk bounds are obtained, which yield rate optimality for large-sample asymptotics under weak conditions. Simulations for different univariate median regression models show good finite-sample properties, also in comparison with traditional methods. The approach is extended to image denoising and applied to CT scans in cancer research.
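
    A rough sketch of the Lepski-type idea at a single point, using local medians over nested neighbourhoods with an ad-hoc stopping rule (a simplified stand-in for the paper's sequential-testing procedure; all constants are illustrative):

        import numpy as np

        def adaptive_local_median(x, y, x0, bandwidths, z=2.0):
            """Local median at x0 over the largest bandwidth that stays consistent
            with all smaller-bandwidth estimates (Lepski-type rule)."""
            estimates, half_widths = [], []
            for h in sorted(bandwidths):
                yi = y[np.abs(x - x0) <= h]
                if yi.size < 5:
                    continue
                m = np.median(yi)
                hw = z * 1.2533 * yi.std(ddof=1) / np.sqrt(yi.size)  # rough CI half-width
                if any(abs(m - m_k) > hw + hw_k
                       for m_k, hw_k in zip(estimates, half_widths)):
                    break                      # inconsistency: stop at previous scale
                estimates.append(m)
                half_widths.append(hw)
            return estimates[-1] if estimates else np.median(y)

        rng = np.random.default_rng(3)
        x = np.sort(rng.uniform(0, 1, 400))
        y = np.sin(8 * x) + 0.3 * rng.standard_t(df=3, size=400)  # heavy-tailed noise
        print(adaptive_local_median(x, y, x0=0.5, bandwidths=[0.02, 0.05, 0.1, 0.2, 0.4]))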

    Spatial aggregation of local likelihood estimates with applications to classification

    This paper presents a new method for spatially adaptive local (constant) likelihood estimation which applies to a broad class of nonparametric models, including the Gaussian, Poisson and binary response models. The main idea of the method is, given a sequence of local likelihood estimates ("weak" estimates), to construct a new aggregated estimate whose pointwise risk is of the order of the smallest risk among all "weak" estimates. We also propose a new approach to selecting the parameters of the procedure by prescribing the behavior of the resulting estimate in the simple parametric situation. We establish a number of important theoretical results concerning the optimality of the aggregated estimate. In particular, our "oracle" result states that its risk is, up to a logarithmic multiplier, equal to the smallest risk for the given family of estimates. The performance of the procedure is illustrated by application to the classification problem. A numerical study demonstrates its reasonable performance in simulated and real-life examples.

    Comment: Published at http://dx.doi.org/10.1214/009053607000000271 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
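
    A simplified caricature of the aggregation step for Gaussian local-constant estimates at a single point (the critical values that the paper calibrates via the prescribed parametric behavior are fixed ad hoc here):

        import numpy as np

        def aggregated_estimate(x, y, x0, bandwidths, sigma=1.0, z=3.0):
            """Sequentially aggregate local means ("weak" estimates) over
            increasing bandwidths, down-weighting a new scale when it is
            inconsistent with the current aggregate."""
            theta_hat = None
            for h in sorted(bandwidths):
                yi = y[np.abs(x - x0) <= h]
                if yi.size == 0:
                    continue
                theta_k = yi.mean()                       # weak estimate at scale h
                if theta_hat is None:
                    theta_hat = theta_k
                    continue
                T_k = yi.size * (theta_k - theta_hat) ** 2 / (2 * sigma ** 2)
                gamma = max(0.0, 1.0 - T_k / z)           # aggregation weight in [0, 1]
                theta_hat = gamma * theta_k + (1 - gamma) * theta_hat
            return theta_hat

        rng = np.random.default_rng(4)
        x = np.linspace(0, 1, 500)
        y = np.where(x < 0.6, 1.0, 3.0) + rng.standard_normal(500)  # piecewise mean
        print(aggregated_estimate(x, y, x0=0.55, bandwidths=[0.01, 0.03, 0.1, 0.3]))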