A one-way ANOVA test for functional data with graphical interpretation
A new functional ANOVA test, with a graphical interpretation of the result,
is presented. The test is an extension of the global envelope test introduced
by Myllymaki et al. (2017, Global envelope tests for spatial processes, J. R.
Statist. Soc. B 79, 381--404, doi: 10.1111/rssb.12172). The graphical
interpretation is realized by a global envelope which is drawn jointly for all
samples of functions. If a mean function computed from the empirical data is
out of the given envelope, the null hypothesis is rejected with the
predetermined significance level α. An advantage of the proposed
one-way functional ANOVA is that it identifies the domains of the functions
that are responsible for the potential rejection. We introduce two versions of
this test: the first gives a graphical interpretation of the test results in
the original space of the functions and the second immediately offers a
post-hoc test by identifying the significant pairwise differences between
groups. Because the proposed tests rely on a discretization of the functions,
they are also applicable to the multidimensional ANOVA problem. In the
empirical part of the article, we demonstrate the use of the method by
analyzing fiscal decentralization in European countries. The aim of the
empirical analysis is to capture differences between the levels of government
expenditure decentralization ratio among different groups of European
countries. The idea behind, based on the existing literature, is
straightforward: countries with a longer European integration history are
supposed to decentralize more of their government expenditure. We use the
government expenditure centralization ratios of 29 European Union and EFTA
countries in the period from 1995 to 2016, sorted into three groups according to the
presumed level of European economic and political integration.
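The envelope construction described above can be sketched with a permutation test: under the null hypothesis of equal group means, the group labels are exchangeable, so group-mean curves recomputed under permuted labels trace out the null distribution, and a global envelope is read off from the most central of them. The sketch below is a minimal illustration in the spirit of the method, not the authors' implementation; all function and variable names are ours.

```python
import numpy as np

def global_envelope_anova(curves, labels, n_perm=999, alpha=0.05, rng=None):
    """One-way functional ANOVA via a permutation global envelope
    (extreme-rank envelope in the spirit of Myllymaki et al. 2017).

    curves : (n, d) array of discretized functions
    labels : (n,) array of group identifiers
    """
    rng = np.random.default_rng(rng)
    groups = np.unique(labels)

    def stat(lab):
        # test statistic: the group mean functions, concatenated
        return np.concatenate([curves[lab == g].mean(axis=0) for g in groups])

    # observed statistic plus n_perm label-permuted replicates
    T = np.vstack([stat(labels)] +
                  [stat(rng.permutation(labels)) for _ in range(n_perm)])
    K = T.shape[0]
    # pointwise ranks of each curve, from below and from above
    order = T.argsort(axis=0).argsort(axis=0)       # 0 = smallest at that point
    r_low, r_high = order + 1, K - order
    depth = np.minimum(r_low, r_high).min(axis=1)   # extreme rank per curve
    k_alpha = np.quantile(depth, alpha)             # critical depth
    keep = depth > k_alpha                          # the most central curves
    lower, upper = T[keep].min(axis=0), T[keep].max(axis=0)
    outside = (T[0] < lower) | (T[0] > upper)       # observed curve vs. envelope
    return {"reject": bool(outside.any()), "outside": outside,
            "lower": lower, "upper": upper}
```

The `outside` mask is what gives the graphical interpretation: it marks the parts of the domain where the observed mean functions leave the envelope and are therefore responsible for the rejection.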
Fast DD-classification of functional data
A fast nonparametric procedure for classifying functional data is introduced.
It consists of a two-step transformation of the original data plus a classifier
operating on a low-dimensional hypercube. The functional data are first mapped
into a finite-dimensional location-slope space and then transformed by a
multivariate depth function into the DD-plot, which is a subset of the unit
hypercube. This transformation yields a new notion of depth for functional
data. Three alternative depth functions are employed for this, as well as two
rules for the final classification on the DD-plot. The resulting classifier
needs to be cross-validated over only a small range of parameters, which is
restricted by a Vapnik-Chervonenkis bound. The entire methodology does not
involve smoothing techniques, is completely nonparametric, and achieves
Bayes optimality under standard distributional settings. It is robust,
efficiently computable, and has been implemented in an R environment.
Applicability of the new approach is demonstrated by simulations as well as a
benchmark study.
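The two-step transformation can be illustrated with a small sketch: discretized curves are first mapped to a location-slope plane, each point then receives a depth with respect to every class, and classification happens on the resulting DD-plot. For simplicity the sketch uses Mahalanobis depth and the basic max-depth rule rather than the paper's cross-validated classification rules; all names are illustrative.

```python
import numpy as np

def location_slope(curves, grid):
    """Map each discretized function to (mean level, mean slope)."""
    loc = curves.mean(axis=1)
    slope = np.gradient(curves, grid, axis=1).mean(axis=1)
    return np.column_stack([loc, slope])

def mahalanobis_depth(points, cloud):
    """Depth of each row of `points` with respect to the sample `cloud`."""
    mu = cloud.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(cloud, rowvar=False))
    diff = points - mu
    md2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return 1.0 / (1.0 + md2)

def dd_classify(train0, train1, test, grid):
    """Max-depth rule on the DD-plot: each test curve gets DD-plot
    coordinates (d0, d1) and is assigned to the class in which it is
    deeper."""
    z0 = location_slope(train0, grid)
    z1 = location_slope(train1, grid)
    zt = location_slope(test, grid)
    d0 = mahalanobis_depth(zt, z0)
    d1 = mahalanobis_depth(zt, z1)
    return (d1 > d0).astype(int)
```

The depths (d0, d1) are exactly the DD-plot coordinates: any classifier on the unit square, not just the max-depth diagonal rule, can be trained on them.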
Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors
Sparsity has become a key concept for solving high-dimensional inverse
problems using variational regularization techniques. Recently, encoding
similar sparsity constraints in the prior distribution of the Bayesian
framework for inverse problems has attracted attention. Important questions
about the relation between regularization theory and Bayesian inference still
need to be addressed when using sparsity promoting inversion. A practical
obstacle for these examinations is the lack of fast posterior sampling
algorithms for sparse, high-dimensional Bayesian inversion: Accessing the full
range of Bayesian inference methods requires being able to draw samples from
the posterior probability distribution in a fast and efficient way. This is
usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this
article, we develop and examine a new implementation of a single-component
Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that
the efficiency of our Gibbs sampler increases when the level of sparsity or the
dimension of the unknowns is increased. This property is contrary to the
properties of the most commonly applied Metropolis-Hastings (MH) sampling
schemes: We demonstrate that the efficiency of MH schemes for L1-type priors
dramatically decreases when the level of sparsity or the dimension of the
unknowns is increased. Practically, Bayesian inversion for L1-type priors using
MH samplers is not feasible at all. As this is commonly believed to be an
intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also
challenges common beliefs about the applicability of sample based Bayesian
inference.
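The key structural fact behind such a sampler is that, with a Gaussian likelihood and a Laplace (L1) prior, every single-component full conditional is a piecewise Gaussian that can be sampled exactly using truncated normals. The following is a minimal sketch of that idea for the model y = Ax + Gaussian noise; it is our own simplified variant, not the authors' implementation, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def gibbs_l1(A, y, sigma2, lam, n_iter=1000, rng=None):
    """Single-component Gibbs sampler for the posterior
        p(x | y) ∝ exp(-||y - A x||^2 / (2 sigma2) - lam * ||x||_1).
    Each full conditional of x_i is a two-piece Gaussian (one piece for
    x_i >= 0, one for x_i < 0) and is sampled exactly."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    x = np.zeros(d)
    col_sq = (A ** 2).sum(axis=0)
    samples = np.empty((n_iter, d))
    for it in range(n_iter):
        for i in range(d):
            r = y - A @ x + A[:, i] * x[i]       # residual excluding x_i
            s2 = sigma2 / col_sq[i]              # conditional variance
            m = A[:, i] @ r / col_sq[i]          # conditional mean (no prior)
            s = np.sqrt(s2)
            m_pos, m_neg = m - lam * s2, m + lam * s2   # piece means
            # log masses of the positive and negative pieces
            lw_pos = m_pos ** 2 / (2 * s2) + norm.logcdf(m_pos / s)
            lw_neg = m_neg ** 2 / (2 * s2) + norm.logcdf(-m_neg / s)
            p_pos = 1.0 / (1.0 + np.exp(lw_neg - lw_pos))
            if rng.random() < p_pos:             # positive piece: x_i >= 0
                x[i] = truncnorm.rvs(-m_pos / s, np.inf,
                                     loc=m_pos, scale=s, random_state=rng)
            else:                                # negative piece: x_i < 0
                x[i] = truncnorm.rvs(-np.inf, -m_neg / s,
                                     loc=m_neg, scale=s, random_state=rng)
        samples[it] = x
    return samples
```

Because each conditional draw is exact, there is no accept/reject step, which is the structural reason a sampler of this type can avoid the efficiency collapse of MH schemes as sparsity or dimension grows.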
The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows
The Gauss--Newton with approximated tensors (GNAT) method is a nonlinear
model reduction method that operates on fully discretized computational models.
It achieves dimension reduction by a Petrov--Galerkin projection associated
with residual minimization; it delivers computational efficiency by a
hyper-reduction procedure based on the 'gappy POD' technique. Originally
presented in Ref. [1], where it was applied to implicit nonlinear
structural-dynamics models, this method is further developed here and applied
to the solution of a benchmark turbulent viscous flow problem. To begin, this
paper develops global state-space error bounds that justify the method's design
and highlight its advantages in terms of minimizing components of these error
bounds. Next, the paper introduces a 'sample mesh' concept that enables a
distributed, computationally efficient implementation of the GNAT method in
finite-volume-based computational-fluid-dynamics (CFD) codes. The suitability
of GNAT for parameterized problems is highlighted with the solution of an
academic problem featuring moving discontinuities. Finally, the capability of
this method to reduce by orders of magnitude the core-hours required for
large-scale CFD computations, while preserving accuracy, is demonstrated with
the simulation of turbulent flow over the Ahmed body. For an instance of this
benchmark problem with over 17 million degrees of freedom, GNAT outperforms
several other nonlinear model-reduction methods, reduces the required
computational resources by more than two orders of magnitude, and delivers a
solution that differs by less than 1% from its high-dimensional counterpart.
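The 'gappy POD' idea behind this kind of hyper-reduction can be illustrated in a few lines: instead of evaluating a full state or residual vector, one evaluates it only at a small set of sample-mesh entries and reconstructs the full vector by least squares in a POD basis. A minimal sketch (our own illustration, with assumed names, not the GNAT implementation):

```python
import numpy as np

def gappy_pod_reconstruct(U, sample_idx, w_sampled):
    """Gappy POD: recover a full vector from a few sampled entries.

    U          : (n, k) POD basis with k << n, e.g. the leading left
                 singular vectors of a snapshot matrix
    sample_idx : indices of the sampled ('gappy') entries
    w_sampled  : values of the vector at sample_idx

    Solves min_a || U[sample_idx] @ a - w_sampled ||_2 and returns the
    full-length reconstruction U @ a.
    """
    a, *_ = np.linalg.lstsq(U[sample_idx], w_sampled, rcond=None)
    return U @ a
```

As long as the number of sampled entries is at least k and the sampled rows of U are well conditioned, any vector lying in the span of the basis is recovered exactly; this is what allows GNAT-style methods to evaluate nonlinear terms on a small sample mesh rather than the full CFD mesh.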