Adaptive asynchronous time-stepping, stopping criteria, and a posteriori error estimates for fixed-stress iterative schemes for coupled poromechanics problems
In this paper we develop adaptive iterative coupling schemes for the Biot
system modeling coupled poromechanics problems. We particularly consider the
space-time formulation of the fixed-stress iterative scheme, in which we first
solve the flow problem over the whole space-time interval and then exploit the
space-time information to solve the mechanics problem. Two common
discretizations of this algorithm are then introduced based on two coupled
mixed finite element methods in-space and the backward Euler scheme in-time.
Therefrom, adaptive fixed-stress algorithms are built on conforming
reconstructions of the pressure and displacement, together with equilibrated
flux and stress reconstructions. These ingredients are used to derive a
posteriori error estimates for the fixed-stress algorithms, distinguishing the
different error components, namely the spatial discretization, the temporal
discretization, and the fixed-stress iteration components. Precisely, at each
iteration of the adaptive algorithm, we prove that our estimate gives
a guaranteed and fully computable upper bound on the energy-type error
measuring the difference between the exact and approximate pressure and
displacement. These error components are efficiently used to design adaptive
asynchronous time-stepping and adaptive stopping criteria for the fixed-stress
algorithms. Numerical experiments illustrate the efficiency of our estimates
and the performance of the adaptive iterative coupling algorithms.
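The fixed-stress split described above can be illustrated on a deliberately tiny model. The sketch below, a hypothetical scalar stand-in for the discretized Biot system (the coefficients, right-hand sides, and stabilization choice are all illustrative assumptions, not the paper's discretization), shows the structure of the iteration: a stabilized flow solve with frozen mechanics, a mechanics solve with the updated pressure, and a stopping criterion on the iteration increment.

```python
import numpy as np

# Toy 2x2 "Biot-like" coupled system (hypothetical coefficients):
#   a*p + b*u = f1   (flow equation)
#   b*p + c*u = f2   (mechanics equation)
a, b, c = 2.0, 1.0, 3.0
f1, f2 = 1.0, 2.0
L = b * b / c            # fixed-stress-type stabilization keeping the loop contractive

p, u = 0.0, 0.0
for k in range(200):
    # flow solve with frozen mechanics, stabilized by L
    p_new = (f1 - b * u + L * p) / (a + L)
    # mechanics solve with the updated pressure
    u_new = (f2 - b * p_new) / c
    # stopping criterion on the iteration increment (a stand-in for the
    # adaptive criterion based on a posteriori error components)
    done = abs(p_new - p) + abs(u_new - u) < 1e-12
    p, u = p_new, u_new
    if done:
        break

# compare against the monolithic (fully coupled) solve
p_ref, u_ref = np.linalg.solve([[a, b], [b, c]], [f1, f2])
print(p, u)
```

In the paper the stopping test is driven by the estimated iteration-error component rather than a raw increment; the increment test here only mimics that role.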
Foundational principles for large scale inference: Illustrations through correlation mining
When can reliable inference be drawn in the "Big Data" context? This paper
presents a framework for answering this fundamental question in the context of
correlation mining, with implications for general large scale inference. In
large scale data applications like genomics, connectomics, and eco-informatics
the dataset is often variable-rich but sample-starved: a regime where the
number of acquired samples (statistical replicates) is far fewer than the
number of observed variables (genes, neurons, voxels, or chemical
constituents). Much recent work has focused on understanding the
computational complexity of proposed methods for "Big Data." Sample complexity,
however, has received relatively less attention, especially in the setting
where the sample size is fixed and the dimension grows without bound. To
address this gap, we develop a unified statistical framework that explicitly
quantifies the sample complexity of various inferential tasks. Sampling regimes
can be divided into several categories: 1) the classical asymptotic regime
where the variable dimension is fixed and the sample size goes to infinity; 2)
the mixed asymptotic regime where both variable dimension and sample size go to
infinity at comparable rates; 3) the purely high dimensional asymptotic regime
where the variable dimension goes to infinity and the sample size is fixed.
Each regime has its niche, but only the last regime applies to exa-scale data
dimensions. We illustrate this high dimensional framework for the problem of
correlation mining, where it is the matrix of pairwise and partial correlations
among the variables that is of interest. We demonstrate various regimes of
correlation mining based on the unifying perspective of high dimensional
learning rates and sample complexity for different structured covariance models
and different inference tasks.
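The sample-starved regime the abstract describes can be made concrete with a small simulation (the sample size, dimensions, and seed below are illustrative assumptions): with the number of samples n held fixed while the number of variables p grows, the largest sample correlation among *truly independent* variables climbs toward 1, which is exactly why naive correlation mining needs the high-dimensional sample-complexity analysis discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                        # fixed number of samples (sample-starved regime)
max_abs_corr = {}
for p in (10, 100, 1000):     # growing number of variables
    X = rng.standard_normal((n, p))       # all variables truly independent
    C = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(C, 0.0)              # ignore trivial self-correlations
    max_abs_corr[p] = np.abs(C).max()     # largest spurious pairwise correlation
print(max_abs_corr)
```

The maximum spurious correlation grows with p at fixed n, so thresholds for declaring a correlation "real" must be calibrated to the purely high dimensional regime rather than classical asymptotics.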
Dictionary-based Tensor Canonical Polyadic Decomposition
To ensure interpretability of extracted sources in tensor decomposition, we
introduce in this paper a dictionary-based tensor canonical polyadic
decomposition which enforces one factor to belong exactly to a known
dictionary. A new formulation of sparse coding is proposed which enables
dictionary-based canonical polyadic decomposition of high-dimensional tensors. The
benefits of using a dictionary in tensor decomposition models are explored both
in terms of parameter identifiability and estimation accuracy. The performance
of the proposed algorithms is evaluated on the decomposition of simulated data
and the unmixing of hyperspectral images.
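The key constraint above, forcing one factor to belong exactly to a known dictionary, reduces in its simplest form to a one-atom-per-column sparse-coding step. The sketch below is a minimal illustration under assumptions (random unit-norm dictionary, small Gaussian noise, hypothetical atom indices), not the paper's full decomposition algorithm: each column of a noisy factor estimate is snapped to its best-matching dictionary atom.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 5))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms

# noisy factor whose columns are (hypothetically) atoms 3, 0 and 4
true_idx = [3, 0, 4]
A = D[:, true_idx] + 0.05 * rng.standard_normal((20, 3))

# one-atom-per-column sparse coding: pick the best-correlated atom
scores = np.abs(D.T @ A)                 # |atom . column| for every pair
idx = scores.argmax(axis=0)
A_proj = D[:, idx]                       # factor now lies exactly in the dictionary
print(idx.tolist())
```

In a full dictionary-based CPD, this projection would alternate with least-squares updates of the remaining (unconstrained) factors.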
Block Coordinate Descent for Sparse NMF
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L0 norm; however, its optimization is NP-hard. Mixed norms,
such as the L1/L2 measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L1 norm. However,
present algorithms designed for optimizing the mixed norm L1/L2 are slow,
and other formulations for sparse NMF have been proposed, such as those based on
L1 and L0 norms. Our proposed algorithm allows us to solve the mixed-norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm, and is suitable for large-scale datasets.
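The L1/L2 mixed-norm sparsity measure mentioned above can be stated compactly. A standard formulation (Hoyer's sparseness, which the sparse-NMF literature commonly uses for such constraints; this is background, not the paper's specific algorithm) maps a vector to [0, 1]:

```python
import numpy as np

def hoyer_sparseness(x):
    """L1/L2-based sparseness in [0, 1]: 0 for a constant vector,
    1 for a vector with a single nonzero entry."""
    x = np.asarray(x, dtype=float)
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.sqrt((x * x).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

print(hoyer_sparseness([0, 0, 0, 5]))    # single spike  -> 1.0
print(hoyer_sparseness([1, 1, 1, 1]))    # fully dense   -> 0.0
```

Constraining each learnt feature to a target value of this measure is what makes the mixed-norm problem harder than plain L1 penalization, and it is this constraint that a block coordinate descent scheme must handle per update.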
Substructured formulations of nonlinear structure problems - influence of the interface condition
We investigate the use of non-overlapping domain decomposition (DD) methods
for nonlinear structure problems. The classic techniques would combine a global
Newton solver with a linear DD solver for the tangent systems. We propose a
framework where we can swap Newton and DD, so that we solve independent
nonlinear problems for each substructure and linear condensed interface
problems. The objective is to decrease the number of communications between
subdomains and to improve parallelism. Depending on the interface condition, we
derive several formulations which are not equivalent, contrarily to the linear
case. Primal, dual and mixed variants are described and assessed on a simple
plasticity problem.
Comment: in International Journal for Numerical Methods in Engineering, Wiley, 201
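The swap described above, nonlinear solves per substructure with a linear condensed interface problem, can be sketched on a deliberately minimal model. Everything below is a hypothetical 1D toy (two nonlinear "springs" sharing one interface DOF, with made-up constitutive laws), intended only to show the structure: each substructure returns its reaction and tangent stiffness, and the interface update solves the condensed linearized balance (the primal variant).

```python
# Hypothetical 1D toy: two nonlinear substructures meet at one interface
# DOF x, loaded by an external force F at the interface.
F = 1.0

def sub1(x):
    """Nonlinear substructure solve: returns (reaction, tangent stiffness)."""
    return x + 0.1 * x**3, 1.0 + 0.3 * x**2

def sub2(x):
    return 2.0 * x + 0.05 * x**3, 2.0 + 0.15 * x**2

x = 0.0
for it in range(50):
    r1, k1 = sub1(x)                 # independent nonlinear subdomain solves
    r2, k2 = sub2(x)
    res = F - r1 - r2                # interface equilibrium residual
    if abs(res) < 1e-12:
        break
    x += res / (k1 + k2)             # condensed linear interface problem
print(x, it)
```

Only the scalar residual and tangent cross the interface per iteration, which is the communication-reduction argument made in the abstract; the dual and mixed variants would exchange interface forces or a combination instead.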
Penalized Likelihood and Bayesian Function Selection in Regression Models
Challenging research in various fields has driven a wide range of
methodological advances in variable selection for regression models with
high-dimensional predictors. In comparison, selection of nonlinear functions in
models with additive predictors has been considered only more recently. Several
competing suggestions have been developed at about the same time and often do
not refer to each other. This article provides a state-of-the-art review on
function selection, focusing on penalized likelihood and Bayesian concepts,
relating various approaches to each other in a unified framework. In an
empirical comparison, also including boosting, we evaluate several methods
through applications to simulated and real data, thereby providing some
guidance on their performance in practice.
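A common building block behind the penalized function-selection approaches reviewed above is a group-wise penalty that either removes an entire nonlinear function from the additive predictor or shrinks its whole coefficient block. As a minimal illustration (a generic group-lasso-style block soft-thresholding step, not any specific method from the review):

```python
import numpy as np

def group_threshold(beta, lam):
    """Block soft-thresholding: zero the whole coefficient block (drop the
    function entirely) when its norm is below lam, else shrink the block."""
    beta = np.asarray(beta, dtype=float)
    nrm = np.linalg.norm(beta)
    if nrm <= lam:
        return np.zeros_like(beta)      # function deselected
    return (1.0 - lam / nrm) * beta     # function kept, coefficients shrunk

print(group_threshold([0.1, -0.1, 0.05], 0.5))   # weak block: dropped
print(group_threshold([3.0, 4.0], 0.5))          # strong block: shrunk
```

Treating each function's basis coefficients as one block is what turns variable selection into function selection; the Bayesian analogues in the review replace the threshold with spike-and-slab-type priors on the blocks.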