Analytical Solution for Wave Propagation in Stratified Acoustic/Porous Media. Part II: the 3D Case
We are interested in the modeling of wave propagation in an infinite
bilayered acoustic/poroelastic medium. We consider Biot's biphasic model in
the poroelastic layer. The first part was devoted to the calculation of the
analytical solution in two dimensions, thanks to the Cagniard-de Hoop method.
In this second part we consider the 3D case.
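As a pointer for readers unfamiliar with the method, the acoustic-layer problem that the Cagniard-de Hoop technique solves in closed form can be sketched as follows (the notation below is generic, not taken from the paper):

```latex
\frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2} - \Delta p
  = \delta(x - x_s)\, f(t),
```

where c is the sound speed in the acoustic layer, x_s the source location, and f the source time function. The method applies a Laplace transform in time and deforms the horizontal-slowness integration contour so that the inverse transform of the layered-medium Green's function can be read off in closed form.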
Optimal projection of observations in a Bayesian setting
Optimal dimensionality reduction methods are proposed for the Bayesian
inference of a Gaussian linear model with additive noise in the presence of
overabundant data. Three different optimal projections of the observations are
proposed based on information theory: the projection that minimizes the
Kullback-Leibler divergence between the posterior distributions of the original
and the projected models, the one that minimizes the expected Kullback-Leibler
divergence between the same distributions, and the one that maximizes the
mutual information between the parameter of interest and the projected
observations. The first two optimization problems are formulated as the
determination of an optimal subspace and therefore the solution is computed
using Riemannian optimization algorithms on the Grassmann manifold. Regarding
the maximization of the mutual information, it is shown that there exists an
optimal subspace that minimizes the entropy of the posterior distribution of
the reduced model; that a basis of the subspace can be computed as the solution
to a generalized eigenvalue problem; that an a priori error estimate on the
mutual information is available for this particular solution; and that the
dimensionality of the subspace needed to exactly conserve the mutual
information between the input and the output of the models is less than the
number of parameters to be inferred. Numerical applications to linear and nonlinear
models are used to assess the efficiency of the proposed approaches, and to
highlight their advantages compared to standard approaches based on the
principal component analysis of the observations.
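For the Gaussian linear case, the generalized-eigenvalue construction mentioned above can be sketched as follows; the model sizes, covariances, and the exact eigenproblem used here are illustrative assumptions, not the paper's formulation:

```python
import numpy as np
from scipy.linalg import eigh

def mutual_information(G, Sx, Se):
    # Gaussian linear model y = G x + e:  I(x; y) = 0.5 * logdet(I + Se^{-1} G Sx G^T)
    m = G.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(m) + np.linalg.solve(Se, G @ Sx @ G.T))
    return 0.5 * logdet

def optimal_projection(G, Sx, Se, r):
    # keep the top-r generalized eigenvectors of (G Sx G^T) w = lam * Se w
    vals, vecs = eigh(G @ Sx @ G.T, Se)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:r]]          # columns span the reduced observation subspace

rng = np.random.default_rng(0)
n, m = 3, 20                           # few parameters, overabundant observations
G = rng.standard_normal((m, n))
Sx, Se = np.eye(n), np.eye(m)

P = optimal_projection(G, Sx, Se, r=n)
mi_full = mutual_information(G, Sx, Se)
mi_reduced = mutual_information(P.T @ G, Sx, P.T @ Se @ P)
print(mi_full, mi_reduced)
```

With r equal to the number of parameters n, the reduction conserves the mutual information, since G Sx G^T has rank at most n, matching the dimensionality statement in the abstract.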
Decidability of the Monadic Shallow Linear First-Order Fragment with Straight Dismatching Constraints
The monadic shallow linear Horn fragment is well known to be decidable and
has many applications, e.g., in security protocol analysis, tree automata, and
abstraction refinement. How to extend the fragment to the non-Horn case while
preserving decidability, which would, e.g., make it possible to express
non-determinism in protocols, was a long-standing open problem. We prove decidability of the
non-Horn monadic shallow linear fragment via ordered resolution further
extended with dismatching constraints and discuss some applications of the new
decidable fragment.
Comment: 29 pages, long version of CADE-26 paper.
Convex and non-convex regularization methods for spatial point processes intensity estimation
This paper deals with feature selection procedures for spatial point
processes intensity estimation. We consider regularized versions of estimating
equations based on the Campbell theorem, derived from two classical functions:
Poisson likelihood and logistic regression likelihood. We provide general
conditions on the spatial point processes and on penalty functions which ensure
consistency, sparsity and asymptotic normality. We discuss the numerical
implementation and assess finite sample properties in a simulation study.
Finally, an application to tropical forestry datasets illustrates the use of
the proposed methods.
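As a rough illustration of regularized intensity estimation, here is a lasso-penalized Poisson likelihood fit on a gridded (quadrature-style) approximation; the design, penalty level, and optimizer are illustrative choices, not the paper's estimating equations:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def penalized_poisson_fit(X, counts, lam, step=0.05, iters=3000):
    # proximal-gradient fit of counts ~ Poisson(exp(X @ beta)) with an l1
    # penalty on the covariate coefficients (beta[0] is an unpenalized intercept)
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta -= step * (X.T @ (mu - counts)) / n     # averaged likelihood gradient
        beta[1:] = soft_threshold(beta[1:], step * lam)
    return beta

rng = np.random.default_rng(1)
n_cells = 400
X = np.column_stack([np.ones(n_cells), rng.uniform(-1, 1, (n_cells, 6))])
beta_true = np.array([1.0, 0.8, -0.6, 0.0, 0.0, 0.0, 0.0])  # two active covariates
counts = rng.poisson(np.exp(X @ beta_true))
beta_hat = penalized_poisson_fit(X, counts, lam=0.15)
print(np.round(beta_hat, 2))           # inactive coefficients should be (near-)zeroed
```

The l1 proximal step is what produces the sparsity (feature selection) that the abstract's consistency and sparsity results are about.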
Constructing IGA-suitable planar parameterization from complex CAD boundary by domain partition and global/local optimization
In this paper, we propose a general framework for constructing IGA-suitable
planar B-spline parameterizations from given complex CAD boundaries consisting
of a set of B-spline curves. Instead of forming the computational domain by a
simple boundary, planar domains with high genus and more complex boundary
curves are considered. Firstly, some pre-processing operations including
B\'ezier extraction and subdivision are performed on each boundary curve in
order to generate a high-quality planar parameterization; then a robust planar
domain partition framework is proposed to construct high-quality patch-meshing
results with few singularities from the discrete boundary formed by connecting
the end points of the resulting boundary segments. After the topology
information generation of quadrilateral decomposition, the optimal placement of
interior B\'ezier curves corresponding to the interior edges of the
quadrangulation is constructed by a global optimization method to achieve a
patch-partition with high quality. Finally, after the imposition of
C1/G1-continuity constraints on the interfaces of neighboring B\'ezier patches
with respect to each quad in the quadrangulation, the high-quality B\'ezier
patch parameterization is obtained by a C1-constrained local optimization
method to achieve uniform and orthogonal iso-parametric structures while
keeping the continuity conditions between patches. The efficiency and
robustness of the proposed method are demonstrated by several examples which
are compared to results obtained by the skeleton-based parameterization
approach.
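The Bézier machinery underlying such patch parameterizations can be evaluated with the standard de Casteljau recursion; a minimal tensor-product sketch, unrelated to the paper's implementation:

```python
import numpy as np

def de_casteljau(ctrl, t):
    # evaluate a Bezier curve at parameter t by repeated linear interpolation
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_patch(ctrl_grid, u, v):
    # tensor-product patch: run de Casteljau along each row, then on the results
    rows = np.array([de_casteljau(row, u) for row in ctrl_grid])
    return de_casteljau(rows, v)

# degree-(1,1) "patch" mapping the unit square onto itself
grid = np.array([[[0.0, 0.0], [1.0, 0.0]],
                 [[0.0, 1.0], [1.0, 1.0]]])
print(bezier_patch(grid, 0.25, 0.5))
```

In an IGA-suitable parameterization, the quality criteria in the abstract (uniformity, orthogonality of iso-parametric curves) are properties of maps of exactly this form, with the interior control points as the optimization unknowns.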
Predicting Deeper into the Future of Semantic Segmentation
The ability to predict and therefore to anticipate the future is an important
attribute of intelligence. It is also of utmost importance in real-time
systems, e.g. in robotics or autonomous driving, which depend on visual scene
understanding for decision making. While prediction of the raw RGB pixel values
in future video frames has been studied in previous work, here we introduce the
novel task of predicting semantic segmentations of future frames. Given a
sequence of video frames, our goal is to predict segmentation maps of not yet
observed video frames that lie up to a second or further in the future. We
develop an autoregressive convolutional neural network that learns to
iteratively generate multiple frames. Our results on the Cityscapes dataset
show that directly predicting future segmentations is substantially better than
predicting and then segmenting future RGB frames. Prediction results up to half
a second in the future are visually convincing and are much more accurate than
those of a baseline based on warping semantic segmentations using optical flow.
Comment: Accepted to ICCV 2017. Supplementary material available on the authors' webpage.
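The autoregressive roll-out idea, predictions fed back as inputs to reach further into the future, can be sketched independently of any particular network; the toy "model" below is purely illustrative, not the paper's CNN:

```python
import numpy as np

def predict_next(history, model):
    # one autoregressive step: map the most recent segmentation maps to the next
    return model(np.stack(history[-2:]))

def rollout(history, model, steps):
    # iteratively feed predictions back as inputs to predict deeper into the future
    history = list(history)
    preds = []
    for _ in range(steps):
        nxt = predict_next(history, model)
        preds.append(nxt)
        history.append(nxt)
    return preds

# toy "model": the segmentation drifts one pixel to the right each frame
def toy_model(stack):
    return np.roll(stack[-1], 1, axis=1)

h, w = 4, 6
frame = np.zeros((h, w), dtype=int)
frame[:, 0] = 1                        # a one-pixel-wide object class at the left edge
preds = rollout([frame], toy_model, steps=3)
print(preds[-1])
```

The key design point from the abstract is that the roll-out operates on segmentation maps directly rather than on RGB frames that must then be segmented.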
Design and Analysis of a Task-based Parallelization over a Runtime System of an Explicit Finite-Volume CFD Code with Adaptive Time Stepping
FLUSEPA (Registered trademark in France No. 134009261) is an advanced
simulation tool which performs a wide range of aerodynamic studies. It is the
unstructured finite-volume solver developed by Airbus Safran Launchers company
to calculate compressible, multidimensional, unsteady, viscous and reactive
flows around bodies in relative motion. The time integration in FLUSEPA is done
using an explicit temporal adaptive method. The current production version of
the code is based on MPI and OpenMP. This implementation leads to important
synchronizations that must be reduced. To tackle this problem, we present the
study of a task-based parallelization of the aerodynamic solver of FLUSEPA
using the runtime system StarPU and combining up to three levels of
parallelism. We validate our solution by the simulation (using a finite-volume
mesh with 80 million cells) of a take-off blast wave propagation for Ariane 5
launcher.
Comment: Accepted manuscript of a paper in the Journal of Computational Science.
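The general idea of task-based parallelism, replacing global synchronization with per-task dependencies, can be sketched with a toy dependency-driven scheduler; this is a generic Python illustration, not StarPU's API or FLUSEPA's code:

```python
import concurrent.futures as cf

def run_task_graph(tasks, deps):
    # launch each task as soon as its own inputs are ready, instead of placing
    # a global barrier between phases as in a bulk-synchronous MPI+OpenMP scheme
    futures = {}
    with cf.ThreadPoolExecutor() as pool:
        def submit(name):
            if name in futures:
                return futures[name]
            parents = [submit(d) for d in deps.get(name, [])]
            def work(fn=tasks[name], parents=parents):
                inputs = [p.result() for p in parents]   # waits only on its own deps
                return fn(*inputs)
            futures[name] = pool.submit(work)
            return futures[name]
        for name in tasks:
            submit(name)
        return {name: f.result() for name, f in futures.items()}

tasks = {
    "flux_a": lambda: 1.0,
    "flux_b": lambda: 2.0,
    "update": lambda a, b: a + b,      # runs once both flux computations finish
}
deps = {"update": ["flux_a", "flux_b"]}
res = run_task_graph(tasks, deps)
print(res["update"])
```

A runtime system like StarPU plays the role of the scheduler here, but additionally handles data transfers and can map tasks onto several levels of parallelism.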
Time-Optimal Trajectories of Generic Control-Affine Systems Have at Worst Iterated Fuller Singularities
We consider in this paper the regularity problem for time-optimal
trajectories of a single-input control-affine system on an n-dimensional
manifold. We prove that, under generic conditions on the drift and the
controlled vector field, any control u associated with an optimal trajectory is
smooth out of a countable set of times. More precisely, there exists an integer
K, only depending on the dimension n, such that the non-smoothness set of u is
made of isolated points, accumulations of isolated points, and so on up to K-th
order iterated accumulations.
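In standard notation (assumed here, not necessarily the paper's), a single-input control-affine system reads

```latex
\dot q = f_0(q) + u\, f_1(q), \qquad q \in M, \quad \dim M = n, \quad |u| \le 1,
```

where f_0 is the drift and f_1 the controlled vector field. A Fuller singularity is an accumulation point of switching times of the optimal control, and an iterated accumulation of order k is an accumulation of order-(k-1) accumulations; the result bounds this order by an integer K depending only on n.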
PAC-Bayes and Domain Adaptation
We provide two main contributions in PAC-Bayesian theory for domain
adaptation where the objective is to learn, from a source distribution, a
well-performing majority vote on a different, but related, target distribution.
Firstly, we propose an improvement of our previous approach (Germain et al.,
2013), which relies on a novel distribution pseudodistance
based on a disagreement averaging, allowing us to derive a new tighter domain
adaptation bound for the target risk. While this bound stands in the spirit of
common domain adaptation works, we derive a second bound (introduced in Germain
et al., 2016) that brings a new perspective on domain adaptation by deriving an
upper bound on the target risk where the distributions' divergence, expressed
as a ratio, controls the trade-off between a source error measure and the target
voters' disagreement. We discuss and compare both results, from which we obtain
PAC-Bayesian generalization bounds. Furthermore, from the PAC-Bayesian
specialization to linear classifiers, we infer two learning algorithms, and we
evaluate them on real data.
Comment: Neurocomputing, Elsevier, 2019. arXiv admin note: substantial text
overlap with arXiv:1503.0694
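The disagreement quantities these bounds are built from can be estimated empirically; below is one standard estimator of the rho-averaged voter disagreement for voters with outputs in {-1, +1} (the setup and names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def expected_disagreement(votes, rho):
    # E_{h,h'~rho} P_x[h(x) != h'(x)] for +/-1 voters equals E_x[(1 - m(x)^2) / 2],
    # where m(x) = sum_h rho(h) h(x) is the rho-weighted margin at x.
    # votes: (n_voters, n_examples) matrix of predictions on a sample
    margin = rho @ votes
    return float(np.mean((1.0 - margin**2) / 2.0))

rng = np.random.default_rng(2)
n_voters = 5
src_votes = np.sign(rng.standard_normal((n_voters, 200)))  # sample from the source
tgt_votes = np.sign(rng.standard_normal((n_voters, 200)))  # sample from the target
rho = np.full(n_voters, 1.0 / n_voters)                    # uniform posterior

d_src = expected_disagreement(src_votes, rho)
d_tgt = expected_disagreement(tgt_votes, rho)
print(d_src, d_tgt)
```

In the second bound described above, a divergence built from such source and target disagreement terms, expressed as a ratio, controls the trade-off in the target-risk upper bound.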
Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization
Due to their simplicity and excellent performance, parallel asynchronous
variants of stochastic gradient descent have become popular methods to solve a
wide range of large-scale optimization problems on multi-core architectures.
Yet, despite their practical success, support for nonsmooth objectives is still
lacking, making them unsuitable for many problems of interest in machine
learning, such as the Lasso, group Lasso or empirical risk minimization with
convex constraints.
In this work, we propose and analyze ProxASAGA, a fully asynchronous sparse
method inspired by SAGA, a variance reduced incremental gradient algorithm. The
proposed method is easy to implement and significantly outperforms the state of
the art on several nonsmooth, large-scale problems. We prove that our method
achieves a theoretical linear speedup with respect to the sequential version
under assumptions on the sparsity of gradients and block-separability of the
proximal term. Empirical benchmarks on a multi-core architecture illustrate
practical speedups of up to 12x on a 20-core machine.
Comment: Appears in Advances in Neural Information Processing Systems 30 (NIPS
2017), 28 pages.
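A sequential sketch of the variance-reduced proximal update that SAGA-type methods build on; ProxASAGA itself is asynchronous and sparse, so this toy lasso example shows only the serial skeleton, with illustrative names and problem sizes:

```python
import numpy as np

def prox_l1(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_saga(grad_i, prox, n, dim, step, iters, rng):
    # sequential (Prox)SAGA: variance-reduced stochastic gradient + proximal step
    x = np.zeros(dim)
    memory = np.zeros((n, dim))        # last gradient seen for each sample
    avg = memory.mean(axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(i, x)
        v = g - memory[i] + avg        # unbiased, variance-reduced direction
        avg += (g - memory[i]) / n
        memory[i] = g
        x = prox(x - step * v, step)   # proximal step handles the nonsmooth term
    return x

# toy lasso: 0.5 * mean((a_i @ x - b_i)^2) + lam * ||x||_1
rng = np.random.default_rng(3)
n, dim, lam = 100, 5, 0.1
A = rng.standard_normal((n, dim))
b = A @ np.array([1.0, -2.0, 0.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(n)
grad_i = lambda i, x: A[i] * (A[i] @ x - b[i])
x_hat = prox_saga(grad_i, lambda z, t: prox_l1(z, lam * t), n, dim,
                  step=0.02, iters=10000, rng=rng)
print(np.round(x_hat, 2))
```

The asynchronous contribution of the paper lies in running many such updates concurrently without locks, under the sparsity and block-separability assumptions stated above.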