Grid structure impact in sparse point representation of derivatives
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. The computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, the derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencil values are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated as a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids with appropriate structures. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken with the grid structure in order to keep the truncation error under a given accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
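As a rough illustration of the thresholding that builds such a grid, the following Python sketch marks the samples of a uniformly sampled function whose interpolatory wavelet coefficient, i.e. the interpolation error of the 4-point (cubic) interpolating subdivision prediction, exceeds a tolerance. The function name, the dyadic grid size, and the omission of one-sided boundary stencils are simplifications for illustration, not taken from the paper:

```python
import numpy as np

def cubic_details(f, tol):
    """Flag samples of f (on a uniform dyadic grid, len(f) = 2**J + 1)
    whose interpolatory wavelet coefficient exceeds tol.  The coefficient
    at a midpoint of each level is its interpolation error under the
    4-point (cubic) interpolating subdivision prediction."""
    n = len(f)
    significant = np.zeros(n, dtype=bool)
    significant[[0, n - 1]] = True             # always keep the boundaries
    stride = 1                                 # finest level first
    while 2 * stride < n - 1:
        even = np.arange(0, n, 2 * stride)     # coarser-level grid points
        for k in range(1, len(even) - 2):      # interior midpoints only;
            mid = even[k] + stride             # boundary stencils omitted here
            pred = (-f[even[k - 1]] + 9.0 * f[even[k]]
                    + 9.0 * f[even[k + 1]] - f[even[k + 2]]) / 16.0
            if abs(f[mid] - pred) > tol:       # significant detail: keep point
                significant[mid] = True
        stride *= 2
    return significant
```

On a function with an isolated kink, the surviving points cluster around the kink while smooth regions stay coarse, which is exactly the adaptive grid behavior described above.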
TVL1 shape approximation from scattered 3D data
With the emergence of 3D sensors such as laser scanners, and of 3D reconstruction from cameras, large 3D point clouds can now be sampled from physical objects within a scene. The raw 3D samples delivered by these sensors, however, contain only a limited degree of information about the environment the objects exist in, so further high-level geometrical modelling is essential. In addition, issues such as sparse measurements, noise, missing samples due to occlusion, and the inherently huge datasets involved make this task extremely challenging. This paper addresses these issues by presenting a new 3D shape modelling framework for samples acquired from 3D sensors. Motivated by the success of nonlinear kernel-based approximation techniques in statistics, existing methods using radial basis functions are applied to 3D object shape approximation. The task is framed as an optimization problem and extended with non-smooth L1 total variation regularization. Appropriate convex energy functionals are constructed and solved by applying the Alternating Direction Method of Multipliers approach, which is then extended using Gauss-Seidel iterations. This significantly lowers the computational complexity of generating 3D shape from 3D samples, while both numerical and qualitative analyses confirm the superior shape modelling performance of this new framework compared with existing 3D shape reconstruction techniques.
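To make the optimization concrete, here is a minimal one-dimensional sketch of an L1-regularized radial-basis-function fit solved with ADMM; the Gaussian kernel, the first-difference operator standing in for the 3D total-variation term, and all parameter names are illustrative assumptions rather than the paper's actual formulation:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def rbf_tv_l1_fit(x, y, centers, eps=2.0, lam=0.1, rho=1.0, iters=200):
    """Minimize 0.5*||Phi w - y||^2 + lam*||D w||_1 with ADMM, where Phi
    is a Gaussian-RBF design matrix and D a first-difference operator
    (a 1-D toy stand-in for the paper's 3-D TV-L1 energy)."""
    Phi = np.exp(-(eps * (x[:, None] - centers[None, :])) ** 2)
    m = len(centers)
    D = np.eye(m, k=1)[:-1] - np.eye(m)[:-1]          # first differences
    w = np.zeros(m)
    z = np.zeros(m - 1)                               # split variable z = D w
    u = np.zeros(m - 1)                               # scaled dual variable
    A = Phi.T @ Phi + rho * D.T @ D                   # fixed normal matrix
    for _ in range(iters):
        w = np.linalg.solve(A, Phi.T @ y + rho * D.T @ (z - u))
        z = soft_threshold(D @ w + u, lam / rho)      # prox of the L1 term
        u = u + D @ w - z                             # dual ascent step
    return w, Phi
```

The z-update is a closed-form soft threshold, which is what makes ADMM attractive for non-smooth L1 terms; the Gauss-Seidel extension mentioned in the abstract would replace the exact w-solve with cheaper iterative sweeps.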
Practical recommendations for gradient-based training of deep architectures
Learning algorithms related to artificial neural networks and in particular
for Deep Learning may seem to involve many bells and whistles, called
hyper-parameters. This chapter is meant as a practical guide with
recommendations for some of the most commonly used hyper-parameters, in
particular in the context of learning algorithms based on back-propagated
gradient and gradient-based optimization. It also discusses how to deal with
the fact that more interesting results can be obtained when allowing one to
adjust many hyper-parameters. Overall, it describes elements of the practice
used to successfully and efficiently train and debug large-scale and often deep
multi-layer neural networks. It closes with open questions about the training
difficulties observed with deeper architectures.
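As a concrete reference point for the hyper-parameters discussed, the sketch below wires the most common ones (initial learning rate, 1/t decay, momentum, minibatch size, number of epochs, L2 weight decay) into a minibatch SGD loop; logistic regression stands in for a deep network here, and all default values are placeholders, not the chapter's recommendations:

```python
import numpy as np

def sgd_train(X, y, lr0=0.1, decay=1e-3, momentum=0.9,
              batch_size=32, epochs=20, weight_decay=1e-4, seed=0):
    """Minibatch SGD on logistic regression, exposing the usual
    hyper-parameters; default values are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.01, X.shape[1])      # small random initialization
    v = np.zeros_like(w)                       # momentum buffer
    t = 0                                      # update counter for the schedule
    for _ in range(epochs):
        batches = np.array_split(rng.permutation(len(X)),
                                 max(1, len(X) // batch_size))
        for idx in batches:                    # fresh shuffle every epoch
            p = 1.0 / (1.0 + np.exp(-X[idx] @ w))      # forward pass
            grad = X[idx].T @ (p - y[idx]) / len(idx)  # logistic-loss gradient
            grad += weight_decay * w                   # L2 regularization
            lr = lr0 / (1.0 + decay * t)               # 1/t learning-rate decay
            v = momentum * v - lr * grad               # momentum update
            w = w + v
            t += 1
    return w
```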
USLV: Unspanned Stochastic Local Volatility Model
We propose a new framework for modeling stochastic local volatility, with
potential applications to modeling derivatives on interest rates, commodities,
credit, equity, FX etc., as well as hybrid derivatives. Our model extends the
linearity-generating unspanned volatility term structure model by Carr et al.
(2011) by adding a local volatility layer to it. We outline efficient numerical
schemes for pricing derivatives in this framework for a particular four-factor
specification (two "curve" factors plus two "volatility" factors). We show that
the dynamics of such a system can be approximated by a Markov chain on a
two-dimensional space (Z_t,Y_t), where coordinates Z_t and Y_t are given by
direct (Kronecker) products of values of pairs of curve and volatility factors,
respectively. The resulting Markov chain dynamics on such partly "folded" state
space enables fast pricing by the standard backward induction. Using a
nonparametric specification of the Markov chain generator, one can accurately
match arbitrary sets of vanilla option quotes with different strikes and
maturities. Furthermore, we consider an alternative formulation of the model in
terms of an implied time change process. The latter is specified
nonparametrically, again enabling accurate calibration to arbitrary sets of
vanilla option quotes.
Comment: Sections 3.2 and 3.3 are re-written, 3 figures added.
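For the pricing step, the backward induction the abstract refers to reduces to a repeated discounted matrix-vector product once the chain is built. The sketch below shows this generic roll-back for a European payoff, with the transition matrix P standing in for the calibrated generator of the folded (Z_t, Y_t) chain; the flat discount rate and all names are assumptions for illustration:

```python
import numpy as np

def backward_induction(P, payoff, r, dt, n_steps):
    """Roll a terminal payoff back through a discrete Markov chain.
    P: one-step transition matrix over the (folded) state space;
    payoff: option payoff evaluated at each state at maturity;
    r, dt: flat discount rate and time step (illustrative assumptions)."""
    disc = np.exp(-r * dt)
    V = payoff.copy()                  # value vector at maturity
    for _ in range(n_steps):
        V = disc * (P @ V)             # discounted conditional expectation
    return V                           # V[i] = value today in state i
```

An early-exercise feature would simply replace the roll-back line with V = np.maximum(payoff, disc * (P @ V)).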
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international - Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1