The explicit Laplace transform for the Wishart process
We derive the explicit formula for the joint Laplace transform of the Wishart
process and its time integral, which extends the original approach of Bru. We
compare our methodology with the alternative results given by the variation of
constants method, the linearization of the matrix Riccati ODEs, and the
Runge-Kutta algorithm. The new formula turns out to be fast and accurate.
Comment: Accepted by Journal of Applied Probability 51(3), 201
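For orientation, under one common parameterization of the Wishart dynamics, the joint transform is exponentially affine; the sketch below uses illustrative symbols (Θ, Λ, φ, ψ, drift M, volatility Q, dimension parameter β), not necessarily the paper's notation:

```latex
% Exponentially affine form of the joint Laplace transform (sketch):
\mathbb{E}_x\!\left[e^{-\operatorname{Tr}(\Theta X_t)
   -\operatorname{Tr}\left(\Lambda \int_0^t X_s\,ds\right)}\right]
 = e^{-\operatorname{Tr}(\phi(t)\,x)-\psi(t)},
% where \phi solves a matrix Riccati ODE
\phi'(t) = \phi(t)\,M + M^{\top}\phi(t)
   - 2\,\phi(t)\,Q^{\top}Q\,\phi(t) + \Lambda,
 \qquad \phi(0) = \Theta,
% and \psi integrates the trace term
\psi'(t) = \beta \operatorname{Tr}\!\big(Q^{\top}Q\,\phi(t)\big),
 \qquad \psi(0) = 0.
```

The explicit formula of the paper amounts to solving this Riccati system in closed form rather than by the numerical schemes it is compared against.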
A Dilated Inception Network for Visual Saliency Prediction
Recently, with the advent of deep convolutional neural networks (DCNN), the
improvements in visual saliency prediction research are impressive. One
possible direction to approach the next improvement is to fully characterize
the multi-scale saliency-influential factors with a computationally-friendly
module in DCNN architectures. In this work, we propose an end-to-end dilated
inception network (DINet) for visual saliency prediction. It captures
multi-scale contextual features effectively with very limited extra parameters.
Instead of utilizing parallel standard convolutions with different kernel sizes,
as in the existing inception module, our proposed dilated inception module (DIM)
uses parallel dilated convolutions with different dilation rates, which
significantly reduces the computational load while enriching the diversity of
receptive fields in feature maps. Moreover, the performance of our saliency
model is further improved by using a set of linear normalization-based
probability distribution distance metrics as loss functions. As such, we can
formulate saliency prediction as a probability distribution prediction task for
global saliency inference instead of a typical pixel-wise regression problem.
Experimental results on several challenging saliency benchmark datasets
demonstrate that our DINet with the proposed loss functions achieves
state-of-the-art performance with shorter inference time.
Comment: Accepted by IEEE Transactions on Multimedia. The source code is
available at https://github.com/ysyscool/DINe
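A minimal sketch of the receptive-field arithmetic behind the DIM idea (the dilation rates 1, 2, 3 and channel counts here are illustrative assumptions, not taken from the paper): parallel dilated 3x3 convolutions match the receptive fields of larger inception kernels while keeping the parameter count of a 3x3 kernel.

```python
# Dilated convs vs inception-style branches: same receptive fields,
# far fewer parameters (illustrative rates and channel counts).

def receptive_field(kernel_size, dilation):
    """Effective receptive field of a single dilated conv layer."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)

def params(kernel_size, channels_in, channels_out):
    """Parameter count of one 2-D convolution (bias ignored)."""
    return kernel_size ** 2 * channels_in * channels_out

# Inception-style branches: growing kernels -> growing parameter cost.
inception = [(k, 1) for k in (3, 5, 7)]
# DIM-style branches: fixed 3x3 kernels, growing dilation.
dim = [(3, d) for d in (1, 2, 3)]

for (k_i, d_i), (k_d, d_d) in zip(inception, dim):
    # Each pair covers the same receptive field...
    assert receptive_field(k_i, d_i) == receptive_field(k_d, d_d)
    # ...but the dilated branch never pays more than the 3x3 cost.
    assert params(k_d, 256, 256) <= params(k_i, 256, 256)
```

This is only the parameter-count argument; the actual DIM also concatenates the parallel branch outputs, as in a standard inception block.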
Asymptotic analysis of forward performance processes in incomplete markets and their ill-posed HJB equations
We consider the problem of optimal portfolio selection under forward
investment performance criteria in an incomplete market. The dynamics of the
prices of the traded assets depend on a pair of stochastic factors, namely, a
slow factor (e.g. a macroeconomic indicator) and a fast factor (e.g. stochastic
volatility). We analyze the associated forward performance SPDE and provide
explicit formulae for the leading order and first order correction terms for
the forward investment process and the optimal feedback portfolios. They both
depend on the investor's initial preferences and the dynamically changing
investment opportunities. The leading order terms resemble their time-monotone
counterparts, but with the appropriate stochastic time changes resulting from
averaging phenomena. The first-order terms capture the investor's reaction to
both the changes in the market input and their recent performance. Our
analysis is based on an expansion of the underlying ill-posed HJB equation, and
it is justified by means of an appropriate remainder estimate.
Comment: 26 page
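Schematically, the slow/fast two-scale structure described above corresponds to an expansion of the following form (ε and δ denote the fast- and slow-factor scales; the notation is illustrative, not the paper's):

```latex
% Two-scale expansion of the forward performance process (sketch):
U^{\varepsilon,\delta}(t,x) \;\approx\; u^{(0)}(t,x)
  \;+\; \sqrt{\varepsilon}\, u^{(1,0)}(t,x)
  \;+\; \sqrt{\delta}\, u^{(0,1)}(t,x),
```

where u^{(0)} is the leading-order term resembling the time-monotone case and the √ε, √δ terms are the first-order corrections.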
Smoothing the payoff for efficient computation of basket option prices
We consider the problem of pricing basket options in a multivariate
Black-Scholes or Variance Gamma model. From a numerical point of view, pricing such
options corresponds to moderate and high dimensional numerical integration
problems with non-smooth integrands. Due to this lack of regularity, higher
order numerical integration techniques may not be directly available, requiring
the use of methods like Monte Carlo specifically designed to work for
non-regular problems. We propose to use the inherent smoothing property of the
density of the underlying in the above models to mollify the payoff function by
means of an exact conditional expectation. The resulting conditional
expectation is unbiased and yields a smooth integrand, which is amenable to the
efficient use of adaptive sparse grid cubature. Numerical examples indicate
that the high-order method may perform orders of magnitude faster than
Monte Carlo or Quasi-Monte Carlo in dimensions up to 35.
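A one-dimensional sketch of the mollification idea (illustrative only, not the paper's multivariate construction): conditioning on all randomness except one Gaussian direction Z ~ N(0, σ²) replaces the kinked payoff max(y + Z − K, 0) by its exact conditional expectation, which is a smooth (C^∞) function of the remaining variable y — here given by a Bachelier-type closed form.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def smoothed_payoff(y, strike, sigma):
    """Exact conditional expectation E[max(y + Z - K, 0)] with
    Z ~ N(0, sigma^2): the kink at y = K is integrated out
    analytically, leaving a smooth integrand in y."""
    d = (y - strike) / sigma
    return (y - strike) * norm_cdf(d) + sigma * norm_pdf(d)

# The smoothed integrand is then amenable to high-order quadrature,
# e.g. adaptive sparse grid cubature, in the remaining dimensions.
```

The unconditional expectation is unchanged (the conditional expectation is unbiased), which is exactly what allows higher-order integration rules to be applied to the smoothed integrand.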