B-spline techniques for volatility modeling
This paper is devoted to the application of B-splines to volatility modeling,
specifically the calibration of the leverage function in stochastic local
volatility models and the parameterization of an arbitrage-free implied
volatility surface calibrated to sparse option data. We use an extension of
classical B-splines obtained by including basis functions with infinite
support. We first come back to the application of shape-constrained B-splines
to the estimation of conditional expectations, not merely from a scatter plot
but also from the given marginal distributions. An application is the Monte
Carlo calibration of stochastic local volatility models by Markov projection.
Then we present a new technique for the calibration of an implied volatility
surface to sparse option data. We use a B-spline parameterization of the
Radon-Nikodym derivative of the underlying's risk-neutral probability density
with respect to a roughly calibrated base model. We show that this method
provides smooth arbitrage-free implied volatility surfaces. Finally, we sketch
a Galerkin method with B-spline finite elements for the solution of the partial
differential equation satisfied by the Radon-Nikodym derivative.
Comment: 25 pages
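As background for the B-spline machinery this abstract relies on, classical B-spline basis functions can be evaluated with the Cox-de Boor recursion. A minimal sketch in plain Python (an illustration of standard B-splines only, not the authors' extended basis with infinite support):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value of the i-th B-spline of degree k
    on knot vector t, evaluated at x."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] > t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, t, x))
    return left + right

# cubic splines on a clamped knot vector over [0, 1]
degree = 3
knots = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], dtype=float)
n_basis = len(knots) - degree - 1   # 7 basis functions

x = 0.4
values = [bspline_basis(i, degree, knots, x) for i in range(n_basis)]
# the basis is nonnegative and forms a partition of unity on [0, 1)
```

Nonnegativity and the partition-of-unity property are what make shape constraints (monotonicity, convexity) easy to impose through linear constraints on the coefficients, which is the feature the calibration procedure exploits.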
A self-calibration approach for optical long baseline interferometry imaging
Current optical interferometers are affected by unknown turbulent phases on
each telescope. In the field of radio-interferometry, the self-calibration
technique is a powerful tool to process interferometric data with missing phase
information. This paper intends to revisit the application of self-calibration
to Optical Long Baseline Interferometry (OLBI). We cast rigorously the OLBI
data processing problem into the self-calibration framework and demonstrate the
efficiency of the method on a real astronomical OLBI dataset.
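The phase-solving half of the alternation can be illustrated with the classical gain calibration step from radio interferometry: given model visibilities, per-telescope phase errors are estimated by iteratively correlating the data against the model. A toy noiseless sketch (the setup is illustrative and not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tel = 5

# complex "object" visibilities on every baseline (Hermitian, zero diagonal)
m = rng.standard_normal((n_tel, n_tel)) + 1j * rng.standard_normal((n_tel, n_tel))
m = np.triu(m, 1)
model = m + m.conj().T

# unknown per-telescope turbulent phases corrupt the observed visibilities
phi_true = rng.uniform(-np.pi, np.pi, n_tel)
g_true = np.exp(1j * phi_true)
observed = np.outer(g_true, g_true.conj()) * model   # V_ij = e^{i(phi_i - phi_j)} M_ij

# phase solution: sweep over telescopes, each time pointing g[i] at the
# data-model correlation while the other gains are held fixed
g = np.ones(n_tel, dtype=complex)
for _ in range(200):
    for i in range(n_tel):
        num = sum(observed[i, j] * np.conj(model[i, j]) * g[j]
                  for j in range(n_tel) if j != i)
        g[i] = num / abs(num)

# gains are recovered up to one global phase, which cancels here
corrected = observed / np.outer(g, g.conj())
```

In full self-calibration this phase solution alternates with an imaging step that re-estimates the model visibilities; the sketch above shows only the calibration half against a fixed model.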
A Framework for Fast Image Deconvolution with Incomplete Observations
In image deconvolution problems, the diagonalization of the underlying
operators by means of the FFT usually yields very large speedups. When there
are incomplete observations (e.g., in the case of unknown boundaries), standard
deconvolution techniques normally involve non-diagonalizable operators,
resulting in rather slow methods, or, otherwise, use inexact convolution
models, resulting in the occurrence of artifacts in the enhanced images. In
this paper, we propose a new deconvolution framework for images with incomplete
observations that allows us to work with diagonalized convolution operators,
and therefore is very fast. We iteratively alternate the estimation of the
unknown pixels and of the deconvolved image, using, e.g., an FFT-based
deconvolution method. This framework is an efficient, high-quality alternative
to existing methods of dealing with the image boundaries, such as edge
tapering. It can be used with any fast deconvolution method. We give an example
in which a state-of-the-art method that assumes periodic boundary conditions is
extended, through the use of this framework, to unknown boundary conditions.
Furthermore, we propose a specific implementation of this framework, based on
the alternating direction method of multipliers (ADMM). We provide a proof of
convergence for the resulting algorithm, which can be seen as a "partial" ADMM,
in which not all variables are dualized. We report experimental comparisons
with other primal-dual methods, where the proposed one performed at the level
of the state of the art. Four different kinds of applications were tested in
the experiments: deconvolution, deconvolution with inpainting, superresolution,
and demosaicing, all with unknown boundaries.
Comment: IEEE Trans. Image Process., to be published. 15 pages, 11 figures.
MATLAB code available at
https://github.com/alfaiate/DeconvolutionIncompleteOb
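The core alternation, estimating the unobserved boundary pixels from the current image and then applying a fast FFT-based deconvolution under periodic boundary conditions, can be sketched in a toy form. Here a Wiener filter stands in for the deconvolution step; this illustrates the idea only and is not the paper's ADMM implementation:

```python
import numpy as np

def fft_wiener(y, h_fft, nsr=1e-3):
    """Wiener deconvolution in the FFT domain (periodic boundary conditions)."""
    return np.real(np.fft.ifft2(np.fft.fft2(y) * np.conj(h_fft)
                                / (np.abs(h_fft) ** 2 + nsr)))

rng = np.random.default_rng(1)
n, pad = 64, 8
x_true = rng.random((n, n))

# periodic 3x3 box blur; a 'pad'-wide border of the observation is missing
h = np.zeros((n, n))
h[:3, :3] = 1.0 / 9.0
h_fft = np.fft.fft2(h)
y_full = np.real(np.fft.ifft2(np.fft.fft2(x_true) * h_fft))
mask = np.zeros((n, n), dtype=bool)
mask[pad:-pad, pad:-pad] = True           # True where pixels were observed
y_obs = np.where(mask, y_full, 0.0)

# alternate: fill the unobserved pixels from the current image estimate,
# then run the fast diagonalized (FFT-based) deconvolution on the result
x = np.zeros((n, n))
for _ in range(30):
    blur_x = np.real(np.fft.ifft2(np.fft.fft2(x) * h_fft))
    y_hat = np.where(mask, y_obs, blur_x)  # completed observation
    x = fft_wiener(y_hat, h_fft)
```

Because the convolution operator stays diagonalized by the FFT throughout, each iteration costs only a few FFTs, which is the speed advantage the framework claims over methods that handle the boundary with non-diagonalizable operators.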
One-step estimator paths for concave regularization
The statistics literature of the past 15 years has established many favorable
properties for sparse diminishing-bias regularization: techniques which can
roughly be understood as providing estimation under penalty functions spanning
the range of concavity between the $\ell_0$ and $\ell_1$ norms. However, lasso
$\ell_1$-regularized estimation remains the standard tool for industrial `Big
Data' applications because of its minimal computational cost and the presence
of easy-to-apply rules for penalty selection. In response, this article
proposes a simple new algorithm framework that requires no more computation
than a lasso path: the path of one-step estimators (POSE) does penalized
regression estimation on a grid of decreasing penalties, but adapts
coefficient-specific weights to decrease as a function of the coefficient
estimated in the previous path step. This provides sparse diminishing-bias
regularization at no extra cost over the fastest lasso algorithms. Moreover,
our `gamma lasso' implementation of POSE is accompanied by a reliable heuristic
for the fit degrees of freedom, so that standard information criteria can be
applied in penalty selection. We also provide novel results on the distance
between weighted-$\ell_1$ and $\ell_0$ penalized predictors; this allows us to build
intuition about POSE and other diminishing-bias regularization schemes. The
methods and results are illustrated in extensive simulations and in application
of logistic regression to evaluating the performance of hockey players.
Comment: Data and code are in the gamlr package for R. Supplemental appendix
is at https://github.com/TaddyLab/pose/raw/master/paper/supplemental.pd
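The POSE idea, a decreasing penalty grid where each coefficient's weight shrinks when the previous step estimated a large coefficient, can be sketched for an orthonormal design, where every weighted-lasso update reduces to componentwise soft-thresholding. The weight form w_j = 1/(1 + gamma*|beta_j|) follows the gamma-lasso description in the abstract; the rest of the setup is illustrative:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(2)
n, p = 200, 10
beta_true = np.zeros(p)
beta_true[:3] = [4.0, -3.0, 2.0]

# orthonormal design, so each weighted-lasso step has a closed form
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
y = X @ beta_true + 0.1 * rng.standard_normal(n)
z = X.T @ y                                  # OLS solution when X'X = I

gamma = 2.0                                  # weight-decay rate
penalties = np.geomspace(np.abs(z).max(), 0.01, 50)   # decreasing grid
beta = np.zeros(p)
path = []
for lam in penalties:
    # weights fall where the previous path step found a large coefficient,
    # so strong signals are shrunk less: diminishing-bias regularization
    # at no extra cost over a plain lasso path
    w = 1.0 / (1.0 + gamma * np.abs(beta))
    beta = soft_threshold(z, lam * w)
    path.append(beta.copy())
```

At gamma = 0 this reduces to an ordinary lasso path; larger gamma moves the effective penalty toward the concave end of the range.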
Multinomial Inverse Regression for Text Analysis
Text data, including speeches, stories, and other document forms, are often
connected to sentiment variables that are of interest for research in
marketing, economics, and elsewhere. Such data are also very high dimensional and
difficult to incorporate into statistical analyses. This article introduces a
straightforward framework of sentiment-preserving dimension reduction for text
data. Multinomial inverse regression is introduced as a general tool for
simplifying predictor sets that can be represented as draws from a multinomial
distribution, and we show that logistic regression of phrase counts onto
document annotations can be used to obtain low dimension document
representations that are rich in sentiment information. To facilitate this
modeling, a novel estimation technique is developed for multinomial logistic
regression with very high-dimension response. In particular, independent
Laplace priors with unknown variance are assigned to each regression
coefficient, and we detail an efficient routine for maximization of the joint
posterior over coefficients and their prior scale. This "gamma-lasso" scheme
yields stable and effective estimation for general high-dimension logistic
regression, and we argue that it will be superior to current methods in many
settings. Guidelines for prior specification are provided, algorithm
convergence is detailed, and estimator properties are outlined from the
perspective of the literature on non-concave likelihood penalization. Related
work on sentiment analysis from statistics, econometrics, and machine learning
is surveyed and connected. Finally, the methods are applied in two detailed
examples and we provide out-of-sample prediction studies to illustrate their
effectiveness.
Comment: Published in the Journal of the American Statistical Association 108,
2013, with discussion (rejoinder is here: http://arxiv.org/abs/1304.4200).
Software is available in the textir package for
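The sufficient-reduction step the abstract describes, projecting phrase counts onto fitted loadings to obtain a low-dimensional, sentiment-rich document score, can be sketched on simulated data. The crude log-frequency-ratio estimate of the loadings below is only a stand-in for the paper's penalized multinomial logistic fit:

```python
import numpy as np

rng = np.random.default_rng(3)
n_docs, n_words = 400, 30
y = rng.integers(0, 2, n_docs)             # binary sentiment annotation

# simulate the inverse-regression model: word probabilities tilted by sentiment
phi_true = np.zeros(n_words)
phi_true[:5] = 1.5                          # words favored by positive sentiment
phi_true[5:10] = -1.5                       # words favored by negative sentiment
logits = np.outer(y, phi_true)
probs = np.exp(logits)
probs /= probs.sum(axis=1, keepdims=True)
counts = np.vstack([rng.multinomial(100, p) for p in probs])

# crude loading estimate: log ratio of smoothed mean frequencies by class
# (a stand-in for the gamma-lasso multinomial logistic fit in the paper)
f1 = counts[y == 1].mean(axis=0) + 0.5
f0 = counts[y == 0].mean(axis=0) + 0.5
phi_hat = np.log(f1) - np.log(f0)

# sufficient reduction: project normalized counts onto the loadings,
# collapsing each document to a single sentiment-preserving score
z = counts @ phi_hat / counts.sum(axis=1)
```

The scalar score z plays the role of the low-dimension document representation: forward regressions of the annotation onto z replace regressions onto the full count matrix.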