
    Constrained General Regression in Pseudo-Sobolev Spaces with Application to Option Pricing

    State price density (SPD) contains important information concerning market expectations. In the existing literature, a constrained estimator of the SPD is found by nonlinear least squares in a suitable Sobolev space. We improve the behavior of this estimator by introducing a covariance structure that takes the time of each trade into account and by using the observed put and call option prices simultaneously.
    Keywords: isotonic regression, Sobolev spaces, monotonicity, multiple observations, covariance structure, option price
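
    The constraint at the heart of this line of work is easy to illustrate. The sketch below (hypothetical prices; a plain isotonic projection, not the paper's Sobolev-space least-squares estimator with its covariance structure) enforces that call prices are non-increasing in the strike, then differentiates twice to get a crude SPD proxy in the spirit of Breeden-Litzenberger:

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        # Hypothetical call-price observations across strikes (illustrative only).
        strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
        call_prices = np.array([21.0, 12.6, 6.1, 2.4, 0.9])

        # Call prices are non-increasing in the strike; project onto monotone fits.
        iso = IsotonicRegression(increasing=False)
        fitted = iso.fit_transform(strikes, call_prices)

        # The SPD is the second derivative of the call price in the strike;
        # a finite-difference proxy stands in for the paper's smooth estimator.
        spd = np.gradient(np.gradient(fitted, strikes), strikes)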

    Stochastic expansions using continuous dictionaries: Lévy adaptive regression kernels

    This article describes a new class of prior distributions for nonparametric function estimation. The unknown function is modeled as a limit of weighted sums of kernels or generator functions indexed by continuous parameters that control local and global features such as their translation, dilation, modulation and shape. Lévy random fields and their stochastic integrals are employed to induce prior distributions for the unknown functions or, equivalently, for the number of kernels and for the parameters governing their features. Scaling, shape, and other features of the generating functions are location-specific to allow quite different function properties in different parts of the space, as with wavelet bases and other methods employing overcomplete dictionaries. We provide conditions under which the stochastic expansions converge in specified Besov or Sobolev norms. Under a Gaussian error model, this may be viewed as a sparse regression problem, with regularization induced via the Lévy random field prior distribution. Posterior inference for the unknown functions is based on a reversible jump Markov chain Monte Carlo algorithm. We compare the Lévy Adaptive Regression Kernel (LARK) method to wavelet-based methods using some of the standard test functions, and illustrate its flexibility and adaptability in nonstationary applications. Published at http://dx.doi.org/10.1214/11-AOS889 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
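
    As a rough illustration of such a stochastic expansion, the sketch below draws one function from a compound-Poisson approximation of a LARK-type prior with a Gaussian generator kernel. The intensity and the distributions of the kernel features are invented for the example, and no posterior (reversible jump MCMC) computation is attempted:

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_lark_prior(x, nu=10.0, weight_scale=1.0):
            # Compound-Poisson sketch: a Poisson number of kernels, each with
            # its own translation, dilation, and signed weight (all illustrative).
            J = rng.poisson(nu)
            mu = rng.uniform(x.min(), x.max(), J)    # translations
            s = rng.gamma(2.0, 0.1, J)               # dilations (bandwidths)
            w = rng.normal(0.0, weight_scale, J)     # signed weights
            # f(x) = sum_j w_j * k((x - mu_j) / s_j) with a Gaussian generator k.
            return (w * np.exp(-0.5 * ((x[:, None] - mu) / s) ** 2)).sum(axis=1)

        x = np.linspace(0.0, 1.0, 200)
        draw = sample_lark_prior(x)                  # one prior draw of the function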

    A sparse approach for high-dimensional data with heavy-tailed noise

    High-dimensional data have become common in diverse fields such as economics, finance, genetics, medicine, and machine learning. In this paper, we consider the sparse quantile regression problem for high-dimensional data with heavy-tailed noise, especially when the number of regressors is much larger than the sample size. We bring the spirit of Lp-norm support vector regression into quantile regression and propose a robust Lp-norm support vector quantile regression for high-dimensional data with heavy-tailed noise. The proposed method achieves robustness against heavy-tailed noise through its use of the pinball loss function, while a sparsity parameter ensures that the most representative variables are selected automatically. We test the variable selection performance of Lp-norm support vector quantile regression in a simulation study in which the number of explanatory variables greatly exceeds the sample size; the study confirms that the method is not only robust against heavy-tailed noise but also selects representative variables. We further apply the proposed method to the variable selection problem of index construction, which again confirms the robustness and sparseness of Lp-norm support vector quantile regression.
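
    The objective described here pairs the pinball (quantile) loss with an Lp penalty. A toy sketch on hypothetical data follows; a generic derivative-free solver stands in for the paper's actual algorithm, and p = 0.5 is chosen only to illustrate a sparsity-inducing penalty:

        import numpy as np
        from scipy.optimize import minimize

        def pinball(r, tau):
            # Quantile (pinball) loss: tau * r for r >= 0, (tau - 1) * r otherwise.
            return np.where(r >= 0, tau * r, (tau - 1) * r)

        def objective(beta, X, y, tau, lam, p):
            # Pinball fit term plus an l^p penalty (p < 1 favors sparse solutions).
            return pinball(y - X @ beta, tau).sum() + lam * np.sum(np.abs(beta) ** p)

        rng = np.random.default_rng(1)
        n, d = 50, 10
        X = rng.normal(size=(n, d))
        beta_true = np.zeros(d)
        beta_true[:2] = [2.0, -1.5]                  # only two active regressors
        y = X @ beta_true + rng.standard_cauchy(n)   # heavy-tailed noise

        fit = minimize(objective, np.zeros(d), args=(X, y, 0.5, 1.0, 0.5),
                       method="Powell")              # derivative-free: loss is non-smooth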

    Does generalization performance of l^q regularization learning depend on q? A negative example

    l^q-regularization has been demonstrated to be an attractive technique in machine learning and statistical modeling. It attempts to improve the generalization (prediction) capability of a machine (model) by appropriately shrinking its coefficients. The shape of an l^q estimator differs across choices of the regularization order q. In particular, l^1 leads to the LASSO estimate, while l^2 corresponds to smooth ridge regression. This makes the order q a potential tuning parameter in applications. To facilitate the use of l^q-regularization, we seek a modeling strategy in which an elaborate selection of q can be avoided. In this spirit, we place our investigation within a general framework of l^q-regularized kernel learning under a sample-dependent hypothesis space (SDHS). For a designated class of kernel functions, we show that all l^q estimators for 0 < q < ∞ attain similar generalization error bounds. These bounds are almost optimal in the sense that, up to a logarithmic factor, the upper and lower bounds are asymptotically identical. This finding tentatively reveals that, in some modeling contexts, the choice of q might not have a strong impact on generalization capability. From this perspective, q can be specified arbitrarily, or by criteria unrelated to generalization such as smoothness, computational complexity, or sparsity.
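
    A small numerical illustration of the setting (not the paper's analysis): fit the same kernel expansion under several values of q and compare the fitted curves. The Gaussian kernel, bandwidth, and penalty weight are arbitrary, and a generic derivative-free optimizer stands in for a dedicated l^q solver:

        import numpy as np
        from scipy.optimize import minimize

        def lq_kernel_fit(K, y, q, lam=0.1):
            # Minimize ||y - K a||^2 + lam * sum_i |a_i|^q over coefficients a.
            obj = lambda a: np.sum((y - K @ a) ** 2) + lam * np.sum(np.abs(a) ** q)
            return minimize(obj, np.zeros(len(y)), method="Powell").x

        rng = np.random.default_rng(2)
        x = np.sort(rng.uniform(-1.0, 1.0, 20))
        y = np.sin(3.0 * x) + 0.1 * rng.normal(size=20)
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.2) ** 2)  # Gaussian kernel

        # Broadly similar fits across q would echo the paper's point that the
        # choice of q may matter little for generalization.
        fits = {q: K @ lq_kernel_fit(K, y, q) for q in (0.5, 1.0, 2.0)}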

    The transmission of monetary policy shocks

    Commonly used instruments for the identification of monetary policy disturbances are likely to combine the true policy shock with information about the state of the economy, due to the information disclosed through the policy action. We show that this signalling effect of monetary policy can give rise to the empirical puzzles reported in the literature, and we propose a new high-frequency instrument for monetary policy shocks that accounts for informational rigidities. We find that a monetary tightening is unequivocally contractionary, with a deterioration of domestic demand, labor and credit market conditions, asset prices, and agents' expectations.

    Universal discrete-time reservoir computers with stochastic inputs and linear readouts using non-homogeneous state-affine systems

    A new class of non-homogeneous state-affine systems is introduced for use in reservoir computing. Sufficient conditions are identified that guarantee, first, that the associated reservoir computers with linear readouts are causal and time-invariant and satisfy the fading memory property and, second, that a subset of this class is universal in the category of fading memory filters with stochastic, almost surely uniformly bounded inputs. This means that any discrete-time filter that satisfies the fading memory property with random inputs of that type can be uniformly approximated by elements of the non-homogeneous state-affine family.
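
    A minimal sketch of such a reservoir with a linear readout appears below. The matrices, scaling, and toy memory task are invented; the scaling is chosen in the hope that the state map is contractive for bounded inputs (which the fading memory property requires), but this is assumed rather than verified:

        import numpy as np

        rng = np.random.default_rng(3)

        def sas_reservoir(u, A0, A1, b0, b1):
            # Non-homogeneous state-affine system with degree-1 input dependence:
            #   x_t = (A0 + u_t * A1) x_{t-1} + (b0 + u_t * b1)
            x = np.zeros(A0.shape[0])
            states = []
            for ut in u:
                x = (A0 + ut * A1) @ x + (b0 + ut * b1)
                states.append(x.copy())
            return np.array(states)

        n = 20
        A0 = 0.3 * rng.normal(size=(n, n)) / np.sqrt(n)   # small norms for
        A1 = 0.3 * rng.normal(size=(n, n)) / np.sqrt(n)   # (assumed) contractivity
        b0, b1 = rng.normal(size=n), rng.normal(size=n)

        u = rng.uniform(-1.0, 1.0, 500)      # bounded stochastic input
        target = np.roll(u, 3)               # toy task: recall the input 3 steps back
        X = sas_reservoir(u, A0, A1, b0, b1)
        w, *_ = np.linalg.lstsq(X[50:], target[50:], rcond=None)  # linear readout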

    Predicting extreme VaR: Nonparametric quantile regression with refinements from extreme value theory

    This paper studies the performance of nonparametric quantile regression as a tool to predict Value at Risk (VaR). The approach is flexible, as it requires no assumptions on the form of return distributions. A monotonized double-kernel local linear estimator is applied to estimate moderate (1%) conditional quantiles of index return distributions. For extreme (0.1%) quantiles, where particularly few data points are available, we propose to combine nonparametric quantile regression with extreme value theory. The out-of-sample forecasting performance of our methods turns out to be clearly superior to different specifications of the Conditionally Autoregressive VaR (CAViaR) models.
    Keywords: Value at Risk, nonparametric quantile regression, risk management, extreme value theory, monotonization, CAViaR
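
    The extreme value theory refinement can be sketched as follows: fit a generalized Pareto distribution (GPD) to the exceedances over a moderate threshold and extrapolate into the far tail. The sample below is simulated and unconditional, whereas the paper applies the idea to conditional quantiles from the monotonized double-kernel local linear estimator:

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(4)
        losses = rng.standard_t(df=4, size=5000)     # heavy-tailed toy losses

        u = np.quantile(losses, 0.99)                # moderate threshold (1% tail)
        excess = losses[losses > u] - u
        xi, _, beta = genpareto.fit(excess, floc=0)  # GPD shape and scale

        # GPD tail extrapolation from the 1% to the 0.1% quantile:
        #   VaR_p = u + (beta / xi) * ((P(X > u) / p)**xi - 1)
        p, pu = 0.001, 0.01
        var_extreme = u + (beta / xi) * ((pu / p) ** xi - 1)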