    On the data-driven COS method

    In this paper, we present the data-driven COS method, ddCOS. It is a Fourier-based financial option valuation method which assumes the availability of asset data samples: a characteristic function of the underlying asset probability density function is not required. As such, the presented technique represents a generalization of the well-known COS method [1]. The convergence of the proposed method is O(1/√n), in line with Monte Carlo methods for pricing financial derivatives. The ddCOS method is then particularly interesting for density recovery and also for the efficient computation of the option's sensitivities Delta and Gamma. These are often used in risk management, and can be obtained at a higher accuracy with ddCOS than with plain Monte Carlo methods.
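
    A minimal sketch of the density-recovery part of this idea, assuming i.i.d. samples and an illustrative truncation interval [a, b]: the cosine coefficients of the classical COS expansion are replaced by sample averages, so no characteristic function is needed. The function names and parameter choices below are our own, not the paper's.

```python
# Minimal sketch: cosine-series density recovery from samples ("data-driven"
# coefficients replace the characteristic function of the classical COS method).
# The truncation range [a, b] and the number of terms N are illustrative choices.
import numpy as np

def ddcos_density(samples, a, b, N=64):
    """Estimate a density on [a, b] from i.i.d. samples via a cosine expansion."""
    samples = np.asarray(samples, dtype=float)
    k = np.arange(N)
    # Sample-average cosine coefficients, F_k ~ (2/(b-a)) E[cos(k*pi*(X-a)/(b-a))].
    Fk = (2.0 / (b - a)) * np.mean(
        np.cos(np.outer(k, np.pi * (samples - a) / (b - a))), axis=1)
    Fk[0] *= 0.5  # the first term of the cosine series enters with weight 1/2

    def density(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return np.cos(np.outer(np.pi * (x - a) / (b - a), k)) @ Fk

    return density

# Toy usage: recover a standard normal density from 10^5 samples;
# the Monte Carlo-type O(1/sqrt(n)) error of the coefficients carries over.
rng = np.random.default_rng(0)
f = ddcos_density(rng.standard_normal(100_000), a=-8.0, b=8.0, N=64)
print(f(0.0))  # close to 1/sqrt(2*pi) ~ 0.3989
```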

    Pricing options and computing implied volatilities using neural networks

    This paper proposes a data-driven approach, by means of an Artificial Neural Network (ANN), to value financial options and to calculate implied volatilities with the aim of accelerating the corresponding numerical methods. With ANNs being universal function approximators, this method trains an optimized ANN on a data set generated by a sophisticated financial model, and runs the trained ANN as an agent of the original solver in a fast and efficient way. We test this approach on three different types of solvers, including the analytic solution for the Black-Scholes equation, the COS method for the Heston stochastic volatility model and Brent's iterative root-finding method for the calculation of implied volatilities. The numerical results show that the ANN solver can reduce the computing time significantly
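
    A minimal sketch of the surrogate idea for the simplest of the three solvers, the Black-Scholes formula: generate labelled data with the closed-form price, train a small network on it, and query the network instead of the solver. The network architecture, parameter ranges and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: train a small network as a fast surrogate for a pricing solver.
# Here the "solver" is the Black-Scholes call formula; parameter ranges, network
# size and the use of scikit-learn's MLPRegressor are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Generate labelled training data from the reference solver (strike scaled to 1).
rng = np.random.default_rng(1)
n = 50_000
X = np.column_stack([
    rng.uniform(0.5, 1.5, n),   # moneyness S/K
    rng.uniform(0.05, 2.0, n),  # maturity T
    rng.uniform(0.0, 0.05, n),  # risk-free rate r
    rng.uniform(0.05, 0.6, n),  # volatility sigma
])
y = bs_call(X[:, 0], 1.0, X[:, 1], X[:, 2], X[:, 3])

# Train the surrogate and use it in place of the solver for new inputs.
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=1)
ann.fit(X, y)
print(ann.predict([[1.1, 1.0, 0.01, 0.2]]), bs_call(1.1, 1.0, 1.0, 0.01, 0.2))
```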

    Sparse Time Frequency Representations and Dynamical Systems

    In this paper, we establish a connection between the recently developed data-driven time-frequency analysis [T.Y. Hou and Z. Shi, Advances in Adaptive Data Analysis, 3, 1–28, 2011], [T.Y. Hou and Z. Shi, Applied and Comput. Harmonic Analysis, 35, 284–308, 2013] and classical second order differential equations. The main idea of the data-driven time-frequency analysis is to decompose a multiscale signal into the sparsest collection of Intrinsic Mode Functions (IMFs) over the largest possible dictionary via nonlinear optimization. These IMFs are of the form a(t)cos(θ(t)), where the amplitude a(t) is positive and slowly varying. The non-decreasing phase function θ(t) is determined by the data and in general depends on the signal in a nonlinear fashion. One of the main results of this paper is that each IMF can be associated with a solution of a second order ordinary differential equation of the form ẍ + p(x,t)ẋ + q(x,t) = 0. Further, we propose a localized variational formulation for this problem and develop an effective l1-based optimization method to recover p(x,t) and q(x,t) by looking for a sparse representation of p and q in terms of a polynomial basis. Depending on the form of nonlinearity in p(x,t) and q(x,t), we can define the order of nonlinearity for the associated IMF. This generalizes a concept recently introduced by Prof. N. E. Huang et al. [N.E. Huang, M.-T. Lo, Z. Wu, and Xianyao Chen, US Patent filing number 12/241.565, Sept. 2011]. Numerical examples will be provided to illustrate the robustness and stability of the proposed method for data with or without noise. This manuscript should be considered as a proof of concept.
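
    A simplified, global sketch of the recovery step (the paper uses a localized variational formulation): estimate ẋ and ẍ from a sampled IMF by finite differences, expand p and q in a small polynomial dictionary in (x, t), and solve for sparse coefficients with an l1 penalty. The dictionary, the finite-difference derivatives and the Lasso solver are assumptions made for illustration.

```python
# Simplified sketch: recover p(x,t) and q(x,t) in  x'' + p(x,t) x' + q(x,t) = 0
# from one sampled IMF, using finite-difference derivatives, a small polynomial
# dictionary in (x, t) and an l1-penalized fit. This is a global stand-in for
# the paper's localized variational formulation; all choices are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

def recover_p_q(t, x, degree=2, alpha=1e-3):
    dt = t[1] - t[0]
    xd = np.gradient(x, dt)    # x'
    xdd = np.gradient(xd, dt)  # x''
    # Polynomial dictionary: monomials x^i * t^j with i + j <= degree.
    monomials = [x**i * t**j for i in range(degree + 1)
                 for j in range(degree + 1 - i)]
    basis = np.column_stack(monomials)
    # Model: -x'' = p(x,t) x' + q(x,t), with p and q expanded in the dictionary.
    A = np.hstack([basis * xd[:, None], basis])
    fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000).fit(A, -xdd)
    ncols = basis.shape[1]
    return fit.coef_[:ncols], fit.coef_[ncols:]  # coefficients of p and of q

# Toy usage: the damped oscillator x'' + 0.1 x' + 4 x = 0, so p = 0.1, q = 4x.
t = np.linspace(0.0, 10.0, 2000)
x = np.exp(-0.05 * t) * np.cos(np.sqrt(4.0 - 0.05**2) * t)  # exact solution
p_coef, q_coef = recover_p_q(t, x)
print(p_coef, q_coef)  # constant term of p near 0.1, x-term of q near 4
```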

    Data-Driven Time-Frequency Analysis

    In this paper, we introduce a new adaptive data analysis method to study the trend and instantaneous frequency of nonlinear and non-stationary data. This method is inspired by the Empirical Mode Decomposition method (EMD) and the recently developed compressed (compressive) sensing theory. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form {a(t)cos(θ(t))}, where a ∈ V(θ), V(θ) consists of the functions smoother than cos(θ(t)), and θ′ ≥ 0. This problem can be formulated as a nonlinear L0 optimization problem. In order to solve this optimization problem, we propose a nonlinear matching pursuit method by generalizing the classical matching pursuit to the L0 optimization problem. One important advantage of this nonlinear matching pursuit method is that it can be implemented very efficiently and is very stable to noise. Further, we provide a convergence analysis of our nonlinear matching pursuit method under certain scale separation assumptions. Extensive numerical examples will be given to demonstrate the robustness of our method, and comparison will be made with the EMD/EEMD method. We also apply our method to study data without scale separation, data with intra-wave frequency modulation, and incomplete or under-sampled data.
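
    A loose sketch of a single extraction step under strong simplifying assumptions: the phase θ(t) is taken as known, and the slowly varying envelope is fit by least squares over a low-order polynomial basis standing in for V(θ). The full method optimizes over θ nonlinearly; that part is not reproduced here.

```python
# Loose sketch of one extraction step under strong simplifications: the phase
# theta(t) is assumed known, and a slowly varying envelope is fit by least
# squares over a low-order polynomial basis standing in for V(theta).
# The paper's method additionally optimizes over theta; that part is omitted.
import numpy as np

def extract_imf_given_phase(t, signal, theta, n_smooth=8):
    # Low-order polynomial basis in rescaled time models "slowly varying" envelopes.
    s = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    smooth = np.column_stack([s**k for k in range(n_smooth)])
    # IMF model a(t) cos(theta) + b(t) sin(theta), with a, b in the smooth span.
    A = np.hstack([smooth * np.cos(theta)[:, None], smooth * np.sin(theta)[:, None]])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    imf = A @ coef
    return imf, signal - imf  # extracted IMF and the residual for the next step

# Toy usage: a chirp with a slowly growing envelope, plus a trend and noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 4000)
theta = 2 * np.pi * (40 * t + 10 * t**2)
signal = (1 + 0.5 * t) * np.cos(theta) + 2 * t + 0.05 * rng.standard_normal(t.size)
imf, residual = extract_imf_given_phase(t, signal, theta)
print(np.max(np.abs(imf - (1 + 0.5 * t) * np.cos(theta))))  # small extraction error
```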

    A sparsity-driven approach for joint SAR imaging and phase error correction

    Image formation algorithms in a variety of applications have explicit or implicit dependence on a mathematical model of the observation process. Inaccuracies in the observation model may cause various degradations and artifacts in the reconstructed images. The application of interest in this paper is synthetic aperture radar (SAR) imaging, which particularly suffers from motion-induced model errors. These types of errors result in phase errors in SAR data which cause defocusing of the reconstructed images. Particularly focusing on imaging of fields that admit a sparse representation, we propose a sparsity-driven method for joint SAR imaging and phase error correction. Phase error correction is performed during the image formation process. The problem is set up as an optimization problem in a nonquadratic regularization-based framework. The method involves an iterative algorithm each iteration of which consists of consecutive steps of image formation and model error correction. Experimental results show the effectiveness of the approach for various types of phase errors, as well as the improvements it provides over existing techniques for model error compensation in SAR
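
    A toy sketch of the alternating structure on a 1-D problem: a random complex matrix stands in for the SAR observation model, a few soft-thresholding (ISTA) iterations stand in for the nonquadratic-regularization image step, and the per-pulse phase error is re-estimated in closed form from the current image. All sizes, the quadratic phase error and the solvers are illustrative assumptions, not the paper's setup.

```python
# Toy sketch of the alternating structure: sparse image reconstruction and
# per-pulse phase-error estimation, iterated. A random complex matrix stands in
# for the SAR observation operator, and plain ISTA stands in for the paper's
# nonquadratic-regularization image update; all choices here are illustrative.
import numpy as np

rng = np.random.default_rng(3)
P, Q, n_pix, k_targets = 32, 16, 100, 5  # pulses, samples per pulse, pixels, targets
m = P * Q

A = (rng.standard_normal((m, n_pix)) + 1j * rng.standard_normal((m, n_pix))) / np.sqrt(2 * m)
x_true = np.zeros(n_pix, complex)
x_true[rng.choice(n_pix, k_targets, replace=False)] = 2.0 + rng.standard_normal(k_targets)
pulse_err = 1.5 * np.linspace(-1.0, 1.0, P) ** 2  # quadratic, motion-like phase error
phi_true = np.repeat(pulse_err, Q)                # one phase error per pulse
d = np.exp(1j * phi_true) * (A @ x_true) \
    + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

def soft(z, tau):  # complex soft-thresholding
    return np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - tau, 0.0)

x, phi = np.zeros(n_pix, complex), np.zeros(m)
mu, lam = 0.4, 0.05
for _ in range(50):
    d_corr = np.exp(-1j * phi) * d                # remove the current phase estimate
    for _ in range(5):                            # image step: a few ISTA iterations
        x = soft(x + mu * A.conj().T @ (d_corr - A @ x), lam * mu)
    corr = (d * np.conj(A @ x)).reshape(P, Q)     # phase step: one angle per pulse
    phi = np.repeat(np.angle(corr.sum(axis=1)), Q)

# Relative magnitude error of the reconstructed sparse scene.
print(np.linalg.norm(np.abs(x) - np.abs(x_true)) / np.linalg.norm(x_true))
```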