2,761 research outputs found

    Efficient Proactive Caching for Supporting Seamless Mobility

    Full text link
    We present a distributed proactive caching approach that exploits user mobility information to decide where to proactively cache data to support seamless mobility, while efficiently utilizing cache storage through a congestion pricing scheme. The proposed approach is applicable to the case where objects have different sizes and to a two-level cache hierarchy, for both of which the proactive caching problem is hard. Additionally, our modeling framework considers the case where the delay is independent of the requested data object size and the case where the delay is a function of the object size. Our evaluation results show how various system parameters influence the delay gains of the proposed approach, which achieves robust and good performance relative to an oracle and an optimal scheme for a flat cache structure. Comment: 10 pages, 9 figures
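
    The abstract describes the decision rule only at a high level; the following is a minimal, illustrative sketch of such a rule, not the paper's actual scheme: an object is proactively cached at a candidate cell when the mobility-weighted delay saving exceeds a congestion price that rises with cache occupancy. All names, parameters, and the pricing function are assumptions.

        # Illustrative sketch only (not the paper's scheme): proactively cache an object
        # at a likely next cell when the expected delay saving outweighs a congestion
        # price that grows with cache occupancy. All names and values are hypothetical.

        def congestion_price(used_bytes, capacity_bytes, base_price_per_byte=1e-9):
            """Price per byte that rises as the cache fills up."""
            utilization = used_bytes / capacity_bytes
            return base_price_per_byte / max(1e-9, 1.0 - utilization)

        def proactive_cache_decision(obj_size_bytes, transition_prob, delay_saving_s,
                                     used_bytes, capacity_bytes):
            """Cache at a candidate cell if the expected benefit exceeds the storage cost."""
            expected_benefit = transition_prob * delay_saving_s   # delay gain if the user moves there
            storage_cost = obj_size_bytes * congestion_price(used_bytes, capacity_bytes)
            return expected_benefit >= storage_cost

        # Example: a 5 MB object, 70% chance of hand-over to this cell,
        # 120 ms delay saving, cache currently 60% full.
        print(proactive_cache_decision(obj_size_bytes=5e6, transition_prob=0.7,
                                       delay_saving_s=0.120,
                                       used_bytes=0.6e9, capacity_bytes=1e9))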

    Evaluating Multicore Algorithms on the Unified Memory Model

    Get PDF
    One of the challenges to achieving good performance on multicore architectures is the effective utilization of the underlying memory hierarchy. While this is an issue for single-core architectures, it is a critical problem for multicore chips. In this paper, we formulate the unified multicore model (UMM) to help understand the fundamental limits on cache performance on these architectures. The UMM seamlessly handles different types of multiple-core processors with varying degrees of cache sharing at different levels. We demonstrate that our model can be used to study a variety of multicore architectures on a variety of applications. In particular, we use it to analyze an option pricing problem using the trinomial model and develop an algorithm for it that has near-optimal memory traffic between cache levels. We have implemented the algorithm on a system with two Quad-Core Intel Xeon 5310 1.6 GHz processors (8 cores). It achieves a peak performance of 19.5 GFLOPs, which is 38% of the theoretical peak of the multicore system. We demonstrate that our algorithm outperforms compiler-optimized and auto-parallelized code by a factor of up to 7.5.
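
    For reference, the sketch below shows the plain trinomial-model recurrence underlying such a pricer: straightforward backward induction over a trinomial tree for a European call. The paper's contribution is a cache-efficient traversal of this recurrence with near-optimal memory traffic between cache levels; that blocking is not reproduced here, and all parameter choices are illustrative.

        import math

        def trinomial_european_call(S0, K, r, sigma, T, N):
            """Baseline trinomial-tree pricer: plain backward induction, no cache blocking."""
            dt = T / N
            dx = sigma * math.sqrt(3.0 * dt)            # common spacing for the log-price grid
            nu = r - 0.5 * sigma * sigma
            a = (sigma * sigma * dt + nu * nu * dt * dt) / (dx * dx)
            pu = 0.5 * (a + nu * dt / dx)               # up / middle / down branch probabilities
            pd = 0.5 * (a - nu * dt / dx)
            pm = 1.0 - pu - pd
            disc = math.exp(-r * dt)

            # terminal option values on the 2N+1 log-price nodes
            values = [max(S0 * math.exp((j - N) * dx) - K, 0.0) for j in range(2 * N + 1)]

            # backward induction; level i has 2i+1 nodes
            for i in range(N - 1, -1, -1):
                values = [disc * (pu * values[j + 2] + pm * values[j + 1] + pd * values[j])
                          for j in range(2 * i + 1)]
            return values[0]

        # Roughly matches the Black-Scholes value (about 10.45) for these inputs.
        print(trinomial_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, N=500))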

    Accelerating the calibration of stochastic volatility models

    Get PDF
    This paper compares the performance of three methods for pricing vanilla options in models with known characteristic function: (1) Direct integration, (2) Fast Fourier Transform (FFT), (3) Fractional FFT. The most important application of this comparison is the choice of the fastest method for the calibration of stochastic volatility models, e.g. Heston, Bates, Barndorff-Nielsen-Shephard models or Lévy models with stochastic time. We show that using an additional caching technique makes the calibration with the direct integration method at least seven times faster than the calibration with the fractional FFT method. Keywords: Stochastic Volatility Models; Calibration; Numerical Integration; Fast Fourier Transform
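
    The caching idea can be illustrated as follows (a sketch under assumptions, not the paper's implementation): in direct integration of a Carr-Madan-type pricing integral, the characteristic function does not depend on the strike, so its values at the quadrature nodes can be computed once per parameter set and reused across all strikes during calibration. The Black-Scholes characteristic function stands in below for a stochastic volatility model such as Heston; all names and settings are illustrative.

        import numpy as np

        def bs_charfunc(u, S0, r, sigma, T):
            """Characteristic function of ln(S_T) under Black-Scholes (stand-in for Heston, etc.)."""
            mu = np.log(S0) + (r - 0.5 * sigma**2) * T
            return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

        def cached_psi(charfunc, r, T, alpha=1.5, eta=0.05, N=1024):
            """Strike-independent part of the Carr-Madan integrand, computed once per parameter set."""
            v = (np.arange(N) + 0.5) * eta              # midpoint quadrature nodes
            psi = np.exp(-r * T) * charfunc(v - (alpha + 1) * 1j) \
                  / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
            return v, psi

        def call_price(K, v, psi, alpha=1.5, eta=0.05):
            """Direct-integration price for one strike, reusing the cached integrand values."""
            k = np.log(K)
            integral = np.sum(np.real(np.exp(-1j * v * k) * psi)) * eta
            return np.exp(-alpha * k) / np.pi * integral

        S0, r, sigma, T = 100.0, 0.05, 0.2, 1.0
        v, psi = cached_psi(lambda u: bs_charfunc(u, S0, r, sigma, T), r, T)
        for K in (90.0, 100.0, 110.0):                  # many strikes, one characteristic-function pass
            print(K, call_price(K, v, psi))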

    Pricing options and computing implied volatilities using neural networks

    Full text link
    This paper proposes a data-driven approach, by means of an Artificial Neural Network (ANN), to value financial options and to calculate implied volatilities with the aim of accelerating the corresponding numerical methods. With ANNs being universal function approximators, this method trains an optimized ANN on a data set generated by a sophisticated financial model, and runs the trained ANN as an agent of the original solver in a fast and efficient way. We test this approach on three different types of solvers, including the analytic solution for the Black-Scholes equation, the COS method for the Heston stochastic volatility model, and Brent's iterative root-finding method for the calculation of implied volatilities. The numerical results show that the ANN solver can reduce the computing time significantly.
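
    A minimal sketch of this data-driven idea, under assumptions: the analytic Black-Scholes formula plays the role of the solver being emulated, a small multilayer perceptron is trained on prices generated over randomized inputs, and the trained network is then queried in place of the solver. Network architecture, sampling ranges, and library choice (scikit-learn) are illustrative, not the paper's setup.

        import numpy as np
        from scipy.stats import norm
        from sklearn.neural_network import MLPRegressor

        def bs_call(S, K, T, r, sigma):
            """Analytic Black-Scholes call price, used here as the solver the ANN emulates."""
            d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
            d2 = d1 - sigma * np.sqrt(T)
            return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

        rng = np.random.default_rng(0)
        n = 50_000
        # Sample contract/model parameters; the ANN learns the map (S/K, T, r, sigma) -> price.
        X = np.column_stack([rng.uniform(0.8, 1.2, n),   # moneyness S/K
                             rng.uniform(0.1, 2.0, n),   # maturity T
                             rng.uniform(0.0, 0.1, n),   # rate r
                             rng.uniform(0.1, 0.5, n)])  # volatility sigma
        y = bs_call(X[:, 0], 1.0, X[:, 1], X[:, 2], X[:, 3])   # strike normalized to 1

        ann = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu", max_iter=300)
        ann.fit(X, y)

        test = np.array([[1.0, 1.0, 0.05, 0.2]])                # S=K, T=1, r=5%, sigma=20%
        print("ANN:", ann.predict(test)[0], "exact:", bs_call(1.0, 1.0, 1.0, 0.05, 0.2))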

    A neural network-based framework for financial model calibration

    Full text link
    A data-driven approach called CaNN (Calibration Neural Network) is proposed to calibrate financial asset price models using an Artificial Neural Network (ANN). Determining optimal values of the model parameters is formulated as training hidden neurons within a machine learning framework, based on available financial option prices. The framework consists of two parts: a forward pass, in which we train the weights of the ANN off-line, valuing options under many different asset model parameter settings; and a backward pass, in which we evaluate the trained ANN-solver on-line, aiming to find the weights of the neurons in the input layer. The rapid on-line learning of implied volatility by ANNs, in combination with the use of an adapted parallel global optimization method, tackles the computation bottleneck and provides a fast and reliable technique for calibrating model parameters while avoiding, as much as possible, getting stuck in local minima. Numerical experiments confirm that this machine-learning framework can be employed to calibrate parameters of high-dimensional stochastic volatility models efficiently and accurately. Comment: 34 pages, 9 figures, 11 tables
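
    The backward (calibration) pass can be sketched as follows, under assumptions: the paper treats the unknown model parameters as trainable input neurons of the frozen forward network, whereas the sketch below substitutes a generic global optimizer (SciPy's differential evolution) searching over those parameters to match market quotes. The forward network is represented by a placeholder function; every name here is hypothetical.

        import numpy as np
        from scipy.optimize import differential_evolution

        def calibrate(ann_price, contracts, market_prices, bounds):
            """Find model parameters whose ANN prices best match the observed quotes.
            ann_price(theta, contracts) stands for the trained forward network."""
            def objective(theta):
                model = ann_price(theta, contracts)
                return np.mean((model - market_prices) ** 2)   # squared calibration error
            result = differential_evolution(objective, bounds, seed=0)
            return result.x, result.fun

        # Toy stand-in for the trained forward network (placeholder only): a smile that is
        # quadratic in moneyness, parameterized by theta = (level, curvature).
        contracts = np.linspace(0.8, 1.2, 9)                    # strikes / moneyness grid
        true_theta = np.array([0.04, 0.3])
        toy_ann = lambda theta, k: theta[0] + theta[1] * (k - 1.0) ** 2
        quotes = toy_ann(true_theta, contracts)

        est, err = calibrate(toy_ann, contracts, quotes, bounds=[(0.0, 0.2), (0.0, 1.0)])
        print("recovered parameters:", est, "calibration error:", err)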