
    Review of modern numerical methods for a simple vanilla option pricing problem

    Option pricing is an attractive problem in financial engineering and optimization. The task of determining the fair price of an option arises from the assumptions made under a given financial market model. The increasing complexity of these market assumptions contributes to the popularity of numerical treatments of option valuation. The pricing and hedging of plain vanilla options under the Black–Scholes model therefore usually serve as a benchmark for the development of new numerical pricing approaches and of methods designed for advanced option pricing models. The objective of the paper is to present and compare methodological concepts for the valuation of simple vanilla options using relatively modern numerical techniques arising from the discontinuous Galerkin method, the wavelet approach and the fuzzy transform technique. The theoretical comparison is accompanied by an empirical study based on the numerical verification of simple vanilla option prices. The resulting numerical schemes represent a particularly effective option pricing tool and make it possible to better capture features of options that depend on the discretization of the computational domain as well as on the order of the polynomial approximation.
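    The Black–Scholes benchmark that the abstract refers to has a closed-form solution against which any numerical scheme can be verified. A minimal sketch (function names are illustrative, not from the paper) of the European call price:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Closed-form Black–Scholes price of a plain vanilla European call:
    # C = S N(d1) - K e^{-rT} N(d2).
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

    A numerical method (discontinuous Galerkin, wavelet, or fuzzy transform) is then judged by how fast its prices converge to this reference as the mesh and polynomial order are refined.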

    Physical states in the canonical tensor model from the perspective of random tensor networks

    Tensor models, generalizations of matrix models, are studied with the aim of describing quantum gravity in dimensions larger than two. Among them, the canonical tensor model is formulated as a totally constrained system with first-class constraints, the algebra of which resembles the Dirac algebra of general relativity. When quantized, the physical states are defined to be annihilated by the quantized constraints. In explicit representations, the constraint equations are a set of partial differential equations for the physical wave-functions, which do not seem straightforward to solve due to their non-linear character. In this paper, after providing some explicit solutions for N=2,3, we show that a certain scale-free integration of partition functions of statistical systems on random networks (or random tensor networks more generally) provides a series of solutions for general N. Then, by generalizing this form, we also obtain various solutions for general N. Moreover, we show that the solutions for the cases with a cosmological constant can be obtained from those with no cosmological constant for increased N. This would imply the interesting possibility that a cosmological constant can always be absorbed into the dynamics and is not an input parameter in the canonical tensor model. We also observe the possibility of symmetry enhancement in N=3, and comment on an extension of the Airy function related to the solutions.Comment: 41 pages, 1 figure; typos corrected

    Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs

    Laplacian mixture models identify overlapping regions of influence in unlabeled graph and network data in a scalable and computationally efficient way, yielding useful low-dimensional representations. By combining Laplacian eigenspace and finite mixture modeling methods, they provide probabilistic or fuzzy dimensionality reductions or domain decompositions for a variety of input data types, including mixture distributions, feature vectors, and graphs or networks. Provably optimal recovery using the algorithm is shown analytically for a nontrivial class of cluster graphs. Heuristic approximations for scalable high-performance implementations are described and empirically tested. Connections to PageRank and community detection in network analysis demonstrate the wide applicability of this approach. The origins of fuzzy spectral methods, beginning with generalized heat or diffusion equations in physics, are reviewed and summarized. Comparisons to other dimensionality reduction and clustering methods for challenging unsupervised machine learning problems are also discussed.Comment: 13 figures, 35 references
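    The first step the abstract describes, embedding the graph in a Laplacian eigenspace before fitting a mixture model, can be sketched on a toy graph (this is an illustrative spectral embedding under my own assumptions, not the paper's full algorithm, which fits a finite mixture in the eigenspace to obtain soft memberships):

```python
import numpy as np

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by one bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian

w, V = np.linalg.eigh(L)            # eigenvalues in ascending order
fiedler = V[:, 1]                   # eigenvector of 2nd-smallest eigenvalue
# The Fiedler coordinate separates the two regions of influence; the full
# method would fit a mixture model on such coordinates rather than take
# a hard sign split as done here.
labels = (fiedler > 0).astype(int)
```

    On this barbell-shaped example the sign of the Fiedler coordinate already recovers the two communities; the mixture-model step matters when regions overlap and fuzzy memberships are needed.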

    Trefftz Difference Schemes on Irregular Stencils

    The recently developed Flexible Local Approximation MEthod (FLAME) produces accurate difference schemes by replacing the usual Taylor expansion with Trefftz functions -- local solutions of the underlying differential equation. This paper advances and casts in a general form a significant modification of FLAME proposed recently by Pinheiro & Webb: a least-squares fit instead of an exact match of the approximate solution at the stencil nodes. As a consequence, FLAME schemes can now be generated on irregular stencils with the number of nodes substantially greater than the number of approximating functions. The accuracy of the method is preserved but its robustness is improved. For demonstration, the paper presents a number of numerical examples in 2D and 3D: electrostatic (magnetostatic) particle interactions, scattering of electromagnetic (acoustic) waves, and wave propagation in a photonic crystal. The examples explore the role of the grid and stencil size, of the number of approximating functions, and of the irregularity of the stencils.Comment: 28 pages, 12 figures; to be published in J. Comput. Phys.
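    The core FLAME construction, a stencil whose coefficients annihilate every Trefftz basis function, can be sketched for the 2D Laplace equation on a regular 5-point stencil (an illustrative reconstruction, not code from the paper; the harmonic-polynomial basis and SVD route are my assumptions):

```python
import numpy as np

h = 1.0
# 5-point stencil: center plus four neighbors.
nodes = [(0, 0), (h, 0), (-h, 0), (0, h), (0, -h)]
# Trefftz basis for Laplace's equation: harmonic polynomials.
basis = [lambda x, y: 1.0,
         lambda x, y: x,
         lambda x, y: y,
         lambda x, y: x * y,
         lambda x, y: x * x - y * y]
N = np.array([[f(x, y) for f in basis] for (x, y) in nodes])

# FLAME coefficients s satisfy N^T s = 0: the scheme is exact on every
# Trefftz function. In the least-squares variant of Pinheiro & Webb the
# stencil may have more nodes than basis functions, and s is taken as the
# singular vector of the smallest singular value instead of an exact
# null vector -- the same line of code covers both cases.
_, _, Vt = np.linalg.svd(N.T)
s = Vt[-1]
s = s / s[0]            # normalize so the center weight is 1
```

    For this regular stencil the construction reproduces the classical 5-point Laplacian weights (center -4, neighbors 1, up to scale); the payoff of the least-squares formulation is that the same recipe works on irregular stencils where Taylor-based schemes degrade.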

    Theoretical Interpretations and Applications of Radial Basis Function Networks

    Medical applications have usually treated Radial Basis Function Networks simply as Artificial Neural Networks. However, RBFNs are Knowledge-Based Networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
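    The kernel-estimator reading of an RBFN mentioned in the abstract can be sketched as exact Gaussian-RBF interpolation with centers placed at the data points (a minimal illustration under my own assumptions, not a learning algorithm from the survey):

```python
import numpy as np

def rbf_design(X, centers, gamma):
    # Gaussian radial basis activations: phi[i, j] = exp(-gamma * ||x_i - c_j||^2).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Tiny 1-D regression: interpolate y = sin(x) at a few sample points.
X = np.linspace(0.0, 3.0, 7)[:, None]
y = np.sin(X).ravel()
Phi = rbf_design(X, X, gamma=1.0)
w = np.linalg.solve(Phi, y)          # output-layer weights (exact fit)

y_hat = rbf_design(X, X, 1.0) @ w    # network output at the training points
```

    With fewer centers than data points the same design matrix is fit by least squares instead, which is where the network, regularization, and kernel views of the RBFN start to diverge.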

    Renormalization procedure for random tensor networks and the canonical tensor model

    We discuss a renormalization procedure for random tensor networks, and show that the corresponding renormalization-group flow is given by the Hamiltonian vector flow of the canonical tensor model, which is a discretized model of quantum gravity. The result generalizes the previous one concerning the relation between the Ising model on random networks and the canonical tensor model with N=2. We also prove a general theorem relating discontinuity of the renormalization-group flow to the phase transitions of random tensor networks.Comment: 23 pages, 5 figures; comments on first-order transitions and discontinuity of the RG flow added, and minor corrections

    Motion in Quantum Gravity

    We tackle the question of motion in Quantum Gravity: what does motion mean at the Planck scale? Although we are still far from a complete answer, we consider here a toy model in which the problem can be formulated and resolved precisely. The setting of the toy model is three-dimensional Euclidean gravity. Before studying the model in detail, we argue that Loop Quantum Gravity may provide a very useful approach when discussing the question of motion in Quantum Gravity.Comment: 30 pages, to appear in the book "Mass and Motion in General Relativity", proceedings of the C.N.R.S. School in Orleans, France, eds. L. Blanchet, A. Spallicci and B. Whiting