BSL: An R Package for Efficient Parameter Estimation for Simulation-Based Models via Bayesian Synthetic Likelihood
Bayesian synthetic likelihood (BSL; Price, Drovandi, Lee, and Nott 2018) is a popular method for estimating the posterior distribution of the parameters of complex statistical models and stochastic processes whose likelihood function is computationally intractable. Instead of evaluating the likelihood directly, BSL approximates the likelihood of a judiciously chosen summary statistic of the data via model simulation and density estimation. Compared to alternative methods such as approximate Bayesian computation (ABC), BSL requires little tuning and fewer model simulations when the chosen summary statistic is high-dimensional. The original synthetic likelihood relies on a multivariate normal approximation of the intractable likelihood, with the mean and covariance estimated by simulation. One extension of BSL replaces the sample covariance with a penalized covariance estimator to reduce the number of required model simulations. Further, a semi-parametric approach has been developed to relax the normality assumption. Finally, another extension aims at a synthetic likelihood estimator that is robust to model misspecification. In this paper, we present the R package BSL, which amalgamates the aforementioned methods and more into a single, easy-to-use and coherent piece of software. The package also includes several examples that illustrate its use and the utility of the methods.
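The core idea of the Gaussian synthetic likelihood can be sketched in a few lines. The following is a minimal Python illustration, not the R package's API: simulate the summary statistic many times at a candidate parameter, fit a multivariate normal to the simulated summaries, and evaluate the observed summary under that fit. The toy model and summary statistic below are hypothetical stand-ins.

```python
import numpy as np

def synthetic_loglik(theta, obs_summary, simulate, n_sim=200, rng=None):
    """Gaussian synthetic log-likelihood of an observed summary statistic.
    `simulate(theta, rng)` must return one simulated summary-statistic vector."""
    rng = np.random.default_rng() if rng is None else rng
    sims = np.array([simulate(theta, rng) for _ in range(n_sim)])
    mu = sims.mean(axis=0)            # simulated mean of the summaries
    cov = np.cov(sims, rowvar=False)  # simulated covariance of the summaries
    diff = obs_summary - mu
    _, logdet = np.linalg.slogdet(cov)
    d = obs_summary.size
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

# Hypothetical toy model: theta is the mean of a unit-variance normal;
# the summary statistic is (sample mean, sample standard deviation).
def simulate(theta, rng):
    x = rng.normal(theta, 1.0, size=100)
    return np.array([x.mean(), x.std(ddof=1)])

obs = np.array([0.1, 1.0])  # a pretend observed summary
ll = synthetic_loglik(0.0, obs, simulate, rng=np.random.default_rng(0))
```

In practice this estimator is plugged into an MCMC sampler in place of the exact log-likelihood; the package's extensions (penalized covariance, semi-parametric density, robust variants) modify the density-estimation step above.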
Stochastic timeseries analysis in electric power systems and paleo-climate data
In this thesis, a data-driven study of elementary stochastic processes is presented, aided by the development of two numerical software packages and applied to power-grid frequency studies and Dansgaard--Oeschger events in paleo-climate data.
Power-grid frequency is a key measure in power-grid studies.
It reflects the balance of power generation and consumption in a power grid at any instant.
In this thesis an elementary Markovian Langevin-like stochastic process is employed, extending the existing literature, to show that the basic elements of power-grid frequency dynamics can be modelled in this manner.
Through a data-driven study of power-grid frequency recordings, it is shown that fluctuations scale inversely with the square root of system size, as expected for aggregated stochastic processes, confirming previous theoretical results.
A simple Ornstein--Uhlenbeck process is offered as a surrogate model for power-grid frequency dynamics, with a versatile input of deterministic driving functions, showing, unsurprisingly, that driven stochastic processes with Gaussian noise do not necessarily exhibit a Gaussian distribution.
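The driven Ornstein--Uhlenbeck surrogate can be sketched with a standard Euler-Maruyama scheme. This is a generic illustration under assumed parameters, not the thesis's actual model or code; the square-wave drive is a hypothetical stand-in for scheduled generation changes.

```python
import numpy as np

def driven_ou(n_steps, dt, theta, sigma, drive, x0=0.0, rng=None):
    """Euler-Maruyama integration of dx = (-theta * x + drive(t)) dt + sigma dW."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        t = i * dt
        x[i + 1] = (x[i] + (-theta * x[i] + drive(t)) * dt
                    + sigma * rng.normal(0.0, np.sqrt(dt)))
    return x

# A square-wave drive: the process is pulled alternately towards two levels,
# so its long-run histogram is a two-component mixture rather than a Gaussian,
# even though the noise itself is Gaussian.
drive = lambda t: 0.5 if int(t) % 2 == 0 else -0.5
traj = driven_ou(100_000, 0.01, theta=1.0, sigma=0.3, drive=drive,
                 rng=np.random.default_rng(1))
```

Swapping in other deterministic drives (ramps, sawtooths, dispatch schedules) changes the shape of the stationary distribution while leaving the Gaussian noise untouched, which is the point made in the abstract.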
A study of the correlations between recordings of power-grid frequency within the same power-grid system reveals that they are correlated, although a theoretical understanding of this correlation is yet to be developed.
A super-diffusive relaxation of amplitude synchronisation is shown to exist in space in coupled power-grid systems, whereas a linear relation is evidenced for the emergence of phase synchronisation.
Two Python software packages are designed, offering the possibility to extract conditional moments for Markovian stochastic processes of any dimension, with a particular application to Markovian jump-diffusion processes for one-dimensional timeseries.
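The conditional moments in question are the Kramers-Moyal coefficients, which such packages estimate directly from data. The thesis's packages are not named here, so the following is a generic numpy sketch of the binning estimator for a one-dimensional timeseries, checked against a simulated Ornstein--Uhlenbeck process with known coefficients.

```python
import numpy as np

def km_coefficients(x, dt, bins=40, min_count=20):
    """Binned estimates of the first two Kramers-Moyal coefficients:
    D1(x) = <dx | x> / dt  (drift) and D2(x) = <dx**2 | x> / (2 dt) (diffusion)."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, bins - 1)
    d1 = np.full(bins, np.nan)
    d2 = np.full(bins, np.nan)
    for b in range(bins):
        sel = dx[idx == b]
        if sel.size >= min_count:  # skip sparsely populated bins
            d1[b] = sel.mean() / dt
            d2[b] = (sel ** 2).mean() / (2.0 * dt)
    return centers, d1, d2

# Sanity check on a simulated OU process dx = -x dt + 0.5 dW,
# for which D1(x) = -x and D2(x) = 0.5**2 / 2 = 0.125.
rng = np.random.default_rng(2)
dt, n = 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + 0.5 * rng.normal(0.0, np.sqrt(dt))
centers, d1, d2 = km_coefficients(x, dt)
```

For jump-diffusion processes, higher-order conditional moments (fourth and sixth) are additionally needed to separate the jump contribution from the diffusion; the sketch above covers only the diffusive case.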
Lastly, a study of Dansgaard--Oeschger events in recordings of paleo-climate data under the purview of bivariate Markovian jump-diffusion processes is proposed, augmented by a semi-theoretical study of bivariate stochastic processes, offering an explanation for the discontinuous transitions in these events and showing the existence of deterministic couplings between the recordings of dust concentration and a proxy for atmospheric temperature.
Innovative derivative pricing and time series simulation techniques via machine and deep learning
There is a growing number of applications of machine learning and deep learning in quantitative and computational finance. In this thesis, we focus on two of them.
In the first application, we employ machine learning and deep learning in derivative pricing. Models that incorporate jumps or stochastic volatility are more complicated than the Black-Merton-Scholes model, and derivatives under these models are harder to price. Traditional pricing methods are computationally intensive, so machine learning and deep learning are employed for fast pricing.
In Chapter 2, we propose a method for pricing American options under the variance gamma model. We develop a new fast and accurate approximation method, inspired by the quadratic approximation, that avoids the time stepping required by finite difference and simulation methods, while reducing the error by applying a machine learning technique to pre-calculated quantities. We compare the performance of our method with that of existing methods and show that it is efficient and accurate enough for practical use. In Chapters 3 and 4, we propose unsupervised deep learning methods for option pricing under Lévy processes and stochastic volatility respectively, with a special focus on barrier options in Chapter 4.
The unsupervised deep learning approach employs a neural network as the candidate option surface and trains the network to satisfy certain equations. By matching the equation and the boundary conditions, the neural network yields an accurate solution. Special structures called singular terms are added to the networks to handle the non-smooth and discontinuous payoffs at the strike and barrier levels, so that the networks can replicate the asymptotic behaviors of options at short maturities. Unlike supervised learning, this approach does not require any labels. Once trained, the neural network solution yields fast and accurate option values.
The second application focuses on financial time series simulation using deep learning techniques. Simulation extends the limited real data available for training and evaluating trading strategies; it is challenging because of the complex statistical properties of real financial data. In Chapter 5, we introduce two generative adversarial networks for financial time series simulation, one based on convolutional networks with attention and one on transformers. The networks learn the statistical properties in a data-driven manner, and the attention mechanism helps to replicate long-range dependencies. The proposed models are tested on the S&P 500 index and its option data, evaluated by scores based on the stylized facts, and compared with a pure convolutional network (QuantGAN). The attention-based networks not only reproduce the stylized facts, including heavy tails, autocorrelation and cross-correlation, but also smooth the autocorrelation of returns.
The 1st International Conference on Computational Engineering and Intelligent Systems
Computational engineering, artificial intelligence and smart systems constitute a multidisciplinary topic at the intersection of computer science, engineering and applied mathematics that has produced a variety of fascinating intelligent systems. Computational engineering combines fundamental engineering and science with advanced knowledge of mathematics, algorithms and computer languages; it is concerned with the modeling and simulation of complex systems and with data processing methods. Computing and artificial intelligence lead to smart systems: advanced machines designed to fulfill certain specifications. This proceedings book is a collection of papers presented at the first International Conference on Computational Engineering and Intelligent Systems (ICCEIS2021), held online December 10-12, 2021. The collection offers a wide scope of engineering topics, including smart grids, intelligent control, artificial intelligence, optimization, microelectronics and telecommunication systems. The contributions included in this book are of high quality, present their topics succinctly, and can serve as an excellent reference for readers in the field of computational engineering, artificial intelligence and smart systems.
Single atom imaging with time-resolved electron microscopy
Developments in scanning transmission electron microscopy (STEM) have opened up new possibilities for time-resolved imaging at the atomic scale. However, rapid imaging of single atom dynamics brings with it a new set of challenges, particularly regarding noise and the interaction between the electron beam and the specimen. This thesis develops a set of analytical tools for capturing atomic motion and analyzing the dynamic behaviour of materials at the atomic scale.
Machine learning is increasingly playing an important role in the analysis of electron microscopy data. In this light, new unsupervised learning tools are developed here for noise removal under low-dose imaging conditions and for identifying the motion of surface atoms. The scope for real-time processing and analysis is also explored, which is of rising importance as electron microscopy datasets grow in size and complexity.
These advances in image processing and analysis are combined with computational modelling to uncover new chemical and physical insights into the motion of atoms adsorbed onto surfaces. Of particular interest are systems for heterogeneous catalysis, where the catalytic activity can depend intimately on the atomic environment. The study of Cu atoms on a graphene oxide support reveals that the atoms undergo anomalous diffusion as a result of spatial and energetic disorder present in the substrate. The investigation is extended to examine the structure and stability of small Cu clusters on graphene oxide, with atomistic modelling used to understand the significant role played by the substrate. Finally, the analytical methods are used to study the surface reconstruction of silicon alongside the electron beam-induced motion of adatoms on the surface.
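Anomalous diffusion of the kind described above is conventionally quantified through the scaling exponent of the mean-squared displacement, MSD(t) ∝ t**alpha, with alpha = 1 for normal diffusion and alpha < 1 for subdiffusion in a disordered landscape. A minimal numpy sketch (a generic illustration, not the thesis's imaging pipeline) shows the measurement on ordinary random walks:

```python
import numpy as np

# Ensemble of 50 ordinary 2-D random walks: positions after each unit step.
rng = np.random.default_rng(3)
trajs = np.cumsum(rng.normal(size=(50, 5000, 2)), axis=1)

# Ensemble mean-squared displacement from the origin, and its scaling
# exponent alpha from a log-log fit. For these walks alpha should be
# close to 1; trajectories of atoms in a disordered substrate would
# instead yield alpha < 1 (subdiffusion).
lags = np.arange(1, trajs.shape[1] + 1)
msd = (trajs ** 2).sum(axis=2).mean(axis=0)
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
```

Applied to tracked atomic positions extracted from an image series, the same fit distinguishes normal from anomalous diffusion.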
Taken together, these studies demonstrate the materials insights that can be obtained with time-resolved STEM imaging, and highlight the importance of combining state-of-the-art imaging with computational analysis and atomistic modelling to quantitatively characterize the behaviour of materials with atomic resolution. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007–2013)/ERC grant agreement 291522–3DIMAGE, as well as from the European Union Seventh Framework Programme under Grant Agreement 312483-ESTEEM2 (Integrated Infrastructure Initiative - I3).
Hierarchical adaptive sparse grids and quasi Monte Carlo for option pricing under the rough Bergomi model
The rough Bergomi (rBergomi) model, introduced recently in [4], is a promising rough volatility model in quantitative finance. It is a parsimonious model depending on only three parameters, yet it exhibits a remarkable fit to empirical implied volatility surfaces. In the absence of analytical European option pricing methods for the model, and due to the non-Markovian nature of the fractional driver, the prevalent option is to use Monte Carlo (MC) simulation for pricing. Despite recent advances in the MC method in this context, pricing under the rBergomi model is still a time-consuming task. To overcome this issue, we design a novel, hierarchical approach based on i) adaptive sparse grid quadrature (ASGQ), and ii) quasi-Monte Carlo (QMC). Both techniques are coupled with a Brownian bridge construction and Richardson extrapolation. By exploiting the available regularity, our hierarchical methods demonstrate substantial computational gains over the standard MC method when reaching a sufficiently small relative error tolerance in the price estimates, across different parameter constellations and even for very small values of the Hurst parameter. Our work opens a new research direction in this field, namely investigating the performance of methods other than Monte Carlo for pricing and calibrating under the rBergomi model.
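Richardson extrapolation, one ingredient of the hierarchical approach above, cancels the leading-order discretization bias by combining two estimates computed at different step sizes. A scalar toy (a forward-difference derivative with O(h) bias, not the rBergomi weak-error setting) shows the mechanism:

```python
import numpy as np

def richardson(f, h, order=1):
    """Combine f(h) and f(h/2) to cancel the leading bias term:
    if f(h) = A + c * h**order + O(h**(order + 1)), the combination
    below approximates A with error O(h**(order + 1))."""
    k = 2.0 ** order
    return (k * f(h / 2.0) - f(h)) / (k - 1.0)

# Toy estimator with O(h) bias: forward-difference derivative of sin at x = 1.
fd = lambda h: (np.sin(1.0 + h) - np.sin(1.0)) / h
exact = np.cos(1.0)

plain_err = abs(fd(0.1) - exact)               # O(h) error
extrap_err = abs(richardson(fd, 0.1) - exact)  # O(h**2) after extrapolation
```

In the MC/QMC setting, f(h) is the price estimate at time-step size h, and the same two-level combination reduces the weak discretization error at the cost of one extra simulation level.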