
    Statistical Learning and Inverse Problems: A Stochastic Gradient Approach

    Full text link
    Inverse problems are paramount in Science and Engineering. In this paper, we consider the Statistical Inverse Problem (SIP) setting and demonstrate how Stochastic Gradient Descent (SGD) algorithms can be used in the linear SIP setting. We provide consistency and finite-sample bounds for the excess risk. We also propose a modification of the SGD algorithm in which we leverage machine learning methods to smooth the stochastic gradients and improve empirical performance. We exemplify the algorithm in a setting of great current interest: the Functional Linear Regression model. In this case, we consider a synthetic-data example and examples with a real-data classification problem.
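    The plain SGD iteration the abstract builds on can be sketched for a finite-dimensional linear inverse problem (a minimal illustration, not the paper's algorithm; the dimension, step size, and noise level below are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear statistical inverse problem: recover x_true from
    # noisy observations y_i = <a_i, x_true> + eps_i, one random design at a time.
    d = 50
    x_true = rng.normal(size=d)

    def sample():
        a = rng.normal(size=d)
        y = a @ x_true + 0.1 * rng.normal()
        return a, y

    eta = 0.01                        # small constant step size (assumed, not tuned)
    x = np.zeros(d)
    for _ in range(20_000):
        a, y = sample()
        x -= eta * (a @ x - y) * a    # stochastic gradient of the squared loss

    print(np.linalg.norm(x - x_true))  # estimation error after 20k samples
    ```

    The paper's modification would additionally smooth the stochastic gradients before the update step; here each raw gradient is applied directly.
    
    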

    Optimal Trading in Automatic Market Makers with Deep Learning

    Full text link
    This article explores the optimisation of trading strategies in Constant Function Market Makers (CFMMs) and centralised exchanges. We develop a model that accounts for the interaction between these two markets, estimating the conditional dependence between variables using the concept of conditional elicitability. Furthermore, we pose an optimal execution problem in which the agent hides their orders by controlling the rate at which they trade. We do so without approximating the market dynamics. The resulting dynamic programming equation is not analytically tractable; therefore, we employ the deep Galerkin method to solve it. Finally, we conduct numerical experiments and illustrate that the optimal strategy is not prone to price slippage and outperforms naïve strategies.
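    The price slippage the abstract refers to can be seen in the simplest CFMM, the fee-free constant-product pool (a minimal sketch, not the paper's model; the reserve levels and trade sizes are assumptions):

    ```python
    # Constant-product CFMM with reserves (x, y) and invariant x * y = k.
    def cpmm_output(x, y, dx):
        """Units of Y received for paying dx units of X into the pool."""
        k = x * y
        return y - k / (x + dx)

    x, y = 1_000.0, 1_000.0
    mid = y / x                            # marginal price before trading
    for dx in (1.0, 10.0, 100.0):
        dy = cpmm_output(x, y, dx)
        slippage = 1 - (dy / dx) / mid     # shortfall vs. the mid price
        print(f"dx={dx:6.1f}  slippage={slippage:.4f}")
    ```

    The realised price per unit deteriorates as the order size grows, which is why controlling the trading rate matters for large orders.
    
    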

    Avoiding zero probability events when computing Value at Risk contributions

    Full text link
    This paper is concerned with the process of risk allocation for a generic multivariate model when the risk measure is chosen as the Value-at-Risk (VaR). We recast the traditional Euler contributions from an expectation conditional on an event of zero probability to a ratio involving conditional expectations whose conditioning events have strictly positive probability. We derive an analytical form of the proposed representation of VaR contributions for various parametric models. Our numerical experiments show that the estimator using this novel representation outperforms the standard Monte Carlo estimator in terms of bias and variance. Moreover, unlike the existing estimators, the proposed estimator is free from hyperparameters.
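    The zero-probability issue can be illustrated with the standard Monte Carlo estimator the paper compares against: since {S = VaR} has probability zero for continuous losses, one conditions on S falling in a small window around the empirical VaR, and the window width is exactly the kind of hyperparameter the paper's representation removes (a minimal sketch with an assumed bivariate normal loss model, not the paper's estimator):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Euler VaR contributions E[X_i | S = VaR_alpha(S)], estimated the standard
    # way: condition on S landing within +/- delta of the empirical VaR.
    n, alpha, delta = 200_000, 0.99, 0.05   # delta is the window hyperparameter
    X = rng.normal(size=(n, 2))             # two hypothetical loss components
    S = X.sum(axis=1)                       # portfolio loss
    var = np.quantile(S, alpha)             # empirical VaR at level alpha
    window = np.abs(S - var) < delta
    contrib = X[window].mean(axis=0)        # E[X_i | S near VaR], per component

    print(var, contrib)                     # contributions roughly sum to VaR
    ```

    By symmetry of this toy model, each component contributes about half of the VaR; the estimator's quality degrades if delta is chosen too small (few samples) or too large (bias), which is the trade-off a hyperparameter-free representation avoids.
    
    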