Realtime market microstructure analysis: online Transaction Cost Analysis
Motivated by the practical challenge in monitoring the performance of a large
number of algorithmic trading orders, this paper provides a methodology that
leads to automatic discovery of the causes that lie behind a poor trading
performance. It also gives theoretical foundations to a generic framework for
real-time trading analysis. The academic literature provides different ways to
formalize these algorithms and shows how optimal they can be from a
mean-variance, stochastic control, impulse control, or statistical learning
viewpoint. This paper is agnostic about how the algorithm has been
built and provides a theoretical formalism to identify in real-time the market
conditions that influenced its efficiency or inefficiency. For a given set of
characteristics describing the market context, selected by a practitioner, we
first show how a set of additional derived explanatory factors, called anomaly
detectors, can be created for each market order. We then present an online
methodology to quantify how this extended set of factors, at any given time,
predicts which orders are underperforming, while also calculating the
predictive power of this explanatory factor set. Armed with this information,
which we call influence analysis, we intend to empower the order monitoring
user to take appropriate action on any affected orders by re-calibrating the
trading algorithms working the order through new parameters, pausing their
execution, or taking over more direct trading control. We also intend for this
method to be used in the post-trade analysis of algorithms to automatically
adjust their trading actions.

Comment: 33 pages, 12 figures
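By way of illustration only, the following is a minimal sketch of the online influence-analysis idea under strong assumptions: the anomaly-detector features (z-scores of spread, volatility, and volume), the synthetic order stream, and the incremental logistic model are hypothetical stand-ins, not the paper's actual construction.

```python
# Sketch of online "influence analysis": per-order anomaly-detector features
# are streamed into an incremental classifier that predicts which orders are
# underperforming; all feature names and the model are illustrative
# assumptions, not the paper's methodology.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def order_stream(n_batches=50, batch=32):
    # Synthetic stream: each row holds one order's anomaly detectors,
    # e.g. z-scores of spread, volatility, and volume vs. their usual regime.
    for _ in range(n_batches):
        X = rng.normal(size=(batch, 3))          # [spread_z, vol_z, volume_z]
        logits = 1.5 * X[:, 0] + 0.8 * X[:, 1]   # wide spreads + high vol hurt
        y = (logits + rng.normal(size=batch) > 0).astype(int)  # 1 = underperforming
        yield X, y

clf = SGDClassifier(loss="log_loss", alpha=1e-3)
hits, total = 0, 0
for t, (X, y) in enumerate(order_stream()):
    if t > 0:                                    # predict before updating
        hits += (clf.predict(X) == y).sum()
        total += len(y)
    clf.partial_fit(X, y, classes=[0, 1])        # online update

print(f"prequential accuracy (predictive-power proxy): {hits / total:.2f}")
print("detector influence (model weights):", clf.coef_.round(2))
```

The prequential loop (predict each batch before training on it) mimics the real-time setting: predictive power is measured on orders the model has not yet seen, and the fitted weights give a crude proxy for how much each detector influences the underperformance prediction.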
Towards Machine Wald
The past century has seen a steady increase in the need to estimate and
predict complex systems and to make (possibly critical) decisions with
limited information. Although computers have made possible the numerical
evaluation of sophisticated statistical models, these models are still designed
\emph{by humans} because there is currently no known recipe or algorithm for
dividing the design of a statistical model into a sequence of arithmetic
operations. Indeed, enabling computers to \emph{think} as \emph{humans} do
when faced with uncertainty is challenging in several major ways:
(1) Finding optimal statistical models has yet to be formulated as a well-posed
problem when information on the system of interest is incomplete and comes in
the form of a complex combination of sample data, partial knowledge of
constitutive relations and a limited description of the distribution of input
random variables. (2) The space of admissible scenarios along with the space of
relevant information, assumptions, and/or beliefs, tend to be infinite
dimensional, whereas calculus on a computer is necessarily discrete and finite.
With this purpose in mind, this paper explores the foundations of a rigorous framework
for the scientific computation of optimal statistical estimators/models and
reviews their connections with Decision Theory, Machine Learning, Bayesian
Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty
Quantification, and Information-Based Complexity.

Comment: 37 pages
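As a toy numerical illustration of the Wald-style decision-theoretic viewpoint the paper builds on, the sketch below compares the worst-case risk of two estimators over a discretized space of admissible scenarios; the Bernoulli setup and both candidate estimators are textbook illustrations, not constructions from the paper.

```python
# Wald's decision-theoretic viewpoint in miniature: among candidate
# estimators, prefer the one whose worst-case (minimax) risk over the
# admissible scenarios is smallest.  Bernoulli mean estimation under
# squared loss is used purely as an illustrative assumption.
import numpy as np
from math import comb

n = 10                                   # sample size
thetas = np.linspace(0.0, 1.0, 101)      # admissible scenarios (Bernoulli p)

def risk(estimator, theta):
    # Exact mean-squared-error risk: expectation over the binomial pmf of k.
    ks = np.arange(n + 1)
    pmf = np.array([comb(n, k) * theta**k * (1 - theta)**(n - k) for k in ks])
    return np.sum(pmf * (estimator(ks) - theta) ** 2)

sample_mean = lambda k: k / n
# Classic minimax estimator for the Bernoulli mean under squared loss:
minimax_est = lambda k: (k + np.sqrt(n) / 2) / (n + np.sqrt(n))

for name, est in [("sample mean", sample_mean), ("minimax", minimax_est)]:
    worst = max(risk(est, th) for th in thetas)
    print(f"{name:12s} worst-case MSE: {worst:.4f}")
```

Here the minimax estimator attains a smaller worst-case risk than the sample mean (its risk is constant in theta), which is the kind of optimality criterion that a "Machine Wald" would need to compute over far richer, infinite-dimensional scenario spaces.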