326 research outputs found

    Optimal trading algorithms and selfsimilar processes: a p-variation approach

    Get PDF
    Almgren and Chriss ("Optimal execution of portfolio transactions", Journal of Risk, Vol. 3, No. 2, 2000, pp. 5-39) and Lehalle ("Rigorous strategic trading: balanced portfolio and mean reversion", Journal of Trading, Summer 2009, pp. 40-46) developed optimal trading algorithms for assets and portfolios driven by a Brownian motion. More recently, Gatheral and Schied ("Optimal trade execution under geometric Brownian motion in the Almgren and Chriss framework", working paper, SSRN, August 2010) addressed the same problem for the geometric Brownian motion. In this article we extend these ideas to assets and portfolios driven by a discrete version of a self-similar process of exponent H in (0,1), which can be either a fractional Brownian motion of Hurst exponent H or a truncated Lévy distribution of index 1/H. The cost functional we use is not the classical expectation-variance one: instead of the variance, we use the p-variation, i.e. the Lp equivalent of the variance. We find the trading algorithm explicitly for any p > 1 and compare the resulting trading curve (which we call the p-curve) with the classical expectation-variance curve (the 2-curve). If p > 2 then the p-curve is above the 2-curve at the beginning of the execution and below at the end. Therefore, this pattern minimizes the market impact. We also show that the value of p in the p-variation is related to the self-similarity exponent H via p = 1/H. Consequently, one can find the right value of p to put into the trading algorithm by calibrating the exponent H on real time series. We believe this result has interesting applications for high-frequency trading.
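    The calibration step at the end of the abstract, estimating the self-similarity exponent H from a time series and setting p = 1/H, can be sketched with a simple scaling regression. This is an illustrative toy, not the paper's procedure: the lag set, the mean-absolute-increment estimator, and the simulated Brownian input are all assumptions.

    ```python
    import math
    import random

    def estimate_hurst(series, lags=(1, 2, 4, 8, 16)):
        """Estimate the self-similarity exponent H from the scaling of
        mean absolute increments, E|X(t+tau) - X(t)| ~ tau^H, via the
        least-squares slope of log(mean |increment|) against log(tau)."""
        xs, ys = [], []
        for lag in lags:
            incs = [abs(series[i + lag] - series[i]) for i in range(len(series) - lag)]
            xs.append(math.log(lag))
            ys.append(math.log(sum(incs) / len(incs)))
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    # A plain Brownian random walk, whose true exponent is H = 1/2.
    random.seed(0)
    walk = [0.0]
    for _ in range(20000):
        walk.append(walk[-1] + random.gauss(0.0, 1.0))

    H = estimate_hurst(walk)
    p = 1.0 / H   # exponent to plug into the p-variation cost functional
    ```

    For Brownian input the regression recovers H close to 0.5, hence p close to 2, consistent with the 2-curve being the Brownian special case.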

    Optimal posting price of limit orders: learning by trading

    Get PDF
    A trader or a trading algorithm interacting with markets during continuous auctions can be modeled as an iterative procedure that adjusts, at a given rhythm, the price at which it posts orders; this paper proposes such a procedure that minimizes the trader's costs. We prove the a.s. convergence of the algorithm under assumptions on the cost function and give practical criteria on the model parameters ensuring that the conditions for using the algorithm are fulfilled (using notably the co-monotony principle). We illustrate our results with numerical experiments on both simulated data and a financial market dataset.
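    The iterative price-adjustment idea can be illustrated by a minimal Robbins-Monro sketch. The quadratic toy cost, the noise level, and the 1/n step sizes below are assumptions made for illustration, not the paper's cost function or conditions:

    ```python
    import random

    random.seed(1)
    theta_star = 3.0   # hypothetical cost-minimizing posting price (toy assumption)

    def noisy_gradient(theta):
        """Noisy observation of C'(theta) for the toy cost
        C(theta) = (theta - theta_star)^2, standing in for the
        per-order execution-cost feedback the trader observes."""
        return 2.0 * (theta - theta_star) + random.gauss(0.0, 1.0)

    theta = 0.0        # initial posting price
    for n in range(1, 5001):
        gamma = 1.0 / n   # Robbins-Monro steps: sum gamma_n = inf, sum gamma_n^2 < inf
        theta -= gamma * noisy_gradient(theta)
    ```

    With these step sizes the iterate averages out the noise and settles near the minimizer, which is the a.s. convergence behavior the paper establishes under its assumptions.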

    Optimal algorithmic trading and market microstructure

    Get PDF
    The efficient frontier is a core concept in Modern Portfolio Theory. Based on this idea, we will construct optimal trading curves for different types of portfolios. These curves correspond to the algorithmic trading strategies that minimize the expected transaction costs, i.e. the joint effect of market impact and market risk. We will study five portfolio trading strategies. For the first three (single-asset, general multi-asset and balanced portfolios) we will assume that the underlyings follow a Gaussian diffusion, whereas for the last two portfolios we will suppose that there exists a combination of assets such that the corresponding portfolio follows mean-reverting dynamics. The optimal trading curves can be computed by solving an N-dimensional optimization problem, where N is the (pre-determined) number of trading times. We will solve the recursive algorithm using the "shooting method", a numerical technique for differential equations. This method has the advantage that its corresponding equation is always one-dimensional regardless of the number of trading times N. This novel approach could be appealing for high-frequency traders and electronic brokers.
    Keywords: quantitative finance; optimal trading; algorithmic trading; systematic trading; market microstructure
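    The shooting method mentioned above can be sketched on a toy Euler-Lagrange boundary-value problem of the Almgren-Chriss type, x''(t) = k2·x(t) with x(0) = X0 and x(T) = 0: guess the initial trading speed x'(0), integrate forward, and bisect until the terminal condition holds. The coefficient k2, the Euler integrator, and the bisection bracket are illustrative assumptions, not the paper's exact equations:

    ```python
    import math

    # Toy Euler-Lagrange equation for a single-asset trading curve x(t):
    # x''(t) = k2 * x(t), with x(0) = X0 shares to sell and x(T) = 0.
    k2, X0, T, steps = 1.0, 1.0, 1.0, 1000

    def terminal_position(v0):
        """Integrate x'' = k2*x forward from x(0)=X0, x'(0)=v0; return x(T)."""
        dt = T / steps
        x, v = X0, v0
        for _ in range(steps):
            x, v = x + dt * v, v + dt * k2 * x   # explicit Euler step
        return x

    # Shooting: bisect on the initial trading speed v0 so that x(T) = 0.
    lo, hi = -10.0, 0.0   # bracketing guesses for x'(0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if terminal_position(mid) > 0.0:
            hi = mid   # curve ends above zero: sell faster (more negative v0)
        else:
            lo = mid
    v0 = 0.5 * (lo + hi)
    ```

    Note the one-dimensional character the abstract highlights: however fine the time grid, the search is over the single scalar x'(0). For this toy equation the closed form x(t) = X0·sinh(k(T-t))/sinh(kT) gives x'(0) = -cosh(1)/sinh(1), which the bisection recovers.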

    Optimal split of orders across liquidity pools: a stochastic algorithm approach

    Get PDF
    Evolutions of the trading landscape have made it possible to trade the same financial instrument on several different venues. Because of liquidity issues, trading firms split large orders across several trading destinations to optimize their execution. To solve this problem we devised two stochastic recursive learning procedures that adjust the proportions of the order sent to the different venues, one based on an optimization principle, the other on reinforcement ideas. Both procedures are investigated from a theoretical point of view: we prove a.s. convergence of the optimization algorithm under a light ergodic (or "averaging") assumption on the input data process; no Markov property is needed. When the inputs are i.i.d. we show that the convergence rate is ruled by a Central Limit Theorem. Finally, the performances of the two algorithms are compared on simulated and real data against an "oracle" strategy devised by an "insider" who knows a priori the quantities executed by every venue.
    Keywords: asset allocation; stochastic Lagrangian algorithm; reinforcement principle; monotone dynamic system
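    A minimal sketch of a stochastic recursive split across venues, in the spirit of the optimization-based procedure: noisy marginal costs are observed, and a Lagrangian step moves the proportions along the simplex until marginal costs equalize. The quadratic per-venue impact costs, noise level, and step sizes are toy assumptions, not the paper's model:

    ```python
    import random

    random.seed(2)
    a = [1.0, 2.0, 4.0]        # toy per-venue impact coefficients (assumed)
    r = [1/3, 1/3, 1/3]        # initial proportions sent to the three venues

    for n in range(1, 20001):
        gamma = 0.5 / n
        # Noisy marginal cost of venue i for the toy impact cost a_i * r_i^2.
        g = [2.0 * a[i] * r[i] + random.gauss(0.0, 0.5) for i in range(3)]
        gbar = sum(g) / 3.0
        # Lagrangian step: the centered gradient keeps sum(r) constant,
        # i.e. the update moves along the simplex {sum r_i = 1}.
        r = [r[i] - gamma * (g[i] - gbar) for i in range(3)]
        # Project back onto nonnegative proportions.
        r = [max(ri, 0.0) for ri in r]
        s = sum(r)
        r = [ri / s for ri in r]
    ```

    Equalizing the marginal costs 2·a_i·r_i yields r_i proportional to 1/a_i, so the cheapest venue ends up with the largest share; the recursion learns this split from noisy observations alone.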

    Simulating and analyzing order book data: The queue-reactive model

    Full text link
    Through the analysis of a dataset of ultra high frequency order book updates, we introduce a model which accommodates the empirical properties of the full order book together with the stylized facts of lower frequency financial data. To do so, we split the time interval of interest into periods in which a well chosen reference price, typically the mid price, remains constant. Within these periods, we view the limit order book as a Markov queuing system. Indeed, we assume that the intensities of the order flows only depend on the current state of the order book. We establish the limiting behavior of this model and estimate its parameters from market data. Then, in order to design a relevant model for the whole period of interest, we use a stochastic mechanism that allows for switches from one period of constant reference price to another. Beyond making it possible to reproduce accurately the behavior of market data, we show that our framework can be very useful for practitioners, notably as a market simulator or as a tool for the transaction cost analysis of complex trading algorithms.
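    The core assumption, a Markov queuing system whose order-flow intensities depend only on the current state, can be sketched on a single queue simulated with the Gillespie algorithm. The specific intensity functions below are illustrative assumptions, not the intensities estimated in the paper:

    ```python
    import math
    import random

    random.seed(3)

    def lam(q):
        """Arrival intensity, decreasing in the queue size ("queue-reactive")."""
        return 2.0 / (1.0 + q)

    def mu(q):
        """Total cancellation/trade intensity, proportional to the queue size."""
        return 0.5 * q

    # Gillespie simulation of the resulting birth-death queue.
    q, t = 0, 0.0
    occupation = {}   # time spent in each queue size
    for _ in range(200_000):
        rate = lam(q) + mu(q)
        dt = random.expovariate(rate)
        occupation[q] = occupation.get(q, 0.0) + dt
        t += dt
        if random.random() < lam(q) / rate:
            q += 1
        else:
            q -= 1

    empirical_mean = sum(k * w for k, w in occupation.items()) / t

    # Birth-death stationary distribution: pi(q+1)/pi(q) = lam(q)/mu(q+1)
    # = 4/(q+1)^2, hence pi(q) proportional to 4^q / (q!)^2.
    weights = [4.0 ** k / math.factorial(k) ** 2 for k in range(40)]
    analytic_mean = sum(k * w for k, w in enumerate(weights)) / sum(weights)
    ```

    The time-weighted empirical queue-size distribution matches the analytic stationary law, which is the kind of limiting-behavior check the paper carries out (there, against estimated intensities and real order book data).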

    Les droits des détenus et leur contrôle : enjeux actuels de la situation canadienne

    Get PDF
    This article uses Pierre Landreville's writings as the starting point of a brief historical analysis of the evolution of inmates' rights in Canada. The first section focuses on the normative development of prisoners' rights and the limits it has encountered, particularly with regard to the resources necessary for their full realisation. The second section deals with the ongoing debate concerning the respect of inmates' rights and the oversight mechanisms created to this effect. Drawing on recent empirical research, we compare the current state and impact of two oversight mechanisms that Landreville advocated for as early as 1973: the Correctional Investigator and the United Nations Subcommittee on Prevention of Torture. We conclude by discussing the impact of these oversight mechanisms and their use by the State as a tool of legitimation.


    Market impacts and the life cycle of investors orders

    Full text link
    In this paper, we use a database of around 400,000 metaorders issued by investors and electronically traded on European markets in 2010 in order to study market impact at different scales. At the intraday scale we confirm a square-root temporary impact in the daily participation, and we shed light on a duration factor in $1/T^{\gamma}$ with $\gamma \simeq 0.25$. Including this factor in the fits reinforces the square-root shape of the impact. We observe a power law for the transient impact with an exponent between 0.5 (for long metaorders) and 0.8 (for shorter ones). Moreover we show that the market does not anticipate the size of the metaorders. The intraday decay seems to exhibit two regimes (though they are hard to identify precisely): a "slow" regime right after the execution of the metaorder, followed by a faster one. At the daily time scale, we show that price moves after a metaorder can be split between realizations of expected returns that triggered the investing decision and an idiosyncratic impact that slowly decays to zero. Moreover we propose a class of toy models based on Hawkes processes (the Hawkes Impact Models, HIM) to illustrate our reasoning. We show how the Impulsive-HIM model, despite its simplicity, embeds appealing features like transience and decay of impact. The latter is parametrized by a parameter $C$ with a macroscopic interpretation: the ratio of the contrarian reaction (i.e. impact decay) to the "herding" reaction (i.e. impact amplification).
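    The square-root fit in participation can be illustrated by a log-log regression on synthetic metaorders. The prefactor, noise level, and participation range below are assumptions for the sketch, not the paper's estimates from the 2010 database:

    ```python
    import math
    import random

    random.seed(4)
    Y = 0.7   # assumed impact prefactor (illustrative only)

    # Synthetic metaorders: daily participation rates phi in (0.1%, 10%),
    # peak impact generated by the square-root law with multiplicative noise.
    xs, ys = [], []
    for _ in range(5000):
        phi = 10.0 ** random.uniform(-3.0, -1.0)
        impact = Y * math.sqrt(phi) * math.exp(random.gauss(0.0, 0.3))
        xs.append(math.log(phi))
        ys.append(math.log(impact))

    # Log-log least squares: the slope estimates delta in impact ~ phi^delta.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    delta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    ```

    On data truly generated by the square-root law the regression recovers an exponent near 0.5; on real metaorders the paper additionally controls for the duration factor $1/T^{\gamma}$, which sharpens the same fit.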