
    Optimal execution strategy with an uncertain volume target

    In the seminal paper on optimal execution of portfolio transactions, Almgren and Chriss (2001) define the optimal trading strategy to liquidate a fixed volume of a single security under price uncertainty. Yet there exist situations, such as in the power market, in which the volume to be traded can only be estimated, with the estimate becoming more accurate as a specified delivery time approaches. During the course of execution, a trader should then constantly adapt their trading strategy to meet the fluctuating volume target. In this paper, we develop a model that accounts for volume uncertainty, and we show that a risk-averse trader benefits from delaying their trades. More precisely, we argue that the optimal strategy is a trade-off between early and late trades that balances the risks associated with both price and volume. By incorporating a risk term related to the volume to trade, the static optimal strategies suggested by our model avoid the explosion in algorithmic complexity usually associated with dynamic programming solutions, all the while yielding competitive performance.
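
    The closed-form Almgren-Chriss (2001) schedule that this model extends can be sketched in a few lines of Python. The sketch below covers only the classic fixed-volume case with price risk; the paper's volume-risk term is not reproduced, and all parameter values are illustrative assumptions.

        import numpy as np

        def almgren_chriss_schedule(X, T, N, sigma, eta, lam):
            """Static Almgren-Chriss liquidation trajectory for a fixed volume X
            over horizon T split into N intervals.

            sigma : price volatility per unit time
            eta   : temporary impact coefficient
            lam   : risk aversion
            Returns the holdings remaining at each decision time t_k.
            """
            tau = T / N
            # kappa sets the trade-off: higher risk aversion or volatility
            # front-loads trading, higher impact cost spreads it out.
            kappa_tilde = np.sqrt(lam * sigma**2 / eta)
            kappa = np.arccosh(1.0 + (kappa_tilde * tau)**2 / 2.0) / tau
            t = np.linspace(0.0, T, N + 1)
            return X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

        # Example: liquidate 1e6 shares over one day in 10 intervals
        # (sigma, eta and lam are made-up, order-of-magnitude values).
        holdings = almgren_chriss_schedule(X=1e6, T=1.0, N=10,
                                           sigma=0.95, eta=2.5e-6, lam=1e-6)
        trades = -np.diff(holdings)  # shares sold in each interval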

    How efficiency shapes market impact

    We develop a theory for the market impact of large trading orders, which we call metaorders because they are typically split into small pieces and executed incrementally. Market impact is empirically observed to be a concave function of metaorder size, i.e., the impact per share of large metaorders is smaller than that of small metaorders. We formulate a stylized model of an algorithmic execution service and derive a fair pricing condition, which says that the average transaction price of the metaorder is equal to the price after trading is completed. We show that at equilibrium the distribution of trading volume adjusts to reflect information, and dictates the shape of the impact function. The resulting theory makes empirically testable predictions for the functional form of both the temporary and permanent components of market impact. Based on the commonly observed asymptotic distribution for the volume of large trades, it says that market impact should increase asymptotically roughly as the square root of metaorder size, with average permanent impact relaxing to about two thirds of peak impact.
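
    The predicted functional form is simple enough to write down directly. The Python fragment below is a sketch of that prediction only; the prefactor y and the example numbers are assumptions, not estimates from the paper.

        import numpy as np

        def predicted_impact(metaorder_size, daily_volume, sigma_daily, y=1.0):
            """Square-root impact curve of the kind the fair-pricing theory predicts:
            peak impact grows roughly as the square root of metaorder size (measured
            as a fraction of daily volume), and permanent impact relaxes to about
            two thirds of the peak. The constant y is an assumed order-one prefactor.
            """
            q = metaorder_size / daily_volume
            peak = y * sigma_daily * np.sqrt(q)
            permanent = (2.0 / 3.0) * peak
            return peak, permanent

        # Example: a metaorder of 5% of daily volume on a name with 2% daily volatility.
        peak, perm = predicted_impact(metaorder_size=5e5, daily_volume=1e7,
                                      sigma_daily=0.02)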

    Dynamic modeling of mean-reverting spreads for statistical arbitrage

    Statistical arbitrage strategies, such as pairs trading and its generalizations, rely on the construction of mean-reverting spreads enjoying a certain degree of predictability. Gaussian linear state-space processes have recently been proposed as a model for such spreads under the assumption that the observed process is a noisy realization of some hidden states. Real-time estimation of the unobserved spread process can reveal temporary market inefficiencies which can then be exploited to generate excess returns. Building on previous work, we embrace the state-space framework for modeling spread processes and extend this methodology along three different directions. First, we introduce time-dependency in the model parameters, which allows for quick adaptation to changes in the data generating process. Second, we provide an on-line estimation algorithm that can be constantly run in real-time. Being computationally fast, the algorithm is particularly suitable for building aggressive trading strategies based on high-frequency data and may be used as a monitoring device for mean-reversion. Finally, our framework naturally provides informative uncertainty measures of all the estimated parameters. Experimental results based on Monte Carlo simulations and historical equity data are discussed, including a co-integration relationship involving two exchange-traded funds.
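
    The Gaussian linear state-space setting this work builds on can be illustrated with a basic Kalman filter for a hidden AR(1) spread observed with noise. The sketch below uses the standard time-invariant filter with made-up parameters; the paper's time-varying parameters and on-line parameter updates are not reproduced.

        import numpy as np

        def kalman_filter_spread(y, phi, q, r, m0=0.0, p0=1.0):
            """Minimal Kalman filter for the model
                s_t = phi * s_{t-1} + w_t,   y_t = s_t + v_t,
            with w_t ~ N(0, q) and v_t ~ N(0, r). Returns the filtered mean and
            variance of the hidden spread after each observation."""
            m, p = m0, p0
            means, variances = [], []
            for obs in y:
                # Predict one step ahead.
                m_pred = phi * m
                p_pred = phi * p * phi + q
                # Update with the new observation.
                k = p_pred / (p_pred + r)          # Kalman gain
                m = m_pred + k * (obs - m_pred)
                p = (1.0 - k) * p_pred
                means.append(m)
                variances.append(p)
            return np.array(means), np.array(variances)

        # Example on simulated data (all parameters are illustrative assumptions).
        rng = np.random.default_rng(0)
        s = np.zeros(500)
        for t in range(1, 500):
            s[t] = 0.9 * s[t - 1] + rng.normal(scale=0.1)
        y = s + rng.normal(scale=0.2, size=500)
        spread_est, spread_var = kalman_filter_spread(y, phi=0.9, q=0.01, r=0.04)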

    Model Selection and Adaptive Markov chain Monte Carlo for Bayesian Cointegrated VAR model

    This paper develops a matrix-variate adaptive Markov chain Monte Carlo (MCMC) methodology for Bayesian Cointegrated Vector Autoregressions (CVAR). We replace the popular approach to sampling Bayesian CVAR models, involving griddy Gibbs, with an automated, efficient alternative based on the Adaptive Metropolis algorithm of Roberts and Rosenthal (2009). Developing the adaptive MCMC framework for Bayesian CVAR models allows for efficient estimation of posterior parameters in significantly higher dimensional CVAR series than previously possible with existing griddy Gibbs samplers. For an n-dimensional CVAR series, the matrix-variate posterior has dimension 3n^2 + n, with significant correlation present between the blocks of matrix random variables. We also treat the rank of the CVAR model as a random variable and perform joint inference on the rank and model parameters. This is achieved with a Bayesian posterior distribution defined over both the rank and the CVAR model parameters, and inference is made via Bayes factor analysis of rank. Practically, the adaptive sampler also aids in the development of automated Bayesian cointegration models for algorithmic trading systems considering instruments made up of several assets, such as currency baskets. Previously, the literature on financial applications of CVAR trading models typically considered only pairs trading (n = 2) due to the computational cost of the griddy Gibbs. Under our adaptive framework we are able to extend to n >> 2 and demonstrate an example with n = 10, resulting in a posterior distribution with parameters up to dimension 310. By also treating the rank as a random quantity we ensure our resulting trading models are able to adjust to potentially time-varying market conditions in a coherent statistical framework.
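
    The generic Adaptive Metropolis building block referred to here can be sketched as follows. This is a minimal sketch of the Roberts-Rosenthal-style covariance adaptation on a toy target; it does not implement the paper's matrix-variate CVAR posterior or its rank inference, and the tuning constants are conventional defaults.

        import numpy as np

        def adaptive_metropolis(log_post, theta0, n_iter=5000, eps=1e-6, seed=0):
            """Adaptive Metropolis sampler: the Gaussian random-walk proposal
            covariance is adapted from the empirical covariance of the chain so far."""
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            d = theta.size
            lp = log_post(theta)
            samples = [theta.copy()]
            for i in range(n_iter):
                if i < 2 * d:
                    # Short non-adaptive burn-in with a fixed spherical proposal.
                    cov = (0.1 ** 2 / d) * np.eye(d)
                else:
                    hist = np.array(samples)
                    cov = (2.38 ** 2 / d) * np.cov(hist.T) + eps * np.eye(d)
                proposal = rng.multivariate_normal(theta, cov)
                lp_prop = log_post(proposal)
                if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
                    theta, lp = proposal, lp_prop
                samples.append(theta.copy())
            return np.array(samples)

        # Toy target: a correlated 2-d Gaussian (purely illustrative).
        target_prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
        log_post = lambda th: -0.5 * th @ target_prec @ th
        draws = adaptive_metropolis(log_post, theta0=np.zeros(2))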

    Impersonal efficiency and the dangers of a fully automated securities exchange

    This report identifies impersonal efficiency as a driver of market automation during the past four decades, and speculates about the future problems it might pose. The ideology of impersonal efficiency is rooted in a mistrust of financial intermediaries such as floor brokers and specialists. Impersonal efficiency has guided the development of market automation towards transparency and impersonality, at the expense of human trading floors. The result has been an erosion of the informal norms and human judgment that characterize less anonymous markets. We call impersonal efficiency an ideology because we do not think that impersonal markets are always superior to markets built on social ties. This report traces the historical origins of this ideology, considers the problems it has already created in the recent Flash Crash of 2010, and asks what potential risks it might pose in the future.

    Pareto Optimal Allocation under Uncertain Preferences

    The assignment problem is one of the most well-studied settings in social choice, matching, and discrete allocation. We consider the problem with the additional feature that agents' preferences involve uncertainty. The setting with uncertainty leads to a number of interesting questions including the following ones. How to compute an assignment with the highest probability of being Pareto optimal? What is the complexity of computing the probability that a given assignment is Pareto optimal? Does there exist an assignment that is Pareto optimal with probability one? We consider these problems under two natural uncertainty models: (1) the lottery model in which each agent has an independent probability distribution over linear orders and (2) the joint probability model that involves a joint probability distribution over preference profiles. For both of the models, we present a number of algorithmic and complexity results.
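
    As a concrete reading of the lottery model, the probability that a fixed assignment is Pareto optimal can be estimated by sampling preference profiles and checking Pareto optimality by brute force. The Python sketch below is only an illustration on a tiny instance with made-up lotteries; it is not one of the paper's algorithms, and the brute-force check does not scale.

        import itertools
        import numpy as np

        def is_pareto_optimal(assignment, prefs):
            """Brute-force Pareto optimality check: assignment[i] is the item given
            to agent i, prefs[i] is agent i's strict ranking of items, best first.
            The assignment is Pareto optimal if no reshuffling of the same items
            makes some agent better off and nobody worse off."""
            n = len(assignment)
            rank = [{item: pos for pos, item in enumerate(prefs[i])} for i in range(n)]
            for perm in itertools.permutations(assignment):
                weakly = all(rank[i][perm[i]] <= rank[i][assignment[i]] for i in range(n))
                strictly = any(rank[i][perm[i]] < rank[i][assignment[i]] for i in range(n))
                if weakly and strictly:
                    return False
            return True

        def prob_pareto_optimal(assignment, lotteries, n_samples=2000, seed=0):
            """Monte Carlo estimate under the lottery model: each agent's ranking is
            drawn independently from that agent's distribution over linear orders."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_samples):
                profile = [orders[rng.choice(len(orders), p=probs)]
                           for orders, probs in lotteries]
                hits += is_pareto_optimal(assignment, profile)
            return hits / n_samples

        # Tiny instance: two agents, items 0 and 1, each agent uncertain between the
        # two possible rankings (probabilities are made-up).
        lotteries = [([(0, 1), (1, 0)], [0.7, 0.3]),
                     ([(1, 0), (0, 1)], [0.6, 0.4])]
        p_po = prob_pareto_optimal(assignment=(0, 1), lotteries=lotteries)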

    Informational Substitutes

    We propose definitions of substitutes and complements for pieces of information ("signals") in the context of a decision or optimization problem, with game-theoretic and algorithmic applications. In a game-theoretic context, substitutes capture diminishing marginal value of information to a rational decision maker. We use the definitions to address the question of how and when information is aggregated in prediction markets. Substitutes characterize "best-possible" equilibria with immediate information aggregation, while complements characterize "worst-possible", delayed aggregation. Game-theoretic applications also include settings such as crowdsourcing contests and Q&A forums. In an algorithmic context, where substitutes capture diminishing marginal improvement of information to an optimization problem, substitutes imply efficient approximation algorithms for a very general class of (adaptive) information acquisition problems. In tandem with these broad applications, we examine the structure and design of informational substitutes and complements. They have equivalent, intuitive definitions from disparate perspectives: submodularity, geometry, and information theory. We also consider the design of scoring rules or optimization problems so as to encourage substitutability or complementarity, with positive and negative results. Taken as a whole, the results give some evidence that, in parallel with substitutable items, informational substitutes play a natural conceptual and formal role in game theory and algorithms.
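
    The "diminishing marginal value of information" intuition behind substitutes can be illustrated numerically. The toy decision problem below (a fair binary state, two noisy signals, utility from a maximum-a-posteriori guess) is an assumption made purely for illustration; it is not the paper's formal substitutes condition, which is stated for general decision problems and scoring rules.

        import itertools

        def decision_value(signal_subset, joint):
            """Expected utility of a MAP guess of a binary state after observing a
            subset of binary signals. joint[(state, s1, ..., sk)] is the joint
            probability of the state and all signals."""
            value = 0.0
            for obs in itertools.product((0, 1), repeat=len(signal_subset)):
                # Posterior weight of each state given these observed signal values.
                weights = [0.0, 0.0]
                for outcome, p in joint.items():
                    state, signals = outcome[0], outcome[1:]
                    if all(signals[idx] == o for idx, o in zip(signal_subset, obs)):
                        weights[state] += p
                value += max(weights)  # guess the more likely state
            return value

        # Toy joint law: a fair binary state and two conditionally independent
        # signals that each match the state with probability 0.8.
        joint = {}
        for state in (0, 1):
            for s1 in (0, 1):
                for s2 in (0, 1):
                    p1 = 0.8 if s1 == state else 0.2
                    p2 = 0.8 if s2 == state else 0.2
                    joint[(state, s1, s2)] = 0.5 * p1 * p2

        v0 = decision_value((), joint)        # no signals
        v1 = decision_value((0,), joint)      # first signal only
        v12 = decision_value((0, 1), joint)   # both signals
        # Diminishing marginal value: the second signal adds less than the first did.
        print(v1 - v0, v12 - v1)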