32 research outputs found

    Toxic Arbitrage

    Short-lived arbitrage opportunities arise when prices adjust with a lag to new information. They are toxic because they expose dealers to the risk of trading at stale quotes. Hence, theory implies that more frequent toxic arbitrage opportunities and faster responses to these opportunities should impair liquidity. We provide supporting evidence using data on triangular arbitrage. As predicted, illiquidity is higher on days when the fraction of toxic arbitrage opportunities and arbitrageurs’ relative speed are higher. Overall, our findings suggest that the price efficiency gain of high-frequency arbitrage comes at the cost of increased adverse selection risk.
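
    As a loose illustration of the setting described above, the sketch below checks a hypothetical set of currency quotes for a triangular arbitrage opportunity. The quotes, the cost threshold, and the profit definition are assumptions for exposition, not the paper's data or method.

```python
import math

def triangular_profit(eur_usd: float, usd_jpy: float, eur_jpy: float) -> float:
    """Log profit of the cycle EUR -> USD -> JPY -> EUR.

    Positive when the quoted EUR/JPY rate lags the cross rate implied
    by EUR/USD and USD/JPY, i.e. when one quote is stale.
    """
    implied_eur_jpy = eur_usd * usd_jpy  # cross rate implied by the two legs
    return math.log(implied_eur_jpy / eur_jpy)

COST = 0.0002  # assumed 2 bps round-trip transaction cost, purely illustrative

profit = triangular_profit(eur_usd=1.1000, usd_jpy=150.00, eur_jpy=164.90)
if profit > COST:
    print(f"short-lived arbitrage window: {profit * 1e4:.1f} bps before costs")
```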

    Sunshine Trading: Flashes of Trading Intent at the NASDAQ

    We use the introduction and subsequent removal of the flash order facility (an actionable indication of interest, or IOI) on NASDAQ as a natural experiment to investigate the impact of voluntary disclosure of trading intent on market quality. We find that flash orders significantly improve liquidity on NASDAQ. In addition, overall market quality improves substantially when the flash functionality is introduced and deteriorates when it is removed. One explanation for our findings is that flash orders are placed by less informed traders and fulfill their role as an advertisement of uninformed liquidity needs. They successfully attract responses from liquidity providers immediately after the announcement is placed, thus lowering the risk-bearing cost for the overall market. Our study is important for understanding the impact of voluntary disclosure, for guiding future market design choices, and for the current debate on dark pools and IOIs.
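
    The natural-experiment logic invites a simple before/after comparison of a market quality metric around each event. The sketch below runs such a comparison on simulated daily quoted spreads; the data, the spread levels, and the plain t-test are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
spread_before = rng.normal(3.0, 0.4, 60)  # hypothetical daily quoted spreads (bps)
spread_after = rng.normal(2.6, 0.4, 60)   # after the flash facility is introduced

# Simple two-sample comparison of mean spreads across the event
t_stat, p_val = stats.ttest_ind(spread_after, spread_before)
print(f"change in mean spread: {spread_after.mean() - spread_before.mean():+.2f} bps "
      f"(t = {t_stat:.2f}, p = {p_val:.3f})")
```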

    Aggregate Stock Market Illiquidity and Bond Risk Premia

    We assess the effect of aggregate stock market illiquidity on U.S. Treasury bond risk premia. We find that the stock market illiquidity variable adds to the well-established Cochrane-Piazzesi and Ludvigson-Ng factors. It explains 10%, 9%, 7%, and 7% of the one-year-ahead variation in the excess return for two-, three-, four-, and five-year bonds, respectively, and increases the adjusted R² by 3-6% across all maturities over the Cochrane and Piazzesi (2005) and Ludvigson and Ng (2009) factors. The effects are highly statistically and economically significant both in and out of sample. Our result is robust to, and not driven by, information from open interest in the futures market, long-run inflation expectations, dispersion in beliefs, and funding liquidity. We argue that stock market illiquidity is a timely variable related to "flight-to-quality" episodes that may contain information about expected future business conditions through funding liquidity and investment channels.
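
    The incremental-R² exercise described above can be illustrated with a toy predictive regression: fit the excess return on the Cochrane-Piazzesi (CP) and Ludvigson-Ng (LN) factors alone, then add an illiquidity variable and compare adjusted R². The sketch below uses simulated data and plain OLS; the factor construction and coefficients are placeholders, not the paper's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 300
cp, ln, illiq = rng.normal(size=(3, T))  # simulated CP, LN, illiquidity series
excess_ret = 0.4 * cp + 0.3 * ln + 0.2 * illiq + rng.normal(scale=1.0, size=T)

# Baseline: CP and LN factors only; full model adds the illiquidity variable
base = sm.OLS(excess_ret, sm.add_constant(np.column_stack([cp, ln]))).fit()
full = sm.OLS(excess_ret, sm.add_constant(np.column_stack([cp, ln, illiq]))).fit()
print(f"adj. R²: {base.rsquared_adj:.3f} -> {full.rsquared_adj:.3f}")
```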

    Identifying Cross-Sided Liquidity Externalities

    We study the relevance of the cross-sided externality between liquidity makers and takers from a two-sided market perspective. We use exogenous changes in the make/take fee structure, the minimum tick size, and technological shocks for liquidity takers and makers as experiments to identify cross-sided complementarities between liquidity makers and takers in the U.S. equity market. We find that the externality is on average positive but decreases with adverse selection. We quantify the economic significance of the externality by evaluating an exchange's revenue after a make/take fee change.

    Economic Valuation of Liquidity Timing

    This paper conducts a horse race among liquidity proxies, using dynamic asset allocation strategies to evaluate the short-horizon predictive ability of liquidity for monthly stock returns. We assess the economic value of the out-of-sample power of empirical models based on different liquidity measures and find three key results: liquidity timing leads to tangible economic gains; a risk-averse investor will pay a high performance fee to switch from a dynamic portfolio strategy based on various liquidity measures to one that conditions on the Zeros measure (Lesmond, Ogden, and Trzcinka, 1999); and the Zeros measure outperforms other liquidity measures because of its robustness in extreme market conditions. These findings are stable over time and robust to controlling for existing market return predictors and to considering risk-adjusted returns.
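
    The Zeros measure itself has a simple, well-known definition: the fraction of trading days in a period with a zero return, with higher values indicating lower liquidity. A minimal sketch on a toy return series:

```python
import numpy as np

def zeros_measure(daily_returns: np.ndarray) -> float:
    """Fraction of days with a zero return (Lesmond, Ogden, and Trzcinka, 1999)."""
    return float(np.mean(daily_returns == 0.0))

# Toy daily return series: 4 of 8 days have a zero return
returns = np.array([0.001, 0.0, -0.002, 0.0, 0.0, 0.004, -0.001, 0.0])
print(f"Zeros measure: {zeros_measure(returns):.2f}")  # 0.50
```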

    Leveraging Artificial Intelligence Technology for Mapping Research to Sustainable Development Goals: A Case Study

    The number of publications related to the Sustainable Development Goals (SDGs) continues to grow. These publications cover a diverse spectrum of research, from the humanities and social sciences to engineering and health. Given the imperative of funding bodies to monitor outcomes and impacts, linking publications to the relevant SDGs is critical but remains time-consuming and difficult given the breadth and complexity of the SDGs. A publication may relate to several goals (the interconnected nature of the goals) and therefore requires multidisciplinary knowledge to tag accurately. Machine learning approaches are promising and have proven particularly valuable for tasks such as data labeling and text classification. In this study, we used over 82,000 publications from an Australian university as a case study. We applied a similarity measure to map these publications onto the SDGs, and we used the OpenAI GPT model to perform the same task, enabling a comparative analysis of the two approaches. Experimental results show that about 82.89% of the results obtained by the similarity measure overlap (share at least one tag) with the outputs of the GPT model. The similarity measure can therefore complement the GPT model for SDG classification. Furthermore, methods such as the similarity measure used here are more accessible and better suited to sensitive data, since they require neither commercial AI services nor the expensive computing resources needed to run large language models. Our study demonstrates how a careful combination of the two methods can achieve reliable results for mapping research to the SDGs.
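
    A hedged sketch of the similarity-based tagging pipeline this abstract describes: represent SDG descriptions and publication abstracts as vectors, tag each publication with the SDGs whose similarity clears a threshold, and measure the share of publications with at least one tag in common with a second labeller. The TF-IDF representation, the threshold, and the placeholder GPT labels below are assumptions, not the study's exact pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sdg_texts = ["end poverty in all its forms", "ensure healthy lives and well-being"]
abstracts = ["a study of health outcomes and well-being in poor households"]

# Fit one vocabulary over goals and abstracts, then compare the two sets
vec = TfidfVectorizer().fit(sdg_texts + abstracts)
sims = cosine_similarity(vec.transform(abstracts), vec.transform(sdg_texts))

THRESHOLD = 0.1  # hypothetical similarity cutoff
tags = [{j for j, s in enumerate(row) if s >= THRESHOLD} for row in sims]
gpt_tags = [{0, 1}]  # placeholder standing in for the GPT model's labels

# Share of publications whose tag sets overlap in at least one SDG
overlap = sum(bool(a & b) for a, b in zip(tags, gpt_tags)) / len(tags)
print(f"share with at least one overlapping tag: {overlap:.0%}")
```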

    Trading on Algos

    This paper studies the impact of algorithmic trading (AT) on asset prices. We find that the heterogeneity of algorithmic traders across stocks generates predictable patterns in stock returns. A trading strategy that exploits this AT return predictability generates a monthly risk-adjusted performance of 50 to 130 basis points over the period 1999 to 2012. We find that stocks with lower AT have higher returns after controlling for standard market, size, book-to-market, momentum, and liquidity risk factors. This effect survives the inclusion of many cross-sectional return predictors and is statistically and economically significant. Return predictability is stronger among stocks with higher impediments to trade and more predatory/opportunistic algorithmic traders. Our paper is the first to establish a strong link between algorithmic trading and asset prices.
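
    The cross-sectional strategy described above amounts to a sort: rank stocks on an AT proxy, buy the low-AT decile, and short the high-AT decile. The sketch below illustrates this on simulated data; the proxy, the return process, and the decile cutoffs are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stocks = 500
at_proxy = rng.uniform(size=n_stocks)  # hypothetical AT intensity per stock
# Simulated next-month returns: lower AT earns higher returns, plus noise
next_ret = -0.01 * at_proxy + rng.normal(0, 0.05, n_stocks)

# Assign decile 0 (lowest AT) through 9 (highest AT) by quantile cutoffs
deciles = np.digitize(at_proxy, np.quantile(at_proxy, np.linspace(0.1, 0.9, 9)))
long_leg = next_ret[deciles == 0].mean()   # long lowest-AT decile
short_leg = next_ret[deciles == 9].mean()  # short highest-AT decile
print(f"long-short spread: {(long_leg - short_leg) * 1e4:.0f} bps")
```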

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
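
    The core quantity is easy to state: an NSE is the dispersion of point estimates across teams testing the same hypothesis on the same data, over and above each team's own standard error. A minimal sketch with invented team estimates:

```python
import numpy as np

# Hypothetical point estimates from teams testing the same hypothesis
team_estimates = np.array([0.12, 0.08, 0.15, 0.10, 0.05, 0.11, 0.09, 0.14])

nse = team_estimates.std(ddof=1)  # across-team dispersion: the NSE
mean_se = 0.03                    # assumed average within-team standard error
print(f"NSE = {nse:.3f} vs mean standard error = {mean_se:.3f}")
```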