Rational bidding using reinforcement learning: an application in automated resource allocation
The application of autonomous agents to the provisioning and usage of computational resources is an attractive research field. Various methods and technologies from artificial intelligence, statistics, and economics come together to i) achieve autonomic provisioning and usage of computational resources, ii) invent competitive bidding strategies for widely used market mechanisms, and iii) incentivize consumers and providers to use such market-based systems.
The contributions of the paper are threefold. First, we present a framework that supports consumers and providers in technical and economic preference elicitation and in the generation of bids. Second, we introduce a consumer-side reinforcement learning bidding strategy that enables rational behavior in the generation and selection of bids. Third, we evaluate and compare this bidding strategy against a truth-telling bidding strategy for two kinds of market mechanisms, one centralized and one decentralized.
Q-Strategy: A Bidding Strategy for Market-Based Allocation of Grid Services
The application of autonomous agents to the provisioning and usage of computational services is an attractive research field. Various methods and technologies from artificial intelligence, statistics, and economics come together to i) achieve autonomic provisioning and usage of Grid services, ii) invent competitive bidding strategies for widely used market mechanisms, and iii) incentivize consumers and providers to use such market-based systems.
The contributions of the paper are threefold. First, we present a bidding agent framework for implementing artificial bidding agents, supporting consumers and providers in technical and economic preference elicitation as well as automated bid generation for the requesting and provisioning of Grid services. Second, we introduce a novel consumer-side bidding strategy that enables goal-oriented, strategic behavior in the generation and submission of consumer service requests and the selection of provider offers. Third, we evaluate and compare the Q-Strategy, implemented within the presented framework, against the truth-telling bidding strategy in three mechanisms: a centralized continuous double auction (CDA), a decentralized online machine-scheduling mechanism, and a FIFO scheduling mechanism.
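To make the idea concrete, here is a minimal Python sketch of consumer-side Q-learning over discretized bid prices; the state, reward, and simulated market below are illustrative assumptions, not the paper's actual Q-Strategy specification.

    import random
    from collections import defaultdict

    class QBidder:
        """Minimal consumer-side Q-learning bidder (illustrative sketch only).

        The state is a coarse market-quote bucket, the actions are discrete
        candidate bid prices, and the reward is the surplus (valuation minus
        price) earned when a bid executes. All of this is assumed for
        illustration; the paper's Q-Strategy is richer.
        """

        def __init__(self, bid_levels, alpha=0.1, epsilon=0.1):
            self.bid_levels = bid_levels          # discrete candidate bids
            self.alpha, self.eps = alpha, epsilon
            self.q = defaultdict(float)           # Q[(state, bid)] -> estimate

        def choose_bid(self, state):
            # epsilon-greedy selection over candidate bids
            if random.random() < self.eps:
                return random.choice(self.bid_levels)
            return max(self.bid_levels, key=lambda b: self.q[(state, b)])

        def update(self, state, bid, reward):
            # one-shot auctions: the Q-target is just the observed reward
            self.q[(state, bid)] += self.alpha * (reward - self.q[(state, bid)])

    # usage: a consumer with valuation 10 bidding into a crudely simulated market
    agent = QBidder(bid_levels=[6, 7, 8, 9, 10])
    valuation = 10
    for episode in range(10_000):
        state = "quote_high" if random.random() < 0.5 else "quote_low"
        bid = agent.choose_bid(state)
        clearing = random.uniform(7, 11) if state == "quote_high" else random.uniform(5, 9)
        reward = (valuation - bid) if bid >= clearing else 0.0   # surplus if executed
        agent.update(state, bid, reward)

The learned policy trades off execution probability against surplus per trade, which is the essential tension any such bidding strategy must resolve.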
The Strategic Exploitation of Limited Information and Opportunity in Networked Markets
This paper studies the effect of constraining interactions within a market. A model is analysed in which boundedly rational agents trade with, and gather information from, their neighbours within a trade network. It is demonstrated that a trader's ability to profit and to identify the equilibrium price is positively correlated with its degree of connectivity within the market. Where traders differ in their number of potential trading partners, well-connected traders are found to benefit from aggressive trading behaviour. Where information propagation is constrained by the topology of the trade network, connectedness affects the nature of the strategies employed.
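As a rough illustration of how such a degree-profit relationship might be measured (not the paper's model; the random network, trading rule, and equal surplus split below are all assumptions):

    import random
    from statistics import correlation   # Python 3.10+

    # Traders on a random network trade pairwise with neighbours; we check
    # whether a trader's degree correlates with its realized surplus.
    N, P_EDGE, ROUNDS = 50, 0.1, 20_000
    neighbours = {i: set() for i in range(N)}
    for i in range(N):
        for j in range(i + 1, N):
            if random.random() < P_EDGE:
                neighbours[i].add(j); neighbours[j].add(i)

    value = {i: random.uniform(0, 100) for i in range(N)}   # private valuations
    profit = {i: 0.0 for i in range(N)}

    for _ in range(ROUNDS):
        i = random.randrange(N)
        if not neighbours[i]:
            continue
        j = random.choice(sorted(neighbours[i]))
        buyer, seller = (i, j) if value[i] > value[j] else (j, i)
        price = (value[buyer] + value[seller]) / 2          # split the surplus
        profit[buyer] += value[buyer] - price
        profit[seller] += price - value[seller]

    degrees = [len(neighbours[i]) for i in range(N)]
    gains = [profit[i] for i in range(N)]
    print("degree/profit correlation:", correlation(degrees, gains))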
Strategies used as spectroscopy of financial markets reveal new stylized facts
We propose a new set of stylized facts quantifying the structure of financial markets. The key idea is to study the combined structure of both investment strategies and prices in order to open a qualitatively new level of understanding of financial and economic markets. We study the detailed order flow on the Shenzhen Stock Exchange of China for the whole year of 2003. This enormous dataset allows us to compare (i) a closed national market (A-shares) with an international market (B-shares), (ii) individuals and institutions, and (iii) real investors to strategies that are random with respect to timing but share all other characteristics. We find that more trading results in smaller net returns due to trading frictions. We unveil quantitative power laws with non-trivial exponents that quantify the deterioration of performance with the frequency and with the holding period of the strategies used by investors. Random strategies are found to perform much better than real ones, both for winners and losers. Surprisingly large arbitrage opportunities exist, especially when using zero-intelligence strategies. This is a diagnostic of possible inefficiencies of these financial markets.
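One way to picture the comparison against timing-randomized strategies is the following sketch: given a record of real trades, shuffle only the entry times, keep size, direction, and holding period fixed, and recompute the net return. The trade-record schema and friction parameter are assumptions for illustration, not the paper's data format.

    import random

    def randomize_timing(trades, n_times):
        """Return a copy of a trade list with entry times drawn uniformly at
        random, keeping size, direction, and holding period unchanged."""
        out = []
        for t in trades:
            entry = random.randrange(n_times - t["hold"])
            out.append({**t, "entry": entry})
        return out

    def net_return(trades, prices, cost_per_trade=0.001):
        """Sum of per-trade returns minus a proportional trading friction."""
        total = 0.0
        for t in trades:
            p_in = prices[t["entry"]]
            p_out = prices[t["entry"] + t["hold"]]
            gross = t["side"] * t["size"] * (p_out - p_in) / p_in
            total += gross - cost_per_trade * t["size"]
        return total

    # usage: compare a real trade record against many timing-randomized copies
    prices = [100 + random.gauss(0, 1) for _ in range(1000)]
    real = [{"entry": 10 * k, "hold": 5, "size": 1.0, "side": 1} for k in range(50)]
    random_runs = [net_return(randomize_timing(real, len(prices)), prices)
                   for _ in range(200)]
    print(net_return(real, prices), sum(random_runs) / len(random_runs))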
Terminal valuations, growth rates and the implied cost of capital
We develop a model based on the notion that prices lead earnings, allowing for a simultaneous estimation of the implied growth rate and the cost of equity capital for US industrial sectors. The major difference between our approach and that in the prior literature is that ours avoids the necessity of making assumptions about terminal values and, consequently, about future growth rates. In fact, growth rates are an endogenous variable, estimated simultaneously with the implied cost of equity capital. Since we require only 1-year-ahead earnings forecasts and no assumptions about dividend payouts, our methodology allows us to estimate ex ante aggregate growth and risk premia over a larger sample of firms than has previously been possible. Our estimate of the risk premium, between 3.1% and 3.9%, is at the lower end of recent estimates, reflecting the inclusion of these short-lived companies. Our estimate of long-run growth ranges from 4.2% to 4.7%.
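For contrast, the standard ICC formulation that the authors avoid requires an explicit terminal-value assumption; a generic textbook form (not the paper's own model) reads:

    % Solve for r given forecast earnings E_t over an explicit horizon T
    % and an *assumed* terminal growth rate g; the paper instead treats g
    % as endogenous and estimates (r, g) jointly.
    P_0 = \sum_{t=1}^{T} \frac{E_t}{(1+r)^t}
          + \underbrace{\frac{E_{T+1}}{(r-g)\,(1+r)^{T}}}_{\text{terminal value}}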
Implied cost of capital investment strategies - evidence from international stock markets
Investors can generate excess returns by implementing trading strategies based on publicly available equity analyst forecasts. This paper captures the information provided by analysts through the implied cost of capital (ICC), the internal rate of return that equates a firm's share price to the present value of analysts' earnings forecasts.
We find that U.S. stocks with a high ICC outperform low-ICC stocks on average by 6.0% per year. This spread remains significant when the investment returns are controlled for their risk exposure, as proxied by standard pricing models. Further analysis across the world's largest equity markets validates these results.
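As a sketch of the ICC definition in code: the internal rate of return that equates price to the present value of forecast earnings can be found by bisection. The two-stage structure (explicit forecasts plus a growing perpetuity) and the terminal growth rate below are simplifying assumptions, not the paper's exact specification.

    def icc(price, forecasts, terminal_growth=0.02):
        """Internal rate of return r such that the present value of the
        earnings forecasts, plus a growing-perpetuity terminal value after
        the last forecast, equals the share price (illustrative)."""
        def pv(r):
            total = sum(e / (1 + r) ** t for t, e in enumerate(forecasts, start=1))
            last = forecasts[-1] * (1 + terminal_growth)
            return total + last / ((r - terminal_growth) * (1 + r) ** len(forecasts))

        lo = terminal_growth + 1e-6   # r must exceed g for the perpetuity to converge
        hi = 1.0
        for _ in range(200):          # bisection: pv(r) is decreasing in r
            mid = (lo + hi) / 2
            if pv(mid) > price:
                lo = mid              # present value too high -> raise the rate
            else:
                hi = mid
        return (lo + hi) / 2

    # usage: share price 50, analyst EPS forecasts for the next three years
    print(icc(50.0, [3.0, 3.3, 3.6]))   # roughly 0.088, i.e. an ICC near 8.8%

Sorting a stock universe by this number and going long the top decile against the bottom decile is the kind of strategy the paper evaluates.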
Which market protocols facilitate fair trading?
The evaluation of an exchange market is a multi-faceted problem. An important criterion is the ability to achieve allocative efficiency. Gode and Sunder (1993) show that a continuous double auction for single-unit trades leads to an efficient allocation even when the traders exhibit "zero intelligence"; in other words, market protocols are active contributors in the search for a better outcome. Under reasonable circumstances, most of the commonly used market protocols share the ability to help traders discover an efficient allocation.
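A minimal sketch of the Gode-and-Sunder-style experiment: "zero-intelligence constrained" traders quote random prices that can never produce a loss, and a continuous double auction still extracts most of the available surplus. The quote ranges and matching rule below are simplified assumptions.

    import random

    def efficient_surplus(values, costs):
        # Maximum surplus: match highest-value buyers with lowest-cost sellers.
        return sum(v - c
                   for v, c in zip(sorted(values, reverse=True), sorted(costs))
                   if v > c)

    def zi_cda(values, costs, steps=50_000):
        # ZI-C traders quote uniformly at random but never at a loss; a trade
        # clears whenever the best bid crosses the best ask (single-unit traders).
        buyers, sellers = list(values), list(costs)
        best_bid = best_ask = None
        bidder = asker = None
        surplus = 0.0
        for _ in range(steps):
            if not buyers or not sellers:
                break
            if random.random() < 0.5:
                i = random.randrange(len(buyers))
                quote = random.uniform(0, buyers[i])      # bid <= own value
                if best_bid is None or quote > best_bid:
                    best_bid, bidder = quote, i
            else:
                i = random.randrange(len(sellers))
                quote = random.uniform(sellers[i], 200)   # ask >= own cost
                if best_ask is None or quote < best_ask:
                    best_ask, asker = quote, i
            if best_bid is not None and best_ask is not None and best_bid >= best_ask:
                surplus += buyers[bidder] - sellers[asker]   # gains from trade
                buyers.pop(bidder)
                sellers.pop(asker)
                best_bid = best_ask = None                   # clear the book
        return surplus

    values = [random.uniform(50, 150) for _ in range(20)]    # buyer valuations
    costs = [random.uniform(50, 150) for _ in range(20)]     # seller costs
    print("allocative efficiency:",
          zi_cda(values, costs) / efficient_surplus(values, costs))

Runs of this kind typically report efficiency well above 90%, which is the sense in which the protocol, not trader intelligence, does the work.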
Adaptive-Aggressive Traders Don't Dominate
For more than a decade, Vytelingum's Adaptive-Aggressive (AA) algorithm has been recognized as the best-performing automated auction-market trading-agent strategy known in the AI/agents literature; in this paper, we demonstrate that it is in fact routinely outperformed by another algorithm when exhaustively tested across a sufficiently wide range of market scenarios. The novel step taken here is to use large-scale compute facilities to exhaustively evaluate AA by brute force in a variety of market environments based on those used for testing it in the original publications. Our results show that even in these simple environments AA is consistently outperformed by IBM's GDX algorithm, first published in 2002. We summarize here results from more than one million market simulation experiments, orders of magnitude more testing than was reported in the publications that first introduced AA. A 2019 ICAART paper by Cliff claimed that AA's failings were revealed by testing it in more realistic experiments, with conditions closer to those found in real financial markets, but here we demonstrate that even under the simple experimental conditions used in the original AA papers, exhaustive testing shows AA to be outperformed by GDX. We close this paper with a discussion of the methodological implications of our work: any results from previous papers in which one trading algorithm is claimed to be superior to others on the basis of only a few thousand trials are probably best treated with some suspicion now. The rise of cloud computing means that the compute power necessary to subject trading algorithms to millions of trials over a wide range of conditions is readily available at reasonable cost: we should make use of it, and exhaustive testing such as that shown here should be the norm in future evaluations and comparisons of new trading algorithms.
To be published as a chapter in "Agents and Artificial Intelligence", edited by Jaap van den Herik, Ana Paula Rocha, and Luc Steels; forthcoming 2019/2020.
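The methodological point generalizes into a simple experiment harness: sweep a grid of market environments with many trials per cell, in parallel. The session function below is a hypothetical placeholder, not the authors' AA or GDX code.

    import itertools
    import random
    from multiprocessing import Pool
    from statistics import mean

    def run_session(args):
        """Hypothetical placeholder for one market session: returns the profit
        difference between two competing strategies under a given market setup.
        A real study would plug in actual AA and GDX implementations here."""
        supply_slope, demand_slope, trial = args
        random.seed(trial)
        return random.gauss(0.1 * (supply_slope - demand_slope), 1.0)

    if __name__ == "__main__":
        # sweep a grid of market environments, many i.i.d. trials per cell
        grid = list(itertools.product([0.5, 1.0, 2.0],     # supply slopes
                                      [0.5, 1.0, 2.0],     # demand slopes
                                      range(10_000)))      # trials per cell
        with Pool() as pool:
            results = pool.map(run_session, grid)
        print(f"{len(results)} sessions, mean profit gap {mean(results):.4f}")

At 90,000 sessions per run, a sweep like this is cheap on commodity cloud hardware, which is precisely the paper's argument for making exhaustive testing the norm.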
