Is It Real, or Is It Randomized?: A Financial Turing Test
We construct a financial "Turing test" to determine whether human subjects
can differentiate between actual vs. randomized financial returns. The
experiment consists of an online video-game (http://arora.ccs.neu.edu) where
players are challenged to distinguish actual financial market returns from
random temporal permutations of those returns. We find overwhelming statistical
evidence (p-values no greater than 0.5%) that subjects can consistently
distinguish between the two types of time series, thereby refuting the
widespread belief that financial markets "look random." A key feature of the
experiment is that subjects are given immediate feedback regarding the validity
of their choices, allowing them to learn and adapt. We suggest that such novel
interfaces can harness human capabilities to process and extract information
from financial data in ways that computers cannot.
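As a minimal sketch of the randomization step: a temporal permutation preserves the marginal distribution of returns while destroying their ordering. The function name and data below are illustrative, not the experiment's actual code.

```python
import numpy as np

# A random temporal permutation keeps the same multiset of returns
# (hence the same mean, variance, and histogram) but destroys any
# temporal structure: the property the test exploits. Illustrative
# sketch, not the experiment's implementation.

def permute_returns(returns: np.ndarray, seed: int | None = None) -> np.ndarray:
    rng = np.random.default_rng(seed)
    return rng.permutation(returns)

actual = np.array([0.012, -0.007, 0.003, -0.015, 0.009])
randomized = permute_returns(actual, seed=0)
assert sorted(actual) == sorted(randomized)  # same values, different order
```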
A Computational View of Market Efficiency
We propose to study market efficiency from a computational viewpoint.
Borrowing from theoretical computer science, we define a market to be
\emph{efficient with respect to resources $S$} (e.g., time, memory) if no
strategy using resources $S$ can make a profit. As a first step, we consider
memory-$m$ strategies whose action at time $t$ depends only on the $m$
previous observations at times $t-m, \dots, t-1$. We introduce and study a
simple model of market evolution, where strategies impact the market by
their decision to buy or sell. We show that the effect of optimal strategies
using memory $m$ can lead to "market conditions" that were not present
initially, such as (1) market bubbles and (2) the possibility for a strategy
using memory $m+1$ to make a bigger profit than was initially possible. We
suggest ours as a framework to rationalize the technological arms race of
quantitative trading firms.
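A minimal sketch of the memory-$m$ restriction follows. The toy harness, market data, and momentum rule are our illustrations, not the paper's evolution model.

```python
from collections import deque
from typing import Callable

# A memory-m strategy's action at time t may depend only on the m most
# recent observations. This toy harness enforces that restriction; the
# price data and decision rule below are illustrative only.

def run_memory_m_strategy(
    prices: list[float],
    m: int,
    decide: Callable[[tuple[float, ...]], int],  # +1 buy, -1 sell, 0 hold
) -> float:
    window: deque[float] = deque(maxlen=m)  # only the last m observations visible
    cash, position = 0.0, 0
    for p in prices:
        if len(window) == m:
            action = decide(tuple(window))  # decision uses the memory only
            cash -= action * p              # trade one unit at current price
            position += action
        window.append(p)
    return cash + position * prices[-1]     # mark-to-market profit

# Example: a memory-2 momentum rule: buy after an up move, else sell.
profit = run_memory_m_strategy(
    [100.0, 101.0, 103.0, 102.0, 104.0, 103.0],
    m=2,
    decide=lambda w: 1 if w[-1] > w[0] else -1,
)
print(profit)
```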
Life-cycle economics with macroeconomic shocks
This thesis employs multi-period overlapping generations (OLG)
models with aggregate risk to study questions at the interface
of macroeconomics, public finance, and finance.
The first chapter addresses the equity premium puzzle. The
equity premium puzzle refers to the inability of standard
models to reproduce the large difference, on average, between
the returns to risky and safe assets observed in the data.
First proposed three decades ago, this puzzle has remained a
challenge to economics. A large literature has tried to resolve
it using complex machinery, such as nonstandard preferences or
risk structures. I show that to resolve the puzzle it suffices
to impose increasing marginal costs of borrowing in an
otherwise standard OLG model.
In the second chapter, co-authored with Laurence Kotlikoff, we
quantify the size of generational risk in an 80-period OLG
model for the first time. Generational risk is the extent to
which aggregate shocks are spread across contemporaneous
generations. Under perfect risk-sharing, the consumption of all
generations changes by the same percentage when a shock hits.
The deviation from that perfect risk-sharing world is a measure
of generational risk. Contrary to standard assumptions in the
literature, we find that generational risk is small and that
government policy can easily exacerbate it. We also show that a
bond market can mitigate risk-inducing policy.
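As a minimal illustration of that measure (the cohort numbers below are made up, not the chapter's model output):

```python
import numpy as np

# Under perfect risk-sharing, every living generation's consumption moves
# by the same percentage when a shock hits, so the cross-cohort dispersion
# of percentage consumption responses is one simple gauge of generational
# risk. Numbers here are made up for illustration.

pct_change = np.array([-2.1, -2.0, -2.2, -1.9])  # consumption response (%) by cohort
generational_risk = np.std(pct_change)           # exactly 0 under perfect sharing
print(f"dispersion of cohort responses: {generational_risk:.2f} percentage points")
```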
In the third chapter, also co-authored with Kotlikoff, we
consider a long-standing problem: how to value government
obligations when markets are incomplete. Our approach consists
of determining the current wealth equivalent of future
government promises using the change in remaining expected
lifetime utility converted into current consumption units. We
find that discount rates for policies involving sure payments
each period to the elderly aren't uniform over time or across
agents of different cohorts. They also depend on the size of the
payments and attendant general equilibrium effects. For
infinitesimal promises, the discount rates are remarkably close
to the prevailing short-term interest rate.
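One way to write that wealth-equivalent definition down, as a sketch (the notation $V_t$, $a$, $W$ is ours, not the thesis's):

```latex
\[
  V_t\bigl(a + W \,\big|\, \text{no promise}\bigr)
  \;=\;
  V_t\bigl(a \,\big|\, \text{promise}\bigr)
\]
% V_t(a): remaining expected lifetime utility at time t with assets a.
% W: the current wealth (consumption-unit) equivalent of the promised
% payments; the implied discount rate is the rate that equates W to
% the present value of those payments.
```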
A technical contribution of this thesis is to solve models
of this kind for the first time, by employing and extending
the method of Judd, Maliar, and Maliar (2011).
Investments unwrapped: demystifying and automating technical analysis and hedge-fund strategies
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 563-570).

In this thesis we use nonlinear and linear estimation techniques to model two common investment strategies: hedge funds and technical analysis. Our models provide transparent and low-cost alternatives to these two nontransparent, and in some cases prohibitively costly, financial approaches. In the case of hedge funds, we estimate linear factor models to create passive replicating portfolios of common exchange-traded instruments that provide risk exposures similar to those of hedge funds, but at lower cost and with greater transparency. While the performance of linear clones is generally inferior to that of their hedge-fund counterparts, in some cases the clones perform well enough to warrant serious consideration as low-cost passive alternatives to hedge funds.

In the case of technical analysis - also known as "charting" - we develop an algorithm based on neural networks that formalizes and automates the highly subjective technical practice of detecting, with the naked eye, certain geometric patterns that appear on price charts and are believed to have predictive value. We then evaluate the predictive ability of these patterns by applying our algorithm to price data for a number of stocks and currencies over many time periods, and comparing the unconditional distribution of returns to the return distribution conditional on the occurrence of the patterns. We find that several technical patterns do provide incremental information, suggesting that technical analysis may add value to the investment process. To further demystify this highly controversial practice, we complement our implementation and validation study with a historical overview of the field and interviews with its leading practitioners.

by Jasmina Hasanhodzic. Ph.D.
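A minimal sketch of the linear-clone idea, assuming monthly return series as NumPy arrays (the factor set, simulated data, and renormalization here are illustrative assumptions, not the thesis's):

```python
import numpy as np

# Linear clone: regress a fund's returns on a few liquid factor returns,
# then hold the estimated betas as fixed portfolio weights. The factors
# and data below are placeholders for illustration only.

def clone_weights(fund: np.ndarray, factors: np.ndarray) -> np.ndarray:
    X = np.column_stack([np.ones(len(fund)), factors])  # intercept + factors
    beta, *_ = np.linalg.lstsq(X, fund, rcond=None)     # OLS factor loadings
    return beta[1:]                                      # drop the intercept

rng = np.random.default_rng(0)
factors = rng.normal(0.005, 0.03, size=(60, 3))          # 60 months, 3 factors
fund = factors @ np.array([0.5, 0.3, 0.1]) + rng.normal(0, 0.01, 60)

w = clone_weights(fund, factors)
clone_returns = factors @ w                               # passive replicating portfolio
print(np.round(w, 2))
```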
Neural network based pattern recognition of technical trading indicators, statistical evaluation of their predictive value and an historical overview of the field
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 149-156).

We revisit the kernel regression based pattern recognition algorithm designed by Lo, Mamaysky, and Wang (2000) to extract nonlinear patterns from noisy price data, and develop an analogous neural network based one. We argue that, given the natural flexibility of neural network models and the extent of parallel processing they allow, our algorithm is a step forward in the automation of technical analysis. More importantly, following the approach proposed by Lo, Mamaysky, and Wang, we apply our neural network based model to examine empirically the ability of the patterns under consideration to add value to the investment process. Like Lo, Mamaysky, and Wang, we find overwhelming support for the validity of these indicators. Moreover, this basic conclusion appears to remain valid across different levels of smoothing and insensitive to the nuances of pattern definitions present in the technical analysis literature. This confirms that Lo, Mamaysky, and Wang's results are not an artifact of their kernel regression model, and suggests that the kinds of nonlinearities technical indicators are designed to capture reflect underlying properties of the financial time series itself. Finally, we complement our empirical analysis with a historical one, focusing on the origins of trading and speculation in general, and of technical analysis in particular.

by Jasmina Hasanhodzic. S.M.
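A minimal sketch of the smooth-then-detect pipeline that both the kernel and neural network variants share. The moving-average smoother, the 1.5% shoulder band, and the data are illustrative stand-ins for the thesis's fitted curves and pattern definitions.

```python
import numpy as np

# Pipeline: fit a smooth curve through noisy prices, locate its local
# extrema, and test whether a run of extrema matches a pattern template.
# Smoother, threshold, and prices below are illustrative only.

def smooth(prices: np.ndarray, k: int = 3) -> np.ndarray:
    return np.convolve(prices, np.ones(k) / k, mode="valid")

def extrema_indices(s: np.ndarray) -> list[int]:
    d = np.diff(s)
    return [t for t in range(1, len(d)) if d[t - 1] * d[t] < 0]  # slope sign change

def is_head_and_shoulders(e: list[float]) -> bool:
    # Expects five alternating extrema: shoulder (max), trough, head (max),
    # trough, shoulder; the head must top both roughly level shoulders.
    s1, t1, head, t2, s2 = e
    return (head > s1 and head > s2
            and t1 < min(s1, s2) and t2 < min(s1, s2)
            and abs(s1 - s2) <= 0.015 * (s1 + s2) / 2)  # shoulders roughly level

prices = np.array([10.0, 11.2, 12.1, 11.0, 10.2, 12.3,
                   14.1, 12.2, 10.1, 11.1, 12.0, 11.1, 10.0])
s = smooth(prices)
idx = extrema_indices(s)
if len(idx) >= 5:
    print(is_head_and_shoulders([float(s[i]) for i in idx[-5:]]))  # True here
```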
Simulating the Blanchard Conjecture in a Multiperiod Life Cycle Model
In recent writings, Olivier Blanchard has suggested that when the safe rate on government debt is less than the economy's growth rate, additional deficit-financed US federal spending would come at no cost to any future generation, with benefits to some. This paper studies this question in a ten-period OLG CGE model with aggregate risk, whose safe rate averages -2 percent annually and whose growth rate is zero. It shows that welfare losses to future generations resulting from the introduction of pay-go social security, financed with a 15 percent payroll tax, are roughly 20 percent, measured as a compensating variation relative to no policy.
Increasing Borrowing Costs and the Equity Premium
Simulating a realistic-sized equity premium in macroeconomic models has proved a daunting challenge, hence the “equity premium puzzle”. “Resolving” the puzzle requires heavy lifting. Precise choices of particular preferences, shocks, technologies, and hard borrowing constraints can do the trick, but haven't stopped the search for a simpler and more robust solution. This paper suggests that soft borrowing costs, rapidly rising in the amount borrowed, embedded in an otherwise standard OLG model can work. The model features isoelastic preferences with modest risk aversion, Cobb-Douglas production, realistic shocks, and reasonable fiscal policy. Absent borrowing costs, the model's equity premium is extremely small. Adding the costs readily produces large equity premiums. These results echo, but also differ from, those of Constantinides, Donaldson, and Mehra (2002). In their model, hard borrowing constraints on the young can produce large equity premiums. Here, soft but rising borrowing costs on all generations are needed. The solution method builds on Hasanhodzic and Kotlikoff (2013), which uses Marce
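A minimal illustration of "soft" borrowing costs that rise rapidly in the amount borrowed, contrasted with a "hard" constraint. The quadratic form and coefficient are our assumptions, not the paper's specification.

```python
# A "hard" constraint forbids borrowing beyond a limit outright; a "soft"
# cost lets agents borrow any amount but charges a convex, rapidly rising
# penalty. Functional form and coefficient are illustrative assumptions.

PHI = 0.5  # curvature of the soft borrowing cost (assumed)

def soft_borrowing_cost(b: float) -> float:
    """Total cost rises quadratically, so the marginal cost rises linearly."""
    return PHI * max(b, 0.0) ** 2  # only borrowing (b > 0) is penalized

def hard_constraint_ok(b: float, limit: float) -> bool:
    return b <= limit  # borrowing beyond the limit is simply infeasible

for b in [0.0, 0.5, 1.0, 2.0]:
    print(f"b={b:.1f}  soft cost={soft_borrowing_cost(b):.2f}  "
          f"hard (limit=1.0) feasible={hard_constraint_ok(b, 1.0)}")
```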