A First Application of Independent Component Analysis to Extracting Structure from Stock Returns
This paper discusses the application of a modern signal processing technique known as independent
component analysis (ICA) or blind source separation to multivariate financial time series such as a
portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new
space of statistically independent components (ICs). This can be viewed as a factorization of the portfolio
since joint probabilities become simple products in the coordinate system of the ICs.
We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with
those obtained using principal component analysis. The results indicate that the estimated ICs fall into two
categories, (i) infrequent but large shocks (responsible for the major changes in the stock prices), and (ii)
frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall
stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs.
In contrast, when using shocks derived from principal components instead of independent components, the
reconstructed price is less similar to the original one. Independent component analysis is a potentially powerful
method of analyzing and understanding driving mechanisms in financial markets. There are further
promising applications to risk management since ICA focuses on higher-order statistics.
Information Systems Working Papers Series
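The factorization step the abstract describes — linearly unmixing observed returns into independent components, then reconstructing from a few thresholded ICs — can be sketched with a generic ICA routine. The synthetic data, component count, and threshold below are illustrative assumptions, not the paper's actual Japanese-stock setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_days, n_stocks = 750, 5  # roughly three years of daily returns

# Hypothetical heavy-tailed latent drivers; ICA needs non-Gaussian
# sources to be identifiable.
sources = rng.laplace(size=(n_days, n_stocks))
mixing = rng.normal(size=(n_stocks, n_stocks))
returns = sources @ mixing.T  # observed multivariate "returns"

# Linearly map the observations into independent components (ICs).
ica = FastICA(n_components=n_stocks, random_state=0)
ics = ica.fit_transform(returns)  # shape: (n_days, n_stocks)

# Keep only large IC values (the "shocks") and map them back to the
# observation space, mirroring the thresholded reconstruction idea.
threshold = 2.0 * ics.std(axis=0)
shocks_only = np.where(np.abs(ics) > threshold, ics, 0.0)
reconstructed = shocks_only @ ica.mixing_.T + ica.mean_

print(ics.shape, reconstructed.shape)
```

Cumulatively summing `reconstructed[:, k]` would give the shock-only "price" path for stock `k`, which the paper compares against the original price.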
Correlative Information Maximization Based Biologically Plausible Neural Networks for Correlated Source Separation
The brain effortlessly extracts latent causes of stimuli, but how it does
this at the network level remains unknown. Most prior attempts at this problem
proposed neural networks that implement independent component analysis which
works under the limitation that latent causes are mutually independent. Here,
we relax this limitation and propose a biologically plausible neural network
that extracts correlated latent sources by exploiting information about their
domains. To derive this network, we choose maximum correlative information
transfer from inputs to outputs as the separation objective under the
constraint that the outputs are restricted to their presumed sets. The online
formulation of this optimization problem naturally leads to neural networks
with local learning rules. Our framework incorporates infinitely many source
domain choices and flexibly models complex latent structures. Choices of
simplex or polytopic source domains result in networks with piecewise-linear
activation functions. We provide numerical examples to demonstrate the superior
correlated source separation capability for both synthetic and natural sources.
Comment: Preprint, 32 pages
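The limitation this work relaxes can be made concrete: a standard ICA routine decorrelates (whitens) its outputs, so genuinely correlated latent sources cannot be recovered as-is. A minimal sketch on synthetic data — the particular correlation structure below is an assumption for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n = 2000

# Two latent sources built from a shared component, so they are
# correlated — violating standard ICA's independence assumption.
shared = rng.laplace(size=n)
s1 = shared + 0.1 * rng.laplace(size=n)
s2 = 0.8 * shared + 0.6 * rng.laplace(size=n)
corr_true = np.corrcoef(s1, s2)[0, 1]

# Mix the sources linearly and try to unmix with plain ICA.
mixed = np.column_stack([s1, s2]) @ rng.normal(size=(2, 2)).T
est = FastICA(n_components=2, random_state=0).fit_transform(mixed)
corr_est = np.corrcoef(est[:, 0], est[:, 1])[0, 1]

# ICA's whitening step forces the estimated components to be
# (near-)uncorrelated, so they cannot match the correlated sources.
print(round(corr_true, 2), round(abs(corr_est), 4))
```

Exploiting known source domains (e.g. simplex or polytopic sets, as proposed here) is one way to restore identifiability when independence fails.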
Information theoretic approaches to source separation
Thesis (M.S.) -- Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (p. 82-88). Paris J. Smaragdis, M.S.
- …