A First Application of Independent Component Analysis to Extracting Structure from Stock Returns
This paper discusses the application of a modern signal processing technique known as independent
component analysis (ICA) or blind source separation to multivariate financial time series such as a
portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new
space of statistically independent components (ICs). This can be viewed as a factorization of the portfolio
since joint probabilities become simple products in the coordinate system of the ICs.
We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with
those obtained using principal component analysis. The results indicate that the estimated ICs fall into two
categories: (i) infrequent but large shocks (responsible for the major changes in the stock prices), and (ii)
frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall
stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs.
In contrast, when using shocks derived from principal components instead of independent components, the
reconstructed price is less similar to the original one. Independent component analysis is a potentially powerful
method of analyzing and understanding driving mechanisms in financial markets. There are further
promising applications to risk management since ICA focuses on higher-order statistics.
Information Systems Working Papers Series
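The linear factorization described above can be sketched with scikit-learn on synthetic data. The sample sizes, source distributions, and thresholding rule below are illustrative assumptions, not the paper's exact procedure or data:

```python
# Minimal sketch of the ICA idea: observed returns X are modeled as a linear
# mix of statistically independent sources S (X = S @ A.T), and ICA estimates
# the unmixing. Synthetic data stands in for the 28 Japanese stocks.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_days, n_stocks, n_sources = 750, 5, 3   # ~3 years of daily data (toy sizes)

# Two source "types" as in the abstract: rare large shocks vs. small noise
shocks = rng.standard_t(df=2, size=(n_days, 1))   # heavy-tailed shock source
noise = rng.laplace(scale=0.3, size=(n_days, 2))  # frequent small fluctuations
S = np.hstack([shocks, noise])

A = rng.normal(size=(n_stocks, n_sources))  # unknown mixing matrix
X = S @ A.T                                 # observed "returns"

ica = FastICA(n_components=n_sources, random_state=0)
S_hat = ica.fit_transform(X)                # estimated independent components

# Thresholded reconstruction: keep only the large entries of each IC, then
# map back to return space, mimicking the shock-based reconstruction idea
S_thr = np.where(np.abs(S_hat) > 2 * S_hat.std(axis=0), S_hat, 0.0)
X_rec = ica.inverse_transform(S_thr)
```

In this sketch the heavy-tailed source plays the role of the "infrequent but large shocks" category, and the thresholding discards the small fluctuations before reconstruction.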
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
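The tensor train (TT) format emphasized above can be illustrated with a small NumPy sketch of the standard TT-SVD scheme. The toy tensor, its exact TT ranks, and the tolerance are assumptions chosen for illustration:

```python
# Toy TT-SVD: factor a 3rd-order tensor into a chain of small cores via two
# successive SVDs. The tensor is built to have exact TT ranks (2, 2), so the
# decomposition reconstructs it to machine precision without truncation.
import numpy as np

rng = np.random.default_rng(1)
I, J, K, r = 4, 5, 6, 2

# Construct a tensor with exact TT structure: T_ijk = sum G1[a,i,b] G2[b,j,c] G3[c,k,d]
G1 = rng.normal(size=(1, I, r))
G2 = rng.normal(size=(r, J, r))
G3 = rng.normal(size=(r, K, 1))
T = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)

def tt_svd(T, eps=1e-10):
    """Decompose a 3rd-order tensor into TT cores by successive SVDs."""
    I, J, K = T.shape
    # First unfolding: rows index i, columns index (j, k)
    U, s, Vt = np.linalg.svd(T.reshape(I, J * K), full_matrices=False)
    r1 = int((s > eps * s[0]).sum())
    C1 = U[:, :r1].reshape(1, I, r1)
    rest = (s[:r1, None] * Vt[:r1]).reshape(r1 * J, K)
    # Second unfolding: rows index (r1, j), columns index k
    U2, s2, Vt2 = np.linalg.svd(rest, full_matrices=False)
    r2 = int((s2 > eps * s2[0]).sum())
    C2 = U2[:, :r2].reshape(r1, J, r2)
    C3 = (s2[:r2, None] * Vt2[:r2]).reshape(r2, K, 1)
    return C1, C2, C3

C1, C2, C3 = tt_svd(T)
T_rec = np.einsum('aib,bjc,ckd->ijk', C1, C2, C3)
```

The storage drops from I*J*K entries to the cores' entries, which is the "super-compressed" representation the monograph refers to; for higher-order tensors the same SVD sweep is repeated mode by mode.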
Ecosystem Monitoring and Port Surveillance Systems
In this project, we aim to build a novel system able to perform sustainable, long-term monitoring of coastal marine ecosystems and to enhance port surveillance capability. The outcomes will be based on the analysis, classification, and fusion of a variety of heterogeneous data collected using different sensors (hydrophones, sonars, various camera types, etc.). This manuscript introduces the identified approaches and the system structure. In addition, it focuses on techniques and concepts developed to deal with several problems related to our project. The new system will address the shortcomings of traditional approaches based on measuring environmental parameters, which are expensive and fail to provide adequate large-scale monitoring. More efficient monitoring will also enable improved analysis of climate change and provide knowledge informing the civil authority's economic relationship with its coastal marine ecosystems.
Convolutive Blind Source Separation Methods
In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy in which many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
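The convolutive mixing model that these algorithms address can be sketched as follows. The random FIR filters below stand in for real room impulse responses, and the example shows only the model, not any particular separation algorithm from the chapter:

```python
# Convolutive mixing model: each microphone observes a sum of sources filtered
# by (room) impulse responses, x_i(t) = sum_j (h_ij * s_j)(t). In the
# frequency domain this becomes per-bin instantaneous mixing,
# X_i(f) = sum_j H_ij(f) S_j(f), which is why many methods run an
# instantaneous ICA independently in each STFT frequency bin.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_taps = 2000, 8

s = rng.laplace(size=(2, n_samples))          # two independent sources
h = rng.normal(size=(2, 2, n_taps)) / n_taps  # toy FIR mixing filters h[i, j]

n_out = n_samples + n_taps - 1
x = np.zeros((2, n_out))
for i in range(2):
    for j in range(2):
        x[i] += np.convolve(h[i, j], s[j])    # convolutive mixture

# Verify the frequency-domain identity (zero-padding makes the circular
# convolution of the DFT equal to the linear convolution above)
H = np.fft.rfft(h, n=n_out, axis=-1)          # H[i, j, f]
S = np.fft.rfft(s, n=n_out, axis=-1)          # S[j, f]
X = np.fft.rfft(x, axis=-1)                   # X[i, f]
err = np.max(np.abs(X - np.einsum('ijf,jf->if', H, S)))
```

The per-bin formulation also exposes the permutation and scaling ambiguities that frequency-domain methods must resolve across bins.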