Tensor Regression
Regression analysis is a key area of interest in the field of data analysis
and machine learning which is devoted to exploring the dependencies between
variables, often using vectors. The emergence of high dimensional data in
technologies such as neuroimaging, computer vision, climatology and social
networks, has brought challenges to traditional data representation methods.
Tensors, as high dimensional extensions of vectors, are natural
representations of such data. In this book, the authors provide a
systematic study and analysis of tensor-based regression models and their
applications in recent years. The book groups and illustrates the existing
tensor-based regression methods and covers the basics, core ideas, and
theoretical characteristics of most of them. In
addition, readers can learn how to use existing tensor-based regression methods
to solve specific regression tasks with multiway data, which datasets can be
selected, and which software packages are available to start related work as
soon as possible. Tensor Regression is the first thorough overview of the
fundamentals, motivations, popular algorithms, strategies for efficient
implementation, related applications, available datasets, and software
resources for tensor-based regression analysis. It is essential reading for all
students, researchers, and practitioners working on high dimensional data.
Comment: 187 pages, 32 figures, 10 tables
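To make the setting concrete, here is a minimal sketch (not from the book) of one common tensor-regression formulation: a scalar response modeled as the inner product of a low-rank coefficient tensor with a matrix-valued covariate, fitted by alternating least squares. The rank-1 model and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic matrix-variate regression: y_i = <w1 w2^T, X_i> + noise,
# i.e. a rank-1 (CP) coefficient tensor over 8x6 matrix covariates.
n, p, q = 200, 8, 6
w1_true = rng.standard_normal(p)
w2_true = rng.standard_normal(q)
X = rng.standard_normal((n, p, q))
y = np.einsum('ipq,p,q->i', X, w1_true, w2_true) + 0.01 * rng.standard_normal(n)

# Alternating least squares: with one factor fixed, the model reduces
# to an ordinary linear regression in the other factor.
w1 = rng.standard_normal(p)
w2 = rng.standard_normal(q)
for _ in range(30):
    A1 = X @ w2                          # (n, p) design when w2 is fixed
    w1, *_ = np.linalg.lstsq(A1, y, rcond=None)
    A2 = np.einsum('ipq,p->iq', X, w1)   # (n, q) design when w1 is fixed
    w2, *_ = np.linalg.lstsq(A2, y, rcond=None)

# Recovered coefficient matrix (the scaling split between factors is arbitrary)
W_hat = np.outer(w1, w2)
W_true = np.outer(w1_true, w2_true)
print(np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))  # small
```

The key point, which carries over to higher ranks and higher-order covariates, is that the low-rank structure turns one large regression into a cycle of small, well-posed ones.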
Employing data fusion & diversity in the applications of adaptive signal processing
The paradigm of adaptive signal processing is a simple yet powerful method for the class of system identification problems. Classical approaches consider standard one-dimensional signals, for which the model can be formulated in a flat-view matrix/vector framework. Nevertheless, the rapidly increasing availability of large-scale multisensor/multinode measurement technology has rendered this traditional way of representing data no longer sufficient. To this end, the author (referred to hereafter as `we', `us', and `our' to acknowledge the supporting contributors, i.e. the supervisor, colleagues, and overseas academics involved in specific pieces of the research throughout this thesis) has applied the adaptive filtering framework to problems that employ the techniques of data diversity and fusion, including quaternions, tensors, and graphs. At first glance, all these structures share one important feature: invertible isomorphism; in other words, they are algebraically one-to-one related in a real vector space. Furthermore, our continual course of research affords a segue through these three data types.
Firstly, we propose novel quaternion-valued adaptive algorithms named the n-moment widely linear quaternion least mean squares (WL-QLMS) and the c-moment WL-LMS. Both are as fast as the recursive least squares method but more numerically robust owing to the absence of matrix inversion. Secondly, the adaptive filtering method is applied to a more complex task: online tensor dictionary learning, named online multilinear dictionary learning (OMDL). The OMDL is partly inspired by the derivation of the c-moment WL-LMS due to its parsimonious formulae. In addition, a sequential higher-order compressed sensing (HO-CS) scheme is developed to couple with the OMDL and maximally utilize the learned dictionary for the best possible compression.
Lastly, we consider graph random processes, which are in fact multivariate random processes with a spatiotemporal (or vertex-time) relationship. As with the tensor dictionary, one of the main challenges in graph signal processing is the sparsity constraint on the graph topology, a challenging issue for online methods. We introduce a novel splitting gradient projection into this adaptive graph filtering to successfully achieve a sparse topology. Extensive experiments were conducted to support the analysis of all the algorithms proposed in this thesis, as well as to point out potentials, limitations, and as-yet-unaddressed issues in these research endeavors.
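The adaptive-filtering setting the thesis builds on can be illustrated with a plain real-valued LMS filter identifying an unknown FIR system. This is only the scalar analogue of the quaternion-valued widely linear algorithms described above; the system coefficients and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# System identification with a real-valued LMS adaptive filter.
h_true = np.array([0.5, -0.3, 0.2, 0.1])   # unknown FIR system
L, n = len(h_true), 5000
x = rng.standard_normal(n)
d = np.convolve(x, h_true, mode='full')[:n] + 0.01 * rng.standard_normal(n)

mu = 0.01          # step size
w = np.zeros(L)    # adaptive weights
for k in range(L, n):
    u = x[k - L + 1:k + 1][::-1]   # most recent L samples, newest first
    e = d[k] - w @ u               # a priori error
    w = w + mu * e * u             # LMS update: no matrix inversion needed

print(np.round(w, 3))  # converges toward h_true
```

The absence of any matrix inversion in the update is exactly the numerical-robustness advantage the abstract claims for the proposed LMS-type algorithms over recursive least squares.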
Non-Markovian Quantum Process Tomography
Characterisation protocols have so far played a central role in the
development of noisy intermediate-scale quantum (NISQ) computers capable of
impressive quantum feats. This trajectory is expected to continue in building
the next generation of devices: ones that can surpass classical computers for
particular tasks -- but progress in characterisation must keep up with the
complexities of intricate device noise. A missing piece in the zoo of
characterisation procedures is tomography which can completely describe
non-Markovian dynamics. Here, we formally introduce a generalisation of quantum
process tomography, which we call process tensor tomography. We detail the
experimental requirements, construct the necessary post-processing algorithms
for maximum-likelihood estimation, outline the best-practice aspects for
accurate results, and make the procedure efficient for low-memory processes.
The characterisation is the pathway to diagnostics and informed control of
correlated noise. As an example application of the technique, we improve
multi-time circuit fidelities on IBM Quantum devices for both standalone qubits
and in the presence of crosstalk to a level comparable with the fault-tolerant
noise threshold in a variety of different noise conditions. Our methods could
form the core for carefully developed software that may help hardware
consistently pass the fault-tolerant noise threshold.
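For orientation, the single-time special case that process tensor tomography generalizes is ordinary quantum process tomography. The sketch below (not the authors' multi-time protocol) reconstructs a single-qubit channel's Pauli transfer matrix by linear inversion from informationally complete inputs; expectation values are computed exactly rather than estimated from measurement counts, and the amplitude-damping channel is an illustrative choice.

```python
import numpy as np

# Single-qubit process tomography by linear inversion.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

# Example channel: amplitude damping with decay probability gamma.
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
channel = lambda rho: K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

def pauli_vec(rho):
    return np.array([np.trace(P @ rho).real for P in paulis])

# Informationally complete input states: |0>, |1>, |+>, |+i>.
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
rhos = [np.outer(k, k.conj()) for k in kets]

S = np.column_stack([pauli_vec(r) for r in rhos])            # inputs
T = np.column_stack([pauli_vec(channel(r)) for r in rhos])   # outputs

R = T @ np.linalg.inv(S)   # reconstructed Pauli transfer matrix

# Check against the PTM computed directly from the channel definition.
R_true = np.array([[0.5 * np.trace(Pi @ channel(Pj)).real
                    for Pj in paulis] for Pi in paulis])
print(np.allclose(R, R_true))  # True
```

A process tensor replaces the single input-output map with a multilinear map over a sequence of interventions, which is why the post-processing above no longer suffices for non-Markovian (temporally correlated) noise.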
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
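The tensor train format emphasized above can be sketched in a few lines: sequential reshapes and SVDs factor a d-way array into a chain of 3-way cores. This minimal TT-SVD (exact, with no rank truncation; the array shape is an illustrative assumption) is a simplified instance of the decompositions the monograph surveys.

```python
import numpy as np

# Minimal TT-SVD: factor a d-way array into a train of 3-way cores,
# then recontract the train to verify exact reconstruction.
def tt_svd(A):
    dims = A.shape
    cores, r = [], 1
    M = A.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = int(np.sum(s > 1e-12))               # numerical rank
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        r = rank
        M = (s[:rank, None] * Vt[:rank]).reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_contract(cores):
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 5, 6, 3))
cores = tt_svd(A)
print([G.shape for G in cores])
print(np.allclose(tt_contract(cores), A))  # True
```

Truncating each SVD to a fixed rank turns this into the compressed representation whose storage grows linearly, rather than exponentially, in the number of modes.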
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Volatility modeling and limit-order book analytics with high-frequency data
The vast amount of information characterizing today's high-frequency financial datasets poses both opportunities and challenges. Among the opportunities, existing methods can be employed to provide new insights and a better understanding of the market's complexity from different perspectives, while new methods, capable of fully exploiting all the information embedded in high-frequency datasets and addressing new issues, can be devised. The challenges are driven by data complexity: limit-order book datasets consist of hundreds of thousands of events, interacting with each other and affecting the event-flow dynamics.
This dissertation aims at improving our understanding of the effective applicability of machine learning methods for mid-price movement prediction, of the nature of long-range autocorrelations in financial time series, and of the econometric modeling and forecasting of volatility dynamics in high-frequency settings. Our results show that simple machine learning methods can be successfully employed for mid-price forecasting; moreover, by adopting methods that rely on the natural tensor representation of financial time series, the inter-temporal connections captured by this convenient representation are shown to be relevant for the prediction of future mid-price movements. Furthermore, by using ultra-high-frequency order book data over a considerably long period, a quantitative characterization of the long-range autocorrelation is achieved by extracting the so-called scaling exponent. By jointly considering duration series of both inter- and cross-events, for different stocks, and separately for the bid and ask sides, long-range autocorrelations are found to be ubiquitous and qualitatively homogeneous. With respect to the scaling exponent, evidence of three cross-overs is found, and complex heterogeneous associations with a number of relevant economic variables are discussed. Lastly, the use of copulas as the main ingredient for modeling and forecasting realized measures of volatility is explored. The modeling background resembles, but generalizes, the well-known Heterogeneous Autoregressive (HAR) model. In-sample and out-of-sample analyses, based on several performance measures, statistical tests, and robustness checks, show forecasting improvements of copula-based modeling over the HAR benchmark.
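The HAR benchmark mentioned above is an ordinary regression of tomorrow's realized volatility on daily, weekly (5-day), and monthly (22-day) trailing averages of past realized volatility. A minimal OLS sketch on synthetic data (the persistence coefficients of the data-generating process are illustrative assumptions, not estimates from any dataset):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a persistent, positive realized-volatility-like series.
n = 1000
rv = np.empty(n)
rv[0] = 1.0
for t in range(1, n):
    rv[t] = 0.1 + 0.85 * rv[t - 1] + 0.1 * abs(rng.standard_normal())

def trailing_mean(x, w, t):
    return x[t - w + 1:t + 1].mean()

# HAR design: constant, daily lag, weekly average, monthly average.
rows, target = [], []
for t in range(21, n - 1):
    rows.append([1.0, rv[t], trailing_mean(rv, 5, t), trailing_mean(rv, 22, t)])
    target.append(rv[t + 1])
Xmat, y = np.array(rows), np.array(target)

beta, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
print(dict(zip(['const', 'daily', 'weekly', 'monthly'], np.round(beta, 3))))
```

The copula-based models the dissertation explores keep this cascade-of-horizons structure but replace the linear-Gaussian link with a more flexible dependence model.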
Tensor Analysis and the Dynamics of Motor Cortex
Neural data often span multiple indices, such as neuron, experimental condition, trial, and time, resulting in a tensor or multidimensional array. Standard approaches to neural data analysis often rely on matrix factorization techniques, such as principal component analysis or nonnegative matrix factorization; any inherent tensor structure in the data is lost when it is flattened into a matrix. Here, we analyze datasets from primary motor cortex from the perspective of tensor analysis and develop a theory for how tensor structure relates to certain computational properties of the underlying system. Applied to the motor cortex datasets, we reveal that neural activity is best described by condition-independent dynamics as opposed to condition-dependent relations to external movement variables. Motivated by this result, we pursue one further tensor-related analysis and two further dynamical-systems-related analyses. First, we show how tensor decompositions can be used to denoise neural signals. Second, we apply system identification to the cortex-to-muscle transformation to reveal the intermediate spinal dynamics. Third, we fit recurrent neural networks to muscle activations and show that the geometric properties observed in motor cortex are naturally recapitulated in the network model. Taken together, these results emphasize (on the data analysis side) the role of tensor structure in data and (on the theoretical side) the role of motor cortex as a dynamical system.
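The tensor-decomposition denoising idea can be sketched with a truncated HOSVD (Tucker) projection on a synthetic neurons x conditions x time array: each unfolding is projected onto its leading singular subspace. This is a generic illustration with assumed dimensions and ranks, not the specific method or data of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Low-rank "neural" signal plus noise, shaped neurons x conditions x time.
N, C, T, r = 40, 10, 50, 3
factors = [rng.standard_normal((d, r)) for d in (N, C, T)]
clean = np.einsum('nr,cr,tr->nct', *factors)     # rank-3 CP signal
noisy = clean + 0.5 * rng.standard_normal((N, C, T))

def unfold(A, mode):
    return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)

def truncated_hosvd(A, ranks):
    # Leading singular subspace of each mode's unfolding...
    Us = []
    for mode, rk in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(A, mode), full_matrices=False)
        Us.append(U[:, :rk])
    # ...then project every mode onto its subspace.
    out = A
    for mode, U in enumerate(Us):
        out = np.moveaxis(np.tensordot(U @ U.T, np.moveaxis(out, mode, 0),
                                       axes=1), 0, mode)
    return out

denoised = truncated_hosvd(noisy, (r, r, r))
err_noisy = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
err_denoised = np.linalg.norm(denoised - clean) / np.linalg.norm(clean)
print(err_denoised < err_noisy)  # True
```

Because the signal is low-rank in every mode while the noise is not, the multilinear projection discards far more noise than signal, which is the mechanism behind tensor-based denoising of neural recordings.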