Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations, which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone texts or as a conjoint comprehensive review of the
exciting field of low-rank tensor networks and tensor decompositions.
Comment: 232 pages
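
Since the abstract's central claim is that a tensor train turns one exponentially large array into a chain of small third-order cores, a minimal sketch of the classical TT-SVD procedure may help make this concrete. This is an illustrative NumPy implementation under our own assumptions (the function names, tolerance-based rank truncation, and rank-1 test tensor are all ours), not code from the monograph:

```python
# Minimal TT-SVD sketch: factorise an order-d tensor into d third-order
# cores G_k of shape (r_{k-1}, n_k, r_k) via sequential truncated SVDs.
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose `tensor` into TT cores; `tol` sets the relative rank cutoff."""
    d, shape = tensor.ndim, tensor.shape
    cores, rank = [], 1
    unfolding = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        # Low-rank step: keep only singular values above the tolerance.
        new_rank = max(1, int(np.sum(S > tol * S[0])))
        cores.append(U[:, :new_rank].reshape(rank, shape[k], new_rank))
        unfolding = (S[:new_rank, None] * Vt[:new_rank]).reshape(
            new_rank * shape[k + 1], -1)
        rank = new_rank
    cores.append(unfolding.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor (for checking)."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.reshape(result.shape[1:-1])  # drop the dummy boundary ranks

# Example: a rank-1 order-4 tensor compresses to four tiny cores.
rng = np.random.default_rng(0)
full = np.einsum('i,j,k,l->ijkl', *(rng.standard_normal(6) for _ in range(4)))
cores = tt_svd(full)
print([c.shape for c in cores])                     # e.g. (1,6,1) x 4
print(np.allclose(tt_reconstruct(cores), full))     # True
```

For an order-d tensor with mode sizes n and TT ranks bounded by r, the cores store roughly d·n·r² numbers instead of n^d, which is the sense in which the format alleviates the curse of dimensionality.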
Joint Source and Relay Precoding Designs for MIMO Two-Way Relaying Based on MSE Criterion
Properly designed precoders can significantly improve the spectral efficiency
of multiple-input multiple-output (MIMO) relay systems. In this paper, we
investigate joint source and relay precoding design based on the
mean-square-error (MSE) criterion in MIMO two-way relay systems, where two
multi-antenna source nodes exchange information via a multi-antenna
amplify-and-forward relay node. This problem is non-convex, and its optimal
solution remains an open problem. To solve it efficiently, we first decouple
the primal problem into three tractable sub-problems and then propose an
iterative precoding design algorithm based on alternating optimization. The
solution to each sub-problem is optimal and unique, so the convergence of the
iterative algorithm is guaranteed.
Second, we propose a structured precoding design that lowers the computational
complexity. The proposed structure parallelizes the channels in both the
multiple-access (MAC) and broadcast (BC) phases, which
reduces the precoding design to a simple power allocation problem. Lastly, for
the special case where only a single data stream is transmitted from each
source node, we present a source-antenna-selection (SAS) based precoding design
algorithm. This algorithm selects only one antenna for transmission from each
source and thus requires less signalling overhead. Comprehensive simulations
are conducted to evaluate the effectiveness of all the proposed precoding designs.
Comment: 32 pages, 10 figures
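
To make the alternating-optimization idea concrete, here is a toy numerical sketch in NumPy/SciPy. Everything in it is an illustrative assumption (antenna counts, reciprocal channels, noise levels, and a generic derivative-free solver for each sub-problem); the paper instead derives the optimal and unique solution of each sub-problem in closed form, which is what yields the convergence guarantee:

```python
# Toy alternating optimization of source precoders B1, B2 and relay precoder F
# in a MIMO two-way AF relay, minimising the sum MSE with MMSE receivers.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, M = 2, 2                 # source / relay antennas (assumed)
P_s, P_r = 1.0, 1.0         # source and relay power budgets (assumed)
sig2_r, sig2_d = 0.1, 0.1   # relay / destination noise variances (assumed)

# Assumed reciprocal flat-fading channels: H_i uplink, H_i^T downlink.
H1 = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
H2 = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

def mmse_sum(B1, B2, F):
    """Sum MSE at both nodes with optimal MMSE receive filters.

    After self-interference cancellation, node 1 observes
    y1 = H1^T F H2 B2 s2 + H1^T F n_r + n_1, and the MMSE receiver
    achieves MSE = tr[(I + Heff^H Rn^{-1} Heff)^{-1}].
    """
    total = 0.0
    for Ha, Hb, B in [(H1, H2, B2), (H2, H1, B1)]:
        Heff = Ha.T @ F @ Hb @ B
        Rn = sig2_r * (Ha.T @ F) @ (Ha.T @ F).conj().T + sig2_d * np.eye(N)
        E = np.linalg.inv(np.eye(N) + Heff.conj().T @ np.linalg.solve(Rn, Heff))
        total += np.trace(E).real
    return total

def project_source(B):
    """Scale B so that tr(B B^H) <= P_s."""
    p = np.trace(B @ B.conj().T).real
    return B * np.sqrt(P_s / p) if p > P_s else B

def project_relay(F, B1, B2):
    """Scale F so the relay output power tr(F R_r F^H) <= P_r."""
    Rr = (H1 @ B1) @ (H1 @ B1).conj().T + (H2 @ B2) @ (H2 @ B2).conj().T \
         + sig2_r * np.eye(M)
    p = np.trace(F @ Rr @ F.conj().T).real
    return F * np.sqrt(P_r / p) if p > P_r else F

def solve_subproblem(obj, X0, proj):
    """Numerically minimise one sub-problem over a complex matrix X."""
    shape = X0.shape
    def wrap(v):
        X = (v[:v.size // 2] + 1j * v[v.size // 2:]).reshape(shape)
        return obj(proj(X))
    v0 = np.concatenate([X0.real.ravel(), X0.imag.ravel()])
    v = minimize(wrap, v0, method="Nelder-Mead",
                 options={"maxiter": 2000, "xatol": 1e-6}).x
    return proj((v[:v.size // 2] + 1j * v[v.size // 2:]).reshape(shape))

# Alternating optimisation: each pass solves the three sub-problems in turn.
# (Toy simplification: the relay power constraint is re-imposed only at the
# next F update.)
B1 = project_source(np.eye(N, dtype=complex))
B2 = project_source(np.eye(N, dtype=complex))
F = project_relay(np.eye(M, dtype=complex), B1, B2)
for it in range(5):
    F = solve_subproblem(lambda X: mmse_sum(B1, B2, X), F,
                         lambda X: project_relay(X, B1, B2))
    B1 = solve_subproblem(lambda X: mmse_sum(X, B2, F), B1, project_source)
    B2 = solve_subproblem(lambda X: mmse_sum(B1, X, F), B2, project_source)
    print(f"iteration {it + 1}: sum MSE = {mmse_sum(B1, B2, F):.4f}")
```

Because each sub-problem solve starts from the current point and can only lower the objective, the printed sum MSE is non-increasing across iterations, mirroring the convergence argument sketched in the abstract.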