Multi-taper S-transform method for estimating Wigner-Ville and Loève spectra of quasi-stationary harmonizable processes
Current non-stationary load models based on the evolutionary power spectral density (EPSD) may lead to overestimation and ambiguity of structural responses. The quasi-stationary harmonizable process, with its Wigner-Ville spectrum (WVS) and Loève spectrum, does not suffer from the deficiencies of the EPSD and is suitable for modeling non-stationary loads and analyzing the structural responses they induce. In this study, the multi-taper S-transform (MTST) method for estimating the WVS and Loève spectrum of multi-variate quasi-stationary harmonizable processes is presented. The analytical biases and variances of the WVS, Loève spectrum, and time-invariant and time-varying coherence estimators obtained from the MTST method are provided under the assumption that the target multi-variate harmonizable process is Gaussian. Using a numerical case of a bivariate harmonizable wind speed process, the superiority and reliability of the MTST method are demonstrated through comparisons with several existing methods for WVS and Loève spectrum estimation. Finally, the MTST method is applied to two ground motion acceleration records measured during the 2023 Turkey earthquake.
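The MTST estimator itself is specific to the paper, but the multitaper idea behind it, averaging power estimates over orthonormal DPSS (Slepian) tapers to reduce estimator variance, can be sketched in a few lines. Everything below (window length, time-bandwidth product NW, hop size, and the quasi-stationary test signal) is an illustrative assumption, not taken from the paper:

```python
import numpy as np
from scipy.signal.windows import dpss

# Illustrative sketch only: a plain multitaper spectrogram, not the paper's
# MTST estimator. Averaging the spectrogram over K orthonormal DPSS tapers
# lowers the variance of the time-frequency power estimate.

def multitaper_spectrogram(x, win_len=256, K=4, NW=3.0, hop=64):
    tapers = dpss(win_len, NW, Kmax=K)             # (K, win_len), orthonormal
    n_frames = 1 + (len(x) - win_len) // hop
    S = np.zeros((win_len // 2 + 1, n_frames))
    for k in range(K):
        for m in range(n_frames):
            seg = x[m * hop: m * hop + win_len] * tapers[k]
            S[:, m] += np.abs(np.fft.rfft(seg)) ** 2
    return S / K                                   # averaged eigenspectra

# Hypothetical quasi-stationary signal: 100 Hz tone, slowly varying amplitude.
fs = 1000.0
t = np.arange(4000) / fs
x = (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) * np.sin(2 * np.pi * 100 * t)
S = multitaper_spectrogram(x)
peak_hz = S.sum(axis=1).argmax() * fs / 256        # frequency of the peak bin
```

The peak of the time-averaged estimate lands near the 100 Hz tone, within the taper bandwidth NW·fs/win_len of roughly 12 Hz.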
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
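As a concrete illustration of the low-rank approximations and core contractions the abstract refers to, here is a minimal sketch of the TT-SVD algorithm, which factors a dense d-way tensor into a train of 3-way cores by sequential truncated SVDs. The rank-selection rule (a relative singular-value cutoff) is a simplification for illustration, not the monograph's full treatment:

```python
import numpy as np

# Sketch of TT-SVD: peel off one tensor mode per truncated SVD, storing the
# left factor as a 3-way core and carrying the remainder to the next step.

def tt_svd(T, eps=1e-10):
    dims, cores, r_prev = T.shape, [], 1
    C = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))    # truncation rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]                   # remainder for next mode
        r_prev = r
        if k < len(dims) - 2:
            C = C.reshape(r_prev * dims[k + 1], -1)
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=1)         # contract shared rank index
    return out[0, ..., 0]                          # drop boundary ranks of 1

T = np.random.default_rng(0).standard_normal((4, 5, 6))
cores = tt_svd(T)
T_hat = tt_to_full(cores)                          # exact up to round-off here
```

For a generic dense tensor the reconstruction is exact; the compression benefit appears when the data genuinely has low TT ranks, in which case the cores store far fewer entries than the full tensor.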
Exact and approximate Strang-Fix conditions to reconstruct signals with finite rate of innovation from samples taken with arbitrary kernels
In the last few years, several new methods have been developed for the sampling and
exact reconstruction of specific classes of non-bandlimited signals known as signals with finite rate of innovation (FRI). This is achieved by using adequate sampling kernels and
reconstruction schemes. An example of valid kernels, which we use throughout the thesis,
is given by the family of exponential reproducing functions. These satisfy the generalised
Strang-Fix conditions, which ensure that proper linear combinations of the kernel with its
shifted versions reproduce polynomials or exponentials exactly.
The first contribution of the thesis is to analyse the behaviour of these kernels in the
case of noisy measurements in order to provide clear guidelines on how to choose the exponential
reproducing kernel that leads to the most stable reconstruction when estimating
FRI signals from noisy samples. We then depart from the situation in which we can choose
the sampling kernel and develop a new strategy that is universal in that it works with any
kernel. We do so by noting that meeting the exact exponential reproduction condition is
too stringent a constraint. We thus allow for a controlled error in the reproduction formula
in order to use the exponential reproduction idea with arbitrary kernels and develop
a universal reconstruction method which is stable and robust to noise.
Numerical results validate the various contributions of the thesis and in particular show
that the approximate exponential reproduction strategy leads to more stable and accurate
reconstruction results than those obtained when using the exact recovery methods.
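The kernel-design contributions are the thesis's own, but the algebraic engine common to FRI recovery is the annihilating-filter (Prony) step, which this noise-free sketch illustrates. It assumes the sampling kernel has already mapped the samples to exponential moments s_m = Σ_k a_k u_k^m with u_k = exp(2πi t_k); the Dirac locations and amplitudes below are hypothetical:

```python
import numpy as np

# Annihilating-filter step: a filter h of length K+1 satisfying
# sum_i h[i] s[m-i] = 0 has the u_k as roots of its z-transform, so 2K
# moments suffice to recover K Dirac locations and amplitudes exactly.

def prony(s, K):
    S = np.array([[s[m - i] for i in range(K + 1)] for m in range(K, len(s))])
    h = np.linalg.svd(S)[2][-1].conj()             # null-space vector of S
    u = np.roots(h)                                # roots encode locations
    V = np.vander(u, N=len(s), increasing=True).T  # V[m, k] = u_k ** m
    a = np.linalg.lstsq(V, s, rcond=None)[0]       # amplitudes by least squares
    return u, a

K = 2
t_true, a_true = np.array([0.2, 0.7]), np.array([1.0, -0.5])
u_true = np.exp(2j * np.pi * t_true)
m = np.arange(2 * K)                               # 2K moments
s = (a_true * u_true ** m[:, None]).sum(axis=1)

u, a = prony(s, K)
t_rec = np.sort(np.mod(np.angle(u) / (2 * np.pi), 1))  # ≈ [0.2, 0.7]
```

In the noisy setting the thesis studies, the null-space step is replaced by total-least-squares or Cadzow-style denoising, which is where the choice of kernel governs stability.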
Schur Averages in Random Matrix Ensembles
The main focus of this PhD thesis is the study of minors of Toeplitz, Hankel and Toeplitz±Hankel matrices. These can be expressed as matrix models over the classical Lie groups G(N) = U(N), Sp(2N), O(2N), O(2N+1), with the insertion of irreducible characters associated to each of the groups. In order to approach this topic, we consider matrices generated by formal power series in terms of symmetric functions.
We exploit these connections to obtain several relations between the models over the different groups G(N), and to investigate some of their structural properties. We compute explicitly several objects of interest, including a variety of matrix models, evaluations of certain skew Schur polynomials, partition functions and Wilson loops of G(N) Chern-Simons theory on S^3, and fermion quantum models with matrix degrees of freedom. We also explore the connection with orthogonal polynomials, and study the large N behaviour of the average of a characteristic polynomial in the Laguerre Unitary Ensemble by means of the associated Riemann-Hilbert problem.
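A small numerical check of the Toeplitz side of this picture, for a textbook symbol rather than one of the thesis's models: by Heine's identity, the N×N Toeplitz determinant of the Fourier coefficients of f(θ) = |1 + x e^{iθ}|² equals the U(N) Haar average of |det(1 + xU)|², with closed form (1 − x^{2N+2})/(1 − x²):

```python
import numpy as np
from scipy.linalg import toeplitz

# Toeplitz determinant of a symbol's Fourier coefficients, compared against
# the known closed form for f(theta) = |1 + x e^{i theta}|^2.

def toeplitz_det(c, N):
    """det(c_{j-k})_{j,k=0..N-1} for a coefficient map offset -> c_offset."""
    col = [c(j) for j in range(N)]     # first column: c_0, c_1, ...
    row = [c(-j) for j in range(N)]    # first row:    c_0, c_{-1}, ...
    return np.linalg.det(toeplitz(col, row))

x, N = 0.3, 5
coeffs = {0: 1 + x**2, 1: x, -1: x}    # Fourier coefficients of the symbol
D = toeplitz_det(lambda k: coeffs.get(k, 0.0), N)
exact = (1 - x ** (2 * N + 2)) / (1 - x ** 2)
```

Inserting an irreducible character into the Haar average, as the thesis does, correspondingly picks out a minor of this Toeplitz matrix rather than its full determinant.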
We gratefully acknowledge the support of the Fundação para a Ciência e a Tecnologia through its LisMath scholarship PD/BD/113627/2015, which made this work possible.