Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
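To make the TT format concrete: a d-way tensor is stored as a chain of d small three-way cores, which can be obtained, for example, by sequential truncated SVDs (the TT-SVD algorithm). The sketch below is a minimal NumPy illustration of that idea, not code from the monograph; the function names, the fixed maximum rank, and the toy tensor are illustrative choices.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into tensor-train (TT) cores by sequential
    truncated SVDs (a minimal TT-SVD sketch with a fixed rank cap)."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))                       # truncate to the TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor (to check the fit)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# Example: TT-approximate a random 4-way tensor (random data is not low-rank,
# so the reported error simply reflects the rank truncation)
X = np.random.rand(6, 7, 8, 9)
cores = tt_svd(X, max_rank=5)
print([c.shape for c in cores])
print(np.linalg.norm(tt_reconstruct(cores) - X) / np.linalg.norm(X))
```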
Analysis of tidal flows through the Strait of Gibraltar using Dynamic Mode Decomposition
The Strait of Gibraltar is a region characterized by intricate oceanic
sub-mesoscale features, influenced by topography, tidal forces, instabilities,
and nonlinear hydraulic processes, all governed by the nonlinear equations of
fluid motion. In this study, we aim to uncover the underlying physics of these
phenomena within 3D MIT general circulation model simulations, including waves,
eddies, and gyres. To achieve this, we employ Dynamic Mode Decomposition (DMD)
to break down simulation snapshots into Koopman modes, with distinct
exponential growth/decay rates and oscillation frequencies. Our objectives
encompass evaluating DMD's efficacy in capturing known features, unveiling new
elements, ranking modes, and exploring order reduction. We also introduce
modifications to enhance DMD's numerical accuracy and the robustness of its
eigenvalues. DMD analysis yields a comprehensive understanding of flow
patterns, internal wave formation, and the dynamics of the Strait of Gibraltar,
its meandering behaviors, and the formation of a secondary gyre, notably the
Western Alboran Gyre, as well as the propagation of Kelvin and coastal-trapped
waves along the African coast. In doing so, it significantly advances our
comprehension of intricate oceanographic phenomena and underscores the immense
utility of DMD as an analytical tool for such complex datasets, suggesting that
DMD could serve as a valuable addition to the toolkit of oceanographers.
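For reference, the basic exact-DMD step underlying such an analysis (before the modifications for accuracy and eigenvalue robustness mentioned above) can be sketched in a few lines of NumPy. The snapshot matrix, rank, and toy travelling-wave example below are illustrative choices, not the MITgcm simulation data.

```python
import numpy as np

def dmd(snapshots, rank, dt=1.0):
    """Minimal exact DMD. snapshots is (n_state, n_time); returns the spatial
    modes, continuous-time eigenvalues (decay rate + i*frequency), and the
    mode amplitudes fitted to the first snapshot."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]          # time-shifted snapshot pairs
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    U, S, Vt = U[:, :rank], S[:rank], Vt[:rank]         # rank-r truncation
    A_tilde = U.conj().T @ Y @ Vt.conj().T / S          # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.conj().T / S @ W                     # exact DMD (Koopman) modes
    omega = np.log(eigvals) / dt                        # growth/decay and oscillation
    amplitudes = np.linalg.lstsq(modes, snapshots[:, 0], rcond=None)[0]
    return modes, omega, amplitudes

# Toy example: a decaying travelling wave sampled on a line
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 120)
data = np.exp(-0.05 * t)[None, :] * np.sin(x[:, None] - 1.3 * t[None, :])
modes, omega, amps = dmd(data, rank=2, dt=t[1] - t[0])
print(np.round(omega, 3))      # expected close to -0.05 +/- 1.3i
```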
Factor-guided functional PCA for high-dimensional functional data
The literature on high-dimensional functional data focuses on either the
dependence over time or the correlation among functional variables. In this
paper, we propose a factor-guided functional principal component analysis
(FaFPCA) method to consider both temporal dependence and correlation of
variables so that the extracted features are as sufficient as possible. In
particular, we use a factor process to consider the correlation among
high-dimensional functional variables and then apply functional principal
component analysis (FPCA) to the factor processes to address the dependence
over time. Furthermore, to solve the computational problem arising from
triple-infinite dimensions, we creatively build some moment equations to
estimate loading, scores and eigenfunctions in closed form without rotation.
Theoretically, we establish the asymptotic properties of the proposed
estimator. Extensive simulation studies demonstrate that our proposed method
outperforms other competitors in terms of accuracy and computational cost. The
proposed method is applied to analyze the Alzheimer's Disease Neuroimaging
Initiative (ADNI) dataset, resulting in higher prediction accuracy and 41
important ROIs that are associated with Alzheimer's disease, 23 of which have
been confirmed by the literature.
Comment: 34 pages, 5 figures, 3 tables
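The paper's closed-form moment-equation estimator is not reproduced here; the sketch below is only a simplified two-step illustration of the factor-then-FPCA idea (a PCA-based factor step across variables, followed by an eigendecomposition of each factor process over the time grid). All array shapes, names, and the toy data are chosen for the example.

```python
import numpy as np

def factor_guided_fpca(X, n_factors, n_fpcs):
    """Simplified two-step sketch of factor-guided FPCA.
    X has shape (n_subjects, n_variables, n_timepoints): high-dimensional
    functional data observed on a common time grid."""
    n, p, T = X.shape
    # Step 1: factor model across variables -- PCA on data pooled over (subject, time)
    pooled = X.transpose(0, 2, 1).reshape(n * T, p)      # rows are p-dim observations
    pooled = pooled - pooled.mean(axis=0)
    _, _, Vt = np.linalg.svd(pooled, full_matrices=False)
    B = Vt[:n_factors].T                                  # (p, q) loading estimate
    factors = np.einsum('pq,npt->nqt', B, X)              # q-dim factor processes
    # Step 2: FPCA on each factor process over the time grid
    eigenfunctions, scores = [], []
    for q in range(n_factors):
        F = factors[:, q, :]                              # (n, T) factor curves
        Fc = F - F.mean(axis=0)
        cov = Fc.T @ Fc / n                               # (T, T) covariance surface
        _, phi = np.linalg.eigh(cov)
        phi = phi[:, ::-1][:, :n_fpcs]                    # leading eigenfunctions
        eigenfunctions.append(phi)
        scores.append(Fc @ phi)                           # FPC scores per subject
    return B, eigenfunctions, scores

# Toy example on simulated data: 50 subjects, 100 variables, 30 time points
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100, 30))
B, phis, scores = factor_guided_fpca(X, n_factors=3, n_fpcs=2)
print(B.shape, phis[0].shape, scores[0].shape)            # (100, 3) (30, 2) (50, 2)
```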
Implementation of a condition monitoring strategy for the Monastery of Salzedas, Portugal: challenges and optimisation
The implementation of condition monitoring for damage identification and the generation of a reliable digital twin are essential elements of preventive conservation. The application of this promising approach to Cultural Heritage (CH) sites is deemed truly beneficial, constituting a minimally invasive mitigation strategy and a cost-effective decision-making tool. In this light, the present work focuses on establishing an informative virtual model as a platform for the conservation of the monastery of Santa Maria de Salzedas, a CH building located in the north of Portugal. The platform is the first step towards the generation of the digital twin and is populated with existing documentation as well as new information collected within the scope of an inspection and diagnosis programme. At this stage, the virtual model encompasses the main cloister, whose structural condition and safety raised concerns in the past and required the implementation of urgent remedial measures. In the definition of a vibration-based condition monitoring strategy for the south wing of the cloister, five modes were identified by carrying out an extensive dynamic identification. Nonetheless, significant challenges emerged due to the low amplitude of the ambient-induced vibrations and the intrusiveness of the activities. To this end, a data-driven Optimal Sensor Placement (OSP) approach was followed, testing and comparing five heuristic methods to define a good trade-off between the number of sensors and the quality of the collected information. The results showed that these algorithms for OSP allow the selection of sensor locations with good signal strength.
This work was partly financed by FCT/MCTES through national funds (PIDDAC) under the R&D Unit Institute for Sustainability and Innovation in Structural Engineering (ISISE), under reference UIDB/04029/2020, and under the Associate Laboratory Advanced Production and Intelligent Systems ARISE, under reference LA/P/0112/2020.
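The five heuristic OSP methods compared in the study are not named in the abstract. As a generic illustration of mode-shape-based sensor selection, the sketch below implements the widely used Effective Independence (EfI) heuristic, which iteratively discards the candidate location contributing least to the linear independence of the identified mode shapes; it is offered as an example of this class of algorithm, not as one of the methods actually tested, and all names and the toy data are illustrative.

```python
import numpy as np

def effective_independence(mode_shapes, n_sensors):
    """Effective Independence (EfI) sensor placement: iteratively discard the
    candidate DOF with the smallest contribution to the linear independence
    of the target mode shapes. mode_shapes is (n_candidate_dofs, n_modes)."""
    phi = np.asarray(mode_shapes, dtype=float)
    keep = np.arange(phi.shape[0])
    while len(keep) > n_sensors:
        A = phi[keep]                                # candidate mode-shape matrix
        Q = A @ np.linalg.inv(A.T @ A) @ A.T         # projection onto mode-shape space
        ed = np.diag(Q)                              # EfI value of each candidate DOF
        keep = np.delete(keep, np.argmin(ed))        # drop the least informative DOF
    return keep

# Toy example: 5 identified modes at 60 candidate locations, choose 8 sensors
rng = np.random.default_rng(1)
Phi = rng.standard_normal((60, 5))
print(effective_independence(Phi, n_sensors=8))
```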
Operational modal analysis and continuous dynamic monitoring of footbridges
Doctoral thesis. Civil Engineering. Universidade do Porto. Faculdade de Engenharia. 201
Meson Photo-Couplings From Lattice Quantum Chromodynamics
We explore the calculation of three-point functions featuring a vector current insertion in lattice Quantum Chromodynamics. These three-point functions, in general, contain information about many radiative transition matrix elements simultaneously. We develop and implement the technology necessary to isolate a single matrix element via the use of optimized operators, operators designed to interpolate a single meson eigenstate, which are constructed as variationally optimized linear combinations of meson interpolating fields within a large basis. In order to frame the results, we also explore some well-known phenomenology arising within the context of the constituent quark model before transitioning to a lattice calculation of the spectrum of isovector mesons in a version of QCD featuring three flavors of quarks, all tuned to approximately the physical strange quark mass. We then proceed to calculate radiative transition matrix elements for the lightest few isovector pseudoscalar and vector particles. The dependence of these form factors and transitions on the photon virtuality is extracted and some model intuitions are explored.
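The "optimized operators" referred to above come from a variational analysis of a correlator matrix: solving the generalized eigenvalue problem C(t) v_n = lambda_n C(t0) v_n and using the eigenvector components as weights for the basis operators. The sketch below reproduces only that linear-algebra step on a synthetic three-operator correlator; the overlaps, energies, and time slices are toy values, not lattice data.

```python
import numpy as np
from scipy.linalg import eigh

def optimized_operator_weights(C, t, t0):
    """Variational (GEVP) step. C is an (n_t, n_ops, n_ops) correlator matrix,
    C_ij(t) = <O_i(t) O_j^dag(0)>.  Solving C(t) v_n = lam_n C(t0) v_n gives the
    weights v_n; Omega_n = sum_i v_n[i] O_i then interpolates mainly state n."""
    lam, vecs = eigh(C[t], C[t0])              # generalized symmetric eigenproblem
    order = np.argsort(lam)[::-1]              # largest lambda <-> lightest state
    return lam[order], vecs[:, order]

# Synthetic 3-operator, 3-state correlator: C_ij(t) = sum_n Z_in Z_jn exp(-E_n t)
Z = np.array([[1.0, 0.3, 0.1],
              [0.6, 0.9, 0.2],
              [0.2, 1.1, 0.8]])                # toy operator-state overlaps
E = np.array([0.5, 0.9, 1.4])                  # toy energies (lattice units)
ts = np.arange(16)
C = np.einsum('in,jn,tn->tij', Z, Z, np.exp(-np.outer(ts, E)))
lam, v = optimized_operator_weights(C, t=5, t0=2)
print(np.log(lam) / -(5 - 2))                  # recovers E = [0.5, 0.9, 1.4]
```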
Factor Analysis of Data Matrices: New Theoretical and Computational Aspects With Applications
The classical fitting problem in exploratory factor analysis (EFA) is to find estimates for the factor loadings matrix and the matrix of unique factor variances which give the best fit to the sample covariance or correlation matrix with respect to some goodness-of-fit criterion. Predicted factor scores can be obtained as a function of these estimates and the data. In this thesis, the EFA model is considered as a specific data matrix decomposition with fixed unknown matrix parameters. Fitting the EFA model directly to the data yields simultaneous solutions for both loadings and factor scores. Several new algorithms are introduced for the least squares and weighted least squares estimation of all EFA model unknowns. The numerical procedures are based on the singular value decomposition, facilitate the estimation of both common and unique factor scores, and work equally well when the number of variables exceeds the number of available observations.
Like EFA, noisy independent component analysis (ICA) is a technique for reduction of the data dimensionality in which the interrelationships among the observed variables are explained in terms of a much smaller number of latent factors. The key difference between EFA and noisy ICA is that in the latter model the common factors are assumed to be both independent and non-normal. In contrast to EFA, there is no rotational indeterminacy in noisy ICA. In this thesis, noisy ICA is viewed as a method of factor rotation in EFA. Starting from an initial EFA solution, an orthogonal rotation matrix is sought that minimizes the dependence between the common factors. The idea of rotating the scores towards independence is also employed in three-mode factor analysis to analyze data sets having a three-way structure.
The new theoretical and computational aspects contained in this thesis are illustrated by means of several examples with real and artificial data.
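As a much-simplified illustration of fitting the EFA model directly to the data matrix with the SVD (without the unique-factor scores and the weighted least-squares extensions developed in the thesis), the sketch below extracts orthonormal common-factor scores, loadings, and residual unique variances from a standardized data matrix. All names and the toy data are illustrative, and the procedure is not the thesis's algorithm.

```python
import numpy as np

def ls_efa(Z, n_factors):
    """Simplified least-squares EFA fit directly to a standardized data matrix
    Z (n x p): a truncated SVD gives common-factor scores F (with F'F/n = I)
    and loadings L so that Z ~ F L'; unique variances come from the residual."""
    n, p = Z.shape
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    U, S, Vt = U[:, :n_factors], S[:n_factors], Vt[:n_factors]
    F = np.sqrt(n) * U                         # common-factor scores
    L = Vt.T * (S / np.sqrt(n))                # factor loadings
    resid = Z - F @ L.T
    psi = (resid ** 2).mean(axis=0)            # unique variances per variable
    return F, L, psi

# Toy example: 2 common factors, 10 variables (the SVD-based fit also handles p > n)
rng = np.random.default_rng(2)
F_true = rng.standard_normal((40, 2))
L_true = rng.standard_normal((10, 2))
Z = F_true @ L_true.T + 0.3 * rng.standard_normal((40, 10))
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
F, L, psi = ls_efa(Z, n_factors=2)
print(np.round(np.linalg.norm(Z - F @ L.T) / np.linalg.norm(Z), 3))
```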
Blind source separation for clutter and noise suppression in ultrasound imaging: review for different applications
Blind source separation (BSS) refers to a number of signal processing techniques that decompose a signal into several 'source' signals. In recent years, BSS has been increasingly employed for the suppression of clutter and noise in ultrasonic imaging. In particular, its ability to separate sources based on measures of independence rather than their temporal or spatial frequency content makes BSS a powerful filtering tool for data in which the desired and undesired signals overlap in the spectral domain. The purpose of this work was to review the existing BSS methods and their potential in ultrasound imaging. Furthermore, we tested and compared the effectiveness of these techniques in the fields of contrast-ultrasound super-resolution, contrast quantification, and speckle tracking. For all applications, this was done in silico, in vitro, and in vivo. We found that the critical step in BSS filtering is the identification of components containing the desired signal, and highlighted the value of a priori domain knowledge for defining effective criteria for signal component selection.
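A generic version of the BSS filtering workflow discussed in the review (decompose slow-time data into source components, select the components carrying the desired signal using domain knowledge, and reconstruct) can be sketched with scikit-learn's FastICA applied to a Casorati matrix, as below. The component indices kept in the toy example are arbitrary placeholders for the a priori selection criteria discussed above, and all names and shapes are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def bss_clutter_filter(casorati, n_sources, keep):
    """Generic BSS filtering sketch for ultrasound slow-time data.
    casorati: (n_pixels, n_frames) real-valued data. The data are decomposed
    into temporal source components; only the components whose indices are in
    `keep` (chosen from domain knowledge) are used in the reconstruction."""
    ica = FastICA(n_components=n_sources, whiten='unit-variance',
                  max_iter=1000, random_state=0)
    sources = ica.fit_transform(casorati.T)      # (n_frames, n_sources) temporal sources
    mixing = ica.mixing_                         # (n_pixels, n_sources) spatial maps
    mask = np.zeros(n_sources, dtype=bool)
    mask[list(keep)] = True
    filtered = mixing[:, mask] @ sources[:, mask].T
    return filtered + ica.mean_[:, None]         # back to (n_pixels, n_frames)

# Toy example: 3000 pixels, 100 frames; the kept indices are arbitrary here,
# whereas in practice selection would use the criteria discussed in the review
rng = np.random.default_rng(3)
frames = 100
clutter = np.outer(rng.standard_normal(3000), np.sin(0.05 * np.arange(frames)))
signal = 0.2 * rng.standard_normal((3000, frames))
filtered = bss_clutter_filter(clutter + signal, n_sources=5, keep=[1, 2, 3, 4])
print(filtered.shape)                            # (3000, 100)
```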