Graph Spectral Image Processing
The recent advent of graph signal processing (GSP) has spurred intensive studies
of signals that live naturally on irregular data kernels described by graphs
(e.g., social networks, wireless sensor networks). Though a digital image
contains pixels that reside on a regularly sampled 2D grid, if one can design
an appropriate underlying graph connecting pixels with weights that reflect the
image structure, then one can interpret the image (or image patch) as a signal
on a graph, and apply GSP tools for processing and analysis of the signal in
graph spectral domain. In this article, we overview recent graph spectral
techniques in GSP developed specifically for image and video processing. The topics
covered include image compression, image restoration, image filtering, and image
segmentation.
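For illustration, here is a minimal sketch of the graph-signal view described above (our illustration, not code from the article): pixels of a small patch become graph nodes, 4-neighbor edges are weighted by intensity similarity, and the Laplacian eigenvectors serve as a graph Fourier basis for low-pass filtering in the graph spectral domain. Patch size, the Gaussian weighting, and the frequency cutoff are arbitrary illustrative choices.

# Sketch: graph spectral low-pass filtering of an image patch.
import numpy as np

def patch_laplacian(patch, sigma=0.1):
    """Combinatorial Laplacian of a 4-connected pixel graph with Gaussian weights."""
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):  # right and down neighbors
                ii, jj = i + di, j + dj
                if ii < h and jj < w:
                    a, b = i * w + j, ii * w + jj
                    wgt = np.exp(-(patch[i, j] - patch[ii, jj]) ** 2 / (2 * sigma ** 2))
                    W[a, b] = W[b, a] = wgt
    return np.diag(W.sum(axis=1)) - W

patch = np.random.rand(8, 8)            # stand-in for an 8x8 image patch
L = patch_laplacian(patch)
lam, U = np.linalg.eigh(L)              # graph Fourier basis (Laplacian eigenvectors)
x_hat = U.T @ patch.ravel()             # graph Fourier transform of the patch
x_hat[lam > lam[len(lam) // 4]] = 0.0   # crude low-pass: keep low graph frequencies
smoothed = (U @ x_hat).reshape(8, 8)    # inverse GFT back to the pixel domain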
Scalable Randomized Kernel Methods for Multiview Data Integration and Prediction
We develop scalable randomized kernel methods for jointly associating data
from multiple sources and simultaneously predicting an outcome or classifying a
unit into one of two or more classes. The proposed methods model nonlinear
relationships in multiview data together with predicting a clinical outcome and
are capable of identifying variables or groups of variables that best
contribute to the relationships among the views. We use the idea that random
Fourier bases can approximate shift-invariant kernel functions to construct
nonlinear mappings of each view and we use these mappings and the outcome
variable to learn view-independent low-dimensional representations. Through
simulation studies, we show that the proposed methods outperform several other
linear and nonlinear methods for multiview data integration. When the proposed
methods were applied to gene expression, metabolomics, proteomics, and
lipidomics data pertaining to COVID-19, we identified several molecular
signatures for COVID-19 status and severity. Results from our real-data
application and from simulations with small sample sizes suggest that the proposed
methods may be useful for small-sample-size problems. Availability: Our
algorithms are implemented in PyTorch, interfaced in R, and will be made
available at https://github.com/lasandrall/RandMVLearn.
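For illustration, a minimal sketch of the random Fourier feature idea the abstract builds on (Rahimi and Recht): for a shift-invariant Gaussian kernel, the inner product of randomized cosine features approximates the kernel value. The dimensions and kernel width below are illustrative choices, and this is not the authors' RandMVLearn code.

# Sketch: random Fourier features approximating a Gaussian (RBF) kernel.
import numpy as np

rng = np.random.default_rng(0)
d, D, s = 20, 2000, 1.0                      # input dim, feature dim, kernel width
W = rng.normal(0.0, 1.0 / s, size=(D, d))    # frequencies ~ spectral density of kernel
b = rng.uniform(0.0, 2 * np.pi, size=D)      # random phases

def z(X):
    """Random Fourier feature map: one nonlinear mapping per view."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = float(z(x[None]) @ z(y[None]).T)    # ~ exp(-||x - y||^2 / (2 s^2))
exact = np.exp(-np.sum((x - y) ** 2) / (2 * s ** 2))
print(approx, exact)                         # the two values should be close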
A multi-class classification model with parametrized target outputs for randomized-based feedforward neural networks
Randomized-based Feedforward Neural Networks approach regression and classification (binary and
multi-class) problems by minimizing the same optimization problem. Specifically, the model parameters are determined through the ridge regression estimator of the patterns projected in the hidden
layer space (randomly generated in its neural network version) for models without direct links and
the patterns projected in the hidden layer space along with the original input data for models with
direct links. The targets for the multi-class classification problem are encoded with
the 1-of-J scheme (where J is the number of classes), which implies that the model
parameters are estimated to project all the patterns of the corresponding class to
one and the remaining patterns to zero. This
approach has several drawbacks, which motivated us to propose an alternative optimization model
for the framework. In the proposed optimization model, model parameters are estimated for each
class so that their patterns are projected to a reference point (also optimized during the process),
whereas the remaining patterns (not belonging to that class) are projected as far away as possible from
the reference point. The resulting problem is cast as a generalized eigenvalue problem.
Four models are then presented: the neural-network version of the algorithm and its
corresponding kernel version, each for models with and without direct links. In addition, the optimization
model has also been implemented in randomization-based multi-layer or deep neural networks. The
empirical results obtained by the proposed models were compared to those reported by
state-of-the-art models in terms of correct classification rate and a separability index
(which measures, per class, how well the projected patterns of that class are separated
from those of the other classes).
The proposed methods show very competitive performance in the separability index and prediction
accuracy compared to the neural-network versions of the comparison methods (with and
without direct links). Remarkably, the model provides significantly superior performance
in deep models with direct links compared to its deep-model counterpart.
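The abstract does not spell out the matrices involved, but the per-class objective it describes (project class patterns close to a reference point, the remaining patterns far from it) has the shape of a ratio of quadratic forms, which is how a generalized eigenvalue problem typically arises. A hedged sketch of that shape, with illustrative stand-in scatter matrices rather than the paper's exact formulation:

# Sketch: one per-class direction from a ratio of quadratic forms.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
H = rng.normal(size=(200, 50))          # patterns projected in the hidden-layer space
y = rng.integers(0, 3, size=200)        # 3-class labels
c = 0                                   # class currently being fitted

Hc, Ho = H[y == c], H[y != c]
mu = Hc.mean(axis=0)                    # stand-in for the (optimized) reference point
A = (Ho - mu).T @ (Ho - mu)             # spread of other-class patterns around it
B = (Hc - mu).T @ (Hc - mu) + 1e-6 * np.eye(50)  # class spread, ridge-regularized

# Maximize w^T A w / w^T B w  ->  generalized eigenproblem A w = lambda B w.
vals, vecs = eigh(A, B)
w = vecs[:, -1]                         # projection direction for class c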
Cross-Modality 2D-3D Face Recognition via Multiview Smooth Discriminant Analysis Based on ELM
In recent years, 3D face recognition has attracted increasing attention from researchers worldwide. Beyond homogeneous face data, more and more applications now require flexible input face data. In this paper, we propose a new approach for cross-modality 2D-3D face recognition (FR), called Multiview Smooth Discriminant Analysis (MSDA) based on Extreme Learning Machines (ELM). By adding a Laplacian penalty constraint to the multiview feature learning, MSDA is first applied to extract cross-modality 2D-3D face features. MSDA aims at finding a common discriminative feature space through multiview learning, allowing it to fully exploit the underlying relationships among features from different views. To speed up the learning phase of the classifier, the popular Extreme Learning Machine (ELM) algorithm is adopted to train single-hidden-layer feedforward neural networks (SLFNs). To evaluate the effectiveness of the proposed FR framework, experimental results on a benchmark face recognition dataset are presented. Simulations show that the proposed method generally outperforms several recent approaches while maintaining a fast training speed.
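The ELM training step mentioned above has a standard closed form: the input weights of the SLFN are drawn at random and only the output weights are solved by ridge-regularized least squares. A minimal sketch of that standard recipe (illustrative sizes, not the paper's full MSDA pipeline):

# Sketch: ELM training of a single-hidden-layer feedforward network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                   # training features (e.g., MSDA outputs)
T = np.eye(5)[rng.integers(0, 5, size=500)]      # 1-of-J targets for 5 classes

n_hidden, ridge = 256, 1e-3
Win = rng.normal(size=(64, n_hidden))            # random input weights (never trained)
bias = rng.normal(size=n_hidden)
H = np.tanh(X @ Win + bias)                      # hidden-layer activations

# Output weights: closed-form ridge solution beta = (H^T H + cI)^(-1) H^T T.
beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
pred = np.argmax(np.tanh(X @ Win + bias) @ beta, axis=1)  # class = argmax of outputs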
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
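For illustration, the tensor train (TT) format emphasized above factorizes an order-N tensor into a chain of 3-way cores; the standard TT-SVD construction does this with sequential truncated SVDs. A minimal sketch (our illustration, with an arbitrary rank cap):

# Sketch: TT-SVD of an order-4 tensor into a train of 3-way cores.
import numpy as np

def tt_svd(T, max_rank):
    """Return TT cores G_k of shape (r_{k-1}, n_k, r_k) approximating T."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))                      # truncate to the rank cap
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

T = np.random.rand(4, 5, 6, 7)
cores = tt_svd(T, max_rank=3)
print([G.shape for G in cores])   # [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 7, 1)]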