Support matrix machine: A review
Support vector machine (SVM) is one of the most studied paradigms in the
realm of machine learning for classification and regression problems. It relies
on vectorized input data. However, a significant portion of the real-world data
exists in matrix format, which is given as input to SVM by reshaping the
matrices into vectors. The process of reshaping disrupts the spatial
correlations inherent in the matrix data. Also, converting matrices into
vectors results in input data with a high dimensionality, which introduces
significant computational complexity. To overcome these issues in classifying
matrix input data, the support matrix machine (SMM) has been proposed. It represents one
of the emerging methodologies tailored for handling matrix input data. The SMM
method preserves the structural information of the matrix data by using the
spectral elastic net property, which is a combination of the nuclear norm and
Frobenius norm. This article provides the first in-depth analysis of the
development of the SMM model, which can be used as a thorough summary by both
novices and experts. We discuss numerous SMM variants, such as robust, sparse,
class imbalance, and multi-class classification models. We also analyze the
applications of the SMM model and conclude the article by outlining potential
future research avenues and possibilities that may motivate academics to
advance the SMM algorithm.
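The spectral elastic net penalty described above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical trade-off parameters `tau` and `lam`; the exact weighting used by specific SMM formulations varies:

```python
import numpy as np

def spectral_elastic_net(W, tau=1.0, lam=1.0):
    """Spectral elastic net penalty of a matrix W: a weighted
    combination of the nuclear norm (sum of singular values) and the
    squared Frobenius norm. tau and lam are assumed trade-off
    parameters, not values prescribed by the SMM literature."""
    nuclear = np.linalg.svd(W, compute_uv=False).sum()
    frobenius_sq = np.linalg.norm(W, "fro") ** 2
    return tau * nuclear + 0.5 * lam * frobenius_sq

W = np.array([[3.0, 0.0],
              [0.0, 4.0]])
# Singular values are 4 and 3, so the nuclear norm is 7;
# ||W||_F^2 = 9 + 16 = 25, so the penalty is 7 + 12.5 = 19.5.
print(spectral_elastic_net(W))  # 19.5
```

The nuclear norm encourages low-rank (structure-preserving) solutions, while the Frobenius term keeps the problem strongly convex, which is why the combination is attractive for matrix-input classifiers.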
Solution Path Algorithm for Twin Multi-class Support Vector Machine
The twin support vector machine and its extensions have achieved great
success on binary classification problems; however, they still face
difficulties such as model selection and solving multi-class
classification problems quickly. This paper presents a fast
regularization parameter tuning algorithm for the twin multi-class support
vector machine. A new sample dataset division method is adopted and the
Lagrangian multipliers are proved to be piecewise linear with respect to the
regularization parameters by combining the linear equations and block matrix
theory. Eight kinds of events are defined to identify the starting event,
and a solution path algorithm is then designed that greatly reduces the
computational cost. In addition, only a few points are needed to complete
the initialization, and the Lagrangian multipliers are proved to equal 1 as the
regularization parameter tends to infinity. Simulation results based on UCI
datasets show that the proposed method can achieve good classification
performance while reducing the computational cost of the grid search method
from an exponential to a constant level.
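The piecewise-linearity property that a solution path algorithm exploits can be illustrated on a toy problem. The sketch below uses a hypothetical one-dimensional lasso-like problem (not the twin multi-class SVM itself): its solution is piecewise linear in the regularization parameter, so the whole path is determined by a few breakpoints rather than by re-solving at every grid value:

```python
import numpy as np

def soft_threshold_path(x, lams):
    """Solution of the toy problem min_b 0.5*(b - x)^2 + lam*|b|
    for each regularization value lam. The solution
    b(lam) = sign(x) * max(|x| - lam, 0) is piecewise linear in lam,
    with a single breakpoint at lam = |x|."""
    return np.sign(x) * np.maximum(np.abs(x) - lams, 0.0)

lams = np.linspace(0.0, 3.0, 7)   # 0.0, 0.5, ..., 3.0
path = soft_threshold_path(2.0, lams)
print(path)  # decreases linearly until lam >= 2, then stays at 0
```

Because the path is piecewise linear, tracing it only requires locating the events (breakpoints) where the active structure changes, which is the idea behind the constant-level cost claimed for the proposed algorithm.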
A Survey on Feature Selection Algorithms
One major component of machine learning is feature analysis, which comprises two main processes: feature selection and feature extraction. Owing to its applications in several areas, including data mining, soft computing, and big data analysis, feature selection has gained considerable importance. This paper presents the introductory concepts of feature selection and its various inherent approaches. It surveys historic developments in feature selection with supervised and unsupervised methods, and also summarizes recent state-of-the-art feature selection algorithms, including their hybridizations.
DOI: 10.17762/ijritcc2321-8169.16043
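As a concrete instance of the supervised filter-style methods such a survey covers, the sketch below ranks features by absolute Pearson correlation with the target and keeps the top k. The data, scoring rule, and function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def select_top_k(X, y, k):
    """Filter-style feature selection (assumed example): score each
    column of X by its absolute Pearson correlation with y and return
    the indices of the k highest-scoring features."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
y = rng.normal(size=200)
noise = rng.normal(size=(200, 3))
X = np.column_stack([y + 0.1 * noise[:, 0],    # strongly relevant
                     noise[:, 1],              # irrelevant
                     -y + 0.5 * noise[:, 2]])  # moderately relevant
print(select_top_k(X, y, 2))  # expected: feature 0 first, then 2
```

Filter methods like this score features independently of any learner; wrapper and embedded methods, also discussed in such surveys, instead evaluate subsets through or inside a specific model.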