Sparse Linear Models applied to Power Quality Disturbance Classification
Power quality (PQ) analysis concerns the distorted, non-ideal electric signals that are
usually present in electric power systems. The automatic recognition of PQ
disturbances can be seen as a pattern recognition problem, in which different
types of waveform distortion are differentiated based on their features.
Similar to other quasi-stationary signals, PQ disturbances can be decomposed
into time-frequency dependent components by using time-frequency or time-scale
transforms, also known as dictionaries. These dictionaries are used in the
feature extraction step in pattern recognition systems. Short-time Fourier,
Wavelets and Stockwell transforms are some of the most common dictionaries used
in the PQ community, aiming to achieve a better signal representation. To the
best of our knowledge, previous work on PQ disturbance classification has been
restricted to the use of a single dictionary. Taking
advantage of the theory behind sparse linear models (SLM), we introduce a
sparse method for PQ representation, starting from overcomplete dictionaries.
In particular, we apply Group Lasso. We employ different types of
time-frequency (or time-scale) dictionaries to characterize the PQ
disturbances, and evaluate their performance under different pattern
recognition algorithms. We show that SLMs reduce the complexity of PQ
classification by promoting sparse basis selection while improving
classification accuracy.
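The sparse basis-selection effect can be sketched with a minimal proximal-gradient (ISTA) solver for the Group Lasso, where each group would correspond to the atoms of one time-frequency dictionary. This is an illustrative least-squares sketch, not the authors' classifier; all function names are assumptions.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: shrink a whole coefficient group toward zero,
    zeroing it out entirely when its norm falls below the threshold t."""
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v

def group_lasso(X, y, groups, lam=0.1, n_iter=500):
    """Proximal gradient (ISTA) for least squares with the Group Lasso
    penalty lam * sum_g sqrt(|g|) * ||w_g||_2.  Each entry of `groups` is
    an index array selecting the atoms of one dictionary."""
    n, d = X.shape
    w = np.zeros(d)
    lr = n / np.linalg.norm(X, 2) ** 2  # 1/L for the averaged squared loss
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        for g in groups:
            w[g] = group_soft_threshold(w[g], lr * lam * np.sqrt(len(g)))
    return w
```

With a suitable penalty weight, entire dictionaries whose atoms do not help explain the signal receive exactly zero weight, which is the basis-selection behaviour the abstract describes.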
Sparse Bilinear Logistic Regression
In this paper, we introduce the concept of sparse bilinear logistic
regression for decision problems involving explanatory variables that are
two-dimensional matrices. Such problems are common in computer vision,
brain-computer interfaces, style/content factorization, and parallel factor
analysis. The underlying optimization problem is bi-convex; we study its
solution and develop an efficient algorithm based on block coordinate descent.
We provide a theoretical guarantee of global convergence and estimate the
asymptotic convergence rate using the Kurdyka-{\L}ojasiewicz inequality. A
range of experiments with simulated and real data demonstrates that sparse
bilinear logistic regression outperforms current techniques in several
important applications. Comment: 27 pages, 5 figures
A sparse multinomial probit model for classification
A recent development in penalized probit modelling using a hierarchical Bayesian approach has led to a sparse binomial (two-class) probit classifier that can be trained via an EM algorithm. A key advantage of the formulation is that no tuning of hyperparameters relating to the penalty is needed, which simplifies the model selection process. The resulting model demonstrates excellent classification performance and a high degree of sparsity when used as a kernel machine. It is, however, restricted to the binary classification problem and can only be used in the multinomial situation via a one-against-all or one-against-many strategy. To overcome this, we apply the idea to the multinomial probit model. This leads to a direct multi-class approach and is shown to give a sparse solution with accuracy and sparsity comparable with the current state of the art. Comparative numerical benchmark examples are used to demonstrate the method.
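For intuition about the model itself: under the multinomial probit, class k is chosen when its latent utility w_k'x + eps_k, with independent Gaussian noise, exceeds every other class's utility. This probability has no closed form for more than two classes, so it is commonly approximated by sampling. A minimal Monte Carlo sketch follows (illustrative only; it is not the paper's EM training procedure):

```python
import numpy as np

def multinomial_probit_proba(W, x, n_draws=50000, rng=None):
    """Monte Carlo estimate of multinomial probit class probabilities:
    class k wins when its latent utility w_k'x + eps_k, eps ~ N(0, I),
    exceeds all other utilities."""
    rng = np.random.default_rng(rng)
    utilities = W @ x                             # (K,) deterministic part
    eps = rng.standard_normal((n_draws, len(W)))  # one noise draw per row
    winners = np.argmax(utilities + eps, axis=1)
    return np.bincount(winners, minlength=len(W)) / n_draws
```

With two classes this reduces to the familiar binomial probit, P(class 1) = Phi((w_1 - w_0)'x / sqrt(2)), which gives a quick sanity check on the estimator.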
Sparse multinomial kernel discriminant analysis (sMKDA)
Dimensionality reduction via canonical variate analysis (CVA) is important for pattern recognition and has been extended in various ways to permit more flexibility, e.g. by "kernelizing" the formulation. This can lead to over-fitting, usually ameliorated by regularization. Here, a method for sparse multinomial kernel discriminant analysis (sMKDA) is proposed, using a sparse basis to control complexity. It is based on the connection between CVA and least-squares, and uses forward selection via orthogonal least-squares to approximate a basis, generalizing a similar approach for binomial problems. Classification can be performed directly via minimum Mahalanobis distance in the canonical variates. sMKDA achieves state-of-the-art performance in terms of accuracy and sparseness on 11 benchmark datasets.
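The forward-selection step can be sketched with a plain orthogonal least-squares routine: each candidate (e.g. kernel) column is orthogonalized against the columns already chosen, and the one giving the largest drop in the least-squares residual against the targets is added. Names and the exact stopping rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ols_forward_select(K, Y, n_select):
    """Greedy forward selection of basis columns by orthogonal least squares.

    K: (n, m) candidate basis matrix; Y: (n, c) targets such as
    label-indicator columns; returns the selected column indices."""
    selected, Q = [], []
    R = Y.astype(float)                    # current residual
    for _ in range(n_select):
        best, best_gain, best_q = -1, -np.inf, None
        for j in range(K.shape[1]):
            if j in selected:
                continue
            q = K[:, j].astype(float)
            for qi in Q:                   # Gram-Schmidt vs chosen basis
                q = q - (qi @ q) * qi
            nq = np.linalg.norm(q)
            if nq < 1e-10:                 # column already spanned, skip it
                continue
            q = q / nq
            gain = np.sum((q @ R) ** 2)    # residual reduction if q is added
            if gain > best_gain:
                best, best_gain, best_q = j, gain, q
        if best < 0:
            break
        selected.append(best)
        Q.append(best_q)
        R = R - np.outer(best_q, best_q @ R)
    return selected
```

Because each pick maximizes the residual reduction, a target lying exactly in the span of a few candidate columns is recovered by selecting just those columns.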
Interpretable Low-Rank Document Representations with Label-Dependent Sparsity Patterns
In document classification, where the label tags of the documents in a corpus
are readily known, an opportunity lies in utilizing the label information to
learn document representation spaces with better discriminative properties. To
this end, this paper proposes applying Variational Bayesian Supervised
Nonnegative Matrix Factorization (supervised vbNMF) with a label-driven
sparsity structure of the coefficients to learn discriminative, nonsubtractive
latent semantic components occurring in TF-IDF document representations. The
constraints force the pursued components to occur frequently within only a
small set of labels, making it possible to yield document representations with
distinctive label-specific sparse activation patterns. A simple measure of the
quality of this kind of sparsity structure, dubbed inter-label sparsity, is
introduced and experimentally brought into tight connection with classification
performance. Representing a great practical convenience, inter-label sparsity
is shown to be easily controlled in supervised vbNMF by a single parameter.
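The idea of label-specific activation patterns can be made concrete with a small measure over learned coefficients. The Hoyer-style definition below is an illustrative proxy only; the paper's own inter-label sparsity measure may be defined differently.

```python
import numpy as np

def inter_label_sparsity(H, labels):
    """Hoyer-style sparsity of each component's mean activation across
    labels: a component active under only a few labels scores near 1,
    one spread evenly over all labels scores near 0.

    H: (k, n) nonnegative coefficients for n documents;
    labels: (n,) integer labels covering at least two classes."""
    classes = np.unique(labels)
    L = len(classes)
    # (k, L) mean activation of every component under every label
    M = np.stack([H[:, labels == c].mean(axis=1) for c in classes], axis=1)
    l1 = M.sum(axis=1)                   # H is nonnegative, so l1 = sum
    l2 = np.linalg.norm(M, axis=1)
    l2 = np.where(l2 == 0.0, 1.0, l2)    # guard all-zero components
    return (np.sqrt(L) - l1 / l2) / (np.sqrt(L) - 1.0)
```

A component that fires only for one label's documents scores 1, while a component shared equally by all labels scores 0, matching the qualitative behaviour the abstract attributes to inter-label sparsity.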