A Novel Progressive Multi-label Classifier for Class-incremental Data
In this paper, a progressive learning algorithm for multi-label
classification is designed to learn new labels while retaining the knowledge
of previous labels. New output neurons corresponding to new labels are added,
and the neural network connections and parameters are automatically
restructured as if the label had been introduced from the beginning. This work
is the first of its kind in multi-label classification for class-incremental
learning. It is useful for real-world applications such as robotics, where
streaming data are available and the number of labels is often unknown. Based
on the Extreme Learning Machine framework, a novel universal classifier with
plug-and-play capabilities for progressive multi-label classification is
developed. Experimental results on various benchmark synthetic and real
datasets validate the efficiency and effectiveness of the proposed algorithm.
Comment: 5 pages, 3 figures, 4 tables
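The abstract does not give the update equations, but under the standard Extreme Learning Machine formulation (random hidden layer, least-squares output weights), the idea of adding an output neuron for a newly arrived label can be sketched roughly as follows. All data here are hypothetical toy values; only the shapes matter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 100 samples, 5 features, 2 initial labels.
X = rng.normal(size=(100, 5))
Y = (rng.random(size=(100, 2)) > 0.5).astype(float)

# Extreme Learning Machine: random hidden layer, least-squares output weights.
n_hidden = 32
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)            # hidden-layer activations (fixed, random)
beta = np.linalg.pinv(H) @ Y      # output weights, one column per label

# A new label arrives: add an output neuron by appending a column to beta,
# solved against the same hidden representation -- so the new label behaves
# as if it had been present from the beginning.
y_new = (rng.random(size=(100, 1)) > 0.5).astype(float)
beta = np.hstack([beta, np.linalg.pinv(H) @ y_new])

pred = H @ beta                   # now scores all three labels
print(beta.shape)                 # (32, 3)
print(pred.shape)                 # (100, 3)
```

Because only the output layer is retrained, the hidden representation and the weights of existing labels are untouched when a label is added.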
Face Recognition Under Varying Illumination
This study is the result of a successful joint venture with my adviser, Prof. Dr. Muhittin Gökmen. I am thankful to him for his continuous assistance in preparing this project. Special thanks to the assistants of the Computer Vision Laboratory for their steady support and help in many topics related to the project
Population structure-learned classifier for high-dimension low-sample-size class-imbalanced problem
Classification of high-dimension low-sample-size (HDLSS) data is a
challenging problem, and class-imbalanced data are common in most application
fields. We term this setting Imbalanced HDLSS (IHDLSS). Recent theoretical
results reveal that the classification criterion and tolerance similarity are
crucial to HDLSS, which emphasizes the maximization of within-class variance
on the premise of class separability. Based on this idea, a novel linear
binary classifier, termed the Population Structure-learned Classifier (PSC),
is proposed. The proposed PSC obtains better generalization performance on
IHDLSS by maximizing the sum of the inter-class and intra-class scatter
matrices on the premise of class separability and by assigning different
intercept values to the majority and minority classes. The salient features of
the proposed approach are: (1) it works well on IHDLSS; (2) the inverse of the
high-dimensional matrix can be solved in a low-dimensional space; (3) it is
self-adaptive in determining the intercept term for each class; (4) it has the
same computational complexity as the SVM. A series of evaluations is conducted
on one simulated data set and eight real-world benchmark gene-analysis data
sets for IHDLSS. Experimental results demonstrate that PSC is superior to
state-of-the-art methods on IHDLSS.
Comment: 41 pages, 10 figures, 10 tables
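Feature (2), solving the inverse of a high-dimensional matrix in a low-dimensional space, rests on a standard linear-algebra fact that is easy to illustrate: for centered data X (n samples, d features, d >> n), the nonzero eigenvalues of the d x d scatter matrix equal those of the small n x n Gram matrix, so the spectral problem never needs to be solved in d dimensions. The data below are hypothetical; the PSC paper's exact construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# HDLSS setting: many more features (d) than samples (n).
n, d = 20, 500
X = rng.normal(size=(n, d))
Xc = X - X.mean(axis=0)            # center the data

# The d x d scatter matrix Xc.T @ Xc has rank at most n-1, and its nonzero
# eigenvalues coincide with those of the n x n Gram matrix Xc @ Xc.T,
# so the eigen-problem can be solved entirely in the low-dimensional space.
gram = Xc @ Xc.T                   # n x n
scatter = Xc.T @ Xc                # d x d (formed here only for comparison)

top_small = np.sort(np.linalg.eigvalsh(gram))[::-1][: n - 1]
top_big = np.sort(np.linalg.eigvalsh(scatter))[::-1][: n - 1]
print(np.allclose(top_small, top_big))   # True
```

The same duality underlies kernel-style tricks used by many HDLSS methods: any quantity expressible through the scatter matrix's nonzero spectrum costs O(n^3) rather than O(d^3).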
The classification for High-dimension low-sample size data
A huge number of applications in various fields, such as gene expression
analysis and computer vision, involve high-dimension low-sample-size (HDLSS)
data sets, which have posed great challenges for standard statistical and
modern machine learning methods. In this paper, we propose a novel
classification criterion for HDLSS, tolerance similarity, which emphasizes the
maximization of within-class variance on the premise of class separability.
According to this criterion, a novel linear binary classifier, the
No-separated Data Maximum Dispersion classifier (NPDMD), is designed. The
objective of NPDMD is to find a projecting direction w along which all
training samples scatter over as large an interval as possible. NPDMD has
several characteristics compared with state-of-the-art classification methods.
First, it works well on HDLSS. Second, it combines sample statistical
information and local structural information (supporting vectors) in the
objective function to find the projecting direction in the whole feature
space. Third, it solves the inverse of the high-dimensional matrix in a
low-dimensional space. Fourth, it is relatively simple to implement, being
based on Quadratic Programming. Fifth, it is robust to model specification in
various real applications. The theoretical properties of NPDMD are deduced. We
conduct a series of evaluations on one simulated and six real-world benchmark
data sets, including face classification and mRNA classification. NPDMD
outperforms widely used approaches in most cases, or at least obtains
comparable results.
Comment: arXiv admin note: text overlap with arXiv:1901.0137
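The stated objective, finding a direction w along which all training samples spread over as large an interval as possible while the classes remain separated, can be illustrated with a crude random search over candidate directions on hypothetical 2-D toy data. The paper solves this with Quadratic Programming; this sketch only demonstrates the criterion itself, not NPDMD's actual optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two linearly separable Gaussian classes in 2-D (hypothetical toy data),
# separated along the first axis.
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(30, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(30, 2))

def spread_if_separable(w):
    """Projected interval length if the classes stay separated, else -inf."""
    w = w / np.linalg.norm(w)
    p0, p1 = X0 @ w, X1 @ w
    separable = p0.max() < p1.min() or p1.max() < p0.min()
    if not separable:
        return -np.inf
    p = np.concatenate([p0, p1])
    return p.max() - p.min()        # overall dispersion along w

# Crude random search over directions (NPDMD itself uses a QP formulation).
candidates = rng.normal(size=(2000, 2))
scores = np.array([spread_if_separable(w) for w in candidates])
best = candidates[scores.argmax()]
best = best / np.linalg.norm(best)
print(best, scores.max())
```

The winning direction is dominated by the axis that both separates the classes and carries the largest overall variance, which is exactly the trade-off the tolerance-similarity criterion formalizes.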