Robust Sparse Learning Based on Kernel Non-Second Order Minimization
© 2019 IEEE. Partial occlusions in face images pose a serious problem for most face recognition algorithms, because these algorithms typically minimize a second-order loss function, e.g., the mean square error (MSE), which magnifies the effect of the occluded parts. In this paper, we propose a kernel non-second-order loss function for sparse representation (KNS-SR) to recognize or restore partially occluded facial images, which takes advantage of both correntropy and non-second-order statistical measurement. The resulting framework is more accurate than MSE-based ones in locating and eliminating outlier information. Experimental results from image reconstruction and recognition tasks on publicly available databases show that the proposed method achieves better performance than existing methods.
Broad Learning System Based on Maximum Correntropy Criterion
As an effective and efficient discriminative learning method, Broad Learning
System (BLS) has received increasing attention due to its outstanding
performance in various regression and classification problems. However, the
standard BLS is derived under the minimum mean square error (MMSE) criterion,
which is, of course, not always a good choice due to its sensitivity to
outliers. To enhance the robustness of BLS, we propose in this work to adopt
the maximum correntropy criterion (MCC) to train the output weights, obtaining
a correntropy based broad learning system (C-BLS). Thanks to the inherent
superiorities of MCC, the proposed C-BLS is expected to achieve excellent
robustness to outliers while maintaining the original performance of the
standard BLS in Gaussian or noise-free environments. In addition, three
alternative incremental learning algorithms, derived from a weighted
regularized least-squares solution rather than pseudoinverse formula, for C-BLS
are developed. With these incremental learning algorithms, the system can be
updated quickly, without retraining from the beginning, when new samples
arrive or the network needs to be expanded. Experiments on
various regression and classification datasets are reported to demonstrate the
desirable performance of the new methods.
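Training output weights under MCC is commonly done with a fixed-point (half-quadratic) iteration: each sample receives a Gaussian weight that shrinks as its error grows, and the weighted regularized least-squares problem is re-solved. The sketch below illustrates that generic scheme only; the data, the bandwidth `sigma`, the regularizer `lam`, and the iteration count are all illustrative assumptions, and the feature matrix `H` merely stands in for BLS feature/enhancement-node outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(100, 10))        # stand-in for BLS node outputs
w_true = rng.normal(size=10)
y = H @ w_true + 0.05 * rng.normal(size=100)
y[:5] += 20.0                         # inject gross outliers into 5 targets

def mcc_weights(H, y, sigma=5.0, lam=1e-3, iters=20):
    # Iteratively reweighted regularized least squares: a common fixed-point
    # scheme for maximizing correntropy between predictions and targets.
    w = np.zeros(H.shape[1])
    for _ in range(iters):
        e = y - H @ w
        d = np.exp(-e ** 2 / (2 * sigma ** 2))  # per-sample correntropy weights
        A = H.T @ (d[:, None] * H) + lam * np.eye(H.shape[1])
        w = np.linalg.solve(A, H.T @ (d * y))
    return w

w_mcc = mcc_weights(H, y)
# Plain MMSE/ridge solution for comparison: affected by the outliers.
w_ls = np.linalg.solve(H.T @ H + 1e-3 * np.eye(10), H.T @ y)
print(np.linalg.norm(w_mcc - w_true), np.linalg.norm(w_ls - w_true))
```

The outlier samples end up with near-zero weights `d`, so the MCC solution recovers the clean regression weights while the MMSE solution is pulled toward the corrupted targets.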