On the Learning Property of Logistic and Softmax Losses for Deep Neural Networks
Deep convolutional neural networks (CNNs) trained with logistic and softmax
losses have made significant advancement in visual recognition tasks in
computer vision. When training data exhibit class imbalance, class-wise
reweighted versions of the logistic and softmax losses are often used to boost
the performance of the unweighted versions. In this paper, motivated to explain the
reweighting mechanism, we explicate the learning properties of these two loss
functions by analyzing the necessary condition (i.e., the gradient equals zero)
after training CNNs to converge to a local minimum. The analysis immediately
provides explanations for understanding (1) the quantitative effects of the
class-wise reweighting mechanism: deterministic effectiveness for binary
classification using logistic loss yet indeterministic for multi-class
classification using softmax loss; (2) disadvantage of logistic loss for
single-label multi-class classification via one-vs.-all approach, which is due
to the averaging effect on predicted probabilities for the negative class
(i.e., non-target classes) in the learning process. With the disadvantage and
advantage of logistic loss disentangled, we thereafter propose a novel
reweighted logistic loss for multi-class classification. Our simple yet
effective formulation improves ordinary logistic loss by focusing on learning
hard non-target classes (target vs. non-target class in one-vs.-all) and turns
out to be competitive with softmax loss. We evaluate our method on several
benchmark datasets to demonstrate its effectiveness.
Comment: AAAI2020. Previously this appeared as arXiv:1906.04026v2, which was submitted as a replacement by accident.
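The class-wise reweighting the abstract analyzes can be illustrated with a minimal sketch. The paper's exact reweighted formulation for the multi-class case is not given here, so the function below shows only the standard per-class weighted binary logistic loss; the weight names `pos_weight` and `neg_weight` are illustrative, not from the paper.

```python
import numpy as np

def reweighted_logistic_loss(logits, labels, pos_weight=1.0, neg_weight=1.0):
    """Class-wise reweighted logistic (sigmoid cross-entropy) loss.

    logits: raw scores, shape (n,); labels: 0/1 array, shape (n,).
    pos_weight / neg_weight scale the positive- and negative-class terms,
    which is the reweighting mechanism the abstract analyzes.
    """
    p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid probabilities
    eps = 1e-12                        # numerical floor to avoid log(0)
    loss = -(pos_weight * labels * np.log(p + eps)
             + neg_weight * (1 - labels) * np.log(1 - p + eps))
    return loss.mean()

# With unit weights this reduces to the ordinary (unweighted) logistic loss.
```

Raising `pos_weight` above 1 upweights the minority (positive) class, the usual remedy for class imbalance in the binary case where, per the abstract, the effect of reweighting is deterministic.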
Neural Network Used for the Fusion of Predictions Obtained by the K-Nearest Neighbors Algorithm Based on Independent Data Sources
The article concerns the problem of classification based on independent data sets (local decision tables). The aim of the paper is to propose a classification model for dispersed data using a modified k-nearest neighbors algorithm and a neural network. A neural network, more specifically a multilayer perceptron, is used to combine the prediction results obtained from the local tables. The prediction results are stored at the measurement level and generated using a modified k-nearest neighbors algorithm; the task of the neural network is to combine these results into a common prediction. In the article, various structures of neural networks (different numbers of neurons in the hidden layer) are studied, and the results are compared with those generated by other fusion methods: majority voting, the Borda count method, the sum rule, a method based on decision templates, and a method based on the theory of evidence. Based on the obtained results, it was found that the neural network always generates unambiguous decisions, which is a great advantage, as most of the other fusion methods produce ties. Moreover, when only unambiguous results are considered, the neural network gives much better results than the other fusion methods. If ambiguity is allowed, some fusion methods are slightly better, but this follows from the fact that they can return several candidate decisions for a test object.
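The fusion step the abstract describes can be sketched as follows: each local table's kNN produces a class-probability vector, the vectors are concatenated, and a multilayer perceptron maps them to one class. This is a minimal illustration only; the network sizes are arbitrary and the weights are random stand-ins, whereas in the paper they would be trained.

```python
import numpy as np

def mlp_fuse(local_preds, W1, b1, W2, b2):
    """Fuse per-table kNN prediction vectors with a one-hidden-layer MLP.

    local_preds: concatenated class-probability vectors from the local
    tables, shape (n_tables * n_classes,). Returns a single class index,
    so the fused decision is never a tie.
    """
    h = np.maximum(0.0, local_preds @ W1 + b1)  # hidden layer (ReLU)
    scores = h @ W2 + b2                        # one score per class
    return int(np.argmax(scores))               # unambiguous decision

# Toy illustration: 3 local tables, 4 classes, 8 hidden neurons.
rng = np.random.default_rng(0)
n_tables, n_classes, n_hidden = 3, 4, 8
W1 = rng.standard_normal((n_tables * n_classes, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_classes))
b2 = np.zeros(n_classes)

local_preds = rng.random(n_tables * n_classes)  # stand-in kNN outputs
decision = mlp_fuse(local_preds, W1, b1, W2, b2)
```

Because the final decision is the argmax of a real-valued score vector, the fused output is always a single class, which is the "unambiguous decisions" property the abstract highlights over voting-style fusion methods.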