Controlling Class Layout for Deep Ordinal Classification via Constrained Proxies Learning
For deep ordinal classification, learning a well-structured feature space
specific to ordinal classification is helpful to properly capture the ordinal
nature among classes. Intuitively, when the Euclidean distance metric is used, an
ideal ordinal layout in feature space is one in which the sample clusters are
arranged in class order along a straight line. However, enforcing
samples to conform to a specific layout in the feature space is a challenging
problem. To address this problem, in this paper, we propose a novel Constrained
Proxies Learning (CPL) method, which learns a proxy for each ordinal class
and then adjusts the global layout of classes by constraining these proxies.
Specifically, we propose two kinds of strategies: hard layout constraint and
soft layout constraint. The hard layout constraint is realized by directly
controlling the generation of proxies to force them to be placed in a strict
linear layout or semicircular layout (i.e., two instantiations of strict
ordinal layout). The soft layout constraint is realized by requiring that the
proxy layout always produce a unimodal proxy-to-proxies similarity distribution
for each proxy (i.e., a relaxed ordinal layout). Experiments
show that the proposed CPL method outperforms previous deep ordinal
classification methods under the same setting of feature extractor.Comment: Accepted by AAAI 202
Neural network for ordinal classification of imbalanced data by minimizing a Bayesian cost
Ordinal classification of imbalanced data is a challenging problem that appears in many real-world applications. The challenge is to simultaneously account for the order of the classes and the class imbalance, which can notably improve the performance metrics. The Bayesian formulation makes it possible to deal with these two characteristics jointly: it takes into account the prior probability of each class and the decision costs, which can be used to encode the imbalance and the ordinal information, respectively. We propose to use the Bayesian formulation to train neural networks, which have shown excellent results in many classification tasks. A loss function is proposed to train networks with a single neuron in the output layer and a threshold-based decision rule. The loss is an estimate of the Bayesian classification cost, based on the Parzen window estimator and adapted to the thresholded decision. Experiments with several real datasets show that the proposed method provides competitive results in different scenarios, owing to its flexibility in specifying the relative importance of errors made on patterns of different classes, taking the class order into account and independently of the probability of each class.

This work was partially supported by the Spanish Ministry of Science and Innovation through the Thematic Network "MAPAS" (TIN2017-90567-REDT) and by the BBVA Foundation through the "2-BARBAS" research grant. Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2023).
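For a rough picture of the training objective the abstract describes (a single output neuron, a thresholded decision rule, and a Parzen-smoothed estimate of the Bayesian cost), the following PyTorch sketch smooths the hard threshold comparisons with a Gaussian kernel and weights each possible decision by a user-supplied cost matrix. The Gaussian smoothing, the cost-matrix convention, and all names are assumptions for illustration, not the paper's exact loss.

```python
# Hedged sketch; hypothetical names and formulation, not the paper's exact loss.
from typing import Optional
import torch

def soft_decision_probs(z: torch.Tensor, thresholds: torch.Tensor, h: float = 0.1) -> torch.Tensor:
    """Smooth the thresholded decision rule: the 'probability' of deciding class k for output z
    is the mass of a Gaussian kernel of bandwidth h centred at z that falls between
    thresholds k-1 and k (with -inf / +inf at the extremes)."""
    normal = torch.distributions.Normal(0.0, 1.0)
    cdf = normal.cdf((thresholds.unsqueeze(0) - z.unsqueeze(1)) / h)   # (N, K-1)
    lo = torch.zeros_like(z).unsqueeze(1)
    hi = torch.ones_like(z).unsqueeze(1)
    cdf = torch.cat([lo, cdf, hi], dim=1)                              # (N, K+1)
    return cdf[:, 1:] - cdf[:, :-1]                                    # (N, K)

def bayesian_cost_loss(z: torch.Tensor, labels: torch.Tensor, thresholds: torch.Tensor,
                       cost_matrix: torch.Tensor, class_weights: Optional[torch.Tensor] = None,
                       h: float = 0.1) -> torch.Tensor:
    """Differentiable estimate of the expected classification cost for a single-output network."""
    probs = soft_decision_probs(z, thresholds, h)   # (N, K): smoothed decision probabilities
    costs = cost_matrix[labels]                     # (N, K): cost of deciding class k when the truth is labels[n]
    per_sample = (probs * costs).sum(dim=1)
    if class_weights is not None:                   # optional hook for prior / imbalance information
        per_sample = per_sample * class_weights[labels]
    return per_sample.mean()

# Example use: 4 ordered classes, absolute-error costs, fixed equally spaced thresholds.
# K = 4
# thresholds = torch.tensor([0.5, 1.5, 2.5])
# cost_matrix = (torch.arange(K).unsqueeze(1) - torch.arange(K).unsqueeze(0)).abs().float()
# loss = bayesian_cost_loss(model(x).squeeze(1), y, thresholds, cost_matrix)
```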
- …