Over-Fit: Noisy-Label Detection based on the Overfitted Model Property
Due to the increasing need to handle the noisy label problem in massive
datasets, learning with noisy labels has received much attention in recent
years. As a promising approach, there have been recent studies to select clean
training data by finding small-loss instances before a deep neural network
overfits the noisy-label data. However, it is challenging to prevent
overfitting. In this paper, we propose a novel noisy-label detection algorithm
by employing the property of overfitting on individual data points. To this
end, we present two novel criteria that statistically measure how much each
training sample abnormally affects the model and clean validation data. Using
the criteria, our iterative algorithm removes noisy-label samples and retrains
the model alternately until no further performance improvement is made. In
experiments on multiple benchmark datasets, we demonstrate the validity of our
algorithm and show that our algorithm outperforms the state-of-the-art methods
when the exact noise rates are not given. Furthermore, we show that our method
can not only be expanded to a real-world video dataset but also can be viewed
as a regularization method to solve problems caused by overfitting.
Comment: 10 pages, 7 figures
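The detect-remove-retrain loop described in the abstract can be sketched in a few lines. Everything below is an illustrative stand-in, not the paper's method: a nearest-centroid "model" on synthetic two-class data, and a simple z-score on per-sample loss in place of the paper's two statistical criteria. The loop alternates between flagging abnormal samples and retraining until clean-validation accuracy stops improving:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data; class means are well separated.
def make_data(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2)) + 3.0 * y[:, None]
    return X, y

X_tr, y_tr = make_data(200)
X_val, y_val = make_data(100)  # clean validation data

# Inject 20% label noise into the training set.
noisy = rng.choice(200, 40, replace=False)
y_noisy = y_tr.copy()
y_noisy[noisy] = 1 - y_noisy[noisy]

def fit_centroids(X, y):
    # Stand-in "model": one centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def losses(cent, X, y):
    # Per-sample loss: squared distance to the centroid of the assigned label.
    return ((X - cent[y]) ** 2).sum(axis=1)

def val_acc(cent, X, y):
    d = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(axis=2)
    return float((d.argmin(axis=1) == y).mean())

keep = np.ones(200, dtype=bool)
best = -1.0
while True:
    cent = fit_centroids(X_tr[keep], y_noisy[keep])
    acc = val_acc(cent, X_val, y_val)
    if acc <= best:          # stop when validation no longer improves
        break
    best = acc
    # Stand-in criterion: flag samples whose loss is a statistical outlier.
    l = losses(cent, X_tr, y_noisy)
    z = (l - l[keep].mean()) / (l[keep].std() + 1e-9)
    keep = keep & (z < 2.0)  # remove suspected noisy-label samples
```

After the loop, `keep` marks the retained training samples; on this toy data the fraction of mislabeled samples among them drops well below the injected 20%.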
Backbone Can Not be Trained at Once: Rolling Back to Pre-trained Network for Person Re-Identification
In the person re-identification (ReID) task, because of the shortage of
training data, it is common to fine-tune a classification network
pre-trained on a large dataset. However, it is relatively difficult to
sufficiently fine-tune the low-level layers of the network due to the gradient
vanishing problem. In this work, we propose a novel fine-tuning strategy that
allows low-level layers to be sufficiently trained by rolling back the weights
of high-level layers to their initial pre-trained weights. Our strategy
alleviates the problem of gradient vanishing in low-level layers and robustly
trains the low-level layers to fit the ReID dataset, thereby increasing the
performance of ReID tasks. The improved performance of the proposed strategy is
validated via several experiments. Furthermore, without any add-ons such as
pose estimation or segmentation, our strategy exhibits state-of-the-art
performance using only a vanilla deep convolutional neural network architecture.
Comment: Accepted to AAAI 201
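The rollback mechanic — fine-tune all layers, then reset the high-level layers to their saved pre-trained weights and continue training so the low-level layers receive more of the adaptation — can be illustrated on a toy model. The two-layer linear network, the data, and the single rollback step below are hypothetical stand-ins for the paper's multi-stage schedule on a deep CNN:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer linear model y = (x @ W1.T) @ W2.T; W1 plays the role of
# the low-level backbone layers, W2 the high-level layers.
X = rng.normal(size=(64, 4))
W_true1 = rng.normal(size=(4, 4))
W_true2 = rng.normal(size=(1, 4))
y = X @ W_true1.T @ W_true2.T

# "Pre-trained" initial weights, kept around so we can roll back to them.
W1_pre = rng.normal(size=(4, 4)) * 0.5
W2_pre = rng.normal(size=(1, 4)) * 0.5

def train(W1, W2, steps, lr=0.01):
    # Plain gradient descent on mean squared error.
    for _ in range(steps):
        h = X @ W1.T
        err = h @ W2.T - y
        gW2 = err.T @ h / len(X)
        gW1 = (err @ W2).T @ X / len(X)
        W2 = W2 - lr * gW2
        W1 = W1 - lr * gW1
    return W1, W2

def loss(W1, W2):
    return float((((X @ W1.T) @ W2.T - y) ** 2).mean())

# Phase 1: ordinary fine-tuning of all layers.
W1, W2 = train(W1_pre.copy(), W2_pre.copy(), 200)

# Rollback: reset the high-level layer to its pre-trained weights, keep the
# fine-tuned low-level layer, and train again.
W1, W2 = train(W1, W2_pre.copy(), 200)
```

The key point is only the bookkeeping: the pre-trained weights are retained and restored for the high-level layers between training phases, while the low-level layers keep accumulating updates.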
Class-Attentive Diffusion Network for Semi-Supervised Classification
Recently, graph neural networks for semi-supervised classification have been
widely studied. However, existing methods only use the information of limited
neighbors and do not deal with the inter-class connections in graphs. In this
paper, we propose Adaptive aggregation with Class-Attentive Diffusion (AdaCAD),
a new aggregation scheme that adaptively aggregates nodes that are likely of
the same class among K-hop neighbors. To this end, we first propose a novel stochastic
process, called Class-Attentive Diffusion (CAD), that strengthens attention to
intra-class nodes and attenuates attention to inter-class nodes. In contrast to
the existing diffusion methods with a transition matrix determined solely by
the graph structure, CAD considers both the node features and the graph
structure with the design of our class-attentive transition matrix that
utilizes a classifier. Then, we further propose an adaptive update scheme that
leverages different reflection ratios of the diffusion result for each node
depending on the local class-context. As the main advantage, AdaCAD alleviates
the problem of undesired mixing of inter-class features caused by discrepancies
between node labels and the graph topology. Built on AdaCAD, we construct a
simple model called Class-Attentive Diffusion Network (CAD-Net). Extensive
experiments on seven benchmark datasets consistently demonstrate the efficacy
of the proposed method and our CAD-Net significantly outperforms the
state-of-the-art methods. Code is available at
https://github.com/ljin0429/CAD-Net.
Comment: Accepted to AAAI 202
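A minimal sketch of a class-attentive transition matrix of the kind the abstract describes: edge weights combine the graph structure (adjacency) with the agreement between the endpoints' predicted class distributions, and the row-normalized result drives a K-step diffusion with a residual connection to the input features. The toy graph, the synthetic soft predictions, and the fixed reflection ratio `alpha` are illustrative assumptions; AdaCAD additionally adapts the reflection ratio per node from the local class-context, which this sketch omits:

```python
import numpy as np

# Toy graph: nodes 0-2 in one class, 3-5 in another, one inter-class edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

# Classifier soft predictions p_i (rows sum to 1); synthetic here.
P = np.array([[0.9, 0.1]] * 3 + [[0.1, 0.9]] * 3)

# Class-attentive edge weights: adjacency scaled by the agreement of the
# endpoints' predicted class distributions (inner product p_i . p_j).
W = A * (P @ P.T)
T = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

def diffuse(H0, T, K=4, alpha=0.1):
    # K diffusion steps with a residual connection to the input features H0.
    H = H0
    for _ in range(K):
        H = (1 - alpha) * (T @ H) + alpha * H0
    return H

H = diffuse(np.eye(6), T)  # one-hot features for illustration
```

By construction, the intra-class edges (e.g. between nodes 0 and 1) carry more weight than the inter-class edge between nodes 2 and 3, so diffusion mixes inter-class features less than a purely structure-based transition matrix would.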
Electrochemical Investigation of High-Performance Dye-Sensitized Solar Cells Based on Molybdenum for Preparation of Counter Electrode
In order to improve the photocurrent conversion efficiency of dye-sensitized solar cells (DSSCs), we studied an alternative conductor for the counter electrode, focusing on molybdenum (Mo) instead of conventional fluorine-doped tin oxide (FTO). Because Mo has a work function similar to that of FTO for band alignment, better formability of platinum (Pt), and a low electric resistance, using a counter electrode made of Mo instead of FTO leads to an enhanced catalytic reaction of the redox couple, reduces the interior resistance of the DSSCs, and prevents energy-barrier formation. Using electrical measurements under a 1-sun condition (100 mW/cm², AM 1.5), we determined that the fill factor (FF) and photocurrent conversion efficiency (η) of DSSCs with a Mo electrode were improved by 7.75% and 5.59%, respectively, with respect to those of DSSCs with an FTO electrode. Moreover, we investigated the origin of the improved performance through surface morphology analyses such as scanning electron microscopy and electrochemical analyses including cyclic voltammetry and impedance spectroscopy.
A Case of Laparoscopic Radical Prostatectomy for a Prostatic Stromal Tumor of Uncertain Malignant Potential
Prostatic stromal tumor of uncertain malignant potential (STUMP) is a rare neoplasm with distinctive clinical and pathological characteristics. Here we report a case of laparoscopic radical prostatectomy performed in a patient with prostatic STUMP.