Uneven illumination surface defects inspection based on convolutional neural network
Surface defect inspection based on machine vision is often affected by uneven
illumination. To improve the detection rate of surface defect inspection under
uneven illumination conditions, this paper proposes a method for detecting
surface image defects based on a convolutional neural network: by adjusting the
network, tuning its training parameters, and changing its structure, the method
accurately identifies various defects. Experiments on defect inspection of
copper strip and steel images show that the convolutional neural network can
automatically learn features without preprocessing the image and correctly
identify various types of image defects affected by uneven illumination, thus
overcoming the drawbacks of traditional machine vision inspection methods under
uneven illumination.
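To illustrate why convolutional filters can cope with illumination gradients, here is a minimal NumPy sketch. It is not the paper's network: the edge kernel is hand-picked rather than learned, and the image and illumination gradient are synthetic. It shows that a convolutional filter's strongest response still localizes a defect-like intensity edge even when a smooth illumination gradient is overlaid:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2D 'valid' convolution (cross-correlation), as used in CNN layers."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# A sharp step in intensity stands in for a defect boundary; a column-wise
# linear gradient stands in for uneven illumination.
image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # sharp edge between columns 3 and 4
illumination = np.linspace(0.0, 0.3, 8)  # uneven illumination, column-wise
noisy = image + illumination[None, :]

edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])

response = relu(conv2d_valid(noisy, edge_kernel))
# The strongest response stays at the true edge despite the gradient:
# the 3x3 windows covering the step (output columns 2 and 3) dominate.
print(int(np.argmax(response[0])))
```

The smooth gradient contributes only a small constant to the filter output, while the defect edge produces a large response, which is the kind of illumination robustness the learned features exploit.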
Online Deep Metric Learning
Metric learning learns a metric function from training data to calculate the
similarity or distance between samples. From the perspective of feature
learning, metric learning essentially learns a new feature space by feature
transformation (e.g., a Mahalanobis distance metric). However, traditional
metric learning algorithms are shallow: they learn just one metric space
(feature transformation). Can we further learn a better metric space from the
learnt metric space? In other words, can we learn a metric progressively and
nonlinearly, as in deep learning, using only existing metric learning
algorithms? To this end, we present a hierarchical metric learning scheme and
implement an online deep metric learning framework, namely ODML. Specifically,
we take one online metric learning algorithm as a metric layer, follow it with
a nonlinear layer (i.e., ReLU), and then stack these layers in the manner of
deep learning. The proposed ODML enjoys some nice properties: it can indeed
learn a metric progressively, and it performs superiorly on some datasets.
Various experiments with different settings have been conducted to verify these
properties of the proposed ODML.
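The layer-stacking scheme can be sketched in NumPy as follows. This is an illustrative reconstruction under assumptions: the per-layer learner here is a plain SGD triplet-hinge update on a linear map (the paper instead plugs in an existing online metric learning algorithm, whose exact update rule is not reproduced), and greedy layer-by-layer training is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

class MetricLayer:
    """One online metric 'layer': a learned linear map L, so that
    d(x, y) = ||Lx - Ly||^2 is a Mahalanobis-style metric. The SGD
    triplet-hinge update below is a hypothetical stand-in learner."""
    def __init__(self, dim, lr=0.05):
        self.L = np.eye(dim)
        self.lr = lr

    def transform(self, x):
        return self.L @ x

    def update(self, anchor, pos, neg, margin=1.0):
        a, p, n = (self.L @ v for v in (anchor, pos, neg))
        loss = margin + np.sum((a - p) ** 2) - np.sum((a - n) ** 2)
        if loss > 0:  # hinge active: pull pos closer, push neg away
            grad = 2 * (np.outer(a - p, anchor - pos)
                        - np.outer(a - n, anchor - neg))
            self.L -= self.lr * grad

def relu(x):
    return np.maximum(x, 0.0)

# Stack: metric layer -> ReLU -> metric layer, trained greedily layer by layer.
dim = 4
layers = [MetricLayer(dim), MetricLayer(dim)]

def make_triplet():
    anchor = rng.normal(size=dim)
    pos = anchor + 0.1 * rng.normal(size=dim)   # same class: nearby
    neg = anchor + 2.0 * rng.normal(size=dim)   # different class: far
    return anchor, pos, neg

for depth, layer in enumerate(layers):
    for _ in range(200):
        triplet = make_triplet()
        # feed the triplet through the already-trained lower layers
        for lower in layers[:depth]:
            triplet = tuple(relu(lower.transform(v)) for v in triplet)
        layer.update(*triplet)

def deep_distance(x, y):
    """Distance in the space learned progressively by the whole stack."""
    for layer in layers:
        x, y = relu(layer.transform(x)), relu(layer.transform(y))
    return np.sum((x - y) ** 2)

# Same-class pairs should usually be closer than cross-class pairs.
wins = sum(deep_distance(t[0], t[1]) < deep_distance(t[0], t[2])
           for t in (make_triplet() for _ in range(100)))
print(wins)
```

Each metric layer plays the role of one feature transformation, and the ReLU between layers supplies the nonlinearity that a single shallow metric cannot provide.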
OPML: A One-Pass Closed-Form Solution for Online Metric Learning
To achieve a low computational cost when performing online metric learning for
large-scale data, we present a one-pass closed-form solution, namely OPML, in
this paper. The proposed OPML first adopts a one-pass triplet construction
strategy, which aims to use only a very small number of triplets to approximate
the representation ability of the whole set of triplets obtained by
batch-manner methods. Then, OPML employs a closed-form solution to update
the metric for newly arriving samples, which leads to a low space (i.e., O(d))
and time (i.e., O(d)) complexity, where d is the feature dimensionality. In
addition, an extension of OPML (namely COPML) is further proposed to enhance
robustness when, in real cases, the first several samples come from the same
class (i.e., the cold-start problem). In the experiments, we have
systematically evaluated our methods (OPML and COPML) on three typical tasks,
namely UCI data classification, face verification, and abnormal event detection
in videos, so as to fully evaluate the proposed methods across different sample
numbers, different feature dimensionalities, and different feature extraction
approaches (i.e., hand-crafted and deeply learned). The results show that OPML
and COPML obtain promising performance at a very low computational cost. The
effectiveness of COPML under the cold-start setting is also experimentally
verified.
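The one-pass idea can be sketched as follows. This is a hedged reconstruction: the simulated stream, the "latest sample per class" buffer, and the projected-SGD metric update are illustrative stand-ins; OPML's actual closed-form update and its exact triplet construction strategy are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
M = np.eye(d)        # Mahalanobis metric matrix: d(x, y) = (x-y)^T M (x-y)
lr, margin = 0.01, 1.0
last_seen = {}       # one-pass buffer: only the latest sample per class

def dist(x, y):
    diff = x - y
    return float(diff @ M @ diff)

# Simulated labelled stream: two Gaussian classes, each sample seen once.
stream = []
for _ in range(300):
    label = int(rng.random() < 0.5)
    center = np.zeros(d) if label == 0 else np.full(d, 2.0)
    stream.append((center + rng.normal(size=d), label))

for x, y in stream:
    pos = last_seen.get(y)
    neg = next((v for k, v in last_seen.items() if k != y), None)
    last_seen[y] = x
    if pos is None or neg is None:
        continue  # cold start: too few classes seen yet (the case COPML targets)
    # One triplet (x, pos, neg) per incoming sample; a simple projected-SGD
    # step stands in for OPML's closed-form update.
    if margin + dist(x, pos) - dist(x, neg) > 0:
        dp, dn = x - pos, x - neg
        M -= lr * (np.outer(dp, dp) - np.outer(dn, dn))
        # project back onto the PSD cone so d(., .) remains a valid metric
        w, V = np.linalg.eigh(M)
        M = (V * np.maximum(w, 0.0)) @ V.T

# After one pass, same-class pairs should be closer on average than
# cross-class pairs under the learned metric.
x0, y0 = stream[0]
same = [dist(x0, x) for x, y in stream[1:] if y == y0]
diff = [dist(x0, x) for x, y in stream[1:] if y != y0]
print(np.mean(same) < np.mean(diff))
```

The key point of the one-pass strategy is that each sample is touched once and only a constant-size buffer is retained, instead of the full triplet set a batch method would build.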
Efficient Last-iterate Convergence Algorithms in Solving Games
No-regret algorithms are popular for learning Nash equilibrium (NE) in
two-player zero-sum normal-form games (NFGs) and extensive-form games (EFGs).
Many recent works study no-regret algorithms with last-iterate convergence.
Among them, the two most famous algorithms are Optimistic Gradient Descent
Ascent (OGDA) and Optimistic Multiplicative Weight Update (OMWU). However, OGDA
has high per-iteration complexity. OMWU exhibits a lower per-iteration
complexity but poorer empirical performance, and its convergence holds only
when the NE is unique. Recent works propose a Reward Transformation (RT) framework
for MWU, which removes the uniqueness condition and achieves competitive
performance with OMWU. Unfortunately, RT-based algorithms perform worse than
OGDA under the same number of iterations, and their convergence guarantee is
based on the continuous-time feedback assumption, which does not hold in most
scenarios. To address these issues, we provide a closer analysis of the RT
framework, which holds for both continuous and discrete-time feedback. We
demonstrate that the essence of the RT framework is to transform the problem of
learning NE in the original game into a series of strongly convex-concave
optimization problems (SCCPs). We show that the bottleneck of RT-based
algorithms is the speed of solving the SCCPs. To improve their empirical
performance, we design a novel transformation method so that the SCCPs can be
solved by Regret Matching+ (RM+), a no-regret algorithm with better empirical
performance, resulting in Reward Transformation RM+ (RTRM+). RTRM+ enjoys
last-iterate convergence under the discrete-time feedback setting. Using the
counterfactual regret decomposition framework, we propose Reward Transformation
CFR+ (RTCFR+) to extend RTRM+ to EFGs. Experimental results show that our
algorithms significantly outperform existing last-iterate convergence
algorithms and RM+ (CFR+).
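For reference, the base learner RM+ can be sketched on a small zero-sum normal-form game. This is only a minimal illustration: plain simultaneous RM+ gives average-iterate (not last-iterate) convergence, and the reward-transformation step the paper adds to obtain RTRM+ is not reproduced here. Rock-paper-scissors serves as the example game:

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (zero-sum game).
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])
n = 3
Q1, Q2 = np.zeros(n), np.zeros(n)      # clipped cumulative regrets
avg1, avg2 = np.zeros(n), np.zeros(n)  # running sums of strategies

def current(Q):
    """RM+: play proportionally to the nonnegative cumulative regrets."""
    s = Q.sum()
    return Q / s if s > 0 else np.full(n, 1.0 / n)

T = 5000
for _ in range(T):
    x, y = current(Q1), current(Q2)
    u1 = A @ y           # row player's utility per action
    u2 = -(A.T @ x)      # column player's (minimizer's) utility per action
    Q1 = np.maximum(Q1 + u1 - x @ u1, 0.0)  # RM+ thresholds regrets at zero
    Q2 = np.maximum(Q2 + u2 - y @ u2, 0.0)
    avg1 += x
    avg2 += y

avg1 /= T
avg2 /= T
# The average strategies approach the uniform NE (1/3, 1/3, 1/3); the duality
# gap of the averages shrinks at roughly O(1/sqrt(T)).
gap = float(np.max(A @ avg2) - np.min(avg1 @ A))
print(gap)
```

The thresholding at zero is what distinguishes RM+ from plain regret matching; RTRM+ keeps this cheap per-iteration update while the reward transformation supplies the last-iterate guarantee.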