Semantic Autoencoder for Zero-Shot Learning
Existing zero-shot learning (ZSL) models typically learn a projection
function from a feature space to a semantic embedding space (e.g. attribute
space). However, such a projection function is only concerned with predicting
the semantic representations of the seen training classes (e.g. attribute
prediction) or with classification. When applied to test data, which in the
context of ZSL contains different (unseen) classes with no training data, a
ZSL model typically suffers from the projection domain shift problem. In this
work, we present a novel
solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the
encoder-decoder paradigm, an encoder aims to project a visual feature vector
into the semantic space as in the existing ZSL models. However, the decoder
exerts an additional constraint, that is, the projection/code must be able to
reconstruct the original visual feature. We show that with this additional
reconstruction constraint, the learned projection function from the seen
classes is able to generalise better to the new unseen classes. Importantly,
the encoder and decoder are linear and symmetric, which enables us to develop
an extremely efficient learning algorithm. Extensive experiments on six
benchmark datasets demonstrate that the proposed SAE significantly outperforms
existing ZSL models, with the additional benefit of lower computational cost.
Furthermore, when the SAE is applied to the supervised clustering problem, it also
beats the state-of-the-art.
Comment: accepted to CVPR 2017
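
To make the linear, symmetric encoder-decoder concrete, here is a minimal NumPy/SciPy sketch of one way such a model could be fitted. The closed-form Sylvester-equation route, the lam trade-off, and all variable names are our illustrative assumptions; the abstract states only that the encoder and decoder are linear and symmetric.

import numpy as np
from scipy.linalg import solve_sylvester

def fit_sae(X, S, lam=1.0):
    # X: (d, n) visual features; S: (k, n) semantic vectors (e.g. attributes).
    # Minimising ||X - W.T S||^2 + lam * ||W X - S||^2 over W yields the
    # Sylvester equation  (S S^T) W + W (lam X X^T) = (1 + lam) S X^T.
    A = S @ S.T
    B = lam * (X @ X.T)
    C = (1.0 + lam) * (S @ X.T)
    return solve_sylvester(A, B, C)   # W: (k, d); encoder W, decoder W.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))       # toy visual features
S = rng.normal(size=(50, 500))        # toy semantic codes
W = fit_sae(X, S, lam=0.5)
codes = W @ X                         # encode; W.T @ codes reconstructs X

Because the solve is a single closed-form step over two small Gram matrices, training cost is dominated by forming X X^T and S S^T, which is consistent with the efficiency claim above.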
Ranked List Loss for Deep Metric Learning
The objective of deep metric learning (DML) is to learn embeddings that can
capture semantic similarity and dissimilarity information among data points.
Existing pairwise or tripletwise loss functions used in DML are known to suffer
from slow convergence due to a large proportion of trivial pairs or triplets as
the model improves. To address this, ranking-motivated structured losses have
recently been proposed to incorporate multiple examples and exploit the
structured information among them. They converge faster and achieve state-of-the-art
performance. In this work, we unveil two limitations of existing
ranking-motivated structured losses and propose a novel ranked list loss to
solve both of them. First, given a query, only a fraction of data points is
incorporated to build the similarity structure. Consequently, some useful
examples are ignored and the structure is less informative. To address this, we
propose to build a set-based similarity structure by exploiting all instances
in the gallery. The learning setting can be interpreted as few-shot retrieval:
given a mini-batch, every example is used in turn as a query, and the
remaining examples compose the gallery to search, i.e., the support set in the
few-shot setting. These gallery examples are split into a positive set and a
negative set. For every
mini-batch, the learning objective of ranked list loss is to make the query
closer to the positive set than to the negative set by a margin. Second,
previous methods aim to pull positive pairs as close as possible in the
embedding space. As a result, the intraclass data distribution tends to be
extremely compressed. In contrast, we propose to learn a hypersphere for each
class in order to preserve useful similarity structure inside it, which
functions as regularisation. Extensive experiments demonstrate the superiority
of our proposal in comparison with state-of-the-art methods.
Comment: Accepted to T-PAMI; for the official version, please go to IEEE
Xplore. Fine-grained image retrieval task. Our source code is
available online: https://github.com/XinshaoAmosWang/Ranked-List-Loss-for-DM
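A minimal PyTorch sketch of the set-based structure described above, assuming simple pairwise hinge terms with a distance boundary alpha and margin m. The hyperparameter values, the uniform averaging, and all names are our assumptions, not the paper's exact weighting scheme.

import torch

def ranked_list_loss(embeddings, labels, alpha=1.2, m=0.4):
    # Every example acts once as the query; the rest of the mini-batch is the
    # gallery, split by label into a positive set and a negative set.
    # Positives are pulled inside a hypersphere with boundary alpha - m
    # (preserving intra-class structure rather than collapsing it);
    # negatives are pushed beyond the boundary alpha.
    d = torch.cdist(embeddings, embeddings)            # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos, neg = same & ~eye, ~same
    pos_loss = torch.clamp(d[pos] - (alpha - m), min=0).mean()
    neg_loss = torch.clamp(alpha - d[neg], min=0).mean()
    return pos_loss + neg_loss

# Toy usage: a batch of 8 normalised embeddings from 2 classes.
emb = torch.nn.functional.normalize(torch.randn(8, 16), dim=1)
lab = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = ranked_list_loss(emb, lab)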
Cytokines and Growth Factors in Acute Coronary Syndrome
The authors studied immunological parameters in acute coronary syndrome: vascular endothelial growth factor (VEGF-1), cytokines (IL-1β and IL-10), acute-phase proteins (CRP), and complement C3. As a result of the study, the possibility of predicting the outcomes of acute coronary syndrome is presented, since determining the protein growth factors (CRP, C3 and VEGF) over time makes it possible to prevent the development of acute myocardial infarction.
IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters
In this work, we study robust deep learning against abnormal training data
from the perspective of example weighting built in empirical loss functions,
i.e., gradient magnitude with respect to logits, an angle that has not been
thoroughly studied so far. From this angle, we report two key findings: (1) Mean
Absolute Error (MAE) Does Not Treat Examples Equally. We present new
observations and insightful analysis about MAE, which is theoretically proved
to be noise-robust. First, we reveal its underfitting problem in practice.
Second, we show that MAE's noise-robustness comes from emphasising uncertain
examples rather than from treating training samples equally, as claimed in prior
work. (2) The Variance of Gradient Magnitude Matters. We propose an effective
and simple solution to enhance MAE's fitting ability while preserving its
noise-robustness. Without changing MAE's overall weighting scheme, i.e., what
examples get higher weights, we simply change its weighting variance
non-linearly so that the impact ratio between two examples is adjusted. Our
solution is termed Improved MAE (IMAE). We prove IMAE's effectiveness using
extensive experiments: image classification under clean labels, synthetic label
noise, and real-world unknown noise. We conclude that IMAE is superior to CCE
(categorical cross-entropy), the most popular loss for training DNNs.
Comment: Updated version. Code:
https://github.com/XinshaoAmosWang/Improving-Mean-Absolute-Error-against-CCE.
Please feel free to make contact for discussions or implementation problems.
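
The following NumPy sketch illustrates the weighting view in the abstract: with softmax outputs, CCE's gradient magnitude with respect to the labelled logit scales with 1 - p_y, whereas MAE's scales with p_y(1 - p_y), peaking at uncertain examples (p_y near 0.5) and vanishing for likely-noisy ones (p_y near 0). The exponential rescaling with temperature T below is only an illustrative nonlinear variance adjustment in the spirit of IMAE, not the paper's exact formula.

import numpy as np

p_y = np.linspace(0.01, 0.99, 7)   # softmax probability of the labelled class
w_cce = 1.0 - p_y                  # CCE weight: hardest examples dominate
w_mae = p_y * (1.0 - p_y)          # MAE weight: peaks at p_y = 0.5, flat tails
T = 8.0                            # hypothetical temperature for rescaling
w_imae = np.exp(T * w_mae)         # same ordering as MAE, larger variance
w_imae /= w_imae.max()             # normalise for side-by-side comparison

for p, a, b, c in zip(p_y, w_cce, w_mae, w_imae):
    print(f"p_y={p:.2f}  CCE={a:.2f}  MAE={b:.2f}  IMAE~{c:.2f}")

Running this shows that MAE and the rescaled variant assign near-zero weight to examples the model is confidently wrong about (candidate label noise), while the rescaling widens the gap between weights, which is the claimed fix for MAE's underfitting.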
Theoretical Foundations of the Process of Steaming Cocoons in a Vacuum Device
This article highlights the importance of the cocoon steaming process for producing quality raw silk, and describes the advantages of a vacuum cocoon steamer. Because the vacuum pressure, the temperature, and the volume of water in the evaporation chamber are each held constant, the corresponding isobaric, isothermal and isochoric processes of thermodynamics are studied. Information on the absorption and adsorption processes associated with the entry of liquid into the cocoon is also provided.
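
For reference, under the ideal-gas assumption (ours; the article itself analyses steam in the evaporation chamber), the three constant-parameter processes named above obey the standard relations:

% Ideal-gas relations for the three processes named in the abstract
\begin{align*}
\text{isobaric } (P \text{ const}):\quad & \frac{V_1}{T_1} = \frac{V_2}{T_2}\\
\text{isothermal } (T \text{ const}):\quad & P_1 V_1 = P_2 V_2\\
\text{isochoric } (V \text{ const}):\quad & \frac{P_1}{T_1} = \frac{P_2}{T_2}
\end{align*}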