Consistent Multitask Learning with Nonlinear Output Relations
Key to multitask learning is exploiting relationships between different tasks
to improve prediction performance. If the relations are linear, regularization
approaches can be applied successfully. In practice, however, assuming the tasks to
be linearly related can be restrictive, and allowing for nonlinear structures
is a challenge. In this paper, we tackle this issue by casting the problem
within the framework of structured prediction. Our main contribution is a novel
algorithm for learning multiple tasks which are related by a system of
nonlinear equations that their joint outputs need to satisfy. We show that the
algorithm is consistent and can be efficiently implemented. Experimental
results show the potential of the proposed method.
Comment: 25 pages, 1 figure, 2 tables
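The abstract does not spell out the algorithm, but the core idea — multiple tasks whose joint outputs must satisfy a system of nonlinear equations — can be sketched in a minimal form. The example below is an illustration, not the paper's method: it assumes a hypothetical two-task problem whose joint outputs must lie on the unit circle, g(y1, y2) = y1² + y2² − 1 = 0, trains independent linear predictors, and then projects the joint prediction onto the constraint set (for this particular constraint the Euclidean projection is just normalization).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: two tasks whose joint outputs satisfy the
# nonlinear relation g(y1, y2) = y1^2 + y2^2 - 1 = 0 (the unit circle).
X = rng.normal(size=(200, 3))
angle = X @ np.array([1.0, -0.5, 0.3])
Y = np.stack([np.cos(angle), np.sin(angle)], axis=1)  # joint task outputs

# Step 1: learn each task independently with linear least squares,
# ignoring the relation between the outputs.
W = np.linalg.lstsq(X, Y, rcond=None)[0]

def predict_consistent(x):
    """Predict both tasks, then project the joint prediction onto the
    constraint set {y : y1^2 + y2^2 = 1}; for this specific constraint
    the Euclidean projection is simply normalization."""
    y = x @ W
    norm = np.linalg.norm(y, axis=-1, keepdims=True)
    return y / np.where(norm == 0.0, 1.0, norm)  # guard against zero norm

Y_hat = predict_consistent(X)
# Maximum violation of the nonlinear relation after projection.
violation = np.abs(np.sum(Y_hat ** 2, axis=1) - 1.0).max()
print(violation)
```

The projection step guarantees that the returned joint outputs satisfy the nonlinear relation exactly, whereas the unconstrained per-task predictions generally do not; the paper's contribution is a consistent structured-prediction algorithm for general systems of such equations, not this simple post-hoc projection.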
Learning Competitive and Discriminative Reconstructions for Anomaly Detection
Most existing methods for anomaly detection use only positive data to
learn the data distribution, so they need a pre-defined threshold at
the detection stage to decide whether a test instance is an outlier.
Unfortunately, a good threshold is vital for performance, and an optimal one is
hard to find. In this paper, we take the discriminative
information implied in unlabeled data into account and propose a new
method for anomaly detection that learns the labels of unlabeled data
directly. Our proposed method has an end-to-end architecture with one encoder
and two decoders that are trained to model the inlier and outlier data
distributions in a competitive way. This architecture works in a discriminative
manner without suffering from overfitting, and the training algorithm of our
model is adapted from SGD, so it is efficient and scalable even for
large-scale datasets. Empirical studies on seven datasets, including KDD99, MNIST,
Caltech-256, and ImageNet, show that our model outperforms the
state-of-the-art methods.
Comment: 8 pages