Implicitly Constrained Semi-Supervised Least Squares Classification
We introduce a novel semi-supervised version of the least squares classifier.
This implicitly constrained least squares (ICLS) classifier minimizes the
squared loss on the labeled data among the set of parameters implied by all
possible labelings of the unlabeled data. Unlike other discriminative
semi-supervised methods, our approach does not introduce explicit additional
assumptions into the objective function, but leverages implicit assumptions
already present in the choice of the supervised least squares classifier. We
show this approach can be formulated as a quadratic programming problem and its
solution can be found using a simple gradient descent procedure. We prove that,
in a certain way, our method never leads to performance worse than the
supervised classifier. Experimental results corroborate this theoretical result
in the multidimensional case on benchmark datasets, also in terms of the error
rate.Comment: 12 pages, 2 figures, 1 table. The Fourteenth International Symposium
on Intelligent Data Analysis (2015), Saint-Etienne, Franc
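The procedure the abstract outlines, choosing, among all supervised least squares solutions induced by possible (soft) labelings of the unlabeled data, the one that minimizes the squared loss on the labeled data, can be sketched with projected gradient descent over the soft labels. This is a minimal illustration under stated assumptions; the function name, step size, and iteration count are illustrative, not taken from the paper:

```python
import numpy as np

def icls_fit(X_lab, y_lab, X_unl, n_iter=500, lr=0.01):
    """Sketch of implicitly constrained least squares (ICLS).

    The parameter vector beta is always the supervised least-squares
    solution on the combined data for some soft labeling u in [0, 1]
    of the unlabeled points; we search for the u whose induced beta
    minimizes the squared loss on the *labeled* data.
    """
    X = np.vstack([X_lab, X_unl])
    u = np.full(len(X_unl), 0.5)       # soft labels, start uncommitted
    XtX_inv = np.linalg.pinv(X.T @ X)  # fixed: only the labels change
    for _ in range(n_iter):
        y = np.concatenate([y_lab, u])
        beta = XtX_inv @ X.T @ y       # supervised solution for labeling (y_lab, u)
        resid = X_lab @ beta - y_lab   # loss is measured on labeled data only
        # gradient of the labeled squared loss w.r.t. u (chain rule through beta)
        grad = 2 * X_unl @ XtX_inv @ (X_lab.T @ resid)
        u = np.clip(u - lr * grad, 0.0, 1.0)  # project back onto [0, 1]
    return beta
```

Because the final beta is by construction a supervised solution for some labeling, the search never leaves the set of parameters the supervised classifier could itself have produced, which is the intuition behind the paper's safety guarantee.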
XCS Classifier System with Experience Replay
XCS constitutes the most deeply investigated classifier system today. It
bears strong potentials and comes with inherent capabilities for mastering a
variety of different learning tasks. Besides outstanding successes in various
classification and regression tasks, XCS also proved very effective in certain
multi-step environments from the domain of reinforcement learning. Especially
in the latter domain, recent advances have been mainly driven by algorithms
which model their policies based on deep neural networks -- among which the
Deep-Q-Network (DQN) is a prominent representative. Experience Replay (ER)
constitutes one of the crucial factors for the DQN's successes, since it
facilitates stabilized training of the neural network-based Q-function
approximators. Surprisingly, XCS barely takes advantage of similar mechanisms
that leverage stored raw experiences encountered so far. To bridge this gap,
this paper investigates the benefits of extending XCS with ER. On the one hand,
we demonstrate that for single-step tasks ER bears massive potential for
improvements in terms of sample efficiency. On the other hand, however, we
reveal that the use of ER might further aggravate well-studied, still unsolved
issues that arise when XCS is applied to sequential decision problems demanding
long action chains.
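Experience Replay itself is a simple mechanism: transitions are stored as they occur and later sampled uniformly for learning updates. A minimal buffer of the kind that could be grafted onto XCS might look as follows (the class name, capacity, and tuple layout are illustrative assumptions, not the paper's implementation):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer (illustrative sketch)."""

    def __init__(self, capacity=10000):
        # bounded FIFO store: the oldest experiences are evicted first
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform sampling breaks the temporal correlation between
        # consecutive transitions, which stabilizes learning updates
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

In DQN this decorrelation stabilizes the neural Q-function approximator; the paper's question is whether replaying stored raw experiences yields analogous benefits for XCS's rule population updates.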
Improving classification models with context knowledge and variable activation functions
This work proposes two methods to boost the performance of a given classifier. The first, which applies to neural network classifiers, is a new type of trainable activation function: a function whose shape is adjusted during the learning phase, allowing the network to exploit the data better than it could with a classic fixed-shape activation function. The second provides two frameworks that use an external knowledge base to improve classification results
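A trainable activation function in the sense described above can be illustrated with a PReLU-style unit, where a shape parameter is updated alongside the network weights. This is a generic sketch of the idea; the paper's actual parameterization may differ:

```python
import numpy as np

class TrainableActivation:
    """PReLU-style activation with a learnable shape parameter `a`
    (illustrative sketch, not the paper's exact construction)."""

    def __init__(self, a=0.25):
        self.a = a  # adjusted during the learning phase, like a weight

    def forward(self, x):
        # identity for positive inputs, learnable slope for negative ones
        return np.where(x > 0, x, self.a * x)

    def grad_a(self, x):
        # derivative of the output w.r.t. the shape parameter,
        # used to update `a` by backpropagation
        return np.where(x > 0, 0.0, x)
```

Because `a` receives gradient updates just like any weight, the activation's shape adapts to the data rather than being fixed in advance.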
Classification by Neural Network and Statistical Models in Tandem: Does Integration Enhance Performance?
The major purposes of the current research are twofold. The first purpose is to present a composite approach to the general classification problem by using outputs from various parametric statistical procedures and neural networks. The second purpose is to compare several parametric and neural network models on a transportation planning related classification problem and five simulated classification problems
Supervised Learning - An Introduction:Lectures given at the 30th Canary Islands Winter School of Astrophysics
Based on a set of lectures given at the 30th Canary Islands Winter School of Astrophysics: Big Data Analysis in Astronomy, La Laguna, Tenerife, Spain, 11/2018. To a large extent, the material is taken from an MSc level course Neural Networks and Computational Intelligence, Computing Science Programme, University of Groningen. These notes present a selection of topics in the area of supervised machine learning. The focus is on the discussion of methods and algorithms for classification tasks. Regression by neural networks is discussed only very briefly, as it is at the center of complementary lectures. The same applies to concepts and methods of unsupervised learning. The selection and presentation of the material is clearly influenced by personal biases and preferences. Nevertheless, the lectures and notes should provide a useful, albeit incomplete, overview and serve as a starting point for further exploration of the fascinating area of machine learning