Kernelizing LSPE(λ)
We propose kernel-based methods as the underlying function approximator in the least-squares policy-evaluation frameworks LSPE(λ) and LSTD(λ). In particular, we present the 'kernelization' of model-free LSPE(λ). The kernelization is made computationally feasible by the subset-of-regressors approximation, which approximates the kernel using a vastly reduced number of basis functions. The core of our proposed solution is an efficient recursive implementation with automatic supervised selection of the relevant basis functions. The LSPE method is well suited for optimistic policy iteration and can thus be used in the context of online reinforcement learning. We demonstrate this on the high-dimensional Octopus benchmark.
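The subset-of-regressors idea mentioned above can be illustrated with a minimal NumPy sketch (the names, kernel, and sizes here are illustrative assumptions, not the authors' recursive implementation): the full kernel matrix over all samples is replaced by a low-rank approximation built from kernel evaluations against a small dictionary of basis points.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # sampled states
idx = rng.choice(len(X), size=20, replace=False)
C = X[idx]                                         # small dictionary of regressors

# Each state is represented by its kernel values against the dictionary,
# so only 20 basis functions are carried instead of one per sample.
Phi = rbf_kernel(X, C)                             # (200, 20) feature matrix

# Low-rank approximation of the full kernel matrix induced by the subset:
# K ≈ Phi K_cc^{-1} Phi^T  (a small ridge keeps the solve well conditioned).
K_cc = rbf_kernel(C, C)
K_approx = Phi @ np.linalg.solve(K_cc + 1e-8 * np.eye(len(C)), Phi.T)
K_full = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

Least-squares updates such as those in LSPE(λ) then operate on the small `Phi` features rather than the full kernel matrix, which is what makes an efficient recursive implementation possible.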
On orthogonal projections for dimension reduction and applications in augmented target loss functions for learning problems
The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relationship between two standard objectives in dimension reduction: preservation of variance and preservation of pairwise relative distances. Analysis of their asymptotic correlation, as well as numerical experiments, shows that a projection usually does not satisfy both objectives at once. In a standard classification problem we determine projections of the input data that balance the two objectives and compare the subsequent results.

Next, we extend our application of orthogonal projections to deep learning tasks and introduce a general framework of augmented target loss functions. These loss functions integrate additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, applying our proposed augmented target loss functions increases accuracy.
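The tension between the two dimension-reduction objectives can be measured directly. A minimal sketch, using synthetic anisotropic data and a PCA projection (all names and sizes here are assumptions for illustration): PCA maximizes retained variance, yet the same projection can distort pairwise distances noticeably.

```python
import numpy as np

rng = np.random.default_rng(1)
# Anisotropic synthetic data: 50 features with linearly decaying scales.
X = rng.normal(size=(300, 50)) * np.linspace(5.0, 0.1, 50)
X = X - X.mean(axis=0)

# Orthogonal projection onto the top-k principal directions (PCA),
# the projection that maximizes retained variance.
k = 5
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:k].T

# Objective 1: fraction of total variance preserved by the projection.
var_ratio = Z.var(axis=0).sum() / X.var(axis=0).sum()

# Objective 2: mean relative distortion of pairwise distances.
i, j = np.triu_indices(len(X), k=1)
d_orig = np.linalg.norm(X[i] - X[j], axis=1)
d_proj = np.linalg.norm(Z[i] - Z[j], axis=1)
distortion = np.mean(np.abs(d_proj - d_orig) / d_orig)
```

Comparing `var_ratio` and `distortion` across candidate projections is one concrete way to search for a projection that balances the two objectives.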
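The augmented-target idea can be sketched as a base loss on the raw targets plus weighted losses on transformations of the target data. The function names, the choice of MSE as base loss, and the gradient transform below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def mse(a, b):
    """Mean squared error over all elements."""
    return np.mean((a - b) ** 2)

def augmented_target_loss(y_pred, y_true, transforms, weights):
    """Base loss on raw targets plus weighted losses on each
    transformation/projection of the target data (illustrative)."""
    loss = mse(y_pred, y_true)
    for T, w in zip(transforms, weights):
        loss += w * mse(T(y_pred), T(y_true))
    return loss

# Example: augment a segmentation-style target with its spatial gradient,
# penalizing boundary mismatch in addition to pixel-wise error.
rng = np.random.default_rng(2)
y_true = rng.random((8, 16, 16))                       # batch of 16x16 masks
y_pred = y_true + 0.1 * rng.standard_normal((8, 16, 16))

grad = lambda y: np.stack(np.gradient(y, axis=(1, 2)))  # spatial gradients
base = mse(y_pred, y_true)
aug = augmented_target_loss(y_pred, y_true, [grad], [0.5])
```

Because the transform is applied to both prediction and target, the extra terms vanish for a perfect prediction and otherwise steer training toward the features the transformations expose.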