Bounded Coordinate-Descent for Biological Sequence Classification in High Dimensional Predictor Space
We present a framework for discriminative sequence classification where the
learner works directly in the high dimensional predictor space of all
subsequences in the training set. This is possible by employing a new
coordinate-descent algorithm coupled with bounding the magnitude of the
gradient for selecting discriminative subsequences fast. We characterize the
loss functions for which our generic learning algorithm can be applied and
present concrete implementations for logistic regression (binomial
log-likelihood loss) and support vector machines (squared hinge loss).
Application of our algorithm to protein remote homology detection and remote
fold recognition results in performance comparable to that of state-of-the-art
methods (e.g., kernel support vector machines). Unlike state-of-the-art
classifiers, the resulting classification models are simply lists of weighted
discriminative subsequences and can thus be interpreted and related to the
biological problem.
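As a toy illustration of the greedy selection rule behind such a coordinate-descent learner, the sketch below (my own simplification: it assumes a precomputed binary document-by-subsequence matrix `X` and labels in {-1, +1}, and it enumerates all coordinates densely, whereas the paper's gradient-magnitude bound exists precisely to avoid that enumeration) updates the single coordinate of a logistic-regression model with the largest absolute gradient:

```python
import numpy as np

def logistic_gradient(X, y, w, j):
    """Gradient of the binomial log-likelihood loss w.r.t. coordinate j.
    X: binary matrix (documents x subsequence features), y in {-1, +1}."""
    margins = y * (X @ w)
    # dL/dw_j = -sum_i y_i * x_ij / (1 + exp(y_i * <w, x_i>))
    return -np.sum(y * X[:, j] / (1.0 + np.exp(margins)))

def coordinate_descent_step(X, y, w, step=0.1):
    """One greedy coordinate-descent step: select the feature whose
    gradient has the largest magnitude and update only that weight."""
    grads = np.array([logistic_gradient(X, y, w, j)
                      for j in range(X.shape[1])])
    j_best = int(np.argmax(np.abs(grads)))
    w = w.copy()
    w[j_best] -= step * grads[j_best]
    return w, j_best
```

In the paper's setting the candidate features are all subsequences of the training set, so the argmax is found by a bounded search over a subsequence tree rather than the dense loop shown here.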
A variational model for data fitting on manifolds by minimizing the acceleration of a B\'ezier curve
We derive a variational model to fit a composite B\'ezier curve to a set of
data points on a Riemannian manifold. The resulting curve is obtained in such a
way that its mean squared acceleration is minimal in addition to remaining
close to the data points. We approximate the acceleration by discretizing the
squared second order derivative along the curve. We derive a closed-form,
numerically stable and efficient algorithm to compute the gradient of a
B\'ezier curve on manifolds with respect to its control points, expressed as a
concatenation of so-called adjoint Jacobi fields. Several examples illustrate
the capabilities and validity of this approach, both for interpolation and
approximation. The examples also illustrate that the approach outperforms
previous works tackling this problem.
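Under notation I am assuming here (manifold $M$ with distance $d_M$, composite B\'ezier curve $B$ with control points $b$, data points $p_i$ at parameters $t_i$, trade-off weight $\lambda$), a variational model of this kind can be written as:

```latex
% Mean squared acceleration regularizer plus a data-fidelity term:
\min_{b}\;
  \int_0^1 \Bigl\lVert \frac{\mathrm{D}^2 B(t)}{\mathrm{d}t^2} \Bigr\rVert^2
  \,\mathrm{d}t
  \;+\;
  \lambda \sum_{i=0}^{n} d_M^2\bigl(B(t_i),\, p_i\bigr)
```

A standard manifold-valued discretization of the squared second-order derivative replaces the Euclidean difference $(x_{k-1} - 2x_k + x_{k+1})/h^2$ at curve samples $x_k = B(t_k)$ (step size $h$) by logarithmic maps, e.g. $\tfrac{1}{h^2}\bigl(\log_{x_k} x_{k-1} + \log_{x_k} x_{k+1}\bigr)$; the paper's exact discretization may differ in detail.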
Interior Point Decoding for Linear Vector Channels
In this paper, a novel decoding algorithm for low-density parity-check (LDPC)
codes based on convex optimization is presented. The decoding algorithm, called
interior point decoding, is designed for linear vector channels. The linear
vector channels include many practically important channels such as inter
symbol interference channels and partial response channels. It is shown that
the maximum likelihood decoding (MLD) rule for a linear vector channel can be
relaxed to a convex optimization problem, which is called a relaxed MLD
problem. The proposed decoding algorithm is based on a numerical optimization
technique, the so-called interior point method with a barrier function. Approximate
variations of the gradient descent and the Newton methods are used to solve the
convex optimization problem. In a decoding process of the proposed algorithm, a
search point always lies in the fundamental polytope defined based on a
low-density parity-check matrix. Compared with a conventional joint message
passing decoder, the proposed decoding algorithm achieves better BER
performance with less complexity in the case of partial response channels in
many cases.
Comment: 18 pages, 17 figures. The paper has been submitted to IEEE
Transactions on Information Theory.
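To illustrate only the barrier-function mechanics, here is a deliberately hypothetical simplification (not the paper's decoder): it keeps just the box constraints $0 \le x_i \le 1$, drops the parity-check constraints that actually define the fundamental polytope, and stands in a plain LLR inner product for the linear vector channel likelihood:

```python
import numpy as np

def interior_point_decode(r, mu=1.0, step=0.01, iters=200, shrink=0.99):
    """Toy barrier-method decoder over the box [0,1]^n only.
    r: per-bit log-likelihood ratios; minimizes r @ x plus a log barrier
    -mu * sum(log x + log(1 - x)), annealing mu so the iterate drifts
    toward a vertex of the box. Real interior point decoding also
    enforces the parity-check constraints, omitted here."""
    x = np.full_like(r, 0.5, dtype=float)   # start at the box center
    for _ in range(iters):
        # Gradient of r @ x - mu * sum(log x + log(1 - x))
        grad = r - mu * (1.0 / x - 1.0 / (1.0 - x))
        x = np.clip(x - step * grad, 1e-9, 1.0 - 1e-9)
        mu *= shrink                        # anneal the barrier weight
    return (x > 0.5).astype(int)            # hard decision per bit
```

With this LLR convention, a positive `r[i]` favors bit 0 and a negative one favors bit 1; the barrier keeps every iterate strictly inside the feasible region, which is the defining property of interior point methods.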
Importance mixing: Improving sample reuse in evolutionary policy search methods
Deep neuroevolution, that is, evolutionary policy search methods based on deep
neural networks, has recently emerged as a competitor to deep reinforcement
learning algorithms due to its better parallelization capabilities. However,
these methods still suffer from far worse sample efficiency. In this paper we
investigate whether a mechanism known as "importance mixing" can significantly
improve their sample efficiency. We provide a didactic presentation of
importance mixing and we explain how it can be extended to reuse more samples.
Then, from an empirical comparison based on a simple benchmark, we show that,
although importance mixing does improve sample efficiency, it remains far from
the sample efficiency of deep reinforcement learning, while being more stable.
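A common two-phase formulation of importance mixing for isotropic Gaussian search distributions can be sketched as follows (my own sketch: the function names, the minimum refresh rate `alpha`, and the accept/reject scheme follow the usual description of importance mixing, not necessarily this paper's extension). Old samples are kept with probability proportional to the density ratio under the new distribution, and the population is refilled with fresh samples accepted by a complementary rejection test:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_logpdf(x, mean, std):
    """Log-density of an isotropic Gaussian N(mean, std^2 * I)."""
    return -0.5 * np.sum(((x - mean) / std) ** 2
                         + np.log(2.0 * np.pi * std ** 2), axis=-1)

def importance_mixing(old_samples, old_mean, new_mean, std, n, alpha=0.01):
    """Refill a population of size n for N(new_mean, std^2 I), reusing
    samples drawn from N(old_mean, std^2 I) where possible.
    alpha forces a minimum fraction of fresh samples.
    Returns (samples, n_reused)."""
    kept = []
    # Phase 1: keep an old sample with prob min(1, (1-alpha) * p_new/p_old)
    for x in old_samples:
        ratio = np.exp(gaussian_logpdf(x, new_mean, std)
                       - gaussian_logpdf(x, old_mean, std))
        if rng.random() < min(1.0, (1.0 - alpha) * ratio):
            kept.append(x)
    n_reused = len(kept)
    # Phase 2: draw fresh samples from the new distribution, accepting
    # each with prob max(alpha, 1 - p_old/p_new)
    while len(kept) < n:
        x = rng.normal(new_mean, std)
        ratio = np.exp(gaussian_logpdf(x, old_mean, std)
                       - gaussian_logpdf(x, new_mean, std))
        if rng.random() < max(alpha, 1.0 - ratio):
            kept.append(x)
    return np.stack(kept[:n]), n_reused
```

When the search distribution barely moves between generations, the density ratios are close to one, so nearly the whole previous population is reused and only a few fresh evaluations are needed, which is the source of the sample-efficiency gain.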