Kernel methods in machine learning
We review machine learning methods employing positive definite kernels. These
methods formulate learning and estimation problems in a reproducing kernel
Hilbert space (RKHS) of functions defined on the data domain, expanded in terms
of a kernel. Working in linear spaces of functions has the benefit of
facilitating the construction and analysis of learning algorithms while at the
same time allowing large classes of functions. The latter include nonlinear
functions as well as functions defined on nonvectorial data. We cover a wide
range of methods, ranging from binary classifiers to sophisticated methods for
estimation with structured data.

Comment: Published at http://dx.doi.org/10.1214/009053607000000677 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
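
As a concrete illustration of the RKHS machinery described above, here is a minimal sketch of kernel ridge regression with a Gaussian (RBF) kernel: the learned function f(x) = sum_i alpha_i k(x_i, x) is precisely an expansion in terms of the kernel. The data, function names, and hyperparameters below are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_kernel_ridge(X, y, lam=1e-2, gamma=1.0):
    """Solve (K + lam*I) alpha = y; the estimate f(x) = sum_i alpha_i k(x_i, x)
    lives in the RKHS induced by the kernel."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy one-dimensional regression: a nonlinear target fit by a linear
# method in function space.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = fit_kernel_ridge(X, y)
print(predict(X, alpha, np.array([[0.5]])))  # roughly sin(0.5)
```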
Distributed Detection and Estimation in Wireless Sensor Networks
In this article we consider the problems of distributed detection and
estimation in wireless sensor networks. In the first part, we provide a general
framework intended to show how an efficient design of a sensor network requires
a joint organization of in-network processing and communication. We then recall
the basic features of consensus algorithms, a fundamental tool for reaching
globally optimal decisions through a distributed approach. The main part of the
paper then addresses the distributed estimation problem. We first show an
entirely decentralized approach, where observation and estimation are performed
without the intervention of a fusion center. We then consider the case where
estimation is performed at a fusion center, showing how to allocate
quantization bits and transmit powers on the links between the nodes and the
fusion center so as to meet a requirement on the maximum estimation variance
under a constraint on the global transmit power. We extend the approach to the
detection problem, again considering both the distributed setting, where every
node can reach a globally optimal decision, and the setting where the decision
is taken at a central node. In the latter case, we show how to allocate coding
bits and transmit power in order to maximize the detection probability, under
constraints on the false alarm rate and the global transmit power. We then
generalize consensus algorithms, illustrating a distributed procedure that
converges to the projection of the observation vector onto a signal subspace.
Next, we address the issue of energy consumption in sensor networks, showing
how to optimize the network topology in order to minimize the energy necessary
to achieve global consensus. Finally, we address the problem of matching the
topology of the network to the graph describing the statistical dependencies
among the observed variables.

Comment: 92 pages, 24 figures. To appear in E-Reference Signal Processing, R.
Chellapa and S. Theodoridis, Eds., Elsevier, 201
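
To make the consensus building block concrete, the sketch below runs the standard average-consensus iteration x(k+1) = W x(k) on a small ring network, with W built from Metropolis weights so that every node converges to the global average of the initial observations. The topology, weight rule, and observation model are our own illustrative assumptions, not the specific schemes of the article.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic weight matrix W from an undirected 0/1 adjacency
    matrix: W[i, j] = 1 / (1 + max(deg_i, deg_j)) on edges, remainder on the
    diagonal. Row and column sums are 1, so the iteration preserves the mean."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Ring of 6 nodes; each node holds a noisy local observation of a common scalar.
n = 6
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
W = metropolis_weights(adj)

x = np.random.default_rng(1).normal(5.0, 1.0, size=n)  # local observations
target = x.mean()
for _ in range(200):
    x = W @ x  # each node replaces its value with a weighted neighbor average
print(np.allclose(x, target))  # True: all nodes agree on the global average
```

On a connected graph this iteration converges geometrically, which is also why the energy and topology optimizations mentioned in the abstract matter: the convergence speed, and hence the energy spent exchanging messages, depends on the spectral gap of W.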
F-measure Maximization in Multi-Label Classification with Conditionally Independent Label Subsets
We discuss a method to improve the exact F-measure maximization algorithm
called GFM, proposed in (Dembczynski et al. 2011) for multi-label
classification, assuming the label set can be partitioned into conditionally
independent subsets given the input features. If the labels were all
independent, the estimation of only m parameters (m denoting the number of
labels) would suffice to derive Bayes-optimal predictions in O(m^2) operations.
In the general case, m^2 + 1 parameters are required by GFM to solve the
problem in O(m^3) operations. In this work, we show that the number of
parameters can be reduced further, to m^2/n in the best case, assuming the
label set can be partitioned into n conditionally independent subsets. As this
label partition needs to be estimated from the data beforehand, we first use
the procedure proposed in (Gasse et al. 2015) that finds such a partition, and
then infer the required parameters locally in each label subset. The latter are
aggregated and serve as input to GFM to form the Bayes-optimal prediction. We
show on a synthetic experiment that the reduction in the number of parameters
brings about significant benefits in terms of performance.
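
For context on the parameter counts above, here is a hedged sketch of the GFM decision step from (Dembczynski et al. 2011), assuming the m x m matrix P with P[i, s-1] = P(y_i = 1, sum_j y_j = s) and the scalar p0 = P(y = 0) (the m^2 + 1 parameters) have already been estimated; variable names and the NumPy formulation are ours.

```python
import numpy as np

def gfm(P, p0):
    """Given P[i, s-1] = P(y_i = 1, s_y = s) and p0 = P(y = 0), return the
    prediction h maximizing the expected F-measure, plus its expected value."""
    m = P.shape[0]
    s = np.arange(1, m + 1)
    W = 1.0 / (s[:, None] + s[None, :])  # W[s-1, k-1] = 1 / (s + k)
    Delta = P @ W                        # Delta[i, k-1] = sum_s P[i, s-1] / (s + k)
    best_h, best_F = np.zeros(m, dtype=int), p0  # empty prediction scores p0
    for k in range(1, m + 1):
        top = np.argsort(Delta[:, k - 1])[-k:]   # k labels with largest Delta
        F = 2.0 * Delta[top, k - 1].sum()        # expected F of predicting them
        if F > best_F:
            h = np.zeros(m, dtype=int)
            h[top] = 1
            best_h, best_F = h, F
    return best_h, best_F
```

Under the paper's assumption of n conditionally independent label subsets, P would be assembled from parameters estimated locally in each subset, roughly (m/n)^2 per subset in the balanced case, and the decision step above then runs unchanged.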