Cascading Randomized Weighted Majority: A New Online Ensemble Learning Algorithm
With the increasing volume of data in the world, the best approach for
learning from this data is to exploit an online learning algorithm. Online
ensemble methods are online algorithms which take advantage of an ensemble of
classifiers to predict labels of data. Prediction with expert advice is a
well-studied problem in the online ensemble learning literature. The Weighted
Majority algorithm and the randomized weighted majority (RWM) are the most
well-known solutions to this problem, aiming to converge to the best expert.
Since, among a set of experts, the best one does not necessarily have the minimum
error in all regions of the data space, defining specific regions and converging to
the best expert in each of these regions leads to a better result. In this
paper, we aim to resolve this defect of RWM algorithms by proposing a novel
online ensemble algorithm to the problem of prediction with expert advice. We
propose a cascading version of RWM to achieve not only better experimental
results but also a better error bound for sufficiently large datasets.
Comment: 15 pages, 3 figures
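For reference, the classic (non-cascading) randomized weighted majority algorithm that the paper builds on can be sketched as follows. This is a minimal illustration of standard RWM, not the cascading variant proposed in the paper; the function name and interface are hypothetical.

```python
import random

def randomized_weighted_majority(expert_predictions, labels, beta=0.5, seed=0):
    """Standard RWM sketch: expert_predictions[t][i] is expert i's
    prediction at round t; labels[t] is the true label at round t.
    beta in (0, 1) is the multiplicative penalty for a wrong expert."""
    rng = random.Random(seed)
    n = len(expert_predictions[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_predictions, labels):
        # Sample an expert with probability proportional to its weight.
        total = sum(weights)
        r = rng.uniform(0, total)
        acc = 0.0
        chosen = n - 1
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                chosen = i
                break
        if preds[chosen] != y:
            mistakes += 1
        # Penalize every expert that erred this round.
        for i in range(n):
            if preds[i] != y:
                weights[i] *= beta
    return mistakes, weights
```

Because the penalty is multiplicative, the learner's expected mistake bound tracks the best single expert; the paper's observation is that when no single expert dominates everywhere, converging per-region can do better.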
Multi-Instance Multi-Label Learning
In this paper, we propose the MIML (Multi-Instance Multi-Label learning)
framework where an example is described by multiple instances and associated
with multiple class labels. Compared to traditional learning frameworks, the
MIML framework is more convenient and natural for representing complicated
objects which have multiple semantic meanings. To learn from MIML examples, we
propose the MimlBoost and MimlSvm algorithms based on a simple degeneration
strategy, and experiments show that solving problems involving complicated
objects with multiple semantic meanings in the MIML framework can lead to good
performance. Considering that the degeneration process may lose information, we
propose the D-MimlSvm algorithm which tackles MIML problems directly in a
regularization framework. Moreover, we show that even when we do not have
access to the real objects and thus cannot capture more information from real
objects by using the MIML representation, MIML is still useful. We propose the
InsDif and SubCod algorithms. InsDif works by transforming single-instances
into the MIML representation for learning, while SubCod works by transforming
single-label examples into the MIML representation for learning. Experiments
show that in some tasks they are able to achieve better performance than
learning from the single-instance or single-label examples directly.
Comment: 64 pages, 10 figures; Artificial Intelligence, 201
SybilBelief: A Semi-supervised Learning Approach for Structure-based Sybil Detection
Sybil attacks are a fundamental threat to the security of distributed
systems. Recently, there has been a growing interest in leveraging social
networks to mitigate Sybil attacks. However, the existing approaches suffer
from one or more drawbacks, including bootstrapping from either only known
benign or known Sybil nodes, failing to tolerate noise in their prior knowledge
about known benign or Sybil nodes, and not being scalable.
In this work, we aim to overcome these drawbacks. Towards this goal, we
introduce SybilBelief, a semi-supervised learning framework, to detect Sybil
nodes. SybilBelief takes a social network of the nodes in the system, a small
set of known benign nodes, and, optionally, a small set of known Sybils as
input. Then SybilBelief propagates the label information from the known benign
and/or Sybil nodes to the remaining nodes in the system.
We evaluate SybilBelief using both synthetic and real world social network
topologies. We show that SybilBelief is able to accurately identify Sybil nodes
with low false positive rates and low false negative rates. SybilBelief is
resilient to noise in our prior knowledge about known benign and Sybil nodes.
Moreover, SybilBelief performs orders of magnitude better than existing Sybil
classification mechanisms and significantly better than existing Sybil ranking
mechanisms.
Comment: 12 pages
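The propagation step described above can be approximated with a simple iterative scheme. Note that SybilBelief itself uses loopy belief propagation on a pairwise Markov random field; the sketch below is only a simplified label-propagation analogue with a hypothetical interface, shown to convey how seed labels diffuse through the social graph.

```python
def propagate_labels(adj, benign, sybil, iters=20):
    """Simplified label propagation on a social graph.
    adj: dict mapping node -> list of neighbour nodes.
    benign, sybil: sets of seed nodes with known labels.
    Returns a score in [-1, 1] per node: positive suggests benign,
    negative suggests Sybil. Seeds are clamped to their labels."""
    score = {v: 0.0 for v in adj}
    for v in benign:
        score[v] = 1.0
    for v in sybil:
        score[v] = -1.0
    for _ in range(iters):
        new = {}
        for v in adj:
            if v in benign:
                new[v] = 1.0          # clamp known benign seeds
            elif v in sybil:
                new[v] = -1.0         # clamp known Sybil seeds
            else:
                nbrs = adj[v]
                # Unlabelled nodes average their neighbours' scores.
                new[v] = sum(score[u] for u in nbrs) / len(nbrs) if nbrs else 0.0
        score = new
    return score
```

Supporting both benign and Sybil seeds in one pass, and tolerating some mislabelled seeds (here, a few wrong seeds only shift averages rather than flipping whole regions), mirrors the two drawbacks of prior approaches that the abstract highlights.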