Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery
PCA is one of the most widely used dimension reduction techniques. A related,
easier problem is "subspace learning" or "subspace estimation". Given
relatively clean data, both are easily solved via singular value decomposition
(SVD). The problem of subspace learning or PCA in the presence of outliers is
called robust subspace learning or robust PCA (RPCA). For long data sequences,
if one tries to use a single lower dimensional subspace to represent the data,
the required subspace dimension may end up being quite large. For such data, a
better model is to assume that it lies in a low-dimensional subspace that can
change over time, albeit gradually. The problem of tracking such data (and the
subspaces) while being robust to outliers is called robust subspace tracking
(RST). This article provides a magazine-style overview of the entire field of
robust subspace learning and tracking. In particular, solutions for three
problems are discussed in detail: RPCA via sparse+low-rank matrix decomposition
(S+LR), RST via S+LR, and "robust subspace recovery (RSR)". RSR assumes that an
entire data vector is either an outlier or an inlier. The S+LR formulation
instead assumes that outliers occur on only a few data vector indices and hence
are well modeled as sparse corruptions.
Comment: To appear, IEEE Signal Processing Magazine, July 201
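The S+LR idea described above can be sketched in a few lines of numpy. This is a toy alternating heuristic (not any specific algorithm from the article): project onto rank-r matrices via SVD, then capture large residual entries as the sparse outlier component.

```python
import numpy as np

def rpca_sld(M, rank, sparse_thresh, n_iter=50):
    """Toy sparse+low-rank (S+LR) decomposition M = L + S by alternating
    a rank-r SVD projection with hard thresholding of the residual."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank step: best rank-r approximation of M - S
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]
        # Sparse step: residual entries large enough to be outliers
        R = M - L
        S = np.where(np.abs(R) > sparse_thresh, R, 0.0)
    return L, S
```

On well-separated data (strong low-rank signal, few large outliers) this simple alternation recovers both components; the provable methods surveyed in the article use more careful thresholds and convex or projection-based formulations.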
Optimization algorithms for decision tree induction
Decision trees are among the most commonly used machine learning models for solving
classification and regression tasks due to their major advantage of being easy to interpret.
However, their predictions are often not as accurate as those of other models.
The most widely used approach for learning decision trees is to build them in a top-down manner by introducing splits on a single variable that minimize a certain splitting
criterion. One possibility of improving this strategy to induce smaller and more accurate
decision trees is to allow different types of splits which, for example, consider multiple
features simultaneously. However, finding such splits is usually much more complex and
effective optimization methods are needed to determine optimal solutions.
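The core subproblem of the top-down strategy, finding the single-feature split that minimizes an impurity criterion, can be sketched as a brute-force search (an illustrative version using Gini impurity, not the thesis' implementation):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    if len(y) == 0:
        return 0.0
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_univariate_split(X, y):
    """Exhaustively find the (feature, threshold) pair minimizing
    the weighted Gini impurity of the two child nodes."""
    n, d = X.shape
    best = (None, None, np.inf)
    for j in range(d):
        for t in np.unique(X[:, j])[:-1]:  # candidate thresholds
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best
```

Each node of the tree is split recursively this way until a stopping criterion is met; the splits discussed below generalize the inner search.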
An alternative to univariate splits for numerical features is oblique splits, which
employ affine hyperplanes to divide the feature space. Unfortunately, the problem of
determining such a split optimally is known to be NP-hard in general. Inspired by the
underlying problem structure, two new heuristics are developed for finding near-optimal
oblique splits. The first one is a cross-entropy optimization method which iteratively
samples points from the von Mises-Fisher distribution and updates its parameters based
on the best performing samples. The second one is a simulated annealing algorithm that
uses a pivoting strategy to explore the solution space.
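The cross-entropy idea can be illustrated with a simplified sketch. For brevity it samples Gaussian perturbations projected to the unit sphere instead of the exact von Mises-Fisher sampler, uses Gini impurity as the criterion, and picks the hyperplane offset by a one-dimensional threshold scan; all of these simplifications are assumptions of this sketch, not the thesis' method.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    if len(y) == 0:
        return 0.0
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_threshold(z, y):
    """Best offset b (and its weighted Gini) along the projections z = X @ w."""
    best_b, best_s = 0.0, 1.0
    for b in np.unique(z)[:-1]:
        left, right = y[z <= b], y[z > b]
        s = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if s < best_s:
            best_b, best_s = b, s
    return best_b, best_s

def cross_entropy_oblique(X, y, n_iter=20, n_samples=50, n_elite=5,
                          sigma=1.0, seed=0):
    """Cross-entropy search for a hyperplane direction w minimizing impurity."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    mu = np.ones(d) / np.sqrt(d)          # current mean direction
    for _ in range(n_iter):
        # Sample candidate directions around mu, renormalize to the sphere
        W = mu + sigma * rng.standard_normal((n_samples, d))
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        scores = np.array([best_threshold(X @ w, y)[1] for w in W])
        elite = W[np.argsort(scores)[:n_elite]]
        # Resolve the w / -w sign ambiguity before averaging
        signs = np.sign(elite @ mu)
        signs[signs == 0] = 1.0
        elite *= signs[:, None]
        mu = elite.mean(axis=0)
        mu /= np.linalg.norm(mu)          # project the mean back to the sphere
        sigma *= 0.9                      # concentrate the proposals
    b, score = best_threshold(X @ mu, y)
    return mu, b, score
```

On data separable by an oblique boundary, the elite-averaging loop quickly concentrates on a near-optimal direction, which is exactly the regime where oblique splits beat axis-aligned ones.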
As general oblique splits employ all of the numerical features simultaneously, they are
hard to interpret. As an alternative, in this thesis, the usage of bivariate oblique splits
is proposed. These splits correspond to lines in the subspace spanned by two features.
They are capable of dividing the feature space much more efficiently than univariate
splits while also being fairly interpretable due to the restriction to two features only.
A branch and bound method is presented to determine these bivariate oblique splits
optimally.
Furthermore, a branch and bound method to determine optimal cross-splits is presented. These splits can be viewed as combinations of two standard univariate splits
on numeric attributes and they are useful in situations where the data points cannot
be separated well linearly. The cross-splits can either be introduced directly to induce
quaternary decision trees or, which is usually better, they can be used to provide a
certain degree of foresight, in which case only the better of the two respective univariate
splits is introduced.
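A cross-split and its quaternary partition can be evaluated directly; the sketch below finds the optimal pair of thresholds by exhaustive search over all candidate pairs, which is exactly the computation the branch and bound method is designed to avoid (Gini impurity is assumed as the criterion here).

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    if len(y) == 0:
        return 0.0
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_cross_split(X, y):
    """Brute-force search for the pair of univariate thresholds whose
    four-way (quaternary) partition minimizes weighted Gini impurity."""
    n, d = X.shape
    best = (None, np.inf)
    for j1 in range(d):
        for t1 in np.unique(X[:, j1])[:-1]:
            m1 = X[:, j1] <= t1
            for j2 in range(j1 + 1, d):
                for t2 in np.unique(X[:, j2])[:-1]:
                    m2 = X[:, j2] <= t2
                    # The four cells of the cross-split
                    cells = [y[m1 & m2], y[m1 & ~m2],
                             y[~m1 & m2], y[~m1 & ~m2]]
                    s = sum(len(c) * gini(c) for c in cells) / n
                    if s < best[1]:
                        best = ((j1, t1, j2, t2), s)
    return best
```

XOR-style data, which no single linear split separates, is the canonical case where such a quaternary partition achieves zero impurity.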
The developed lower bounds for impurity based splitting criteria also motivate a
simple but effective branch and bound algorithm for splits on nominal features. Due to
the complexity of determining such splits optimally when the number of possible values
for the feature is large, one previously had to use encoding schemes to transform the
nominal features into numerical ones or rely on heuristics to find near-optimal nominal
splits. The proposed branch and bound method may be a viable alternative for many
practical applications.
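The underlying combinatorial problem for nominal splits is choosing one of the 2^(k-1) - 1 binary partitions of the k category values. A hedged brute-force sketch (again with Gini impurity assumed; the thesis replaces this enumeration with branch and bound):

```python
import numpy as np
from itertools import combinations

def gini(y):
    """Gini impurity of a label array."""
    if len(y) == 0:
        return 0.0
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_nominal_split(x, y):
    """Enumerate all 2^(k-1) - 1 binary partitions of the k category
    values and return the one minimizing weighted Gini impurity."""
    values = list(np.unique(x))
    n = len(y)
    best = (None, np.inf)
    for r in range(len(values) - 1):
        # Anchor values[0] on the left to avoid mirrored duplicates
        for extra in combinations(values[1:], r):
            left_vals = {values[0], *extra}
            mask = np.isin(x, list(left_vals))
            s = (mask.sum() * gini(y[mask])
                 + (~mask).sum() * gini(y[~mask])) / n
            if s < best[1]:
                best = (left_vals, s)
    return best
```

The exponential growth in k is what makes this enumeration impractical for high-cardinality features and motivates both the encoding workarounds and the proposed branch and bound method.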
Lastly, a genetic algorithm is proposed as an alternative to the top-down induction
strategy.
Non-convex Optimization for Machine Learning
A vast majority of machine learning algorithms train their models and perform
inference by solving optimization problems. In order to capture the learning
and prediction problems accurately, structural constraints such as sparsity or
low rank are frequently imposed or else the objective itself is designed to be
a non-convex function. This is especially true of algorithms that operate in
high-dimensional spaces or that train non-linear models such as tensor models
and deep networks.
The freedom to express the learning problem as a non-convex optimization
problem gives immense modeling power to the algorithm designer, but often such
problems are NP-hard to solve. A popular workaround to this has been to relax
non-convex problems to convex ones and use traditional methods to solve the
(convex) relaxed optimization problems. However, this approach may be lossy and
still presents significant challenges for large-scale optimization.
On the other hand, direct approaches to non-convex optimization have met with
resounding success in several domains and remain the methods of choice for the
practitioner, as they frequently outperform relaxation-based techniques;
popular heuristics include projected gradient descent and alternating
minimization. However, these are often poorly understood in terms of their
convergence and other properties.
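A concrete instance of these heuristics is projected gradient descent onto a non-convex constraint set, better known as iterative hard thresholding when the set is the s-sparse vectors. The sparse linear regression setup and step-size choice below are illustrative assumptions, not taken from the monograph:

```python
import numpy as np

def hard_threshold(x, s):
    """Projection onto the (non-convex) set of s-sparse vectors:
    keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, b, s, step=None, n_iter=200):
    """Projected gradient descent for min ||Ax - b||^2 s.t. ||x||_0 <= s."""
    n = A.shape[1]
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth part
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = hard_threshold(x - step * grad, s)  # gradient step + projection
    return x
```

Under standard conditions (e.g. restricted isometry of A), this simple procedure provably recovers the sparse signal despite the non-convex projection, which is the kind of guarantee the monograph develops.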
This monograph presents a selection of recent advances that bridge a
long-standing gap in our understanding of these heuristics. The monograph will
lead the reader through several widely used non-convex optimization techniques,
as well as applications thereof. The goal of this monograph is both to
introduce the rich literature in this area and to equip the reader with the
tools and techniques needed to analyze these simple procedures for
non-convex problems.
Comment: The official publication is available from now publishers via
http://dx.doi.org/10.1561/220000005