Unlabeled sample compression schemes and corner peelings for ample and maximum classes
We examine connections between combinatorial notions that arise in machine learning and topological notions in cubical/simplicial geometry. These connections enable us to export results from geometry to machine learning. Our first main result is based on a geometric construction by H. Tracy Hall (2004) of a partial shelling of the cross-polytope which cannot be extended. We use it to derive a maximum class of VC dimension 3 that has no corners. This refutes several previous works in machine learning from the past 11 years. In particular, it implies that all previous constructions of optimal unlabeled sample compression schemes for maximum classes are erroneous.
On the positive side, we present a new construction of an optimal unlabeled sample compression scheme for maximum classes. We leave open whether our unlabeled sample compression scheme extends to ample (a.k.a. lopsided or extremal) classes, which represent a natural and far-reaching generalization of maximum classes. Towards resolving this question, we provide a geometric characterization in terms of unique sink orientations of the 1-skeletons of associated cubical complexes.
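To make the combinatorial objects concrete, here is a small self-contained sketch (our own illustration, not code from the paper): a concept class over n points is a set of binary tuples, its VC dimension can be computed by brute force, and a class is maximum when it meets the Sauer-Shelah bound with equality.

    # Our own illustration, not from the paper: brute-force VC dimension and
    # the "maximum class" test |C| = sum_{i <= d} C(n, i).
    from itertools import combinations, product
    from math import comb

    def vc_dimension(concepts, n):
        """Largest d such that some d-subset of the n points is shattered."""
        best = 0
        for d in range(1, n + 1):
            for S in combinations(range(n), d):
                projections = {tuple(c[i] for i in S) for c in concepts}
                if projections == set(product((0, 1), repeat=d)):
                    best = d
                    break
        return best

    def is_maximum(concepts, n):
        """Maximum class: meets the Sauer-Shelah bound with equality."""
        d = vc_dimension(concepts, n)
        return len(set(concepts)) == sum(comb(n, i) for i in range(d + 1))

    # Example: "at most one positive point" on 4 points is a maximum class
    # of VC dimension 1 (it has 1 + 4 = 5 concepts).
    C = [tuple(int(i == j) for i in range(4)) for j in range(4)] + [(0, 0, 0, 0)]
    print(vc_dimension(C, 4), is_maximum(C, 4))  # -> 1 True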
Sign rank versus VC dimension
This work studies the maximum possible sign rank of $N \times N$ sign matrices with a given VC dimension $d$. For $d=1$, this maximum is $3$. For $d=2$, this maximum is $\tilde{\Theta}(N^{1/2})$. For $d>2$, similar but slightly less accurate statements hold. The lower bounds improve over previous ones by Ben-David et al., and the upper bounds are novel.
The lower bounds are obtained by probabilistic constructions, using a theorem
of Warren in real algebraic topology. The upper bounds are obtained using a
result of Welzl about spanning trees with low stabbing number, and using the
moment curve.
The upper bound technique is also used to: (i) provide estimates on the number of classes of a given VC dimension, and the number of maximum classes of a given VC dimension, answering a question of Frankl from '89, and (ii) design an efficient algorithm that provides an $O(N/\log N)$ multiplicative approximation for the sign rank.
We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the $N \times N$ adjacency matrix of a $\Delta$-regular graph with a second eigenvalue of absolute value $\lambda$ and $\Delta \leq N/2$. We show that the sign rank of the signed version of this matrix is at least $\Delta/\lambda$. We use this connection to prove the existence of a maximum class with VC dimension $2$ and sign rank $\tilde{\Theta}(N^{1/2})$. This answers a question of Ben-David et al. regarding the sign rank of large VC classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem.
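As a quick numeric illustration of this spectral bound (our own sketch; the helper name and example graph are ours, not the paper's):

    # Compute the Delta/lambda lower bound on the sign rank of the signed
    # (+-1) version of a regular graph's adjacency matrix.
    import numpy as np

    def sign_rank_lower_bound(adj):
        """Delta/lambda, where lambda is the second-largest |eigenvalue|."""
        degrees = adj.sum(axis=1)
        assert np.all(degrees == degrees[0]), "graph must be regular"
        delta = degrees[0]
        eigs = np.sort(np.abs(np.linalg.eigvalsh(adj)))[::-1]
        return delta / eigs[1]

    # Example: the 3-regular Petersen graph, whose spectrum is {3, 1, -2},
    # so the bound is Delta/lambda = 3/2.
    edges = [(0,1),(1,2),(2,3),(3,4),(4,0),(0,5),(1,6),(2,7),(3,8),(4,9),
             (5,7),(7,9),(9,6),(6,8),(8,5)]
    A = np.zeros((10, 10))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    print(sign_rank_lower_bound(A))  # -> 1.5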
We further describe connections to communication complexity, geometry, learning theory, and combinatorics.
Comment: 33 pages. This is a revised version of the paper "Sign rank versus VC dimension". Additional results in this version: (i) Estimates on the number of maximum VC classes (answering a question of Frankl from '89). (ii) Estimates on the sign rank of large VC classes (answering a question of Ben-David et al. from '03). (iii) A discussion on the computational complexity of computing the sign-rank.
Sample Compression Schemes for Balls in Graphs
One of the open problems in machine learning is whether any set-family of VC-dimension d admits
a sample compression scheme of size O(d). In this paper, we study this problem for balls in graphs.
For balls of arbitrary radius r, we design proper sample compression schemes of size 4 for interval
graphs, of size 6 for trees of cycles, and of size 22 for cube-free median graphs. We also design
approximate sample compression schemes of size 2 for balls in δ-hyperbolic graphs.
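As a concrete warm-up (a classic textbook example, not the paper's construction): balls in a path graph are just intervals of vertices, and intervals admit a proper sample compression scheme of size 2, keeping only the extreme positive examples.

    # Classic size-2 compression scheme for intervals; our own sketch.
    def compress(sample):
        """sample: list of (point, label) pairs realizable by some interval."""
        positives = [x for x, y in sample if y == 1]
        if not positives:
            return []                      # empty concept: compress to nothing
        return [min(positives), max(positives)]

    def reconstruct(compressed):
        """Rebuild a hypothesis consistent with the original sample."""
        if not compressed:
            return lambda x: 0             # no positives were seen
        lo, hi = min(compressed), max(compressed)
        return lambda x: int(lo <= x <= hi)

    # Usage: every sample point is classified correctly after decompression.
    sample = [(1, 0), (3, 1), (5, 1), (9, 0)]
    h = reconstruct(compress(sample))
    assert all(h(x) == y for x, y in sample)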
Non-Clashing Teaching Maps for Balls in Graphs
Recently, Kirkpatrick et al. [ALT 2019] and Fallat et al. [JMLR 2023]
introduced non-clashing teaching and showed it to be the most efficient machine
teaching model satisfying the benchmark for collusion-avoidance set by Goldman
and Mathias. A teaching map $T$ for a concept class $\mathcal{C}$ assigns a (teaching) set $T(C)$ of examples to each concept $C \in \mathcal{C}$. A teaching map is non-clashing if no pair of concepts are consistent with the union of their teaching sets. The size of a non-clashing teaching map (NCTM) $T$ is the maximum size of a teaching set $T(C)$, $C \in \mathcal{C}$. The non-clashing teaching dimension NCTD$(\mathcal{C})$ of $\mathcal{C}$ is the minimum size of an NCTM for $\mathcal{C}$. NCTM$^+$s and NCTD$^+(\mathcal{C})$ are defined analogously, except the teacher may only use positive examples.
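In code, the non-clashing condition is easy to state. The following is a small sketch in our own notation (concepts as sets of positive points, examples as (point, label) pairs), not an implementation from the paper:

    from itertools import combinations

    def consistent(concept, examples):
        """The concept agrees with every labeled example."""
        return all((x in concept) == bool(y) for x, y in examples)

    def is_non_clashing(concepts, T):
        """No two distinct concepts are both consistent with T(C1) | T(C2)."""
        for c1, c2 in combinations(concepts, 2):
            union = T[c1] | T[c2]
            if consistent(c1, union) and consistent(c2, union):
                return False
        return True

    # Three concepts over {0, 1, 2} with small teaching sets.
    concepts = [frozenset(), frozenset({0}), frozenset({0, 1})]
    T = {concepts[0]: {(0, 0)},
         concepts[1]: {(0, 1), (1, 0)},
         concepts[2]: {(1, 1)}}
    print(is_non_clashing(concepts, T))  # -> True; max |T(C)| = 2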
We study NCTMs and NCTM$^+$s for the concept class $\mathcal{B}(G)$ consisting of all balls of a graph $G$. We show that the associated decision problem {\sc B-NCTD$^+$} for NCTD$^+$ is NP-complete in split, co-bipartite, and bipartite graphs. Surprisingly, we even prove that, unless the ETH fails, {\sc B-NCTD$^+$} does not admit an algorithm running in time $2^{2^{o(\mathrm{vc})}} \cdot n^{O(1)}$, nor a kernelization algorithm outputting a kernel with $2^{o(\mathrm{vc})}$ vertices, where vc is the vertex cover number of $G$.
These are extremely rare results: it is only the second (fourth, resp.) problem
in NP to admit a double-exponential lower bound parameterized by vc (treewidth,
resp.), and only one of very few problems to admit an ETH-based conditional
lower bound on the number of vertices in a kernel. We complement these lower
bounds with matching upper bounds. For trees, interval graphs, cycles, and
trees of cycles, we derive NCTM$^+$s or NCTMs for $\mathcal{B}(G)$ of size proportional to its VC-dimension. For Gromov-hyperbolic graphs, we design an approximate NCTM$^+$ for $\mathcal{B}(G)$ of size 2.
Comment: Shortened abstract due to character limit.
Learning with non-Standard Supervision
Machine learning has enjoyed astounding practical
success in a wide range of applications in recent
years, practical success that often hurries ahead of our
theoretical understanding. The standard framework for machine
learning theory assumes full supervision, that is, training data
consists of correctly labeled iid examples from the same task
that the learned classifier is supposed to be applied to.
However, many practical applications successfully make use of
the sheer abundance of data that is currently produced. Such
data may not be labeled or may be collected from various
sources.
The focus of this thesis is to provide a theoretical analysis of machine learning regimes where the learner is given such (possibly large) amounts of non-perfect training data. In
particular, we investigate the benefits and limitations of
learning with unlabeled data in semi-supervised learning and
active learning, as well as the benefits and limitations of learning
from data that has been generated by a task that is different
from the target task (domain adaptation learning).
For all three settings, we propose Probabilistic Lipschitzness to model the relatedness between the labels and the underlying domain space, and we discuss our suggested notion by comparing it to other common data assumptions.
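For reference, one common formulation of Probabilistic Lipschitzness (following Urner and Ben-David; our rendering, the thesis may use a variant) reads:

    \[
      \Pr_{x \sim D}\bigl[\exists\, y \in X :\ f(x) \neq f(y) \ \wedge\ d(x, y) \le \lambda\bigr]
      \;\le\; \phi(\lambda) \qquad \text{for all } \lambda > 0,
    \]

where $f : X \to \{0,1\}$ is the labeling function, $D$ is a distribution over a metric space $(X, d)$, and $\phi : (0, \infty) \to [0, 1]$ is non-decreasing; the faster $\phi$ vanishes as $\lambda \to 0$, the more "locally constant" the labels are.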
Supervised Learning under Constraints
As supervised learning occupies a larger and larger place in our everyday life, it is met with more and more constrained settings. Dealing with those constraints is a key to fostering new progress in the field, expanding ever further the limits of machine learning, a likely necessary step on the road to artificial general intelligence.
Supervised learning is an inductive paradigm in which time and data are refined into knowledge, in the form of predictive models. These models can sometimes be, it must be conceded, opaque, memory-demanding, and energy-consuming. In this setting, a constraint can mean any number of things. Essentially, a constraint is anything that stands in the way of supervised learning, be it the lack of time, of memory, of data, or of understanding.
Additionally, the scope of applicability of supervised learning is so vast it can appear daunting. It is useful in areas ranging from medical analysis to autonomous driving, areas for which strong guarantees are required.
All those constraints (time, memory, data, interpretability, reliability) may conflict with the traditional goal of supervised learning. In such a case, finding a balance between the constraints and the standard objective is problem-dependent, thus requiring generic solutions. Alternatively, concerns might arise after learning, in which case solutions must be developed under sub-optimal conditions, with constraints adding up. An example of such a situation is trying to enforce reliability once the data is no longer available.
After detailing the background in which this thesis is set (what supervised learning is and why it is difficult, which algorithms will be used, and where it sits in the broader landscape of knowledge), we discuss four different scenarios.
The first one is about learning a good decision-forest model of limited size directly, without first learning a large model and then compressing it. For that, we have developed the Globally Induced Forest (GIF) algorithm, which mixes local and global optimization to produce accurate predictions under memory constraints in reasonable time. More specifically, the global part makes it possible to sidestep the redundancy inherent in traditional decision forests.
It is shown that the proposed method is more than competitive with standard tree-based ensembles under corresponding constraints, and can sometimes even surpass much larger models.
The second scenario corresponds to the example given above: trying to enforce reliability without data. More specifically, the focus is on out-of-distribution (OOD) detection: recognizing samples which do not come from the distribution the model was learned from. Tackling this problem with an utter lack of data is challenging. Our investigation focuses on image classification with convolutional neural networks. We propose indicators which can be computed alongside the prediction at little additional cost. These indicators prove useful, stable, and complementary for OOD detection. We also introduce a surprisingly simple yet effective summary indicator, shown to perform well across several networks and datasets. It can easily be tuned further as soon as samples become available. Overall, interesting results can be reached in all but the most severe settings, for which it was a priori doubtful to come up with a data-free solution.
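For orientation, the standard maximum-softmax-probability baseline for OOD detection (Hendrycks and Gimpel) can be written in a few lines; the thesis's own indicators are richer, so this is only an illustrative stand-in:

    # Maximum softmax probability: low values flag likely-OOD inputs.
    import numpy as np

    def msp_indicator(logits):
        z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return p.max(axis=1)

    # The threshold is tunable once a few OOD samples become available,
    # as the abstract notes.
    scores = msp_indicator(np.array([[4.0, 0.1, 0.2], [1.1, 1.0, 0.9]]))
    is_ood = scores < 0.5                                # -> [False, True]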
The third scenario relates to transferring the knowledge of a large model into a smaller one in the absence of data. To do so, we propose to leverage a collection of unlabeled data, which is easy to come by in domains such as image classification. Two schemes are proposed (and then analyzed) to provide optimal transfer. First, we propose a biasing mechanism in the choice of which unlabeled data to use, so that the focus is on the most relevant samples. Second, we design a teaching mechanism, applicable to almost all pairs of large and small networks, which allows for a much better knowledge transfer between the networks. Overall, good results are obtainable in decent time, provided the collection of data actually contains relevant samples.
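For context, the generic knowledge-distillation objective such schemes build on (Hinton et al.) needs only unlabeled inputs; the biasing and teaching mechanisms above are refinements not reproduced in this sketch:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=4.0):
        """KL divergence between softened teacher and student predictions."""
        log_p_student = F.log_softmax(student_logits / T, dim=1)
        p_teacher = F.softmax(teacher_logits / T, dim=1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

    # One training step on an unlabeled batch x: no ground-truth labels
    # are needed; only the student's parameters are updated.
    def step(student, teacher, x, optimizer):
        with torch.no_grad():
            t_logits = teacher(x)
        loss = distillation_loss(student(x), t_logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()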
The fourth scenario tackles the problem of interpretability: what knowledge can be gleaned, more or less indirectly, from data. We discuss two subproblems. The first is to show that GIFs (cf. supra) can be used to derive intrinsically interpretable models. The second consists of a comparative study between methods and types of models (namely decision forests and neural networks) for the specific purpose of quantifying how important each variable is in a given problem. After a preliminary study on benchmark datasets, the analysis turns to a concrete biological problem: inferring gene regulatory networks from data. An ambivalent conclusion is reached: neural networks can be made to perform better than decision forests at prediction in almost all instances, but struggle to identify the relevant variables in some situations. It would seem that better (motivated) methods need to be proposed for neural networks, especially in the face of highly non-linear problems.
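For the decision-forest side of such a comparison, a minimal GENIE3-style sketch (our own; the helper name is hypothetical) regresses each gene on all others and reads off variable importances:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def grn_importances(expr):
        """expr: (samples, genes) matrix -> (genes, genes) importance scores."""
        n_genes = expr.shape[1]
        scores = np.zeros((n_genes, n_genes))
        for target in range(n_genes):
            predictors = np.delete(np.arange(n_genes), target)
            rf = RandomForestRegressor(n_estimators=100, random_state=0)
            rf.fit(expr[:, predictors], expr[:, target])
            scores[predictors, target] = rf.feature_importances_
        return scores  # scores[i, j]: evidence that gene i regulates gene j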