Selecting the Number of Clusters with a Stability Trade-off: an Internal Validation Criterion
Model selection is a major challenge in non-parametric clustering. There is
no universally admitted way to evaluate clustering results for the obvious
reason that there is no ground truth against which results could be tested, as
in supervised learning. The difficulty of finding a universal evaluation
criterion is a direct consequence of the fundamentally ill-defined objective of
clustering. From this perspective, clustering stability has emerged as a natural
and model-agnostic principle: an algorithm should find stable structures in the
data. If data sets are repeatedly sampled from the same underlying
distribution, an algorithm should find similar partitions. However, it turns
out that stability alone is not a well-suited tool to determine the number of
clusters. For instance, it is unable to detect if the number of clusters is too
small. We propose a new principle for clustering validation: a good clustering
should be stable, and within each cluster, there should exist no stable
partition. This principle leads to a novel internal clustering validity
criterion based on between-cluster and within-cluster stability, overcoming
limitations of previous stability-based methods. We empirically show that
additive-noise perturbation uncovers structure better than sampling-based
perturbation. We demonstrate the effectiveness of our method for
selecting the number of clusters through a large number of experiments and
compare it with existing evaluation methods.
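The stability principle described above can be illustrated with a minimal sketch (this is not the paper's actual criterion): for each candidate number of clusters k, cluster several additive-noise-perturbed copies of the data and measure how similar the resulting partitions are. The helper name `stability_score`, the noise scale, and the choice of k-means with the adjusted Rand index are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def stability_score(X, k, n_perturb=10, noise_scale=0.1, seed=0):
    """Mean pairwise adjusted Rand index between clusterings of
    additive-noise-perturbed copies of X (higher = more stable)."""
    rng = np.random.default_rng(seed)
    labels = []
    for _ in range(n_perturb):
        # Additive-noise perturbation instead of subsampling.
        Xp = X + rng.normal(scale=noise_scale, size=X.shape)
        labels.append(
            KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xp)
        )
    scores = [adjusted_rand_score(a, b)
              for i, a in enumerate(labels) for b in labels[i + 1:]]
    return float(np.mean(scores))

# Two well-separated blobs: k=2 should be at least as stable as k=3,
# since a 3-way clustering must split one blob arbitrarily.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
s2, s3 = stability_score(X, 2), stability_score(X, 3)
```

Note that, as the abstract points out, stability alone can fail to flag an underestimated k; the paper's criterion additionally checks that no stable partition exists within each cluster, which this sketch does not implement.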
Structured Prediction on Dirty Datasets
Many errors cannot be detected or repaired without taking into account the underlying structure and dependencies in the dataset. One way of modeling the structure of the data is with graphical models. Graphical models combine probability theory and graph theory to address a key objective in designing and fitting probabilistic models: capturing dependencies among relevant random variables. Representing structure helps explain the side effects of errors and reveals the correct interrelationships between data points. Hence, a principled representation of structure in prediction and cleaning tasks over dirty data is essential for the quality of downstream analytical results. Existing structured prediction research considers limited structures and configurations, with little attention to performance limitations and to how well the problem can be solved in more general settings where the structure is complex and rich.
In this dissertation, I present the following thesis: by leveraging the underlying dependency and structure in machine learning models, we can effectively detect and clean errors via pragmatic structured prediction techniques. To highlight the main contributions: I investigate prediction algorithms and systems on dirty data with more realistic structure and dependencies, to help deploy this type of learning in more pragmatic settings. Specifically, we introduce a few-shot learning framework for error detection that uses structure-based features of the data, such as denial constraint violations and Bayesian networks as co-occurrence features. I study the problem of recovering the latent ground-truth labeling of a structured instance. I then consider the problem of mining integrity constraints from data, specifically using sampling methods to extract approximate denial constraints. Finally, I introduce an ML framework that uses solitary and structured data features to solve the problem of record fusion.
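The idea of using denial-constraint violations as features can be sketched with a hypothetical toy example (not the dissertation's framework): under a single illustrative constraint, "two rows may not share a zip code but disagree on city", count for each row how many other rows it forms a violating pair with. The data, constraint, and helper name `dc_violation_counts` are all assumptions made for illustration.

```python
from collections import defaultdict

rows = [
    {"zip": "02139", "city": "Cambridge"},
    {"zip": "02139", "city": "Cambridge"},
    {"zip": "02139", "city": "Boston"},     # conflicts with the two rows above
    {"zip": "10001", "city": "New York"},
]

def dc_violation_counts(rows):
    """Per-row count of violating pairs under the constraint:
    no two rows with the same zip may have different cities."""
    counts = [0] * len(rows)
    by_zip = defaultdict(list)
    for i, r in enumerate(rows):
        by_zip[r["zip"]].append(i)
    for idxs in by_zip.values():
        for i in idxs:
            for j in idxs:
                if i != j and rows[i]["city"] != rows[j]["city"]:
                    counts[i] += 1
    return counts

features = dc_violation_counts(rows)
# Row 2 conflicts with rows 0 and 1; rows 0 and 1 each conflict with row 2,
# so features == [1, 1, 2, 0].
```

In an error-detection setting, such counts would be one column of a feature vector fed to a classifier, alongside other structure-based signals like co-occurrence statistics.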