67 research outputs found
Sparse machine learning methods with applications in multivariate signal processing
This thesis details theoretical and empirical work that draws from two main subject areas: Machine
Learning (ML) and Digital Signal Processing (DSP). A unified general framework is given for the application
of sparse machine learning methods to multivariate signal processing. In particular, methods that
enforce sparsity will be employed for reasons of computational efficiency, regularisation, and compressibility.
The methods presented can be seen as modular building blocks that can be applied to a variety
of applications. Application specific prior knowledge can be used in various ways, resulting in a flexible
and powerful set of tools. The motivation for the methods is to be able to learn and generalise from a set
of multivariate signals.
In addition to testing on benchmark datasets, a series of empirical evaluations on real-world
datasets was carried out. These included: the classification of musical genre from polyphonic audio
files; a study of how the sampling rate in a digital radar can be reduced through the use of Compressed
Sensing (CS); analysis of human perception of different modulations of musical key from
Electroencephalography (EEG) recordings; classification of genre of musical pieces to which a listener
is attending from Magnetoencephalography (MEG) brain recordings. These applications demonstrate
the efficacy of the framework and highlight interesting directions for future research.
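As a hedged illustration of the Compressed Sensing idea referred to above (not the radar processing pipeline studied in the thesis), the sketch below recovers a synthetic sparse signal from a reduced number of random linear measurements by L1-penalised regression; the signal length, sparsity level, and regularisation strength are invented for the example.

```python
# Illustrative CS sketch: sparsity allows reconstruction from far fewer
# measurements than the original sampling rate would require.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, k, m = 256, 8, 80                      # signal length, non-zeros, measurements

signal = np.zeros(n)
signal[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ signal                                  # compressed measurements

recovered = Lasso(alpha=1e-3, max_iter=100000).fit(A, y).coef_
error = np.linalg.norm(recovered - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error: {error:.3f}")
```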
A practical view of large-scale classification: feature selection and real-time classification
Unpublished doctoral thesis, Universidad Autónoma de Madrid, Escuela Politécnica Superior, May 201
Design of Machine Learning Algorithms with Applications to Breast Cancer Detection
Machine learning is concerned with the design and development of algorithms and
techniques that allow computers to 'learn' from experience with respect to some class
of tasks and performance measure. One application of machine learning is to improve
the accuracy and efficiency of computer-aided diagnosis systems to assist physicians,
radiologists, cardiologists, neuroscientists, and health-care technologists. This thesis
focuses on machine learning and its applications to breast cancer detection. Emphasis
is placed on preprocessing of features, pattern classification, and model selection.
Before the classification task, feature selection and feature transformation may be
performed to reduce the dimensionality of the features and to improve the classification
performance. A genetic algorithm (GA) can be employed for feature selection based
on different measures of data separability or the estimated risk of a chosen classifier.
A separate nonlinear transformation can be performed by applying kernel principal
component analysis and kernel partial least squares.
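As a rough, generic illustration of the kind of kernel-based feature transformation mentioned above (kernel PCA followed by a classifier, here with scikit-learn on synthetic data rather than the thesis's breast-cancer features and its own implementation):

```python
# Illustrative sketch: project data onto a few kernel principal components
# before classification; the dataset and kernel width are arbitrary.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 30))          # stand-in for a feature matrix
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # stand-in for benign/malignant labels

pipeline = make_pipeline(KernelPCA(n_components=5, kernel="rbf", gamma=0.05),
                         SVC(kernel="linear"))
print("CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
```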
Different classifiers are proposed in this work: The SOM-RBF network combines
self-organizing maps (SOMs) and radial basis function (RBF) networks, with the RBF
centers set as the weight vectors of neurons from the competitive layer of a trained
SOM. The pairwise Rayleigh quotient (PRQ) classifier seeks one discriminating boundary
by maximizing an unconstrained optimization objective, called the PRQ criterion,
formed with a set of pairwise constraints instead of individual training samples.
The strict 2-surface proximal (S2SP) classifier seeks two proximal planes that are not
necessarily parallel to fit the distribution of the samples in the original feature space or
a kernel-defined feature space, by maximizing two strict optimization objectives with
a 'square of sum' optimization factor. Two variations of the support vector data description
(SVDD) with negative samples (NSVDD) are proposed by involving different
forms of slack vectors, which learn a closed, spherically shaped boundary, called the
supervised compact hypersphere (SCH), around a set of samples in the target class. We
extend the NSVDDs to solve multi-class classification problems based on distances
between the samples and the centers of the learned SCHs in a kernel-defined feature
space, using a combination of linear discriminant analysis and the nearest-neighbor rule.
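The SOM-RBF construction can be sketched loosely as follows: prototype vectors from a trained competitive layer serve as RBF centres, and a simple read-out is fitted on the RBF activations. In this hedged example, k-means prototypes stand in for the weight vectors of a trained SOM, so it illustrates the general recipe rather than the proposed classifier itself.

```python
# Minimal RBF-network sketch: centres come from a clustering step standing in
# for a trained SOM's competitive layer; the read-out is a ridge classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 10))
y = (np.linalg.norm(X[:, :2], axis=1) > 1.2).astype(int)

centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
width = np.median(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2))

def rbf_features(Z):
    d = np.linalg.norm(Z[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

clf = RidgeClassifier().fit(rbf_features(X), y)
print("training accuracy:", clf.score(rbf_features(X), y))
```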
The problem of model selection is studied to pick the best values of the hyperparameters
for a parametric classifier. To choose the optimal kernel or regularization
parameters of a classifier, we investigate different criteria, such as the validation error
estimate and the leave-one-out bound, as well as different optimization methods, such
as grid search, gradient descent, and GA. By viewing the tuning problem of the multiple
parameters of a 2-norm support vector machine (SVM) as an identification problem
of a nonlinear dynamic system, we design a tuning system by employing the extended
Kalman filter based on cross validation. Independent kernel optimization based on
different measures of data separability is also investigated for different kernel-based
classifiers.
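A minimal, generic example of the grid-search style of hyperparameter selection discussed above (cross-validated search over an RBF-SVM's regularization parameter C and kernel width gamma; the gradient-descent, GA, and extended-Kalman-filter tuners are not shown, and the data are synthetic):

```python
# Cross-validated grid search over an RBF-SVM's regularization and kernel widths.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 15))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print("best hyperparameters:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```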
Numerous computer experiments using benchmark datasets verify the theoretical
results and compare the techniques in terms of classification
accuracy or area under the receiver operating characteristic curve. Computational
requirements, such as the computing time and the number of hyper-parameters, are
also discussed.
All of the presented methods are applied to breast cancer detection from fine-needle
aspiration and in mammograms, as well as screening of knee-joint vibroarthrographic
signals and automatic monitoring of roller bearings with vibration signals. Experimental
results demonstrate the effectiveness of these methods, with improved classification
performance.
For breast cancer detection, instead of only providing a binary diagnostic decision
of 'malignant' or 'benign', we propose methods to assign a measure of confidence
of malignancy to an individual mass, by calculating probabilities of being benign and
malignant with a single classifier or a set of classifiers.
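A hedged sketch of the general idea of reporting a probability of malignancy rather than a hard 'malignant'/'benign' label, using a generic calibrated classifier on synthetic features (the thesis derives its confidence measures with its own classifiers):

```python
# Report a probability of malignancy instead of a hard benign/malignant decision.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(500) > 0).astype(int)

clf = CalibratedClassifierCV(LinearSVC(), cv=5).fit(X, y)
new_mass = rng.standard_normal((1, 12))
print("estimated probability of malignancy:", clf.predict_proba(new_mass)[0, 1])
```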
Two-Stage Fuzzy Multiple Kernel Learning Based on Hilbert-Schmidt Independence Criterion
© 1993-2012 IEEE. Multiple kernel learning (MKL) is a principled approach to kernel combination and selection for a variety of learning tasks, such as classification, clustering, and dimensionality reduction. In this paper, we develop a novel fuzzy multiple kernel learning model based on the Hilbert-Schmidt independence criterion (HSIC) for classification, which we call HSIC-FMKL. In this model, we first propose an HSIC Lasso-based MKL formulation, which not only has a clear statistical interpretation that minimum redundant kernels with maximum dependence on output labels are found and combined, but also enables the global optimal solution to be computed efficiently by solving a Lasso optimization problem. Since the traditional support vector machine (SVM) is sensitive to outliers or noise in the dataset, fuzzy SVM (FSVM) is used to select the prediction hypothesis once the optimal kernel has been obtained. The main advantage of FSVM is that we can associate a fuzzy membership with each data point such that these data points can have different effects on the training of the learning machine. We propose a new fuzzy membership function using a heuristic strategy based on the HSIC. The proposed HSIC-FMKL is a two-stage kernel learning approach and the HSIC is applied in both stages. We perform extensive experiments on real-world datasets from the UCI benchmark repository and the application domain of computational biology, which validate the superiority of the proposed model in terms of prediction accuracy.
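For orientation, an HSIC-Lasso-style kernel combination of the sort described above is commonly written as the following non-negative Lasso problem (a standard-form sketch; the paper's exact formulation may differ). Here K^{(m)} are the M candidate kernel matrices, L is the output (label) kernel matrix, H is the centring matrix, and the learned weights beta_m select and combine kernels:

```latex
\min_{\beta \ge 0} \; \tfrac{1}{2}\Bigl\| \bar{L} - \sum_{m=1}^{M} \beta_m \bar{K}^{(m)} \Bigr\|_F^2 + \lambda \|\beta\|_1,
\qquad \bar{K}^{(m)} = H K^{(m)} H, \quad \bar{L} = H L H, \quad H = I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top}.
```

Expanding the squared Frobenius norm shows how this matches the abstract's reading: the cross term rewards kernels with high HSIC dependence on the labels, while the quadratic term penalizes pairs of mutually dependent (redundant) kernels.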
Comparison of Classification Techniques to Review the Plausibility of Adjustments to Multiples-Based Equity Valuations: Empirical Study for South Africa
Statistics and Actuarial Science
Probabilistic Machine Learning Approaches in Process Systems Engineering via Parametric Distribution Approximation
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, School of Chemical and Biological Engineering, August 2021. Jong Min Lee.
With the rapid development of measurement technology, higher quality and vast amounts of process data have become available. Nevertheless, process data are 'scarce' in many cases as they are sampled only at certain operating conditions while the dimensionality of the system is large. Furthermore, the process data are inherently stochastic due to the internal characteristics of the system or measurement noise. For this reason, uncertainty is inevitable in process systems, and estimating it becomes a crucial part of engineering tasks as the prediction errors can lead to misguided decisions and cause severe casualties or economic losses. A popular approach to this is applying probabilistic inference techniques that can model the uncertainty in terms of probability. However, most of the existing probabilistic inference techniques are based on recursive sampling, which makes it difficult to use them for industrial applications that require processing a high-dimensional and massive amount of data. To address such an issue, this thesis proposes probabilistic machine learning approaches based on parametric distribution approximation, which can model the uncertainty of the system and circumvent the computational complexity as well. The proposed approach is applied to three major process engineering tasks: process monitoring, system modeling, and process design.
First, a process monitoring framework is proposed that utilizes a probabilistic classifier for fault classification. To enhance the accuracy of the classifier and reduce the computational cost for its training, a feature extraction method called probabilistic manifold learning is developed and applied to the process data ahead of the fault classification. We demonstrate that this manifold approximation process not only reduces the dimensionality of the data but also casts the data into a clustered structure, making the classifier have a low dependency on the type and dimension of the data. By exploiting this property, non-metric information (e.g., fault labels) of the data is effectively incorporated and the diagnosis performance is drastically improved.
Second, a probabilistic modeling approach based on Bayesian neural networks is proposed. The parameters of deep neural networks are transformed into Gaussian distributions and trained using variational inference. The redundancy of the parameter is autonomously inferred during the model training, and insignificant parameters are eliminated a posteriori. Through a verification study, we demonstrate that the proposed approach can not only produce high-fidelity models that describe the stochastic behaviors of the system but also produce the optimal model structure.
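A hedged sketch of the general mechanism described above, on a toy regression problem rather than the thesis's Bayesian LSTM: every weight is given a Gaussian posterior N(mu, sigma^2), the ELBO (data fit plus KL to a standard-normal prior) is maximized by stochastic gradient steps, and weights whose posterior mean collapses towards zero are candidates for pruning. Network size, prior, and noise level are arbitrary choices for the example.

```python
# Mean-field variational inference over the weights of a tiny network (sketch).
import torch

torch.manual_seed(0)
x = torch.linspace(-2, 2, 200).unsqueeze(1)
y = torch.sin(2 * x) + 0.1 * torch.randn_like(x)      # noisy toy data

n_hidden = 32
shapes = [(1, n_hidden), (n_hidden,), (n_hidden, 1), (1,)]
mus = [torch.zeros(s, requires_grad=True) for s in shapes]
rhos = [torch.full(s, -3.0, requires_grad=True) for s in shapes]  # sigma = softplus(rho)

def forward(sample):
    w1, b1, w2, b2 = sample
    return torch.tanh(x @ w1 + b1) @ w2 + b2

opt = torch.optim.Adam(mus + rhos, lr=0.02)
for step in range(2000):
    opt.zero_grad()
    sigmas = [torch.nn.functional.softplus(r) for r in rhos]
    sample = [m + s * torch.randn_like(m) for m, s in zip(mus, sigmas)]
    nll = ((forward(sample) - y) ** 2).sum() / (2 * 0.1 ** 2)
    # KL between each Gaussian posterior and a standard-normal prior.
    kl = sum((0.5 * (m ** 2 + s ** 2) - torch.log(s) - 0.5).sum()
             for m, s in zip(mus, sigmas))
    (nll + kl).backward()
    opt.step()

# Small posterior means indicate weights that could be pruned a posteriori.
print("fraction of near-zero first-layer weight means:",
      (mus[0].abs() < 0.05).float().mean().item())
```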
Finally, a novel process design framework is proposed based on reinforcement learning. Unlike the conventional optimization methods that recursively evaluate the objective function to find an optimal value, the proposed method approximates the objective function surface by parametric probabilistic distributions. This allows learning the continuous action policy without introducing any cumbersome discretization process. Moreover, the probabilistic policy gives means for effective control of the exploration and exploitation rates according to the certainty information. We demonstrate that the proposed framework can learn process design heuristics during the solution process and use them to solve similar design problems.
Chapter 1 Introduction 1
1.1. Motivation 1
1.2. Outline of the thesis 5
Chapter 2 Backgrounds and preliminaries 9
2.1. Bayesian inference 9
2.2. Monte Carlo 10
2.3. Kullback-Leibler divergence 11
2.4. Variational inference 12
2.5. Riemannian manifold 13
2.6. Finite extended-pseudo-metric space 16
2.7. Reinforcement learning 16
2.8. Directed graph 19
Chapter 3 Process monitoring and fault classification with probabilistic manifold learning 20
3.1. Introduction 20
3.2. Methods 25
3.2.1. Uniform manifold approximation 27
3.2.2. Clusterization 28
3.2.3. Projection 31
3.2.4. Mapping of unknown data query 32
3.2.5. Inference 33
3.3. Verification study 38
3.3.1. Dataset description 38
3.3.2. Experimental setup 40
3.3.3. Process monitoring 43
3.3.4. Projection characteristics 47
3.3.5. Fault diagnosis 50
3.3.6. Computational Aspects 56
Chapter 4 Process system modeling with Bayesian neural networks 59
4.1. Introduction 59
4.2. Methods 63
4.2.1. Long Short-Term Memory (LSTM) 63
4.2.2. Bayesian LSTM (BLSTM) 66
4.3. Verification study 68
4.3.1. System description 68
4.3.2. Estimation of the plasma variables 71
4.3.3. Dataset description 72
4.3.4. Experimental setup 72
4.3.5. Weight regularization during training 78
4.3.6. Modeling complex behaviors of the system 80
4.3.7. Uncertainty quantification and model compression 85
Chapter 5 Process design based on reinforcement learning with distributional actor-critic networks 89
5.1. Introduction 89
5.2. Methods 93
5.2.1. Flowsheet hashing 93
5.2.2. Behavioral cloning 99
5.2.3. Neural Monte Carlo tree search (N-MCTS) 100
5.2.4. Distributional actor-critic networks (DACN) 105
5.2.5. Action masking 110
5.3. Verification study 110
5.3.1. System description 110
5.3.2. Experimental setup 111
5.3.3. Result and discussions 115
Chapter 6 Concluding remarks 120
6.1. Summary of the contributions 120
6.2. Future works 122
Appendix 125
A.1. Proof of Lemma 1 125
A.2. Performance indices for dimension reduction 127
A.3. Model equations for process units 130
Bibliography 132
Abstract (in Korean) 149
Data Reduction Algorithms in Machine Learning and Data Science
Raw data are usually required to be pre-processed for better representation or discrimination of classes. This pre-processing can be done by data reduction, i.e., either reduction in dimensionality or numerosity (cardinality). Dimensionality reduction can be used for feature extraction or data visualization. Numerosity reduction is useful for ranking data points or finding the most and least important data points. This thesis proposes several algorithms for data reduction, known as dimensionality and numerosity reduction, in machine learning and data science. Dimensionality reduction covers feature extraction and feature selection methods, while numerosity reduction includes prototype selection and prototype generation approaches. This thesis focuses on feature extraction and prototype selection for data reduction. Dimensionality reduction methods can be divided into three categories, i.e., spectral, probabilistic, and neural network-based methods. The spectral methods have a geometrical point of view and are mostly reduced to the generalized eigenvalue problem. Probabilistic and neural network-based methods have stochastic and information-theoretic foundations, respectively. Numerosity reduction methods can be divided into methods based on variance, geometry, and isolation.
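As a concrete instance of the statement that spectral methods "are mostly reduced to the generalized eigenvalue problem", the classical Fisher discriminant criterion (standard textbook form, with S_B and S_W the between- and within-class scatter matrices) is:

```latex
\max_{w} \; \frac{w^{\top} S_B\, w}{w^{\top} S_W\, w}
\quad\Longleftrightarrow\quad
S_B\, w = \lambda\, S_W\, w,
```

with the projection directions given by the leading generalized eigenvectors; the weighted and Roweis variants proposed in the thesis presumably modify the scatter matrices while keeping this algebraic form.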
For dimensionality reduction, under the spectral category, I propose weighted Fisher discriminant analysis, Roweis discriminant analysis, and image quality aware embedding. I also propose quantile-quantile embedding as a probabilistic method where the distribution of the embedding is chosen by the user. Backprojection, Fisher losses, and dynamic triplet sampling using Bayesian updating are other proposed methods in the neural network-based category. Backprojection is for training shallow networks with a projection-based perspective in manifold learning. Two Fisher losses are proposed for training Siamese triplet networks for increasing and decreasing the inter- and intra-class variances, respectively. Two dynamic triplet mining methods, which are based on Bayesian updating to draw triplet samples stochastically, are proposed. For numerosity reduction, principal sample analysis and instance ranking by matrix decomposition are the proposed variance-based methods; these methods rank instances using inter-/intra-class variances and matrix factorization, respectively. Curvature anomaly detection, in which the points are assumed to be the vertices of a polyhedron, and isolation Mondrian forest are the proposed methods based on geometry and isolation, respectively.
To assess the proposed tools developed for data reduction, I apply them to some applications in medical image analysis, image processing, and computer vision. Data reduction, used as a pre-processing tool, has different applications because it provides various ways of feature extraction and prototype selection for applying to different types of data. Dimensionality reduction extracts informative features and prototype selection selects the most informative data instances. For example, for medical image analysis, I use Fisher losses and dynamic triplet sampling for embedding histopathology image patches and demonstrating how different the tumorous cancer tissue types are from the normal ones. I also propose offline/online triplet mining using extreme distances for this embedding. In image processing and computer vision applications, I propose Roweisfaces and Roweisposes for face recognition and 3D action recognition, respectively, using my proposed Roweis discriminant analysis method. I also introduce the concepts of anomaly landscape and anomaly path using the proposed curvature anomaly detection and use them to denoise images and video frames. I report extensive experiments on different datasets to show the effectiveness of the proposed algorithms. By experiments, I demonstrate that the proposed methods are useful for extracting informative features and instances for better accuracy, representation, prediction, class separation, data reduction, and embedding. I show that the proposed dimensionality reduction methods can extract informative features for better separation of classes. An example is obtaining an embedding space for separating cancer histopathology patches from the normal patches, which helps hospitals diagnose cancers more easily in an automatic way. I also show that the proposed numerosity reduction methods are useful for ranking data instances based on their importance and reducing data volumes without a significant drop in performance of machine learning and data science algorithms.
Learning and Testing Powerful Hypotheses
Progress in science is driven by the formulation of hypotheses about phenomena of interest and by
collecting evidence for their validity or refuting them. While some hypotheses are amenable to deductive
proofs, other hypotheses can only be accessed in a data-driven manner. For most phenomena, scientists cannot
control all degrees of freedom and hence data is often inherently stochastic. This stochasticity makes it impossible to
test hypotheses with absolute certainty. The field of statistical hypothesis testing formalizes the probabilistic
assessment of hypotheses, enabling researchers to control the error rates, for example, at which they reject a
true hypothesis, while aiming to reject false hypotheses as often as possible.
But how do we come up with promising hypotheses, and how can we test them efficiently? Can we use
machine learning systems to automatically generate promising hypotheses? This thesis studies different
aspects of this question.
A simple rule for statistical hypothesis testing states that one should not peek at the data when formulating
a hypothesis. This is indeed true if done naively, that is, when the hypothesis is then simply tested with
the data as if one had not looked at it yet. However, we show that in principle using the same data for
learning the hypothesis and testing it is feasible if we can correct for the selection of the hypothesis. We treat
this in the case of the two-sample problem. Given two samples, the hypothesis to be tested is whether the
samples originate from the same distribution. We can reformulate this by testing whether the maximum
mean discrepancy over a (unit ball of a) reproducing kernel Hilbert space is zero. We show that we can
learn the kernel function, hence the exact test we use, and perform the test with the same data, while still
correctly controlling the Type-I error rates. Likewise, we demonstrate experimentally that taking all data into
account can lead to more powerful testing procedures than the data splitting approach. However, deriving
the formulae that correct for the selection procedure requires strong assumptions, which are only valid for a
specific (linear-time) estimate of the maximum mean discrepancy. In more general settings it is difficult, if
not impossible, to adjust for the selection.
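For readers new to the statistic, here is a hedged, plain-vanilla version of a kernel two-sample test (quadratic-time MMD estimate with an RBF kernel and a permutation null). It is the textbook construction, not the selective-inference procedure described above, and the bandwidth and sample sizes are arbitrary:

```python
# Plain quadratic-time MMD^2 (biased V-statistic) with an RBF kernel,
# plus a permutation test for the null hypothesis of equal distributions.
import numpy as np

def rbf_kernel(A, B, bandwidth):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=1.0):
    return (rbf_kernel(X, X, bandwidth).mean()
            + rbf_kernel(Y, Y, bandwidth).mean()
            - 2 * rbf_kernel(X, Y, bandwidth).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(0.5, 1.0, size=(100, 2))      # mean-shifted, so H0 is false

stat = mmd2(X, Y)
pooled = np.vstack([X, Y])
null_stats = []
for _ in range(200):                          # permutation null distribution
    perm = rng.permutation(len(pooled))
    null_stats.append(mmd2(pooled[perm[:100]], pooled[perm[100:]]))
p_value = (np.sum(np.array(null_stats) >= stat) + 1) / (len(null_stats) + 1)
print("MMD^2 =", round(stat, 4), " p-value =", round(p_value, 4))
```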
We thus also analyze the case where we split the data and use part of it to learn a test statistic. The maximum
mean discrepancy implicitly optimizes a mean discrepancy over the unit ball of a reproducing kernel Hilbert
space, and often the kernel itself is optimized on held-out data. We instead propose to optimize a witness
function directly on held-out data and use its mean discrepancy as a test statistic. This allows us to directly
maximize the test power, simplifies the theoretical treatment, and makes testing more efficient. We provide
and implement algorithms to learn the test statistics. Furthermore, we show analytically that the optimization
objective to learn powerful tests for the two-sample problem is closely related to the objectives used in
standard supervised learning tasks, namely the least-square loss and cross-entropy loss. This allows us to
indeed use existing machine learning tools when learning powerful hypotheses. Furthermore, since we use
held-out data for learning the test statistic, we can use any kind of model-selection and cross-validation
techniques to maximize the performance. To facilitate this for practitioners, we provide an open-source
Python package 'autotst' implementing an interface to existing libraries and running the whole testing
pipeline, including the learning of the hypothesis. Our presented methods reach state-of-the-art performance
on two-sample testing tasks. We also show how to trade off the computational resources required for the test
by sacrificing some statistical power, which can be important in practice. Furthermore, our test makes the
results easy to interpret.
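A loose sketch of the held-out witness idea in its simplest form: fit any real-valued scoring function on one half of each sample, then test whether its mean differs between the two samples on the other half. The logistic-regression witness and the Welch t-test approximation below are generic stand-ins, not the exact statistic or implementation provided by the autotst package:

```python
# Split-data witness test: fit a scoring function on one half, then compare its
# mean output across the two samples on the held-out half (Welch t-test here).
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(400, 5))      # sample from P
Y = rng.normal(0.2, 1.0, size=(400, 5))      # sample from Q (slightly shifted)

Xtr, Xte = X[:200], X[200:]                   # first halves train the witness
Ytr, Yte = Y[:200], Y[200:]

witness = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xtr, Ytr]), np.r_[np.zeros(200), np.ones(200)])

wx = witness.decision_function(Xte)           # witness values on held-out data
wy = witness.decision_function(Yte)
t_stat, p_value = stats.ttest_ind(wx, wy, equal_var=False)
print("p-value:", p_value)
```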
Having more computational power potentially allows extracting more information from data and thus obtaining
more significant results. Hence, investigating whether quantum computers can help in machine learning tasks
has gained popularity over the past years. We investigate this in light of the two-sample problem. We define
the quantum mean embedding, mapping probability distributions onto quantum states, and analyze when
this mapping is injective. While this is conceptually interesting on its own, we do not find a straightforward
way of harnessing any speed-up. The main problem here is that there is no known way to efficiently create
the quantum mean embedding. On the contrary, fundamental results in quantum information theory show
that this might generally be hard to do.
For two-sample testing, the usage of reproducing kernel Hilbert spaces has been established for many years
and proven important both theoretically and practically. In this case, we thus focused on practically relevant
aspects to make the tests as powerful and easy to use as possible. For other hypothesis testing tasks, the
usage of advanced machine learning tools still lags far behind. For specification tests based on conditional
moment restrictions, popular in econometrics, we take the first steps by defining a consistent test based on
kernel methods. Our test already has promising performance, but optimizing it, potentially with the other
insights gained in this thesis, is an open task.
- …