Parsimonious Mahalanobis Kernel for the Classification of High Dimensional Data
The classification of high-dimensional data with kernel methods is considered
in this article. Exploiting the emptiness property of high-dimensional spaces,
a kernel based on the Mahalanobis distance is proposed. Computing the
Mahalanobis distance requires inverting a covariance matrix; in high-dimensional
spaces, the estimated covariance matrix is ill-conditioned, making its inversion
unstable or impossible. Using a parsimonious statistical model, namely the High
Dimensional Discriminant Analysis model, the signal and noise subspaces are
estimated for each class considered, making the inverse of the class-specific
covariance matrix explicit and stable and leading to the definition of a
parsimonious Mahalanobis kernel. An SVM-based framework is used to select the
hyperparameters of the parsimonious Mahalanobis kernel by optimizing the
so-called radius-margin bound. Experimental results on three high-dimensional
data sets show that the proposed kernel is suitable for classifying
high-dimensional data, providing better classification accuracies than the
conventional Gaussian kernel.
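The kernel described above can be sketched in a few lines. This is a simplified, hypothetical illustration: a shrinkage-regularized covariance stands in for the paper's parsimonious HDDA estimate (which models signal and noise subspaces per class), but it demonstrates the same point that regularizing the covariance makes the inverse, and hence the Mahalanobis kernel, stable in high dimensions.

```python
import numpy as np

def mahalanobis_kernel(X, Y, cov, reg=1e-3):
    """Gaussian-type kernel built on the Mahalanobis distance.

    `reg` controls a shrinkage toward a scaled identity, a simple
    stand-in for the parsimonious covariance model of the paper,
    so the inverse exists even when the sample covariance is
    ill-conditioned (e.g. more dimensions than samples).
    """
    d = cov.shape[0]
    # Shrink toward (trace/d) * I: keeps the matrix positive definite.
    cov_reg = (1 - reg) * cov + reg * (np.trace(cov) / d) * np.eye(d)
    P = np.linalg.inv(cov_reg)
    # Pairwise squared Mahalanobis distances between rows of X and Y.
    diff = X[:, None, :] - Y[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, P, diff)
    return np.exp(-0.5 * d2)

# Toy usage: 5 samples in 10 dimensions (sample covariance is singular).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))
K = mahalanobis_kernel(X, X, np.cov(X, rowvar=False))
print(K.shape)                        # (5, 5)
print(np.allclose(np.diag(K), 1.0))   # True: zero distance to itself
```

The resulting Gram matrix `K` could then be passed to any kernel classifier (e.g. an SVM with a precomputed kernel), with `reg` treated as a hyperparameter alongside the SVM's own.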
Continuous learning of analytical and machine learning rate of penetration (ROP) models for real-time drilling optimization
Oil and gas operators strive to reach hydrocarbon reserves by drilling wells in the safest and fastest possible manner, providing indispensable energy to society at reduced costs while maintaining environmental sustainability. Real-time drilling optimization consists of selecting operational drilling parameters that maximize a desirable measure of drilling performance. Drilling optimization efforts often aspire to improve drilling speed, commonly referred to as rate of penetration (ROP). ROP is a function of the forces and moments applied to the bit, in addition to mud, formation, bit and hydraulic properties. Three operational drilling parameters may be constantly adjusted at surface to influence ROP towards a drilling objective: weight on bit (WOB), drillstring rotational speed (RPM), and drilling fluid (mud) flow rate. In the traditional, analytical approach to ROP modeling, inflexible equations relate WOB, RPM, flow rate and/or other measurable drilling parameters to ROP, and empirical model coefficients are computed for each rock formation to best fit field data. Over the last decade, enhanced data acquisition technology and widespread, inexpensive computational power have driven a surge in applications of machine learning (ML) techniques to ROP prediction. Machine learning algorithms leverage statistics to uncover relations between any prescribed inputs (features/predictors) and the quantity of interest (response). The biggest advantage of ML algorithms over analytical models is their flexibility in model form. With no set equation, ML models permit segmentation of the drilling operational parameter space. However, increased model complexity diminishes interpretability of how an adjustment to the inputs will affect the output. There is no single ROP model applicable in every situation. This study investigates all stages of the drilling optimization workflow, with emphasis on real-time continuous model learning.
Sensors constantly record data as wells are drilled, and it is postulated that ROP models can be retrained in real-time to adapt to changing drilling conditions. Cross-validation is assessed as a methodology to select the best performing ROP model for each drilling optimization interval in real-time. Constrained by rig equipment and operational limitations, drilling parameters are optimized in intervals with the most accurate ROP model determined by cross-validation. Dynamic range and full range training data segmentation techniques contest the classical lithology-dependent approach to ROP modeling. Spatial proximity and parameter similarity sample weighting expand data partitioning capabilities during model training. The prescribed ROP modeling and drilling parameter optimization scenarios are evaluated according to model performance, ROP improvements and computational expense.
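The cross-validation-based model selection described above can be sketched as follows. This is a hypothetical illustration, not the study's implementation: the candidate models, window size, and synthetic sensor channels (standing in for WOB, RPM and flow rate) are all assumptions, but the mechanism, scoring each candidate by cross-validation on the most recent data window and retraining the winner, matches the workflow the abstract describes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def select_rop_model(X_window, y_window, models, cv=3):
    """Score each candidate ROP model by cross-validation on the most
    recent window of drilling data, then refit the best one.
    (Hypothetical sketch of the real-time selection loop.)"""
    scores = {
        name: cross_val_score(m, X_window, y_window, cv=cv,
                              scoring='neg_root_mean_squared_error').mean()
        for name, m in models.items()
    }
    best = max(scores, key=scores.get)   # least-negative RMSE wins
    return best, models[best].fit(X_window, y_window)

# Synthetic stand-in for surface sensor channels: WOB, RPM, flow rate.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=0.5, size=200)  # "ROP"

candidates = {
    'linear (analytical-like)': LinearRegression(),
    'random forest': RandomForestRegressor(n_estimators=50, random_state=0),
}
# Use only the last 100 samples, mimicking a sliding training window.
name, model = select_rop_model(X[-100:], y[-100:], candidates)
print(name)
```

In a live setting, this selection would rerun at each optimization interval as new sensor data arrives, so the model family itself can change as drilling conditions change.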
Duality, Derivative-Based Training Methods and Hyperparameter Optimization for Support Vector Machines
In this thesis we consider the application of Fenchel's duality theory and gradient-based methods to the training and hyperparameter optimization of Support Vector Machines. We show that the dualization of convex training problems is theoretically possible in a rather general formulation. For training problems following a special structure (for instance, standard training problems) we find that the resulting optimality conditions can be interpreted concretely. This approach immediately leads to the well-known notion of support vectors and a formulation of the Representer Theorem. The proposed theory is applied to several examples, such that dual formulations of training problems and associated optimality conditions can be derived straightforwardly. Furthermore, we consider different formulations of the primal training problem which are equivalent under certain conditions. We also argue that the relation of the corresponding solutions to the solution of the dual training problem is not always intuitive. Based on the previous findings, we consider the application of customized optimization methods to the primal and dual training problems. A particular realization of Newton's method is derived which can be used to solve the primal training problem accurately. Moreover, we introduce a general convergence framework covering different types of decomposition methods for the solution of the dual training problem. In doing so, we are able to generalize well-known convergence results for the SMO method. Additionally, a discussion of the complexity of the SMO method and a motivation for a shrinking strategy reducing the computational effort are provided. In a final theoretical part, we consider the problem of hyperparameter optimization. We argue that this problem can be handled efficiently by means of gradient-based methods if the training problems are formulated appropriately.
Finally, we evaluate the theoretical results concerning the training and hyperparameter optimization approaches in practice on several example training problems.
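The dual training problem and the notion of support vectors can be illustrated with a minimal numerical sketch. This is not the thesis's Newton or decomposition method: it uses plain projected-gradient ascent on the dual of a bias-free soft-margin linear SVM (a deliberate simplification that drops the equality constraint, so the box projection is a single clip), but it shows the central dual phenomenon that only margin-defining points retain nonzero multipliers.

```python
import numpy as np

def svm_dual_train(X, y, C=1.0, lr=0.01, steps=2000):
    """Projected-gradient ascent on the dual of a bias-free
    soft-margin linear SVM (simplified stand-in for SMO-type
    decomposition methods):

        max_a  sum(a) - 0.5 * a^T Q a,   0 <= a_i <= C,

    where Q_ij = y_i y_j <x_i, x_j>.
    """
    G = y[:, None] * X          # rows are y_i * x_i
    Q = G @ G.T
    a = np.zeros(len(y))
    for _ in range(steps):
        grad = 1.0 - Q @ a                   # gradient of the dual objective
        a = np.clip(a + lr * grad, 0.0, C)   # project back onto the box
    w = (a * y) @ X             # recover primal weights from the KKT relation
    return a, w

# Linearly separable toy data in 2-D; the inner points define the margin.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, w = svm_dual_train(X, y)
print(np.sign(X @ w))   # matches y: all points correctly classified
# alpha is nonzero only for the two inner (support) points.
```

The clip step is exactly the projection onto the dual feasible box; in a full SVM with a bias term, the additional equality constraint sum(a_i * y_i) = 0 is what motivates the pairwise working-set updates of SMO discussed in the thesis.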