Detecting and approximating decision boundaries in low dimensional spaces
We present a method for detecting and approximating fault lines or fault
surfaces, that is, decision curves, in two and three dimensions with guaranteed
accuracy. Reformulated as a classification problem, our method starts from a
set of scattered points along with the corresponding classification algorithm
and constructs a representation of the decision curve by points with a
prescribed maximal distance to the true decision curve. Our algorithm ensures
that the representing point set covers the decision curve in its entire extent
and features local refinement based on the geometric properties of the decision
curve. We demonstrate applications of our method to problems related to the
detection of faults, to Multi-Criteria Decision Aid and, in combination with
Kirsch's factorization method, to solving an inverse acoustic scattering
problem. In all applications considered in this work, our method requires
significantly fewer pointwise classifications than previously employed
algorithms.
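The core primitive of such a method, locating a single point on the decision curve to prescribed accuracy given one sample from each class, can be sketched with bisection. The circle classifier, the bracket endpoints, and the tolerance below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def classify(x):
    # Hypothetical black-box classifier: class 0 inside the unit circle,
    # class 1 outside; it stands in for the scattered-data classifier.
    return 0 if x[0] ** 2 + x[1] ** 2 < 1.0 else 1

def boundary_point(classify, a, b, tol=1e-6):
    """Bisect the segment [a, b], where classify(a) != classify(b), until the
    bracket is shorter than tol; the midpoint then lies within tol of the
    decision curve along this segment."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ca = classify(a)
    while np.linalg.norm(b - a) > tol:
        m = 0.5 * (a + b)
        if classify(m) == ca:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

p = boundary_point(classify, [0.0, 0.0], [2.0, 0.0])
print(np.linalg.norm(p))  # ~1.0, the circle's radius
```

Each boundary point costs only a logarithmic number of pointwise classifications in the desired accuracy, which is why covering the curve adaptively can beat dense sampling.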
Modern Machine Learning for LHC Physicists
Modern machine learning is transforming particle physics, faster than we can
follow, and bullying its way into our numerical tool box. For young researchers
it is crucial to stay on top of this development, which means applying
cutting-edge methods and tools to the full range of LHC physics problems. These
lecture notes are meant to lead students with basic knowledge of particle
physics and significant enthusiasm for machine learning to relevant
applications as fast as possible. They start with an LHC-specific motivation
and a non-standard introduction to neural networks and then cover
classification, unsupervised classification, generative networks, and inverse
problems. Two themes defining much of the discussion are well-defined loss
functions reflecting the problem at hand and uncertainty-aware networks. As
part of the applications, the notes include some aspects of theoretical LHC
physics. All examples are chosen from particle physics publications of the last
few years. Given that these notes will be outdated already at the time of
submission, the week of ML4Jets 2022, they will be updated frequently.
Comment: First version, we very much appreciate feedback.
Bayesian Learning for Earthquake Engineering Applications and Structural Health Monitoring
Parallel to significant advances in sensor hardware, there have been recent developments
of sophisticated methods for quantitative assessment of measured data that
explicitly deal with all of the involved uncertainties, including inevitable measurement
errors. These uncertainties often make the resulting inverse problems
ill-conditioned, causing numerical instabilities.
The Bayesian methodology is known to provide an efficient way to alleviate this
ill-conditioning by incorporating a prior term that regularizes the inverse problem,
and to provide probabilistic results which are meaningful for decision making.
In this work, the Bayesian methodology is applied to inverse problems in earthquake
engineering and especially to structural health monitoring. The proposed
methodology of Bayesian learning using an automatic relevance determination (ARD)
prior, including its kernel version, the Relevance Vector Machine (RVM), is presented
and applied to earthquake early warning, earthquake ground motion attenuation estimation,
and structural health monitoring, using either a Bayesian classification or
regression approach.
The classification and regression are both performed in three phases: (1) Phase
I (feature extraction phase): Determine which features from the data to use in a
training dataset; (2) Phase II (training phase): Identify the unknown parameters
defining a model by using a training dataset; and (3) Phase III (prediction phase):
Predict the results based on the features from new data.
This work focuses on the advantages of the probabilistic predictions obtained
by Bayesian methods, which account for all of the uncertainties, and on the
favorable characteristics of the proposed method: computationally efficient
training and, especially, prediction, which make it suitable for real-time
operation. It is shown that sparseness (using only a small number of basis
function terms) is produced in the regression equations and in the
classification separating boundary by using the ARD prior along with Bayesian
model class selection, which selects the most probable (plausible) model class
based on the data. This model class selection procedure automatically produces
optimal regularization of the problem at hand, making it well-conditioned.
Several applications of the proposed Bayesian learning methodology are presented.
First, automatic near-source and far-source classification of incoming ground motion
signals is treated and the Bayesian learning method is used to determine which ground
motion features are optimal for this classification. Second, a probabilistic earthquake
attenuation model for peak ground acceleration is identified using selected optimal
features, taking a non-linearly involved parameter into consideration. It is
shown that the Bayesian learning method can be utilized to estimate not only linear
coefficients but also such a non-linearly involved parameter, providing an estimate of
an unknown parameter in the kernel basis functions of the Relevance Vector Machine.
Third, the proposed method is extended to the general case of regression problems
with vector outputs and applied to structural health monitoring. It
is concluded that the proposed vector-output RVM shows promise for estimating
damage locations and severities from changes in modal properties such as natural
frequencies and mode shapes.
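The sparsity mechanism described above can be illustrated with a minimal numpy sketch of the standard sparse Bayesian learning (ARD) evidence-maximization updates. The fixed noise precision, the synthetic design matrix, and the convergence settings are assumptions for the demonstration, not the thesis's earthquake applications:

```python
import numpy as np

def ard_regression(Phi, t, beta=1e4, n_iter=100, alpha_cap=1e8):
    """Minimal sparse Bayesian learning (ARD) sketch: each weight w_i has its
    own prior precision alpha_i, updated by the standard evidence-maximization
    rule; irrelevant basis functions get large alpha_i and are pruned.
    beta is an assumed (fixed) noise precision."""
    N, M = Phi.shape
    alpha = np.ones(M)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))  # posterior cov
        mu = beta * Sigma @ Phi.T @ t                               # posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)       # "well-determinedness" factors
        alpha = np.minimum(gamma / (mu ** 2 + 1e-12), alpha_cap)
    return mu, alpha

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[2, 7]] = [3.0, -2.0]             # only two relevant basis functions
t = Phi @ w_true + 0.01 * rng.standard_normal(100)

mu, alpha = ard_regression(Phi, t)
print(np.flatnonzero(np.abs(mu) > 0.1))  # [2 7]: a sparse solution
```

The weights whose alpha grows large are effectively deleted from the model, which is the "smaller number of basis function terms" behavior the abstract attributes to the ARD prior.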
Weighted Chebyshev Distance Algorithms for Hyperspectral Target Detection and Classification Applications
In this study, an efficient spectral similarity method referred to as the Weighted Chebyshev Distance (WCD) is introduced for supervised classification of hyperspectral imagery (HSI) and for target detection applications. The WCD is based on a simple spectral-similarity-based decision rule that uses a limited amount of reference data. The estimation of the upper and lower spectral boundaries of the spectral signatures of all classes across spectral bands is referred to as a vector tunnel (VT). To obtain the reference information, the training signatures are drawn randomly from existing data for a known class. After the parameters of the WCD algorithm are determined with the training set, classification or detection is performed at each pixel. The comparative performances of the algorithms are tested under various cases.
The decision criterion for classifying an input vector is to choose the class corresponding to the narrowest VT that the input vector fits into. This is also shown to be approximated by the WCD when the weights are chosen as an inverse power of the generalized standard deviation per spectral band. In computer experiments, the WCD classifier is compared with the Euclidean Distance (ED) classifier and the Spectral Angle Mapper (SAM) classifier.
The WCD algorithm is also used for HSI target detection. The target detection problem is treated as a two-class classification problem, with the WCD characterized only by the spectral information of the target class. The method is then compared with the ED, SAM, Spectral Matched Filter (SMF), Adaptive Cosine Estimator (ACE) and Support Vector Machine (SVM) algorithms. In these studies, threshold levels are evaluated based on Receiver Operating Characteristic (ROC) curves.
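The weighted Chebyshev decision rule can be sketched as follows; plain per-band class means and standard deviations stand in for the paper's vector-tunnel boundaries and generalized standard deviation, and the synthetic two-class spectra are illustrative assumptions:

```python
import numpy as np

def wcd_classify(x, train, p=1.0):
    """Classify spectrum x by the weighted Chebyshev distance to each class
    mean, with weights an inverse power p of the per-band standard deviation.
    `train` maps class label -> array of shape (n_pixels, n_bands)."""
    best, label = np.inf, None
    for c, pixels in train.items():
        mu = pixels.mean(axis=0)
        w = 1.0 / (pixels.std(axis=0) ** p + 1e-12)  # inverse std per band
        d = np.max(w * np.abs(x - mu))               # weighted Chebyshev distance
        if d < best:
            best, label = d, c
    return label

rng = np.random.default_rng(1)
bands = 50
train = {0: 1.0 + 0.05 * rng.standard_normal((30, bands)),
         1: 2.0 + 0.05 * rng.standard_normal((30, bands))}
x = 1.0 + 0.05 * rng.standard_normal(bands)  # pixel drawn near class 0
label = wcd_classify(x, train)
print(label)  # 0
```

Because the distance is a single max over bands, a pixel is rejected as soon as one band falls far outside a class's tunnel, which keeps per-pixel evaluation cheap.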
Classification of chirp signals using hierarchical Bayesian learning and MCMC methods
This paper addresses the problem of classifying chirp signals using hierarchical Bayesian learning together with Markov chain Monte Carlo (MCMC) methods. Bayesian learning consists of estimating the distribution of the observed data conditional on each class from a set of training samples. Unfortunately, this estimation requires evaluating intractable multidimensional integrals. This paper studies an original implementation of hierarchical Bayesian learning that estimates the class-conditional probability densities using MCMC methods. The performance of this implementation is first studied via an academic example for which the class-conditional densities are known. The problem of classifying chirp signals is then addressed by using a similar hierarchical Bayesian learning implementation based on a Metropolis-within-Gibbs algorithm.
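A toy version of this approach, a Metropolis-within-Gibbs sampler over the parameters of a Gaussian class-conditional density whose posterior samples yield the class predictive density, might look like the sketch below. The Gaussian model, the flat priors, and all numbers are illustrative assumptions; the paper's hierarchical model for chirps is considerably more involved:

```python
import numpy as np

def posterior_samples(data, n=3000, step=0.3, seed=0):
    """Metropolis-within-Gibbs over (mu, log_sigma) of a Gaussian
    class-conditional density with flat priors: each parameter is updated
    in turn by a random-walk Metropolis step."""
    rng = np.random.default_rng(seed)

    def loglike(mu, ls):
        s2 = np.exp(2.0 * ls)
        return -0.5 * np.sum((data - mu) ** 2) / s2 - data.size * ls

    mu, ls = data.mean(), np.log(data.std())
    out = []
    for _ in range(n):
        for which in (0, 1):
            pm = mu + step * rng.standard_normal() if which == 0 else mu
            pl = ls + step * rng.standard_normal() if which == 1 else ls
            if np.log(rng.random()) < loglike(pm, pl) - loglike(mu, ls):
                mu, ls = pm, pl
        out.append((mu, ls))
    return np.array(out[n // 2:])  # discard the first half as burn-in

def predictive(x, samples):
    # Class-conditional density of x, averaged over the posterior samples.
    mu, s2 = samples[:, 0], np.exp(2.0 * samples[:, 1])
    return np.mean(np.exp(-0.5 * (x - mu) ** 2 / s2) / np.sqrt(2.0 * np.pi * s2))

rng = np.random.default_rng(42)
classes = {0: rng.normal(0.0, 1.0, 200), 1: rng.normal(5.0, 1.0, 200)}
samps = {c: posterior_samples(d, seed=c) for c, d in classes.items()}
label = max(classes, key=lambda c: predictive(4.2, samps[c]))
print(label)  # 1
```

Averaging the likelihood over posterior samples is the Monte Carlo substitute for the intractable integral the abstract mentions: the predictive density marginalizes out the model parameters.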
Task-Driven Dictionary Learning
Modeling data with linear combinations of a few elements from a learned
dictionary has been the focus of much recent research in machine learning,
neuroscience and signal processing. For signals such as natural images that
admit such sparse representations, it is now well established that these models
are well suited to restoration tasks. In this context, learning the dictionary
amounts to solving a large-scale matrix factorization problem, which can be
done efficiently with classical optimization tools. The same approach has also
been used for learning features from data for other purposes, e.g., image
classification, but tuning the dictionary in a supervised way for these tasks
has proven to be more difficult. In this paper, we present a general
formulation for supervised dictionary learning adapted to a wide variety of
tasks, and present an efficient algorithm for solving the corresponding
optimization problem. Experiments on handwritten digit classification, digital
art identification, nonlinear inverse image problems, and compressed sensing
demonstrate that our approach is effective in large-scale settings, and is well
suited to supervised and semi-supervised classification, as well as regression
tasks for data that admit sparse representations.
Comment: final draft post-refereeing.
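The sparse-coding subproblem at the heart of this line of work, representing a signal as a linear combination of a few dictionary atoms, can be sketched with ISTA. The orthonormal toy dictionary and two-atom signal below are assumptions for illustration; the paper's task-driven formulation additionally optimizes the dictionary for the supervised objective:

```python
import numpy as np

def sparse_code(x, D, lam=0.01, n_iter=200):
    """ISTA for the sparse coding step of dictionary learning: minimize
    0.5*||x - D a||^2 + lam*||a||_1 over the code a. In full dictionary
    learning D itself would also be optimized; here it is fixed."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - D.T @ (D @ a - x) / L                          # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((20, 8)))  # orthonormal toy dictionary
x = 2.0 * D[:, 1] - 1.0 * D[:, 3]                  # signal built from two atoms
a = sparse_code(x, D)
print(np.flatnonzero(np.abs(a) > 0.5))  # [1 3]: the two active atoms
```

The recovered code is sparse and slightly shrunk toward zero by the l1 penalty; with the dictionary fixed, this is the classical convex subproblem that the matrix-factorization view of dictionary learning alternates with a dictionary update.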