Support Spinor Machine
We generalize the support vector machine to a support spinor machine by using
the mathematical structure of the wedge product over the vector machine,
extending the underlying field from a vector field to a spinor field. The
separating hyperplane is extended to a Kolmogorov space in time series data,
which allows us to extend the structure of the support vector machine to a
support tensor machine and a support tensor machine moduli space. We test the
performance of the support spinor machine on one-class classification of
endpoints in the physiological state of time series data after empirical mode
analysis, and compare it with a support vector machine. We implement the
support spinor machine algorithm using Holo-Hilbert amplitude modulation for
fully nonlinear and nonstationary time series data analysis.
Comment: 18 pages, 12 figures, 6 tables
Complex Support Vector Machines for Regression and Quaternary Classification
The paper presents a new framework for complex Support Vector Regression as
well as Support Vector Machines for quaternary classification. The method
exploits the notion of widely linear estimation to model the input-output relation
for complex-valued data and considers two cases: a) the complex data are split
into their real and imaginary parts and a typical real kernel is employed to
map the complex data to a complexified feature space and b) a pure complex
kernel is used to directly map the data to the induced complex feature space.
The recently developed Wirtinger's calculus on complex reproducing kernel
Hilbert spaces (RKHS) is employed in order to compute the Lagrangian and derive
the dual optimization problem. As one of our major results, we prove that any
complex SVM/SVR task is equivalent to solving two real SVM/SVR tasks
exploiting a specific real kernel which is generated by the chosen complex
kernel. In particular, the case of pure complex kernels leads to the generation
of new kernels, which have not been considered before. In the classification
case, the proposed framework inherently splits the complex space into four
parts. This leads naturally to solving a four-class task (quaternary
classification), instead of the typical two classes of the real SVM. In turn,
this rationale can be used in a multiclass problem as a split-class scenario
based on four classes, as opposed to the one-versus-all method; this can lead
to significant computational savings. Experiments demonstrate the effectiveness
of the proposed framework for regression and classification tasks that involve
complex data.
Comment: Manuscript accepted in IEEE Transactions on Neural Networks and Learning Systems
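The paper's central result, that a complex SVM/SVR task can be solved as two real tasks on the split real and imaginary parts, can be sketched in a few lines. This is a minimal illustration using scikit-learn's SVR with a real RBF kernel; the synthetic data and all names here are ours, not the paper's.

```python
# Sketch: complex-valued regression via two real SVR tasks on the
# split real/imaginary parts (the "complexified feature space" case).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic complex input-output relation with additive complex noise.
X = rng.normal(size=(200, 2)) + 1j * rng.normal(size=(200, 2))
w = np.array([1.0 - 0.5j, 0.3 + 2.0j])
y = X @ w + 0.01 * (rng.normal(size=200) + 1j * rng.normal(size=200))

# Split the complex data into real and imaginary parts; a typical real
# kernel then maps them to a complexified feature space.
X_split = np.hstack([X.real, X.imag])

svr_re = SVR(kernel="rbf").fit(X_split, y.real)  # real part of the output
svr_im = SVR(kernel="rbf").fit(X_split, y.imag)  # imaginary part

y_hat = svr_re.predict(X_split) + 1j * svr_im.predict(X_split)
print(np.mean(np.abs(y - y_hat)))  # mean absolute training error
```

The pure-complex-kernel case of the paper induces a different generated real kernel; the split shown here corresponds only to case (a) of the abstract.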
Applications of Clifford's Geometric Algebra
We survey the development of Clifford's geometric algebra and some of its
engineering applications during the last 15 years. Several recently developed
applications and their merits are discussed in some detail. We thus hope to
clearly demonstrate the benefit of developing problem solutions in a unified
framework for algebra and geometry with the widest possible scope: from quantum
computing and electromagnetism to satellite navigation, from neural computing
to camera geometry, image processing, robotics and beyond.
Comment: 26 pages, 91 references
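The algebraic core the survey builds on is the geometric product. As a toy illustration, here is the product in the small algebra Cl(2) over the basis {1, e1, e2, e12}; the array representation and function name are ours, chosen for brevity.

```python
# Sketch: geometric product in Cl(2), basis {1, e1, e2, e12},
# with e1*e1 = e2*e2 = 1 and e12 = e1*e2 (hence e12*e12 = -1).
import numpy as np

def gp(a, b):
    """Geometric product of multivectors a = [s, x, y, b12], b likewise."""
    s   = a[0]*b[0] + a[1]*b[1] + a[2]*b[2] - a[3]*b[3]
    x   = a[0]*b[1] + a[1]*b[0] - a[2]*b[3] + a[3]*b[2]
    y   = a[0]*b[2] + a[2]*b[0] + a[1]*b[3] - a[3]*b[1]
    b12 = a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1]
    return np.array([s, x, y, b12])

e1 = np.array([0.0, 1.0, 0.0, 0.0])
e2 = np.array([0.0, 0.0, 1.0, 0.0])
print(gp(e1, e2))                   # e1 e2 = e12
print(gp(gp(e1, e2), gp(e1, e2)))   # e12^2 = -1, a complex unit inside the algebra
```

The bivector e12 squaring to -1 is one reason the survey can treat complex numbers, rotations and geometry in a single framework.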
Regularization approaches for support vector machines with applications to biomedical data
The support vector machine (SVM) is a widely used machine learning tool for
classification based on statistical learning theory. Given a set of training
data, the SVM finds a hyperplane that separates two different classes of data
points by the largest distance. While the standard form of SVM uses L2-norm
regularization, other regularization approaches are particularly attractive for
biomedical datasets where, for example, sparsity and interpretability of the
classifier's coefficient values are highly desired features. Therefore, in this
paper we consider different types of regularization approaches for SVMs, and
explore them in both synthetic and real biomedical datasets.
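The sparsity argument above can be seen directly by swapping the penalty on a linear SVM. A minimal sketch with scikit-learn's LinearSVC on synthetic data (the data and thresholds are illustrative, not from the paper):

```python
# Sketch: L1- vs L2-regularized linear SVMs. The L1 penalty tends to
# zero out coefficients, giving the sparse, interpretable classifiers
# highlighted for biomedical data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=5, random_state=0)

l2 = LinearSVC(penalty="l2", dual=True, C=1.0, max_iter=10000).fit(X, y)
# L1 penalty requires the primal (dual=False) formulation in liblinear.
l1 = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=10000).fit(X, y)

def sparsity(model):
    """Fraction of coefficients that are (numerically) zero."""
    return np.mean(np.abs(model.coef_) < 1e-6)

print(f"zero coefficients: L2={sparsity(l2):.2f}, L1={sparsity(l1):.2f}")
```

With only 5 informative features out of 50, the L1 model typically zeroes most of the remaining coefficients, which is exactly the interpretability property desired for biomedical classifiers.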
Recognizing Abnormal Heart Sounds Using Deep Learning
The work presented here applies deep learning to the task of automated
cardiac auscultation, i.e. recognizing abnormalities in heart sounds. We
describe an automated heart sound classification algorithm that combines the
use of time-frequency heat map representations with a deep convolutional neural
network (CNN). Given the cost-sensitive nature of misclassification, our CNN
architecture is trained using a modified loss function that directly optimizes
the trade-off between sensitivity and specificity. We evaluated our algorithm
at the 2016 PhysioNet Computing in Cardiology challenge where the objective was
to accurately classify normal and abnormal heart sounds from single, short,
potentially noisy recordings. Our entry to the challenge achieved a final
specificity of 0.95, sensitivity of 0.73 and overall score of 0.84. We achieved
the greatest specificity score out of all challenge entries and, using just a
single CNN, our algorithm differed in overall score by only 0.02 compared to
the top place finisher, which used an ensemble approach.
Comment: IJCAI 2017 Knowledge Discovery in Healthcare Workshop
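The idea of a loss that directly trades off sensitivity against specificity can be sketched generically: weight the two error types differently in a cross-entropy. This is a plain weighted binary cross-entropy for illustration, not the authors' exact loss function.

```python
# Sketch of a sensitivity/specificity trade-off via class-weighted
# binary cross-entropy (illustrative, not the paper's modified loss).
import numpy as np

def weighted_bce(y_true, p_pred, w_sens=2.0, w_spec=1.0, eps=1e-7):
    """Cross-entropy with separate weights for the positive (abnormal)
    and negative (normal) classes. Raising w_sens penalizes missed
    abnormal sounds more, pushing the classifier toward sensitivity."""
    p = np.clip(p_pred, eps, 1 - eps)
    pos = -w_sens * y_true * np.log(p)            # false-negative pressure
    neg = -w_spec * (1 - y_true) * np.log(1 - p)  # false-positive pressure
    return np.mean(pos + neg)

y = np.array([1, 1, 0, 0])
p = np.array([0.4, 0.9, 0.2, 0.1])
print(weighted_bce(y, p))              # default weighting
print(weighted_bce(y, p, w_sens=5.0))  # stronger sensitivity emphasis
```

In a cost-sensitive setting such as cardiac screening, the weights would be tuned on a validation set to hit the desired point on the sensitivity/specificity curve.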
Abductive reasoning as the basis to reproduce expert criteria in ECG Atrial Fibrillation identification
Objective: This work aims at providing a new method for the automatic
detection of atrial fibrillation, other arrhythmia and noise on short single
lead ECG signals, emphasizing the importance of the interpretability of the
classification results.
Approach: A morphological and rhythm description of the cardiac behavior is
obtained by a knowledge-based interpretation of the signal using the
\textit{Construe} abductive framework. Then, a set of meaningful features is
extracted for each individual heartbeat and as a summary of the full record.
The feature distributions were used to elucidate the expert criteria underlying
the labeling of the 2017 Physionet/CinC Challenge dataset, enabling a manual
partial relabeling to improve the consistency of the classification rules.
Finally, state-of-the-art machine learning methods are combined to provide an
answer on the basis of the feature values.
Main results: The proposal tied for the first place in the official stage of
the Challenge, with a combined score of 0.83, and its score was further improved
in the follow-up stage to 0.85 with a significant simplification of the model.
Significance: This approach demonstrates the potential of \textit{Construe}
to provide robust and valuable descriptions of temporal data even with
significant amounts of noise and artifacts. Also, we discuss the importance of
consistent classification criteria in manually labeled training datasets, and
the fundamental advantages of knowledge-based approaches to formalize and
validate those criteria.
Comment: 15 pages, 6 figures, 6 tables
An Overview of Machine Learning Approaches in Wireless Mesh Networks
Wireless Mesh Networks (WMNs) have been extensively studied for nearly two
decades as one of the most promising candidates expected to power the high
bandwidth, high coverage wireless networks of the future. However, consumer
demand for such networks has only recently caught up, rendering efforts at
optimizing WMNs to support high capacities and offer high QoS, while being
secure and fault tolerant, more important than ever. To this end, a recent
trend has been the application of Machine Learning (ML) to solve various design
and management tasks related to WMNs. In this work, we discuss key ML
techniques and analyze how past efforts have applied them in WMNs, while noting
some existing issues and suggesting potential solutions. We also provide
directions on how ML could advance future research and examine recent
developments in the field.
Exponential Families for Conditional Random Fields
In this paper we define conditional random fields in reproducing kernel Hilbert
spaces and show connections to Gaussian process classification. More
specifically, we prove decomposition results for undirected graphical models
and we give constructions for kernels. Finally, we present efficient means of
solving the optimization problem using reduced-rank decompositions and we show
how stationarity can be exploited efficiently in the optimization process.
Comment: Appears in Proceedings of the Twentieth Conference on Uncertainty in
Artificial Intelligence (UAI 2004)
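The exponential-family form behind this construction can be written out explicitly. This is the standard conditional exponential family with an RKHS feature map; the notation is ours and may differ from the paper's.

```latex
% Conditional random field as an exponential family:
% \phi(x,y) is the joint (RKHS) feature map, \theta the natural parameter,
% and g(\theta \mid x) the conditional log-partition function.
p(y \mid x; \theta)
  = \exp\bigl( \langle \phi(x, y), \theta \rangle - g(\theta \mid x) \bigr),
\qquad
g(\theta \mid x) = \log \sum_{y'} \exp \langle \phi(x, y'), \theta \rangle .
```

The decomposition results in the abstract then concern how \phi(x, y) factorizes over the cliques of the undirected graphical model.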
Random Warping Series: A Random Features Method for Time-Series Embedding
Time series data analytics has been a problem of substantial interest for
decades, and Dynamic Time Warping (DTW) has been the most widely adopted
technique to measure dissimilarity between time series. A number of
global-alignment kernels have since been proposed in the spirit of DTW to
extend its use to kernel-based estimation methods such as the support vector
machine. However, those kernels suffer from diagonal dominance of the Gram
matrix and a quadratic complexity w.r.t. the sample size. In this work, we
study a family of alignment-aware positive definite (p.d.) kernels, with its
feature embedding given by a distribution of \emph{Random Warping Series
(RWS)}. The proposed kernel does not suffer from the issue of diagonal
dominance while naturally enjoys a \emph{Random Features} (RF) approximation,
which reduces the computational complexity of existing DTW-based techniques
from quadratic to linear in terms of both the number and the length of
time-series. We also study the convergence of the RF approximation for the
domain of time series of unbounded length. Our extensive experiments on 16
benchmark datasets demonstrate that RWS outperforms or matches state-of-the-art
classification and clustering methods in both accuracy and computational time.
Our code and data are available at \url{https://github.com/IBM/RandomWarpingSeries}.
Comment: AISTATS 2018, oral paper; added code link for generating RWS
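The random-features construction can be sketched simply: embed each time series by its alignment cost against a bank of short random series, so downstream models run in time linear in the number of series. The plain DTW recursion and all names below are illustrative; the paper's kernel and sampling distribution differ in detail.

```python
# Sketch of the Random Warping Series idea: R random short series act as
# "random features", replacing the quadratic all-pairs DTW Gram matrix.
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def rws_features(series_list, R=32, warp_len=10, seed=0):
    """Map each series to an R-dimensional vector of DTW costs against
    R random series; a linear model on these approximates the kernel."""
    rng = np.random.default_rng(seed)
    random_series = [rng.normal(size=warp_len) for _ in range(R)]
    return np.array([[dtw(s, w) for w in random_series] for s in series_list])

X = [np.sin(np.linspace(0, 6, 50)), np.cos(np.linspace(0, 6, 80))]
print(rws_features(X).shape)  # one 32-d feature vector per series
```

Because each series is compared only against the R random series (not against every other series), the cost is linear in both the number and the length of the time series, as the abstract states.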
Circuit-centric quantum classifiers
The current generation of quantum computing technologies calls for quantum
algorithms that require a limited number of qubits and quantum gates, and which
are robust against errors. A suitable design approach is that of variational circuits,
where the parameters of gates are learnt, an approach that is particularly
fruitful for applications in machine learning. In this paper, we propose a
low-depth variational quantum algorithm for supervised learning. The input
feature vectors are encoded into the amplitudes of a quantum system, and a
quantum circuit of parametrised single and two-qubit gates together with a
single-qubit measurement is used to classify the inputs. This circuit
architecture ensures that the number of learnable parameters is
poly-logarithmic in the input dimension. We propose a quantum-classical
training scheme where the analytical gradients of the model can be estimated by
running several slightly adapted versions of the variational circuit. We show
with simulations that the circuit-centric quantum classifier performs well on
standard classical benchmark datasets while requiring dramatically fewer
parameters than other methods. We also evaluate the sensitivity of the
classification to state preparation and parameter noise, introduce a quantum
version of dropout regularisation and provide a graphical representation of
quantum gates as highly symmetric linear layers of a neural network.
Comment: 17 pages, 9 figures, 5 tables
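The pipeline of amplitude encoding, a parametrised circuit, and a single-qubit measurement can be simulated classically for a toy one-qubit case. This is far smaller than the circuits in the paper and is purely illustrative; the rotation choice and names are ours.

```python
# Sketch: a one-qubit "circuit-centric" classifier simulated in NumPy.
# Input is amplitude-encoded, one parametrised rotation is the circuit,
# and the measurement probability of |0> is the class score.
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def classify(x, theta):
    """Amplitude-encode a 2-d input, apply RY(theta), return P(|0>)."""
    state = x / np.linalg.norm(x)   # amplitude encoding: 2 values -> 1 qubit
    state = ry(theta) @ state       # learnable part of the circuit
    return abs(state[0]) ** 2       # simulated single-qubit measurement

x = np.array([0.8, 0.6])
print(classify(x, 0.0))   # with theta = 0 the circuit is the identity
print(classify(x, 1.0))   # rotating the state changes the class score
```

Amplitude encoding is what makes the parameter count poly-logarithmic in the input dimension: n qubits hold a 2^n-dimensional input, while the circuit acting on them has only polynomially many gate parameters.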