KCRC-LCD: Discriminative Kernel Collaborative Representation with Locality Constrained Dictionary for Visual Categorization
We consider the image classification problem via kernel collaborative
representation classification with locality constrained dictionary (KCRC-LCD).
Specifically, we propose a kernel collaborative representation classification
(KCRC) approach in which the kernel method is used to improve the discrimination
ability of collaborative representation classification (CRC). We then measure
the similarities between the query and atoms in the global dictionary in order
to construct a locality constrained dictionary (LCD) for KCRC. In addition, we
discuss several similarity measure approaches in LCD and further present a
simple yet effective unified similarity measure whose superiority is validated
in experiments. There are several appealing aspects associated with LCD. First,
LCD can be nicely incorporated under the framework of KCRC. The LCD similarity
measure can be kernelized under KCRC, which theoretically links CRC and LCD
under the kernel method. Second, KCRC-LCD scales well with both the
training set size and the feature dimension. An example shows that KCRC can
perfectly classify data with certain distributions on which conventional CRC fails
completely. Comprehensive experiments on many public datasets also show that
KCRC-LCD is a robust discriminative classifier with both excellent performance
and good scalability, comparable to or outperforming many other
state-of-the-art approaches.
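The core idea that KCRC kernelizes can be sketched in a few lines: CRC codes the query over the whole dictionary with a ridge penalty, then assigns the class whose atoms best reconstruct the query. Below is a minimal illustrative sketch of plain CRC in input space (the kernel variant would replace the inner products with kernel evaluations); it is not the authors' implementation.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-2):
    """Collaborative representation classification (CRC), minimal sketch.

    X: (d, n) dictionary whose columns are training atoms.
    labels: (n,) class label of each atom.
    y: (d,) query vector.
    Returns the class with the smallest class-wise reconstruction residual.
    """
    n = X.shape[1]
    # Ridge-regularized collaborative coding: alpha = (X^T X + lam I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # Reconstruct y using only this class's atoms and coefficients
        r = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if r < best_r:
            best, best_r = c, r
    return best
```

In the kernelized version, `X.T @ X` and `X.T @ y` become kernel Gram matrices, which is what lets the LCD similarity measure be computed in the same feature space.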
A tale of two toolkits, report the third: on the usage and performance of HIVE-COTE v1.0
The Hierarchical Vote Collective of Transformation-based Ensembles
(HIVE-COTE) is a heterogeneous meta ensemble for time series classification.
Since it was first proposed in 2016, the algorithm has undergone some minor
changes, and there is now a configurable, scalable and easy-to-use version
available in two open source repositories. We present an overview of the latest
stable HIVE-COTE, version 1.0, and describe how it differs from the original. We
provide a walkthrough guide of how to use the classifier, and conduct extensive
experimental evaluation of its predictive performance and resource usage. We
compare the performance of HIVE-COTE to three recently proposed algorithms.
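As a heterogeneous meta ensemble, HIVE-COTE combines the class-probability estimates of its component modules, weighting each module by an estimate of its accuracy. A minimal sketch of such an accuracy-weighted probability combination follows; the exponent and weighting scheme here are illustrative, in the spirit of the CAWPE weighting used by HIVE-COTE, not its exact implementation.

```python
import numpy as np

def weighted_combine(probas, accs, alpha=4.0):
    """Accuracy-weighted probability combination, HIVE-COTE-style sketch.

    probas: list of (n_classes,) probability vectors, one per module.
    accs: estimated accuracy of each module (e.g. from cross-validation).
    alpha: exponent amplifying differences between module accuracies.
    Returns the combined, renormalized class-probability vector.
    """
    weights = np.asarray(accs, dtype=float) ** alpha
    combined = np.zeros_like(np.asarray(probas[0], dtype=float))
    for w, p in zip(weights, probas):
        combined += w * np.asarray(p, dtype=float)
    return combined / combined.sum()
```

Raising accuracies to a power greater than 1 sharpens the weighting, so a module that is only slightly more accurate still receives noticeably more influence.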
Automatic Feature Engineering for Time Series Classification: Evaluation and Discussion
Time Series Classification (TSC) has received much attention in the past two
decades and is still a crucial and challenging problem in data science and
knowledge engineering. Indeed, along with the increasing availability of time
series data, many TSC algorithms have been suggested by the research community
in the literature. Besides state-of-the-art methods based on similarity
measures, intervals, shapelets, dictionaries, deep learning methods or hybrid
ensemble methods, several tools for extracting unsupervised informative summary
statistics, aka features, from time series have been designed in recent
years. Although originally designed for descriptive analysis and visualization of
time series with informative and interpretable features, very few of these feature
engineering tools have been benchmarked on TSC problems and compared with
state-of-the-art TSC algorithms in terms of predictive performance. In this
article, we aim at filling this gap and propose a simple TSC process to
evaluate the potential predictive performance of the feature sets obtained with
existing feature engineering tools. To that end, we present an empirical study of 11
feature engineering tools paired with 9 supervised classifiers over 112 time
series data sets. The analysis of the results of more than 10,000 learning
experiments indicates that feature-based methods perform as accurately as
current state-of-the-art TSC algorithms, and thus should rightfully be
considered further in the TSC literature.
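The evaluation process described above can be illustrated with a toy pipeline: map each series to a small vector of summary statistics, then train any off-the-shelf classifier in feature space. The features below are hypothetical hand-picked stand-ins (real tools of the kind benchmarked in the paper extract dozens to hundreds of features), and a simple nearest-centroid rule plays the role of the supervised classifier.

```python
import numpy as np

def summary_features(ts):
    """Toy unsupervised summary-statistic features for one time series."""
    ts = np.asarray(ts, dtype=float)
    x = np.arange(len(ts))
    slope = np.polyfit(x, ts, 1)[0]           # linear trend
    ac1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]  # lag-1 autocorrelation
    return np.array([ts.mean(), ts.std(), slope, ac1])

def nearest_centroid_predict(train_series, train_y, test_ts):
    """Feature-based TSC sketch: embed each series as a feature vector,
    then classify with nearest centroid in feature space."""
    feats = np.array([summary_features(t) for t in train_series])
    centroids = {c: feats[train_y == c].mean(axis=0)
                 for c in np.unique(train_y)}
    q = summary_features(test_ts)
    return min(centroids, key=lambda c: np.linalg.norm(q - centroids[c]))
```

Swapping in a richer feature extractor or a stronger classifier changes only one line each, which is what makes this kind of process convenient for benchmarking feature sets.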
MultiRocket: Multiple pooling operators and transformations for fast and effective time series classification
We propose MultiRocket, a fast time series classification (TSC) algorithm
that achieves state-of-the-art performance with a tiny fraction of the time and
without the complex ensembling structure of many state-of-the-art methods.
MultiRocket improves on MiniRocket, one of the fastest TSC algorithms to date,
by adding multiple pooling operators and transformations to improve the
diversity of the features generated. In addition to processing the raw input
series, MultiRocket also applies first-order differences to transform the
original series. Convolutions are applied to both representations, and four
pooling operators are applied to the convolution outputs. When benchmarked
using the University of California Riverside TSC benchmark datasets,
MultiRocket is significantly more accurate than MiniRocket, and competitive
with the best ranked current method in terms of accuracy, HIVE-COTE 2.0, while
being orders of magnitude faster.
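The transform described above can be sketched compactly: convolve both the raw series and its first-order difference with a set of kernels, then summarize each convolution output with several pooling operators. The kernels and the three pooling operators below (proportion of positive values, mean of positive values, longest positive stretch) are illustrative; MultiRocket's actual dilated kernels with weights in a fixed small set, bias sampling, and full pooling set are more involved.

```python
import numpy as np

def multirocket_features(ts, kernels):
    """Minimal sketch of the MultiRocket feature transform.

    ts: 1-D time series.
    kernels: list of 1-D convolution kernels (illustrative, not
             MultiRocket's actual dilated kernels).
    Returns a flat feature vector: 3 pooled values per kernel per
    representation (raw series and first-order difference).
    """
    feats = []
    for rep in (np.asarray(ts, dtype=float), np.diff(ts)):
        for k in kernels:
            out = np.convolve(rep, k, mode="valid")
            pos = out > 0
            ppv = pos.mean()                             # proportion of positive values
            mpv = out[pos].mean() if pos.any() else 0.0  # mean of positive values
            longest = run = 0                            # longest run of positives
            for p in pos:
                run = run + 1 if p else 0
                longest = max(longest, run)
            feats.extend([ppv, mpv, longest])
    return np.array(feats)
```

The resulting feature vectors are then fed to a linear classifier; the diversity comes from pooling the same convolution output in several complementary ways over two input representations.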