When Kernel Methods meet Feature Learning: Log-Covariance Network for Action Recognition from Skeletal Data
Human action recognition from skeletal data is an active research topic,
important in many open-domain applications of computer vision thanks to
recently introduced 3D sensors. In the literature, naive methods simply
transfer off-the-shelf techniques from video to the skeletal representation.
However, the current state of the art is contested between two different
paradigms: kernel-based methods and feature learning with (recurrent) neural
networks. Both approaches show strong performance, yet they exhibit heavy, but
complementary, drawbacks. Motivated by this fact, our work aims at combining
the best of the two paradigms by proposing an approach where a shallow network
is fed with a covariance representation. Our intuition is that, as long as the
dynamics are effectively modeled, there is no need for the classification
network to be deep or recurrent in order to score favorably. We validate this
hypothesis in a broad experimental analysis over 6 publicly available datasets.
Comment: 2017 IEEE Computer Vision and Pattern Recognition (CVPR) Workshop
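The covariance-to-shallow-network pipeline this abstract describes can be sketched roughly as follows. All names, dimensions, and the regularization constant are illustrative assumptions, not the paper's code; the matrix logarithm of the joint-coordinate covariance yields a fixed-size vector that a shallow classifier could consume.

```python
# Illustrative sketch (not the paper's code): skeletal joint trajectories ->
# covariance matrix -> matrix logarithm -> flattened fixed-size feature.
import numpy as np

def log_covariance(sequence: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """sequence: (T, D) array of D skeletal coordinates over T frames."""
    cov = np.cov(sequence, rowvar=False)           # (D, D) covariance of joints
    cov += eps * np.eye(cov.shape[0])              # regularize so it is SPD
    w, v = np.linalg.eigh(cov)                     # symmetric eigendecomposition
    log_cov = (v * np.log(w)) @ v.T                # matrix log via eigenvalues
    return log_cov[np.triu_indices_from(log_cov)]  # vectorize upper triangle

rng = np.random.default_rng(0)
feat = log_covariance(rng.normal(size=(50, 6)))    # 50 frames, 6 coordinates
print(feat.shape)                                  # (21,) = 6*7/2 entries
```

The resulting vector has constant dimension regardless of sequence length, which is what lets a shallow, non-recurrent network handle the temporal dynamics.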
Multi-Sensor Event Detection using Shape Histograms
Vehicular sensor data consists of multiple time-series arising from a number
of sensors. Using such multi-sensor data we would like to detect occurrences of
specific events that vehicles encounter, e.g., corresponding to particular
maneuvers that a vehicle makes or conditions that it encounters. Events are
characterized by similar waveform patterns re-appearing within one or more
sensors. Further, such patterns can be of variable duration. In this work, we
propose a method for detecting such events in time-series data using a novel
feature descriptor motivated by similar ideas in image processing. We define
the shape histogram: a constant dimension descriptor that nevertheless captures
patterns of variable duration. We demonstrate the efficacy of using shape
histograms as features to detect events in an SVM-based, multi-sensor,
supervised learning scenario, i.e., multiple time-series are used to detect an
event. We present results on real-life vehicular sensor data and show that our
technique performs better than available pattern detection implementations on
our data, and that it can also be used to combine features from multiple
sensors resulting in better accuracy than using any single sensor. Since
previous work on pattern detection in time-series has been in the single series
context, we also present results using our technique on multiple standard
time-series datasets and show that it is the most versatile in terms of how it
ranks compared to other published results.
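A minimal sketch of the constant-dimension idea behind a shape histogram: quantize the successive slopes of a window into a fixed number of bins and normalize, so windows of different lengths map to descriptors of the same size. The bin count, slope clipping, and normalization below are our own assumptions; the paper's actual descriptor may differ.

```python
# Illustrative "shape histogram": fixed-dimension descriptor for a
# variable-length time-series window (bins/clipping are assumptions).
def shape_histogram(series, n_bins=8, clip=1.0):
    hist = [0] * n_bins
    for a, b in zip(series, series[1:]):
        slope = max(-clip, min(clip, b - a))   # clip extreme slopes
        idx = min(int((slope + clip) / (2 * clip) * n_bins), n_bins - 1)
        hist[idx] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]           # normalize: length-invariant

short = shape_histogram([0, 1, 0, 1, 0])             # 4 transitions
long = shape_histogram([0, 1, 0, 1, 0, 1, 0, 1, 0])  # 8 transitions
print(len(short), len(long))                         # both 8-dimensional
```

Note how the short and long windows of the same repeating waveform produce identical normalized descriptors, which is the property that makes such features usable in a fixed-input SVM.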
Deep Embedding Kernel
Kernel methods and deep learning are two major branches of machine learning that have achieved numerous successes in both analytics and artificial intelligence. While having their own unique characteristics, both branches work through mapping data to a feature space that is supposedly more favorable towards the given task. This dissertation addresses the strengths and weaknesses of each mapping method through combining them and forming a family of novel deep architectures that center around the Deep Embedding Kernel (DEK). In short, DEK is a realization of a kernel function through a novel deep architecture. The mapping in DEK is both implicit (like in kernel methods) and learnable (like in deep learning). Prior to DEK, we proposed a less advanced architecture called Deep Kernel for the tasks of classification and visualization. More recently, we integrate DEK with the novel Dual Deep Learning framework to model big unstructured data. Using DEK as a core component, we further propose two machine learning models: Deep Similarity-Enhanced K Nearest Neighbors (DSE-KNN) and Recurrent Embedding Kernel (REK). Both models have their mappings trained towards optimizing data instances' neighborhoods in the feature space. REK is specifically designed for time series data. Experimental studies throughout the dissertation show that the proposed models have competitive performance to other commonly used and state-of-the-art machine learning models in their given tasks.
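The core DEK idea of a kernel value produced by a network over a pair of inputs can be sketched as below. The layer sizes, the symmetric element-wise pairing, and the (untrained, randomly initialized) weights are all our own illustrative assumptions; the point is only that the mapping is implicit like a kernel yet parameterized like a deep network.

```python
# Illustrative sketch of a network-realized kernel (weights untrained;
# architecture choices are assumptions, not the dissertation's design).
import numpy as np

rng = np.random.default_rng(1)
W_embed = rng.normal(scale=0.1, size=(4, 3))   # embedding layer weights
W_kernel = rng.normal(scale=0.1, size=(3,))    # kernel head weights

def dek(x, y):
    ex = np.tanh(W_embed.T @ x)                # learnable embedding of x
    ey = np.tanh(W_embed.T @ y)                # learnable embedding of y
    pair = ex * ey                             # symmetric combination
    return float(1 / (1 + np.exp(-W_kernel @ pair)))  # kernel-like score

a, b = rng.normal(size=4), rng.normal(size=4)
print(dek(a, b) == dek(b, a))                  # symmetric by construction
```

Because the pair of embeddings is combined symmetrically, the learned function behaves like a similarity measure, which is what lets models such as DSE-KNN use it in place of a fixed kernel.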
An Evaluation of Machine Learning and Deep Learning Models for Drought Prediction using Weather Data
Drought is a serious natural disaster that has a long duration and a wide
range of influence. To decrease the drought-caused losses, drought prediction
is the basis of making the corresponding drought prevention and disaster
reduction measures. While this problem has been studied in the literature, it
remains unknown whether drought can be precisely predicted or not with machine
learning models using weather data. To answer this question, a real-world
public dataset is leveraged in this study and different drought levels are
predicted using the last 90 days of 18 meteorological indicators as the
predictors. In a comprehensive approach, 16 machine learning models and 16 deep
learning models are evaluated and compared. The results show no single model
can achieve the best performance for all evaluation metrics simultaneously,
which indicates the drought prediction problem is still challenging. As
benchmarks for further studies, the code and results are publicly available in
a GitHub repository.
Comment: GitHub link:
https://github.com/jwwthu/DL4Climate/tree/main/DroughtPredictio
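The input shaping the abstract describes, the previous 90 days of 18 meteorological indicators per prediction, amounts to a sliding-window featurization. A sketch under those stated dimensions, with synthetic data and a simple flattening that are our own assumptions:

```python
# Sliding-window featurization: 90 days x 18 indicators per target day.
# Data here is synthetic; only the window/indicator counts come from the text.
import numpy as np

DAYS, INDICATORS, WINDOW = 200, 18, 90
weather = np.random.default_rng(2).normal(size=(DAYS, INDICATORS))

def make_windows(data, window=WINDOW):
    # one flattened (window * indicators) feature vector per predictable day
    return np.stack([data[i - window:i].ravel()
                     for i in range(window, len(data))])

X = make_windows(weather)
print(X.shape)   # (110, 1620): 200-90 samples, 90*18 features each
```

Classical models would consume the flattened vectors directly, while sequence models such as LSTMs would instead keep the (window, indicators) shape.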
Classification of Occluded Objects using Fast Recurrent Processing
Recurrent neural networks are powerful tools for handling incomplete data
problems in computer vision, thanks to their significant generative
capabilities. However, the computational demand for these algorithms is too
high to work in real time, without specialized hardware or software solutions.
In this paper, we propose a framework for augmenting recurrent processing
capabilities into a feedforward network without sacrificing much from
computational efficiency. We assume a mixture model and generate samples of the
last hidden layer according to the class decisions of the output layer, modify
the hidden layer activity using the samples, and propagate to lower layers. For
the visual occlusion problem, the iterative procedure emulates a
feedforward-feedback loop, filling in the missing hidden-layer activity with
meaningful representations. The proposed algorithm is tested on a widely used
dataset and shown to achieve a 2% improvement in classification accuracy for
occluded objects. When compared to Restricted Boltzmann Machines, our
algorithm shows superior performance for occluded object classification.
Comment: arXiv admin note: text overlap with arXiv:1409.8576 by other authors
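The feedforward-feedback loop in this abstract can be sketched schematically: the output layer's class decision selects a class-conditional sample for the last hidden layer, which is blended into the current activity and re-propagated. The two-class toy network, the use of class means as "samples", and the blending factor are all our own illustrative choices, not the paper's algorithm.

```python
# Schematic sketch of iterative class-conditional fill-in of hidden activity
# (toy network; mixture sampling simplified to class means).
import numpy as np

rng = np.random.default_rng(3)
W_out = rng.normal(size=(2, 5))            # hidden(5) -> classes(2)
class_means = rng.normal(size=(2, 5))      # mixture means, one per class

def iterate(hidden, steps=3, blend=0.5):
    for _ in range(steps):
        scores = W_out @ hidden            # feedforward class scores
        cls = int(np.argmax(scores))       # current class decision
        sample = class_means[cls]          # "generate" from the mixture
        hidden = (1 - blend) * hidden + blend * sample  # fill in activity
    return hidden, cls

h, cls = iterate(rng.normal(size=5))
print(cls in (0, 1), h.shape)
```

The appeal noted in the abstract is that this loop adds generative, fill-in behavior to a feedforward classifier at a small constant cost per iteration.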
Building Program Vector Representations for Deep Learning
Deep learning has made significant breakthroughs in various fields of
artificial intelligence. Advantages of deep learning include the ability to
capture highly complicated features, weak involvement of human engineering,
etc. However, it is still virtually impossible to use deep learning to analyze
programs since deep architectures cannot be trained effectively with pure back
propagation. In this pioneering paper, we propose the "coding criterion" to
build program vector representations, which are a prerequisite for deep
learning in program analysis. Our representation learning approach directly
makes deep learning a reality in this new field. We evaluate the learned vector
representations both qualitatively and quantitatively. Based on the
experiments, we conclude that the coding criterion is successful in building program
representations. To evaluate whether deep learning is beneficial for program
analysis, we feed the representations to deep neural networks, and achieve
higher accuracy in the program classification task than "shallow" methods, such
as logistic regression and the support vector machine. This result confirms the
feasibility of deep learning to analyze programs. It also gives primary
evidence of its success in this new field. We believe deep learning will become
an outstanding technique for program analysis in the near future.
Comment: This paper was submitted to ICSE'1
European exchange trading funds trading with locally weighted support vector regression
In this paper, two different Locally Weighted Support Vector Regression (wSVR) algorithms are generated and applied to the task of forecasting and trading five European Exchange Traded Funds. The trading application covers the recent European Monetary Union debt crisis. The performance of the proposed models is benchmarked against traditional Support Vector Regression (SVR) models. The Radial Basis Function, the Wavelet and the Mahalanobis kernel are explored and tested as SVR kernels. Finally, a novel statistical SVR input selection procedure is introduced based on a principal component analysis and the Hansen, Lunde, and Nason (2011) model confidence test. The results demonstrate the superiority of the wSVR models over the traditional SVRs and of the ν-SVR over the ε-SVR algorithms. We note that the performance of all models varies and deteriorates considerably at the peak of the debt crisis. In terms of the kernels, our results do not confirm the belief that the Radial Basis Function is the optimum choice for financial series.
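The "locally weighted" idea can be sketched in very simplified form: each training sample receives a weight that decays with its distance to the query point, and those weights would then scale the per-sample penalty inside the SVR fit. In the sketch below a weighted average stands in for the SVR itself, and the Gaussian kernel and bandwidth are our own assumptions.

```python
# Highly simplified sketch of local weighting (a weighted average stands in
# for the SVR; kernel and bandwidth are illustrative assumptions).
import math

def local_weights(train_x, query, bandwidth=1.0):
    return [math.exp(-((x - query) ** 2) / (2 * bandwidth ** 2))
            for x in train_x]

def locally_weighted_predict(train_x, train_y, query):
    w = local_weights(train_x, query)
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]
print(round(locally_weighted_predict(xs, ys, 1.0), 3))
```

The prediction is pulled toward nearby targets, which mirrors how a locally weighted SVR emphasizes training samples close to the forecast point.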
Machine learning and deep learning performance in classifying dyslexic children’s electroencephalogram during writing
Dyslexia is a form of learning disability that causes a child to have difficulties in writing alphabets, reading words, and doing mathematics. Early identification of dyslexia is important to provide early intervention for learning disabilities. This study was carried out to differentiate the EEG signals of poor dyslexic, capable dyslexic, and normal children during writing using machine learning and deep learning. Three machine learning algorithms were studied: k-nearest neighbors (KNN), support vector machine (SVM), and extreme learning machine (ELM), with input features from coefficients of beta and theta band power extracted using the discrete wavelet transform (DWT). As for the deep learning (DL) algorithm, a long short-term memory (LSTM) architecture was employed. The kernel parameters of the classifiers were optimized to achieve high classification accuracy. Results showed that db8 achieved the greatest classification accuracy for all classifiers. The support vector machine with radial basis function kernel yielded the highest accuracy, 88%, outperforming the other classifiers. The SVM with radial basis function kernel and db8 could therefore be employed to determine dyslexic children's levels objectively during writing.
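The band-power feature extraction described here can be sketched as follows. The study uses db8 DWT coefficients of the theta and beta bands; in this sketch a one-level Haar transform stands in for db8 (a deliberate simplification) just to show how a band-power scalar could be formed from wavelet coefficients.

```python
# Sketch: one-level Haar transform standing in for the study's db8 DWT,
# followed by band power as the mean squared coefficient.
def haar_level(signal):
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def band_power(coeffs):
    return sum(c * c for c in coeffs) / len(coeffs)  # mean squared coefficient

eeg = [1.0, -1.0] * 4                    # toy fast oscillation
approx, detail = haar_level(eeg)
print(band_power(detail) > band_power(approx))  # fast activity -> detail band
```

A real pipeline would decompose to the levels whose frequency ranges match theta and beta for the given sampling rate and feed the resulting band powers to the KNN/SVM/ELM classifiers.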
Data Mining Methods Applied to a Digital Forensics Task for Supervised Machine Learning
Digital forensics research includes several stages. Once the data have been collected, the final goal is to obtain a model that predicts the output for unseen data. We focus on supervised machine learning techniques. This chapter performs an experimental study on a forensics data task for multi-class classification, including several types of methods: decision trees, Bayes classifiers, rule-based methods, artificial neural networks, and nearest-neighbor methods. The classifiers have been evaluated with two performance measures: accuracy and Cohen's kappa. The experimental design was 4-fold cross-validation with thirty repetitions for non-deterministic algorithms in order to obtain reliable results, averaging the results from 120 runs. A statistical analysis has been conducted to compare each pair of algorithms by means of t-tests using both the accuracy and Cohen's kappa metrics.
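The pairwise comparison step in this protocol, paired accuracy samples from repeated cross-validation fed into a t-test, can be sketched with the standard library alone. The accuracy values below are synthetic placeholders, not the chapter's results.

```python
# Paired t statistic over per-fold accuracies of two classifiers
# (synthetic numbers; formula is the standard paired t-test, df = n-1).
import statistics

acc_a = [0.90, 0.88, 0.91, 0.89, 0.92, 0.90, 0.88, 0.91]  # classifier A folds
acc_b = [0.85, 0.86, 0.84, 0.87, 0.86, 0.85, 0.83, 0.86]  # classifier B folds

diffs = [a - b for a, b in zip(acc_a, acc_b)]
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
t_stat = mean_d / (sd_d / len(diffs) ** 0.5)   # paired t statistic
print(round(t_stat, 2))
```

The statistic would then be compared against the t distribution with n-1 degrees of freedom; pairing the folds is what makes the test valid when both classifiers see identical splits.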