Classification of Arrhythmia by Using Deep Learning with 2-D ECG Spectral Image Representation
The electrocardiogram (ECG) is one of the most extensively used signals in
the diagnosis and prediction of cardiovascular diseases (CVDs). The ECG
signals can capture the heart's rhythmic irregularities, commonly known as
arrhythmias. A careful study of ECG signals is crucial for precise diagnoses of
patients' acute and chronic heart conditions. In this study, we propose a
two-dimensional (2-D) convolutional neural network (CNN) model for the
classification of ECG signals into eight classes: normal beat,
premature ventricular contraction beat, paced beat, right bundle branch block
beat, left bundle branch block beat, atrial premature contraction beat,
ventricular flutter wave beat, and ventricular escape beat. The one-dimensional
ECG time series signals are transformed into 2-D spectrograms through
short-time Fourier transform. The 2-D CNN model consisting of four
convolutional layers and four pooling layers is designed for extracting robust
features from the input spectrograms. Our proposed methodology is evaluated on
a publicly available MIT-BIH arrhythmia dataset. We achieved a state-of-the-art
average classification accuracy of 99.11%, which is better than recently
reported results for classifying similar types of arrhythmias. The
performance is also significant in other indices, including sensitivity and
specificity, which indicates the success of the proposed method.

Comment: 14 pages, 5 figures; accepted for future publication in the Remote
Sensing MDPI Journal.
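The 1-D-to-2-D transformation can be sketched with a plain NumPy short-time Fourier transform. The window length, hop size, and Hann window below are illustrative assumptions (the abstract does not specify the STFT parameters), and the synthetic "beat" merely stands in for an MIT-BIH segment:

```python
import numpy as np

def ecg_to_spectrogram(signal, win_len=64, hop=32):
    """Turn a 1-D ECG segment into a 2-D magnitude spectrogram
    via a short-time Fourier transform with a Hann window."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    # One-sided FFT of each windowed frame -> (freq bins, time frames)
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Synthetic 1-second "beat" sampled at 360 Hz (the MIT-BIH sampling rate)
fs = 360
t = np.arange(fs) / fs
beat = np.sin(2 * np.pi * 8 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
spec = ecg_to_spectrogram(beat)
print(spec.shape)  # (33, 10): 33 frequency bins x 10 time frames
```

Images of this shape (after resizing/normalisation) are what a 2-D CNN such as the four-conv/four-pool model described above would consume.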
Real-time robust automatic speech recognition using compact support vector machines
In recent years, support vector machines (SVMs) have shown excellent performance in many applications, especially in the presence of noise. In particular, SVMs offer several advantages over artificial neural networks (ANNs) that have attracted the attention of the speech processing community. Nevertheless, their high computational requirements prevent them from being used in practice in automatic speech recognition (ASR), where ANNs have proven to be successful. The high complexity of SVMs in this context arises from the use of huge speech training databases with millions of samples and highly overlapped classes. This paper suggests the use of a weighted least squares (WLS) training procedure that makes it possible to impose a compact semiparametric model on the SVM, resulting in a dramatic complexity reduction. Such a complexity reduction with respect to conventional SVMs, which is between two and three orders of magnitude, allows the proposed hybrid WLS-SVC/HMM system to perform real-time speech decoding on a connected-digit recognition task (SpeechDat Spanish database). The experimental evaluation of the proposed system shows encouraging performance levels in clean and noisy conditions, although further improvements are required to reach the maturity level of current context-dependent HMM-based recognizers.

Funding: Spanish Ministry of Science and Innovation TEC 2008-06382 and TEC 2008-02473, and Comunidad Autónoma de Madrid-UC3M CCG10-UC3M/TIC-5304.
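As a minimal illustration of the WLS building block (not the paper's full semiparametric SVM training), a weighted least-squares fit solves the weighted normal equations; the data, weights, and small ridge term below are illustrative assumptions:

```python
import numpy as np

def wls_fit(X, y, w, reg=1e-6):
    """Weighted least squares: minimise sum_i w_i * (y_i - x_i . beta)^2.
    A tiny ridge term keeps the normal equations well conditioned."""
    W = np.diag(w)
    A = X.T @ W @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ W @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([1.0, -2.0, 0.5])
y = X @ true_beta + 0.01 * rng.normal(size=200)
w = np.ones(200)          # uniform weights reduce WLS to ordinary LS
beta = wls_fit(X, y, w)
print(beta)               # close to [1.0, -2.0, 0.5]
```

Per-sample weights are what let a procedure like this emphasise hard or rare examples, one route to the compact models the abstract describes.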
Gibbs Max-margin Topic Models with Data Augmentation
Max-margin learning is a powerful approach to building classifiers and
structured output predictors. Recent work on max-margin supervised topic models
has successfully integrated it with Bayesian topic models to discover
discriminative latent semantic structures and make accurate predictions for
unseen testing data. However, the resulting learning problems are usually hard
to solve because of the non-smoothness of the margin loss. Existing approaches
to building max-margin supervised topic models rely on an iterative procedure
to solve multiple latent SVM subproblems with additional mean-field assumptions
on the desired posterior distributions. This paper presents an alternative
approach by defining a new max-margin loss. Namely, we present Gibbs max-margin
supervised topic models, a latent variable Gibbs classifier to discover hidden
topic representations for various tasks, including classification, regression
and multi-task learning. Gibbs max-margin supervised topic models minimize an
expected margin loss, which is an upper bound of the existing margin loss
derived from an expected prediction rule. By introducing augmented variables
and integrating out the Dirichlet variables analytically by conjugacy, we
develop simple Gibbs sampling algorithms with no restricting assumptions and no
need to solve SVM subproblems. Furthermore, each step of the
"augment-and-collapse" Gibbs sampling algorithms has an analytical conditional
distribution, from which samples can be easily drawn. Experimental results
demonstrate significant improvements on time efficiency. The classification
performance is also significantly improved over competitors on binary,
multi-class and multi-label classification tasks.

Comment: 35 pages.
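The appeal of analytic conditional distributions can be seen on a toy model. The sketch below runs a Gibbs sampler on a zero-mean bivariate normal, where every full conditional is a closed-form Gaussian draw; it illustrates only the sampling pattern that "augment-and-collapse" exploits, not the paper's actual topic-model sampler:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, rng):
    """Gibbs sampling for a zero-mean bivariate normal with correlation rho.
    Each full conditional x | y ~ N(rho * y, 1 - rho^2) is analytic, so
    every step is a cheap closed-form draw -- no inner optimisation
    (e.g., no SVM subproblem) is ever solved."""
    x, y = 0.0, 0.0
    samples = np.empty((n_samples, 2))
    sd = np.sqrt(1.0 - rho ** 2)
    for i in range(n_samples):
        x = rng.normal(rho * y, sd)   # draw x from its conditional
        y = rng.normal(rho * x, sd)   # draw y from its conditional
        samples[i] = (x, y)
    return samples

rng = np.random.default_rng(42)
draws = gibbs_bivariate_normal(rho=0.8, n_samples=5000, rng=rng)
print(np.corrcoef(draws.T)[0, 1])    # close to 0.8
```

In the models above, the analogous draws are over topic assignments and augmented variables, with the Dirichlet variables collapsed out by conjugacy.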
Parcel loss prediction in last-mile delivery: deep and non-deep approaches with insights from Explainable AI
Within the domain of e-commerce retail, an important objective is the
reduction of parcel loss during the last-mile delivery phase. The
ever-increasing availability of data, including product, customer, and order
information, has made it possible to apply machine learning to parcel loss
prediction. However, a significant challenge arises from the
inherent imbalance in the data, i.e., only a very low percentage of parcels are
lost. In this paper, we propose two machine learning approaches, namely, Data
Balance with Supervised Learning (DBSL) and Deep Hybrid Ensemble Learning
(DHEL), to accurately predict parcel loss. The practical implication of such
predictions is their value in aiding e-commerce retailers in optimizing
insurance-related decision-making policies. We conduct a comprehensive
evaluation of the proposed machine learning models using one year of data from
Belgian shipments. The findings show that the DHEL model, which combines a
feed-forward autoencoder with a random forest, achieves the highest
classification performance. Furthermore, we use techniques from Explainable
AI (XAI) to illustrate how prediction models can be used to enhance business
processes and augment the overall value proposition for e-commerce retailers
in last-mile delivery.
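The abstract does not detail DBSL's balancing step; one common choice for such heavily skewed labels is random oversampling of the minority class, sketched here in NumPy (the 5% loss rate and feature dimensions are illustrative assumptions):

```python
import numpy as np

def oversample_minority(X, y, rng):
    """Random oversampling: duplicate minority-class rows (with replacement)
    until every class matches the majority-class size -- one simple way to
    balance data before supervised learning."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        idx = rng.choice(idx, size=n_max, replace=True)
        Xs.append(X[idx])
        ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = np.array([0] * 95 + [1] * 5)   # e.g., only 5% of parcels are "lost"
Xb, yb = oversample_minority(X, y, rng)
print(np.bincount(yb))             # both classes now equally represented
```

Oversampling is only applied to the training split; evaluation must stay on the original, imbalanced distribution to reflect real loss rates.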
Learning from small and imbalanced dataset of images using generative adversarial neural networks.
The performance of deep learning models is unmatched by any other approach in supervised computer vision tasks such as image classification. However, training these models requires a lot of labeled data, which are not always available. Labelling a massive dataset is largely a manual and very demanding process. This problem has led to the development of techniques that bypass the need for labelling at scale. Despite this, existing techniques such as transfer learning, data augmentation and semi-supervised learning have not lived up to expectations. Some of these techniques do not account for other classification challenges, such as the class-imbalance problem. Thus, these techniques mostly underperform when compared with fully supervised approaches. In this thesis, we propose new methods to train a deep model on image classification with a limited number of labeled examples. This was achieved by extending state-of-the-art generative adversarial networks with multiple fake classes and network switchers. These new features enabled us to train a classifier using large unlabeled datasets while generating class-specific samples. The proposed model is label-agnostic and is suitable for different classification scenarios, ranging from weakly supervised to fully supervised settings. This was used to address classification challenges with limited labeled data and class imbalance. Extensive experiments were carried out on different benchmark datasets. Firstly, the proposed approach was used to train a classification model, and our findings indicate that it achieves better classification accuracies, especially when the number of labeled samples is small. Secondly, the proposed approach was able to generate high-quality samples from class-imbalanced datasets. The samples' quality is evident in the improved classification performance obtained when the generated samples were used to neutralise class imbalance.
The results are thoroughly analysed and, overall, our method shows superior performance over a popular resampling technique and the AC-GAN model. Finally, we successfully applied the proposed approach as a new augmentation technique to two challenging real-world problems: faces with attributes and legacy engineering drawings. The results obtained demonstrate that the proposed approach is effective even in extreme cases.
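One hypothetical reading of the "multiple fake classes" idea (the thesis abstract does not spell out the label scheme) is a classifier-discriminator whose output space doubles the K real classes with K matching fake classes; the label bookkeeping might look like:

```python
import numpy as np

def discriminator_targets(labels, n_classes, is_real):
    """Hypothetical 2K-way label scheme: indices 0..K-1 are the real
    classes, indices K..2K-1 are the corresponding 'fake' classes for
    generated samples of each class."""
    labels = np.asarray(labels)
    return labels if is_real else labels + n_classes

real_t = discriminator_targets([0, 2, 1], n_classes=3, is_real=True)
fake_t = discriminator_targets([0, 2, 1], n_classes=3, is_real=False)
print(real_t, fake_t)  # [0 2 1] [3 5 4]
```

Under such a scheme the same network scores both class membership and realness, which is one way a single model can remain useful from weakly to fully supervised settings.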
Monocular Camera Viewpoint-Invariant Vehicular Traffic Segmentation and Classification Utilizing Small Datasets
The work presented here develops a computer vision framework that is view-angle independent for vehicle segmentation and classification from roadway traffic systems installed by the Virginia Department of Transportation (VDOT). An automated technique for extracting a region of interest is discussed to speed up the processing. The VDOT traffic videos are analyzed for vehicle segmentation using an improved robust low-rank matrix decomposition technique. The work presents a new and effective thresholding method that improves segmentation accuracy and simultaneously speeds up segmentation processing. Size and shape physical descriptors from morphological properties and textural features from the Histogram of Oriented Gradients (HOG) are extracted from the segmented traffic. Furthermore, a multi-class support vector machine classifier is employed to categorize different traffic vehicle types, including passenger cars, passenger trucks, motorcycles, buses, and small and large utility trucks. Multiple vehicle detections are handled through an iterative k-means clustering over-segmentation process. The proposed algorithm reduced the processed data by an average of 40%. Compared to recent techniques, it showed an average improvement of 15% in segmentation accuracy, and it is on average 55% faster than the compared segmentation techniques. Moreover, a comparative analysis of 23 different deep learning architectures is presented. The resulting algorithm outperformed the compared deep learning algorithms in vehicle classification accuracy. Furthermore, the timing analysis showed that it could operate in real-time scenarios.
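A much-simplified stand-in for the low-rank decomposition step (a rank-1 SVD background plus a fixed threshold, both illustrative assumptions rather than the paper's improved robust method) can be sketched as:

```python
import numpy as np

def lowrank_foreground(frames, thresh):
    """Low-rank/sparse split for background subtraction: stack frames as
    columns, take the rank-1 SVD term as the (static) background, and
    threshold the residual as moving foreground."""
    D = frames.reshape(frames.shape[0], -1).T        # pixels x frames
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    background = s[0] * np.outer(U[:, 0], Vt[0])     # rank-1 approximation
    residual = np.abs(D - background)
    return (residual > thresh).T.reshape(frames.shape)

rng = np.random.default_rng(1)
bg = rng.uniform(0.4, 0.6, size=(8, 8))              # static background
frames = np.repeat(bg[None], 10, axis=0)             # 10 identical frames
frames[5, 2:4, 2:4] += 0.9                           # a "vehicle" in frame 5
mask = lowrank_foreground(frames, thresh=0.3)
print(mask.sum())  # expect 4: only the vehicle pixels in frame 5
```

Robust variants penalise the residual with an L1 norm so that foreground objects do not contaminate the estimated background, which is the direction the paper's improved decomposition takes.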