Automatic alignment of surgical videos using kinematic data
Over the past one hundred years, the classic teaching methodology of "see
one, do one, teach one" has governed surgical education systems worldwide.
With the advent of the Operating Room 2.0, recording video, kinematic, and many
other types of data during surgery has become an easy task, allowing
artificial intelligence systems to be deployed and used in surgical and medical
practice. Recently, surgical videos have been shown to provide a structure for
peer coaching, enabling novice trainees to learn from experienced surgeons by
replaying those videos. However, the high inter-operator variability in
surgical gesture duration and execution makes learning by comparing novice
and expert surgical videos a very difficult task. In this paper, we propose a
novel technique to align multiple videos based on the alignment of their
corresponding kinematic multivariate time series data. By leveraging the
Dynamic Time Warping measure, our algorithm synchronizes a set of videos in
order to show the same gesture being performed at different speeds. We believe
that the proposed approach is a valuable addition to the existing learning
tools for surgery.
Comment: Accepted at AIME 201
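To illustrate the core idea behind this abstract (not the authors' actual implementation), the following is a minimal pure-Python sketch of Dynamic Time Warping that returns both the distance and the warping path between two 1-D kinematic signals. The path pairs sample indices of one recording with those of the other, which is what allows corresponding video frames to be displayed side by side even when gestures are performed at different speeds. The function name `dtw_path` and the absolute-difference local cost are assumptions of this sketch.

```python
def dtw_path(a, b):
    """Compute the DTW distance and optimal warping path between two
    1-D sequences a and b, using absolute difference as local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j - 1],
                                 cost[i - 1][j],
                                 cost[i][j - 1])
    # Backtrack from (n, m) to (0, 0) to recover the alignment path.
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    path.reverse()
    return cost[n][m], path
```

Each `(i, j)` pair in the path says "sample i of the first recording corresponds to sample j of the second", so a playback tool could show frame i of one video alongside frame j of the other.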
Improving the Scalability of a Prosumer Cooperative Game with K-Means Clustering
Among the various market structures under peer-to-peer energy sharing, one
model based on cooperative game theory provides clear incentives for prosumers
to collaboratively schedule their energy resources. The computational
complexity of this model, however, increases exponentially with the number of
participants. To address this issue, this paper proposes the application of
K-means clustering to the energy profiles following the grand coalition
optimization. The cooperative model is run with the "clustered players" to
compute their payoff allocations, which are then further distributed among the
prosumers within each cluster. Case studies show that the proposed method can
significantly improve the scalability of the cooperative scheme while
maintaining a high level of financial incentives for the prosumers.
Comment: 6 pages, 4 figures, 2 tables. Accepted to the 13th IEEE PES PowerTech Conference, 23-27 June 2019, Milano, Ital
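As a rough sketch of the clustering step described above (the cooperative-game payoff computation itself is omitted), the following pure-Python Lloyd's K-means groups prosumer energy profiles; each centroid then plays the role of a "clustered player", whose payoff allocation would afterwards be redistributed among the prosumers in its cluster. The function name `kmeans` and the fixed iteration count are assumptions of this sketch.

```python
import random

def kmeans(profiles, k, iters=20, seed=0):
    """Lloyd's K-means on equal-length energy profiles (lists of floats).
    Returns (centroids, labels); each centroid acts as one 'clustered
    player' in the downstream cooperative game."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(profiles, k)]
    labels = [0] * len(profiles)
    for _ in range(iters):
        # Assign each prosumer profile to its nearest centroid.
        labels = [min(range(k),
                      key=lambda c: sum((x - y) ** 2
                                        for x, y in zip(p, centroids[c])))
                  for p in profiles]
        # Update each centroid as the mean profile of its cluster.
        for c in range(k):
            members = [p for p, l in zip(profiles, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return centroids, labels
```

Running the cooperative model on k centroids instead of all prosumers is what caps the exponential growth in the number of players.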
Application of DTW Barycenter Averaging to Finding EEG Consensus Sequences
DTW Barycenter Averaging (DBA) has proven to be a useful tool for calculating consensus sequences, but it has not yet been applied to real electroencephalography (EEG) data. This study tests DBA on real EEG sequences using the modification proposed by Kotas et al. (2015), and proposes several further modifications to the initial sequence selection process to improve the method’s efficacy for EEG analysis. Errors in peak latency and peak amplitude measures for a single EEG component, namely N250, were measured to test each method. Three of the proposed DBA variations produced consensus sequences that were significantly more accurate for replicating features of single-trial N250 components than the widely used Event-Related Potential (ERP) technique. Potential implications include the uncovering of previously obscured effects in EEG data and providing more accurate descriptions of the prototypical electrophysiological responses to external events.
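For readers unfamiliar with DBA, here is a compact pure-Python sketch of one refinement iteration on 1-D sequences: each sequence is aligned to the current consensus with DTW, and every consensus sample is replaced by the mean of all samples mapped onto it. This follows the general Petitjean-style barycenter step only; the Kotas et al. modification and the initial-sequence-selection variants studied in the paper are not reproduced. The names `_dtw_path` and `dba_update` are illustrative.

```python
def _dtw_path(a, b):
    """Optimal DTW warping path between 1-D sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return path

def dba_update(consensus, sequences):
    """One DBA iteration: align every sequence to the consensus via DTW,
    then average all samples mapped onto each consensus position."""
    buckets = [[] for _ in consensus]
    for seq in sequences:
        for i, j in _dtw_path(consensus, seq):
            buckets[i].append(seq[j])
    return [sum(b) / len(b) for b in buckets]
```

Iterating `dba_update` until the consensus stops changing yields the DBA average; unlike a plain pointwise ERP average, the DTW alignment compensates for latency jitter between trials.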
An Agglomerative Hierarchical Clustering with Various Distance Measurements for Ground Level Ozone Clustering in Putrajaya, Malaysia
Ground level ozone is one of the common pollution issues that has a negative influence on human health. The key characteristic of ozone level analysis lies in the complex representation of such data, which can be captured by time series. Clustering is one of the common techniques that have been used for meteorological and environmental time series data. The way a clustering technique groups similar sequences relies on a distance or similarity criterion. Several distance measures have been integrated with various types of clustering techniques. However, identifying an appropriate distance measure for a particular field is a challenging task. Since hierarchical clustering has been considered the state of the art for meteorological and climate change data, this paper proposes an agglomerative hierarchical clustering for ozone level analysis in Putrajaya, Malaysia using three distance measures, i.e., Euclidean, Minkowski and Dynamic Time Warping. Results show that Dynamic Time Warping outperformed the other two distance measures.
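The agglomerative scheme with a pluggable distance measure can be sketched as follows in pure Python. This uses single linkage for concreteness (the abstract does not state the paper's linkage criterion), and `minkowski` with p=2 reduces to the Euclidean case; a DTW distance function could be passed in the same way. Function names are illustrative.

```python
def minkowski(a, b, p=2):
    """Minkowski distance between equal-length series; p=2 is Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def agglomerative(series, dist, k):
    """Bottom-up single-linkage clustering: start from singleton
    clusters and repeatedly merge the closest pair until k remain.
    Returns clusters as lists of indices into `series`."""
    clusters = [[i] for i in range(len(series))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: distance between the closest members.
                d = min(dist(series[i], series[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    return clusters
```

Swapping `dist` between Euclidean, general Minkowski, and a DTW implementation is exactly the kind of comparison the paper reports.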
Transfer learning for time series classification
Transfer learning for deep neural networks is the process of first training a
base network on a source dataset, and then transferring the learned features
(the network's weights) to a second network to be trained on a target dataset.
This idea has been shown to improve deep neural networks' generalization
capabilities in many computer vision tasks such as image recognition and object
localization. Apart from these applications, deep Convolutional Neural Networks
(CNNs) have also recently gained popularity in the Time Series Classification
(TSC) community. However, unlike for image recognition problems, transfer
learning techniques have not yet been investigated thoroughly for the TSC task.
This is surprising as the accuracy of deep learning models for TSC could
potentially be improved if the model is fine-tuned from a pre-trained neural
network instead of training it from scratch. In this paper, we fill this gap by
investigating how to transfer deep CNNs for the TSC task. To evaluate the
potential of transfer learning, we performed extensive experiments using the
UCR archive which is the largest publicly available TSC benchmark containing 85
datasets. For each dataset in the archive, we pre-trained a model and then
fine-tuned it on the other datasets resulting in 7140 different deep neural
networks. These experiments revealed that transfer learning can improve or
degrade the model's predictions depending on the dataset used for transfer.
Therefore, in an effort to predict the best source dataset for a given target
dataset, we propose a new method relying on Dynamic Time Warping to measure
inter-datasets similarities. We describe how our method can guide the transfer
to choose the best source dataset leading to an improvement in accuracy on 71
out of 85 datasets.
Comment: Accepted at IEEE International Conference on Big Data 201
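The source-selection idea can be sketched as follows, under the assumption that each dataset has already been reduced to a small set of per-class prototype series: rank candidate source datasets by the smallest DTW distance between any of their prototypes and the target's, and pre-train on the closest one. This is a simplification of the paper's method, and the function names `dtw` and `best_source` are illustrative.

```python
def dtw(a, b):
    """DTW distance between two 1-D sequences (absolute-difference cost)."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    return cost[n][m]

def best_source(target_protos, source_protos_by_name):
    """Suggest the source dataset whose class prototypes lie closest
    (in DTW distance) to the target dataset's prototypes."""
    def dataset_dist(protos):
        return min(dtw(p, q) for p in target_protos for q in protos)
    return min(source_protos_by_name,
               key=lambda name: dataset_dist(source_protos_by_name[name]))
```

A model pre-trained on the suggested source would then be fine-tuned on the target dataset instead of being trained from scratch.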
Advances in Hyperspectral Image Classification: Earth monitoring with statistical learning methods
Hyperspectral images show similar statistical properties to natural grayscale
or color photographic images. However, the classification of hyperspectral
images is more challenging because of the very high dimensionality of the
pixels and the small number of labeled examples typically available for
learning. These peculiarities lead to particular signal processing problems,
mainly characterized by indetermination and complex manifolds. The framework of
statistical learning has gained popularity in the last decade. New methods have
been presented to account for the spatial homogeneity of images, to include
user's interaction via active learning, to take advantage of the manifold
structure with semisupervised learning, to extract and encode invariances, or
to adapt classifiers and image representations to unseen yet similar scenes.
This tutorial reviews the main advances for hyperspectral remote sensing image
classification through illustrative examples.
Comment: IEEE Signal Processing Magazine, 201