
    Time series classification with ensembles of elastic distance measures

    Several alternative distance measures for comparing time series have recently been proposed and evaluated on time series classification (TSC) problems. These include variants of dynamic time warping (DTW), such as weighted and derivative DTW, and edit distance-based measures, including longest common subsequence, edit distance with real penalty, time warp with edit, and move-split-merge. These measures have the common characteristic that they operate in the time domain and compensate for potential localised misalignment through some elastic adjustment. Our aim is to experimentally test two hypotheses related to these distance measures. Firstly, we test whether there is any significant difference in accuracy for TSC problems between nearest neighbour classifiers using these distance measures. Secondly, we test whether combining these elastic distance measures through simple ensemble schemes gives significantly better accuracy. We test these hypotheses by carrying out one of the largest experimental studies ever conducted into time series classification. Our first key finding is that there is no significant difference between the elastic distance measures in terms of classification accuracy on our data sets. Our second finding, and the major contribution of this work, is to define an ensemble classifier that significantly outperforms the individual classifiers. We also demonstrate that the ensemble is more accurate than approaches not based in the time domain. Nearly all TSC papers in the data mining literature cite DTW (with warping window set through cross-validation) as the benchmark for comparison. We believe that our ensemble is the first ever classifier to significantly outperform DTW and as such raises the bar for future work in this area.
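    The core idea above, nearest-neighbour classifiers under several elastic distance measures combined by an ensemble, can be illustrated with a minimal sketch. The function names, the restriction to Euclidean distance plus classic DTW, and the plain majority vote are illustrative assumptions; the paper combines many more elastic measures and a different combination scheme.

    ```python
    import numpy as np

    def dtw(a, b):
        """Classic dynamic-programming DTW distance between two 1-D series."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                # Extend the cheapest of the three admissible warping steps.
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(np.sqrt(D[n, m]))

    def euclidean(a, b):
        return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

    def nn_predict(query, train_X, train_y, dist):
        """Label of the nearest training series under a given distance measure."""
        dists = [dist(query, x) for x in train_X]
        return train_y[int(np.argmin(dists))]

    def ensemble_predict(query, train_X, train_y, measures):
        """Majority vote over one 1-NN classifier per distance measure."""
        votes = [nn_predict(query, train_X, train_y, d) for d in measures]
        return max(set(votes), key=votes.count)
    ```

    Each measure contributes an independent 1-NN vote, so adding a new elastic measure only requires adding one more distance function to the `measures` list.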

    Large-Scale Mapping of Human Activity using Geo-Tagged Videos

    This paper is the first work to perform spatio-temporal mapping of human activity using the visual content of geo-tagged videos. We utilize a recent deep-learning-based video analysis framework, termed hidden two-stream networks, to recognize a range of activities in YouTube videos. This framework is efficient and can run in real time or faster, which is important for recognizing events as they occur in streaming video or for reducing latency in analyzing already captured video. This is, in turn, important for using video in smart-city applications. We perform a series of experiments to show our approach is able to accurately map activities both spatially and temporally. We also demonstrate the advantages of using the visual content over the tags/titles.
    Comment: Accepted at ACM SIGSPATIAL 201
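    Once activities have been recognized per video, the spatio-temporal mapping step reduces to aggregating detections into spatial grid cells and time bins. The following sketch assumes a hypothetical detection tuple format and grid-cell size; the paper's actual data representation is not specified here.

    ```python
    from collections import Counter

    def spatiotemporal_map(detections, cell_deg=0.1):
        """Aggregate (lat, lon, hour, activity) detections into a histogram
        keyed by spatial grid cell and hour of day.

        `cell_deg` is the grid resolution in degrees; both the tuple layout
        and the resolution are illustrative assumptions.
        """
        counts = Counter()
        for lat, lon, hour, activity in detections:
            # Quantize coordinates to a grid cell; keep hour and label as-is.
            cell = (int(lat // cell_deg), int(lon // cell_deg), hour, activity)
            counts[cell] += 1
        return counts
    ```

    Reading the resulting `Counter` per cell and hour directly yields the kind of spatial and temporal activity maps the experiments evaluate.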

    On the formulation and uses of SVD-based generalized curvatures

    2016 Summer. Includes bibliographical references. In this dissertation we consider the problem of computing generalized curvature values from noisy, discrete data and applications of the provided algorithms. We first establish a connection between the Frenet-Serret Frame, typically defined on an analytical curve, and the vectors from the local Singular Value Decomposition (SVD) of a discretized time-series. Next, we expand upon this connection to relate generalized curvature values, or curvatures, to a scaled ratio of singular values. Initially, the local singular value decomposition is centered on a point of the discretized time-series. This provides for an efficient computation of curvatures when the underlying curve is known. However, when the structure of the curve is not known, for example, when noise is present in the tabulated data, we propose two modifications. The first modification computes the local singular value decomposition on the mean-centered data of a windowed selection of the time-series. We observe that the mean-centered version increases the stability of the curvature estimations in the presence of signal noise. The second modification is an adaptive method for selecting the size of the window, or local ball, to use for the singular value decomposition. This allows us to use a large window size when curvatures are small, which reduces the effects of noise thanks to the use of a large number of points in the SVD, and to use a small window size when curvatures are large, thereby best capturing the local curvature. Overall we observe that adapting the window size to the data enhances the estimates of generalized curvatures. The combination of these two modifications produces a tool for computing generalized curvatures with reasonable precision and accuracy. Finally, we compare our algorithm, with and without modifications, to existing numerical curvature techniques on different types of data such as that from the Microsoft Kinect 2 sensor.
To address the topic of action segmentation and recognition, a popular topic within the field of computer vision, we created a new dataset from this sensor showcasing a pose space skeletonized representation of individuals performing continuous human actions as defined by the MSRC-12 challenge. When this data is optimally projected onto a low-dimensional space, we observed each human motion lies on a distinguished line, plane, hyperplane, etc. During transitions between motions, either the dimension of the optimal subspace changes significantly, or the trajectory of the curve through pose space nearly reverses. We use our methods of computing generalized curvature values to identify these locations, categorized as either high curvatures or changing curvatures. The geometric characterization of the time-series allows us to segment individual, or geometrically distinct, motions. Finally, using these segments, we construct a methodology for selecting motions to conjoin for the task of action classification.
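    The windowed, mean-centered local SVD at the heart of the first modification can be sketched as follows. The dissertation derives a specific scaling that relates singular values to generalized curvatures; the dimensionless ratio of the second to the first singular value used here is only an illustrative proxy, not that exact formula.

    ```python
    import numpy as np

    def curvature_proxy(points, center, half_window):
        """Local-SVD curvature proxy at index `center` of an (N, d) trajectory.

        Takes a window of samples around `center`, mean-centers it (the
        stabilizing step described above), and returns sigma_2 / sigma_1.
        A straight segment gives ~0; a bending segment gives a larger value.
        """
        lo = max(0, center - half_window)
        hi = min(len(points), center + half_window + 1)
        window = np.asarray(points[lo:hi], dtype=float)
        centered = window - window.mean(axis=0)       # mean-centering for noise stability
        s = np.linalg.svd(centered, compute_uv=False)  # singular values, descending
        return 0.0 if s[0] == 0 else float(s[1] / s[0])
    ```

    Sweeping `center` along the trajectory and looking for spikes or abrupt changes in this quantity mirrors the segmentation-by-curvature idea used on the pose-space data.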

    Highly comparative feature-based time-series classification

    A highly comparative, feature-based approach to time series classification is introduced that uses an extensive database of algorithms to extract thousands of interpretable features from time series. These features are derived from across the scientific time-series analysis literature, and include summaries of time series in terms of their correlation structure, distribution, entropy, stationarity, scaling properties, and fits to a range of time-series models. After computing thousands of features for each time series in a training set, those that are most informative of the class structure are selected using greedy forward feature selection with a linear classifier. The resulting feature-based classifiers automatically learn the differences between classes using a reduced number of time-series properties, and circumvent the need to calculate distances between time series. Representing time series in this way results in orders of magnitude of dimensionality reduction, allowing the method to perform well on very large datasets containing long time series or time series of different lengths. For many of the datasets studied, classification performance exceeded that of conventional instance-based classifiers, including one-nearest-neighbor classifiers using Euclidean distance and dynamic time warping. Most importantly, the features selected provide an understanding of the properties of the dataset, insight that can guide further scientific investigation.
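    The greedy forward feature selection step described above can be sketched generically: repeatedly add the single feature that most improves a classifier's score on the current subset. The nearest-centroid scorer below is a stand-in assumption; the paper uses a linear classifier over its own feature database, not this exact setup.

    ```python
    import numpy as np

    def centroid_accuracy(Xs, y):
        """Training accuracy of a nearest-centroid classifier (a simple
        linear decision rule) on feature matrix Xs with labels y."""
        classes = np.unique(y)
        cents = np.array([Xs[y == c].mean(axis=0) for c in classes])
        d2 = ((Xs[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
        preds = classes[np.argmin(d2, axis=1)]
        return float((preds == y).mean())

    def greedy_forward_select(X, y, score_fn, max_features=5):
        """Greedy forward selection: grow the feature subset one feature at a
        time, always taking the candidate with the best score; stop when no
        candidate improves on the current best."""
        selected, remaining, best = [], list(range(X.shape[1])), -1.0
        while remaining and len(selected) < max_features:
            score, feat = max((score_fn(X[:, selected + [f]], y), f) for f in remaining)
            if score <= best:
                break
            selected.append(feat)
            remaining.remove(feat)
            best = score
        return selected, best
    ```

    Because the loop stops as soon as no feature improves the score, the selected subset stays small, which is exactly what makes the resulting classifiers interpretable.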

    Context-aware Synthesis for Video Frame Interpolation

    Video frame interpolation algorithms typically estimate optical flow or its variations and then use it to guide the synthesis of an intermediate frame between two consecutive original frames. To handle challenges like occlusion, bidirectional flow between the two input frames is often estimated and used to warp and blend the input frames. However, how to effectively blend the two warped frames still remains a challenging problem. This paper presents a context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame. Specifically, we first use a pre-trained neural network to extract per-pixel contextual information for input frames. We then employ a state-of-the-art optical flow algorithm to estimate bidirectional flow between them and pre-warp both input frames and their context maps. Finally, unlike common approaches that blend the pre-warped frames, our method feeds them and their context maps to a video frame synthesis neural network to produce the interpolated frame in a context-aware fashion. Our neural network is fully convolutional and is trained end to end. Our experiments show that our method can handle challenging scenarios such as occlusion and large motion and outperforms representative state-of-the-art approaches.
    Comment: CVPR 2018, http://graphics.cs.pdx.edu/project/ctxsy
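    The pre-warping step, sampling a frame (or a context map) at positions displaced by a per-pixel flow field, can be sketched with bilinear backward warping. This is a generic sketch of flow-guided warping on a single-channel image, not the paper's network or its flow estimator.

    ```python
    import numpy as np

    def backward_warp(frame, flow):
        """Bilinear backward warp of a (H, W) frame by a (H, W, 2) flow field:
        output[y, x] samples frame at (x + flow[y, x, 0], y + flow[y, x, 1]),
        clamping sample positions to the image border."""
        h, w = frame.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        sx = np.clip(xs + flow[..., 0], 0, w - 1)
        sy = np.clip(ys + flow[..., 1], 0, h - 1)
        # Integer corners and fractional offsets for bilinear interpolation.
        x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
        x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
        fx, fy = sx - x0, sy - y0
        top = frame[y0, x0] * (1 - fx) + frame[y0, x1] * fx
        bot = frame[y1, x0] * (1 - fx) + frame[y1, x1] * fx
        return top * (1 - fy) + bot * fy
    ```

    In the context-aware setting, the same warp is applied per channel to both the RGB frames and their extracted context maps before the synthesis network blends them.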

    Outlier detection techniques for wireless sensor networks: A survey

    In the field of wireless sensor networks, those measurements that significantly deviate from the normal pattern of sensed data are considered outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and the specific requirements and limitations of wireless sensor networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline to select a technique suitable for the application at hand based on characteristics such as data type, outlier type, outlier identity, and outlier degree.
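    As a concrete example of the statistical family of techniques such a survey covers, a node can flag deviating readings with a robust modified z-score based on the median absolute deviation, which is not skewed by the outliers themselves. This is a generic illustration, not a technique from the survey itself; the 0.6745 factor is the usual consistency constant for normal data and 3.5 is a common default cutoff.

    ```python
    import numpy as np

    def mad_outliers(readings, threshold=3.5):
        """Flag readings whose modified z-score exceeds `threshold`.

        Uses the median and median absolute deviation (MAD), so a few
        extreme values do not distort the scale estimate the way a mean
        and standard deviation would.
        """
        x = np.asarray(readings, dtype=float)
        med = np.median(x)
        mad = np.median(np.abs(x - med))
        if mad == 0:
            # All readings (essentially) identical: nothing to flag.
            return np.zeros(len(x), dtype=bool)
        modified_z = 0.6745 * (x - med) / mad
        return np.abs(modified_z) > threshold
    ```

    Such a detector fits the survey's taxonomy axes directly: it handles univariate data, flags local point outliers, and its single threshold trades detection rate against false alarms.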