
    Time series classification with ensembles of elastic distance measures

    Several alternative distance measures for comparing time series have recently been proposed and evaluated on time series classification (TSC) problems. These include variants of dynamic time warping (DTW), such as weighted and derivative DTW, and edit distance-based measures, including longest common subsequence, edit distance with real penalty, time warp with edit, and move–split–merge. These measures have the common characteristic that they operate in the time domain and compensate for potential localised misalignment through some elastic adjustment. Our aim is to experimentally test two hypotheses related to these distance measures. Firstly, we test whether there is any significant difference in accuracy for TSC problems between nearest neighbour classifiers using these distance measures. Secondly, we test whether combining these elastic distance measures through simple ensemble schemes gives significantly better accuracy. We test these hypotheses by carrying out one of the largest experimental studies ever conducted into time series classification. Our first key finding is that there is no significant difference between the elastic distance measures in terms of classification accuracy on our data sets. Our second finding, and the major contribution of this work, is to define an ensemble classifier that significantly outperforms the individual classifiers. We also demonstrate that the ensemble is more accurate than approaches not based in the time domain. Nearly all TSC papers in the data mining literature cite DTW (with the warping window set through cross-validation) as the benchmark for comparison. We believe that our ensemble is the first ever classifier to significantly outperform DTW and as such raises the bar for future work in this area.
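    A minimal sketch of the approach described above: a pool of nearest neighbour classifiers, each built on a different elastic distance measure, combined through a weighted vote. Only DTW and squared Euclidean distance are shown here, and weighting each member by its training cross-validation accuracy is an illustrative assumption, not necessarily the authors' exact ensemble scheme.

    import numpy as np

    def dtw(a, b):
        # Full dynamic time warping distance between two 1-D series (no window).
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def squared_euclidean(a, b):
        return float(np.sum((a - b) ** 2))

    class OneNN:
        # 1-nearest-neighbour classifier with a pluggable distance measure.
        def __init__(self, distance):
            self.distance = distance
        def fit(self, X, y):
            self.X, self.y = X, y
            return self
        def predict_one(self, x):
            dists = [self.distance(x, xi) for xi in self.X]
            return self.y[int(np.argmin(dists))]

    def ensemble_predict(members, weights, x, labels):
        # Weighted vote over the member classifiers; the weights could be each
        # member's cross-validation accuracy on the training set (an assumed
        # scheme, not taken from the paper).
        votes = {c: 0.0 for c in labels}
        for clf, w in zip(members, weights):
            votes[clf.predict_one(x)] += w
        return max(votes, key=votes.get)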

    DSCo-NG: A Practical Language Modeling Approach for Time Series Classification

    The abundance of time series data in various domains, together with their high dimensionality, makes it challenging to harvest useful information from them. To tackle storage and processing challenges, compression-based techniques have been proposed. Our previous work, Domain Series Corpus (DSCo), compresses time series into symbolic strings and takes advantage of language modeling techniques to extract knowledge about the different classes from the training set. However, this approach was flawed in practice due to its excessive memory usage and the need for a priori knowledge about the dataset. In this paper we propose DSCo-NG, which reduces DSCo’s complexity and offers an efficient (linear time complexity and low memory footprint), accurate (performance comparable to approaches working on uncompressed data) and generic (applicable to various domains) approach to time series classification. This is backed by extensive experimental evaluation on publicly accessible datasets, which also offers insights into when DSCo-NG can be a better choice than other approaches.
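    A rough, generic sketch of the compression-plus-language-modeling idea described above: discretize each series into a symbol string, fit a per-class bigram model, and assign a test series to the class whose model gives it the highest log-likelihood. The SAX-like symbolization and the add-one-smoothed bigram model are illustrative stand-ins, not the actual DSCo-NG algorithm.

    import numpy as np
    from collections import defaultdict

    def symbolize(series, n_bins=8):
        # Crude discretisation: z-normalise, then map each value to one of
        # n_bins symbols (a placeholder for the real DSCo-NG encoding).
        z = (series - series.mean()) / (series.std() + 1e-12)
        edges = np.linspace(z.min(), z.max(), n_bins + 1)[1:-1]
        return "".join(chr(ord("a") + int(np.digitize(v, edges))) for v in z)

    class BigramClassModel:
        # Per-class bigram language model with add-one smoothing.
        def __init__(self, vocab=8):
            self.vocab = vocab
            self.counts = defaultdict(lambda: defaultdict(int))
            self.totals = defaultdict(int)
        def fit(self, strings):
            for s in strings:
                for a, b in zip(s, s[1:]):
                    self.counts[a][b] += 1
                    self.totals[a] += 1
            return self
        def log_likelihood(self, s):
            ll = 0.0
            for a, b in zip(s, s[1:]):
                ll += np.log((self.counts[a][b] + 1) / (self.totals[a] + self.vocab))
            return ll

    def classify(series, class_models, n_bins=8):
        # class_models: dict mapping class label -> fitted BigramClassModel.
        s = symbolize(series, n_bins)
        return max(class_models, key=lambda c: class_models[c].log_likelihood(s))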

    Self-labeling techniques for semi-supervised time series classification: an empirical study

    The increasing amount of available unlabeled time series data renders the semi-supervised paradigm a suitable approach for tackling classification problems with a reduced quantity of labeled data. Self-labeled techniques stand out among semi-supervised classification methods due to their simplicity and their lack of strong assumptions about the distribution of the labeled and unlabeled data. This paper addresses the relevance of these techniques in the time series classification context by means of an empirical study that compares successful self-labeled methods in conjunction with various learning schemes and dissimilarity measures. Our experiments involve 35 time series datasets with different ratios of labeled data, aiming to measure the transductive and inductive classification capabilities of the self-labeled methods studied. The results show that the nearest-neighbor rule is a robust choice for the base classifier. In addition, the amending and multi-classifier self-labeled approaches prove promising for semi-supervised classification in the time series context.
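    As a concrete picture of the self-labeling setup studied above, the sketch below implements basic self-training: starting from the labeled pool, repeatedly add the unlabeled series the current model is most confident about, with their predicted labels, and continue. The 1-NN-with-Euclidean base learner, the distance-based confidence and the fixed number of additions per round are illustrative choices, not those of the empirical study.

    import numpy as np

    def one_nn_predict(X_train, y_train, x):
        # Return the nearest neighbour's label and a crude confidence
        # (negative distance, so that larger means more confident).
        d = np.linalg.norm(X_train - x, axis=1)
        i = int(np.argmin(d))
        return y_train[i], -d[i]

    def self_training(X_lab, y_lab, X_unlab, rounds=10, per_round=5):
        X_lab, y_lab = X_lab.copy(), list(y_lab)
        pool = list(range(len(X_unlab)))
        for _ in range(rounds):
            if not pool:
                break
            scored = []
            for idx in pool:
                label, conf = one_nn_predict(X_lab, np.array(y_lab), X_unlab[idx])
                scored.append((conf, idx, label))
            scored.sort(reverse=True)  # most confident first
            for conf, idx, label in scored[:per_round]:
                X_lab = np.vstack([X_lab, X_unlab[idx]])
                y_lab.append(label)
                pool.remove(idx)
        return X_lab, np.array(y_lab)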

    Positive functioning inventory: initial validation of a 12-item self-report measure of well-being

    Background: This paper describes the validation of the Positive Functioning Inventory (PFI-12), a 12-item self-report tool developed to assess a spectrum of functioning ranging from states of mental distress to states of well-being. Method: Two samples (Sample 1: N = 242, mean age = 20 years; Sample 2: N = 301, mean age = 20 years) completed self-report measures of personality and of social, physical and psychological functioning. Results: Evidence is provided for internal-consistency reliability, test-retest reliability, incremental validity, and convergent and discriminant validity in relation to a number of other measures of personality and of social, physical and psychological functioning. Conclusion: The tool promises to be useful to practitioners and researchers who wish to assess positive psychological functioning.

    The LAGUNA design study: towards giant liquid-based underground detectors for neutrino physics and astrophysics and proton decay searches

    The feasibility of a next generation neutrino observatory in Europe is being considered within the LAGUNA design study. To accommodate giant neutrino detectors and shield them from cosmic rays, a new very large underground infrastructure is required. Seven potential candidate sites in different parts of Europe and at several distances from CERN are being studied: Boulby (UK), Canfranc (Spain), Fréjus (France/Italy), Pyhäsalmi (Finland), Polkowice-Sieroszowice (Poland), Slanic (Romania) and Umbria (Italy). The design study aims at a comprehensive and coordinated technical assessment of each site, a coherent cost estimation, and a prioritization of the sites by summer 2010.

    Measurement of the single π0 production rate in neutral current neutrino interactions on water

    The single π0 production rate in neutral current neutrino interactions on water in a neutrino beam with a peak neutrino energy of 0.6 GeV has been measured using the PØD, one of the subdetectors of the T2K near detector. The production rate was measured for data-taking periods when the PØD contained water (2.64×10^20 protons-on-target) and also for periods without water (3.49×10^20 protons-on-target). A measurement of the neutral current single π0 production rate on water is made using an appropriate subtraction of the production rate with water in from the rate with water out of the target region. The subtraction analysis yields 106 ± 41 ± 69 signal events, where the uncertainties are statistical (stat.) and systematic (sys.), respectively. This is consistent with the prediction of 157 events from the nominal simulation. The measured-to-expected ratio is 0.68 ± 0.26 (stat) ± 0.44 (sys) ± 0.12 (flux). The nominal simulation uses a flux-integrated cross section of 7.63×10^−39 cm^2 per nucleon with an average neutrino interaction energy of 1.3 GeV.
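    The quoted ratio follows directly from the event counts above: dividing the measured yield and its uncertainties by the 157 expected events gives (the flux term reflects the flux normalization uncertainty, which cannot be reproduced from the numbers quoted here)

    \[
    R = \frac{N_{\text{meas}}}{N_{\text{exp}}} = \frac{106}{157} \approx 0.68,
    \qquad \frac{41}{157} \approx 0.26\ (\text{stat}),
    \qquad \frac{69}{157} \approx 0.44\ (\text{sys}).
    \]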

    First Measurement of the Muon Neutrino Charged Current Single Pion Production Cross Section on Water with the T2K Near Detector

    The T2K off-axis near detector, ND280, is used to make the first differential cross section measurements of muon neutrino charged current single positive pion production on a water target, at energies of ∼0.8 GeV. The differential measurements are presented as a function of muon and pion kinematics, in the restricted phase space defined by p_π+ > 200 MeV/c, p_μ− > 200 MeV/c, cos θ_π+ > 0.3 and cos θ_μ− > 0.3. The total flux-integrated ν_μ charged current single positive pion production cross section on water in the restricted phase space is measured to be ⟨σ⟩_φ = 4.25 ± 0.48 (stat) ± 1.56 (syst) × 10^−40 cm^2/nucleon. The total cross section is consistent with the NEUT prediction (5.03×10^−40 cm^2/nucleon) and 2σ lower than the GENIE prediction (7.68×10^−40 cm^2/nucleon). The differential cross sections are in good agreement with the NEUT generator. The GENIE simulation reproduces the shapes of the distributions well, but overestimates the overall cross section normalization.
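    The statement that the measurement lies 2σ below the GENIE prediction is consistent with the quoted numbers, assuming the statistical and systematic uncertainties are combined in quadrature and no uncertainty is assigned to the GENIE prediction:

    \[
    \frac{7.68 - 4.25}{\sqrt{0.48^{2} + 1.56^{2}}} = \frac{3.43}{1.63} \approx 2.1\,\sigma .
    \]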

    Supernova neutrino burst detection with the Deep Underground Neutrino Experiment

    The Deep Underground Neutrino Experiment (DUNE), a 40-kton underground liquid argon time projection chamber experiment, will be sensitive to the electron-neutrino flavor component of the burst of neutrinos expected from the next Galactic core-collapse supernova. Such an observation will bring unique insight into the astrophysics of core collapse as well as into the properties of neutrinos. The general capabilities of DUNE for neutrino detection in the relevant few- to few-tens-of-MeV neutrino energy range will be described. As an example, DUNE's ability to constrain the νe spectral parameters of the neutrino burst will be considered.