
    Time Series Data Mining Algorithms for Identifying Short RNA in Arabidopsis thaliana

    The class of molecules called short RNAs (sRNAs), typically sequences of 21-25 nucleotides in length, are known to play a key role in gene regulation. The identification, clustering and classification of sRNAs have recently become the focus of much research activity. The basic problem involves detecting regions of interest on the chromosome where the pattern of candidate matches is somehow unusual. Currently, there are no published algorithms for detecting regions of interest, and the unpublished methods that we are aware of involve bespoke rule-based systems designed for a specific organism. Work in this very new field has understandably focused on the outcomes rather than the methods used to obtain the results. In this paper we propose two generic approaches that place the specific biological problem in the wider context of time series data mining problems. Both methods are based on treating the occurrences on a chromosome, or “hit count” data, as a time series, then running a sliding window along the chromosome and measuring unusualness. This formulation means we can treat finding unusual areas of candidate sRNA activity as a variant of the time series anomaly detection problem. The first approach is model based: we specify a null distribution for not being an sRNA, then estimate p-values along the chromosome. The second approach is instance based: we identify typical shapes from known sRNAs, then use dynamic time warping and Fourier-transform-based distances to measure how closely a candidate series matches. We demonstrate that these methods can find known sRNAs on Arabidopsis thaliana chromosomes and illustrate the benefits of the added information provided by these algorithms.
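    To make the model-based scan concrete, here is a minimal sketch of the sliding-window p-value idea, assuming a Poisson background model for the hit counts; the paper's actual null distribution, window width, and threshold are not specified in the abstract, so all three are illustrative.

```python
# Minimal sketch: slide a window along the hit-count series and score
# each position by how unlikely its total is under a Poisson background.
# Window width and threshold are illustrative, not the paper's values.
import numpy as np
from scipy import stats

def scan_pvalues(hit_counts: np.ndarray, window: int = 100) -> np.ndarray:
    background_rate = hit_counts.mean()              # null: typical per-base rate
    null = stats.poisson(mu=background_rate * window)
    scores = np.empty(len(hit_counts) - window + 1)
    for start in range(len(scores)):
        window_sum = hit_counts[start:start + window].sum()
        # survival function = P(X >= window_sum); small p-value => unusual
        scores[start] = null.sf(window_sum - 1)
    return scores

# windows with p-values below a chosen threshold are candidate sRNA loci
counts = np.random.default_rng(0).poisson(0.2, size=10_000)
pvals = scan_pvalues(counts, window=100)
candidates = np.flatnonzero(pvals < 1e-4)
```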

    hITeQ: A new workflow-based computing environment for streamlining discovery. Application in materials science

    This paper presents an implementation of the recent methodology called Adaptable Time Warping (ATW) for the automatic identification of mixtures of crystallographic phases from powder X-ray diffraction data, inside the framework of a new integrative platform named hITeQ. The methodology is encapsulated into a so-called workflow, and we explore the benefits of such an environment for streamlining discovery in R&D. Besides the fact that ATW successfully identifies and classifies crystalline phases from powder XRD for the very complicated case of zeolite ITQ-33, which was synthesized by a high-throughput process, we stress the numerous difficulties encountered by academic laboratories and companies when integrating new software or techniques. It is shown how an integrative approach provides a real asset in terms of cost, efficiency, and speed, thanks to a unique environment that supports well-defined and reusable processes, improves knowledge management, and properly handles multi-disciplinary teamwork and disparate data structures and protocols. EU Commission FP6 (TOPCOMBI Project) is gratefully acknowledged. Baumes, L. A.; Jiménez Serrano, S.; Corma Canós, A. (2011). hITeQ: A new workflow-based computing environment for streamlining discovery. Application in materials science. Catalysis Today 159(1):126-137. doi:10.1016/j.cattod.2010.03.067
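    The abstract does not disclose ATW's internals; as a rough illustration of the underlying idea, the sketch below uses plain dynamic time warping to match a measured powder XRD pattern against a library of reference phase patterns, tolerating shifted or stretched peaks. The `best_phase_match` helper and the library layout are hypothetical.

```python
# Illustrative only: plain DTW matching of 1-D diffractograms, standing
# in for the (unpublished here) ATW method. Reference library layout is
# a hypothetical {phase_name: intensity_profile} dict.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW between two 1-D intensity profiles."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def best_phase_match(pattern: np.ndarray, reference_library: dict) -> str:
    """Return the reference phase whose diffractogram warps most cheaply
    onto the measured pattern."""
    return min(reference_library,
               key=lambda name: dtw_distance(pattern, reference_library[name]))
```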

    Learning Frame Similarity using Siamese networks for Audio-to-Score Alignment

    Audio-to-score alignment aims at generating an accurate mapping between a performance audio and the score of a given piece. Standard alignment methods are based on Dynamic Time Warping (DTW) and employ handcrafted features, which cannot be adapted to different acoustic conditions. We propose a method to overcome this limitation using learned frame similarity for audio-to-score alignment. We focus on offline audio-to-score alignment of piano music. Experiments on music data from different acoustic conditions demonstrate that our method achieves higher alignment accuracy than a standard DTW-based method that uses handcrafted features, and generates robust alignments whilst being adaptable to different domains at the same time.
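    A minimal PyTorch sketch of the idea, not the authors' exact architecture: a shared ("Siamese") encoder embeds audio and score frames, and the resulting pairwise distance matrix replaces the handcrafted cost matrix that DTW would normally align over. The feature and embedding dimensions below are illustrative; in practice the encoder would be trained with a contrastive loss on matching and non-matching frame pairs.

```python
# Sketch: a shared encoder produces frame embeddings for both sequences;
# their pairwise distances form the cost matrix for a standard DTW pass.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Shared ('Siamese') encoder applied to both audio and score frames."""
    def __init__(self, n_features: int = 128, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, frames):                  # (batch, n_features)
        return self.net(frames)

encoder = FrameEncoder()
audio_frames = torch.randn(200, 128)            # performance audio features
score_frames = torch.randn(180, 128)            # synthesized score features
with torch.no_grad():
    cost = torch.cdist(encoder(audio_frames), encoder(score_frames))
# `cost` (200 x 180) is the learned frame-similarity matrix a standard
# DTW pass would then align to produce the audio-to-score mapping.
```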

    Predictive Modelling of Bone Age through Classification and Regression of Bone Shapes

    Bone age assessment is a task performed daily in hospitals worldwide. It involves a clinician estimating the age of a patient from a radiograph of the non-dominant hand. Our approach to automated bone age assessment is to modularise the algorithm into the following three stages: segment and verify the hand outline; segment and verify the bones; use the bone outlines to construct models of age. In this paper we address the final question: given outlines of bones, can we learn how to predict the bone age of the patient? We examine two alternative approaches. Firstly, we attempt to train classifiers on individual bones to predict the bone stage categories commonly used in bone ageing. Secondly, we construct regression models to directly predict patient age. We demonstrate that models built on summary features of the bone outline perform better than those built using the one-dimensional representation of the outline, and also do at least as well as other automated systems. We show that models constructed on just three bones are as accurate at predicting age as expert human assessors using the standard technique. We also demonstrate the utility of the model by quantifying the importance of ethnicity and sex on age development. Our conclusion is that the feature-based system of separating the image processing from the age modelling is the best approach for automated bone ageing, since it offers flexibility and transparency and produces accurate estimates.
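    A minimal sketch of the second (regression) approach, assuming summary features have already been extracted from the segmented bone outlines; the feature matrix and model choice below are illustrative stand-ins, not the paper's exact pipeline.

```python
# Sketch: regress chronological age directly on outline summary features.
# The random design matrix stands in for real extracted features
# (e.g. lengths, widths, ratios for three bones).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                  # one row per radiograph
y = rng.uniform(5, 18, size=500)                # age in years (stand-in labels)

model = RandomForestRegressor(n_estimators=200, random_state=0)
mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.2f} years")
```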

    Optimizing Dynamic Time Warping’s Window Width for Time Series Data Mining Applications

    Dynamic Time Warping (DTW) is a highly competitive distance measure for most time series data mining problems. Obtaining the best performance from DTW requires setting its only parameter, the maximum amount of warping (w). In the supervised case with ample data, w is typically set by cross-validation in the training stage. However, this method is likely to yield suboptimal results for small training sets. For the unsupervised case, learning via cross-validation is not possible because we do not have access to labeled data. Many practitioners have thus resorted to assuming that “the larger the better”, and they use the largest value of w permitted by the computational resources. However, as we will show, in most circumstances this is a naïve approach that produces inferior clusterings. Moreover, the best warping window width is generally non-transferable between the two tasks, i.e., for a single dataset, practitioners cannot simply apply the best w learned for classification to clustering or vice versa. In addition, we will demonstrate that the appropriate amount of warping depends not only on the data structure, but also on the dataset size. Thus, even if a practitioner knows the best setting for a given dataset, they will likely be at a loss if they apply that setting to a larger version of that data. All these issues seem largely unknown or at least unappreciated in the community. In this work, we demonstrate the importance of setting DTW’s warping window width correctly, and we also propose novel methods to learn this parameter in both supervised and unsupervised settings. The algorithms we propose to learn w can produce significant improvements in classification accuracy and clustering quality. We demonstrate the correctness of our novel observations and the utility of our ideas by testing them with more than one hundred publicly available datasets. These compelling results allow us to make a perhaps unexpected claim: an underappreciated “low hanging fruit” in optimizing DTW’s performance can produce improvements that make it an even stronger baseline, closing most or all of the improvement gap of the more sophisticated methods proposed in recent years.
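    A minimal sketch of the baseline supervised setting this paper improves on: choosing w by leave-one-out 1-NN accuracy on the training set, with a Sakoe-Chiba-constrained DTW. All names below are illustrative, and the quadratic DTW is kept deliberately simple for clarity.

```python
# Sketch: grid-search the warping window w by leave-one-out 1-NN
# accuracy, using DTW constrained to a Sakoe-Chiba band of half-width w.
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray, w: int) -> float:
    """DTW with a Sakoe-Chiba band of half-width w (in samples).
    Assumes (near-)equal-length series so the band stays feasible."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - w), min(m, i + w)
        for j in range(lo, hi + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def best_window(X_train, y_train, candidate_ws):
    """Pick w by leave-one-out 1-NN accuracy on the training set."""
    def loo_accuracy(w):
        hits = 0
        for i, query in enumerate(X_train):
            dists = [dtw(query, x, w) if j != i else np.inf
                     for j, x in enumerate(X_train)]
            hits += y_train[int(np.argmin(dists))] == y_train[i]
        return hits / len(X_train)
    return max(candidate_ws, key=loo_accuracy)
```

    As the abstract notes, this cross-validation baseline degrades on small training sets, which is precisely the regime the paper's proposed learning methods target.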

    Applications of high-frequency telematics for driving behavior analysis

    A thesis submitted in partial fulfillment of the requirements for the degree of Doctor in Information Management, specialization in Statistics and Econometrics. Processing driving data and investigating driving behavior has received increasing interest in recent decades, with applications ranging from car insurance pricing to policy-making. A popular way of analyzing driving behavior is to focus on maneuvers, as they give useful information about the driver who is performing them. Previous research on maneuver detection can be divided into two strategies, namely, 1) using fixed thresholds on inertial measurements to define the start and end of specific maneuvers, or 2) using features extracted from rolling windows of sensor data in a supervised learning model to detect maneuvers. While the first strategy is not adaptable and requires fine-tuning, the second needs a labeled dataset (which is time-consuming to build) and cannot identify maneuvers of different durations. To tackle these shortcomings, we investigate a new way of identifying maneuvers from vehicle telematics data, through motif detection in time series. Using a publicly available naturalistic driving dataset (the UAH-DriveSet), we conclude that motif detection algorithms are not only capable of extracting simple maneuvers such as accelerations, brakes, and turns, but also more complex maneuvers, such as lane changes and overtaking, thus validating motif discovery as a worthwhile line for future research in driving behavior. We also propose TripMD, a system that extracts the most relevant driving patterns from sensor recordings (such as acceleration) and provides a visualization that allows for easy investigation. We test TripMD on the same UAH-DriveSet dataset and show that (1) our system can extract a rich set of driving patterns from a single driver that are meaningful for understanding driving behavior, and (2) our system can be used to identify the driving behavior of an unknown driver from a set of drivers whose behavior we know.
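    A minimal sketch of motif discovery on a telematics signal using the matrix profile (via the stumpy library); TripMD's own pipeline adds variable-length search, maneuver grouping, and visualization on top of ideas like this. The signal and motif length below are stand-ins.

```python
# Sketch: find the best-conserved pattern (motif pair) in an
# acceleration trace with the matrix profile. The random signal and
# motif length are illustrative stand-ins for real telematics data.
import numpy as np
import stumpy

accel = np.random.default_rng(1).normal(size=5000)   # stand-in for a
                                                     # longitudinal-acceleration trace
m = 64                                               # motif length in samples
profile = stumpy.stump(accel, m)                     # matrix profile + nearest-neighbor indices

# the smallest matrix-profile value marks the most similar pair of
# subsequences, e.g. two near-identical braking maneuvers
best = int(np.argmin(profile[:, 0]))
motif_a, motif_b = best, int(profile[best, 1])
print(f"motif occurs at samples {motif_a} and {motif_b}")
```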