    Counting Causal Paths in Big Time Series Data on Networks

    Graph or network representations are an important foundation for data mining and machine learning tasks in relational data. Many tools of network analysis, like centrality measures, information ranking, or cluster detection, rest on the assumption that links capture direct influence and that paths represent possible indirect influence. This assumption is invalidated in time-stamped network data capturing, e.g., dynamic social networks, biological sequences or financial transactions. In such data, for two time-stamped links (A,B) and (B,C), the chronological ordering and timing determine whether a causal path from node A via B to C exists. A number of works have shown that, for this reason, network analysis cannot be directly applied to time-stamped network data. Existing methods to address this issue require statistics on causal paths, whose computation is challenging for big data sets. Addressing this problem, we develop an efficient algorithm to count causal paths in time-stamped network data. Applying it to empirical data, we show that our method is more efficient than a baseline method implemented in an open-source data analytics package. Our method works efficiently for different values of the maximum time difference between consecutive links of a causal path and supports streaming scenarios. With it, we are closing a gap that hinders an efficient analysis of big time series data on complex networks. Comment: 10 pages, 2 figures
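
    The distinction the abstract draws between static paths and causal (time-respecting) paths can be made concrete with a small sketch. The illustrative Python snippet below is not the paper's algorithm (which handles longer paths, streaming input and an efficiency-oriented implementation); it only counts causal paths of length two, i.e. pairs of links (A,B,t1) and (B,C,t2) with 0 < t2 - t1 <= delta, where delta is the maximum time difference between consecutive links.

        from bisect import bisect_left
        from collections import defaultdict

        def count_causal_paths_len2(edges, delta):
            """Count causal paths A -> B -> C in time-stamped links (src, dst, t):
            the second link must start after the first, within at most delta."""
            # Collect and sort the timestamps of links arriving at each node.
            in_times = defaultdict(list)
            for src, dst, t in edges:
                in_times[dst].append(t)
            for times in in_times.values():
                times.sort()

            count = 0
            # For every outgoing link (B, C, t2), count incoming links (A, B, t1)
            # with t2 - delta <= t1 < t2.
            for src, dst, t2 in edges:
                times = in_times.get(src, [])
                lo = bisect_left(times, t2 - delta)   # earliest admissible t1
                hi = bisect_left(times, t2)           # t1 strictly before t2
                count += hi - lo
            return count

        edges = [("A", "B", 1), ("B", "C", 2), ("B", "C", 9)]
        print(count_causal_paths_len2(edges, delta=5))  # 1: only (A,B,1) -> (B,C,2) is causal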

    From heuristics-based to data-driven audio melody extraction

    The identification of the melody from a music recording is a relatively easy task for humans, but very challenging for computational systems. This task is known as "audio melody extraction", more formally defined as the automatic estimation of the pitch sequence of the melody directly from the audio signal of a polyphonic music recording. This thesis investigates the benefits of exploiting knowledge automatically derived from data for audio melody extraction, by combining digital signal processing and machine learning methods. We extend the scope of melody extraction research by working with a varied dataset and multiple definitions of melody. We first present an overview of the state of the art and perform an evaluation focused on a novel symphonic music dataset. We then propose melody extraction methods based on a source-filter model and pitch contour characterisation, and evaluate them on a wide range of music genres. Finally, we explore novel timbre, tonal and spatial features for contour characterisation, and propose a method for estimating multiple melodic lines. The combination of supervised and unsupervised approaches leads to advances in melody extraction and points to a promising path for future research and applications.
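
    As a rough illustration of the task being defined (estimating a frame-wise pitch sequence from an audio signal), the sketch below uses librosa's pYIN pitch tracker. pYIN is a monophonic tracker and only approximates melody extraction on polyphonic recordings; it is not the source-filter or contour-based method developed in the thesis, and the file name is a placeholder.

        import librosa
        import numpy as np

        # Load a (placeholder) recording; librosa resamples to 22.05 kHz by default.
        y, sr = librosa.load("recording.wav")

        # Frame-wise fundamental-frequency estimate with the probabilistic YIN tracker.
        f0, voiced_flag, voiced_prob = librosa.pyin(
            y,
            fmin=librosa.note_to_hz("C2"),
            fmax=librosa.note_to_hz("C6"),
            sr=sr,
        )

        # A crude melody estimate: one pitch value (Hz) per frame, 0 where unvoiced.
        melody = np.where(voiced_flag, f0, 0.0)
        print(melody[:10])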

    Predefined pattern detection in large time series

    Predefined pattern detection from time series is an interesting and challenging task. In order to reduce its computational cost and increase effectiveness, a number of time series representation methods and similarity measures have been proposed. Most of the existing methods focus on full sequence matching, that is, sequences with clearly defined beginnings and endings, where all data points contribute to the match. These methods, however, do not account for temporal and magnitude deformations in the data and prove ineffective in several real-world scenarios where noise and external phenomena introduce diversity in the class of patterns to be matched. In this paper, we present a novel pattern detection method, which is based on the notions of templates, landmarks, constraints and trust regions. We employ the Minimum Description Length (MDL) principle in the time series preprocessing step, which helps to preserve all the prominent features and prevents the template from overfitting. Templates are provided by common users or domain experts, and represent interesting patterns we want to detect in time series. Instead of using templates to match all the potential subsequences in the time series, we translate the time series and templates into landmark sequences, and detect patterns in the landmark sequence of the time series. By defining constraints within the template landmark sequence, we effectively extract all the landmark subsequences from the time series landmark sequence and obtain a number of landmark segments (time series subsequences or instances). We model each landmark segment by scaling the template in both the temporal and magnitude dimensions. To suppress the influence of noise, we introduce the concept of a trust region, which not only helps to achieve an improved instance model, but also helps to capture the accurate boundaries of instances of the given template. Based on the similarities derived from instance models, we use a probability density function to calculate a similarity threshold. The threshold can be used to judge whether a landmark segment is a true instance of the given template. To evaluate the effectiveness and efficiency of the proposed method, we apply it to two real-world datasets. The results show that our method is capable of detecting patterns under temporal and magnitude deformations with competitive performance.
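
    To make the landmark idea concrete, the toy sketch below reduces a series to local-extrema landmarks and matches a template by comparing step-wise timing and magnitude changes under simple tolerances. It is only a crude stand-in for the method described above: the MDL-based preprocessing, constraint definitions, instance modelling and trust regions are all omitted, and the tolerance parameters are invented for illustration.

        def landmarks(series):
            """Return (index, value) pairs at the endpoints and local extrema,
            a toy 'landmark sequence' for a numeric series."""
            marks = [(0, series[0])]
            for i in range(1, len(series) - 1):
                if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
                    marks.append((i, series[i]))
            marks.append((len(series) - 1, series[-1]))
            return marks

        def match_template(series_marks, template_marks, time_tol=0.5, mag_tol=0.5):
            """Slide the template's landmarks over the series' landmarks and report
            start indices whose step-wise timing and magnitude changes agree with
            the template within the given relative tolerances."""
            m = len(template_marks)
            hits = []
            for s in range(len(series_marks) - m + 1):
                window = series_marks[s:s + m]
                ok = True
                for k in range(1, m):
                    dt_w = window[k][0] - window[k - 1][0]
                    dt_t = template_marks[k][0] - template_marks[k - 1][0]
                    dv_w = window[k][1] - window[k - 1][1]
                    dv_t = template_marks[k][1] - template_marks[k - 1][1]
                    if (abs(dt_w - dt_t) > time_tol * max(dt_t, 1)
                            or abs(dv_w - dv_t) > mag_tol * max(abs(dv_t), 1e-9)):
                        ok = False
                        break
                if ok:
                    hits.append(window[0][0])
            return hits

        series = [0, 1, 3, 1, 0, 2, 5, 2, 0]
        template = [0, 4, 0]                       # a single peak
        print(match_template(landmarks(series), landmarks(template),
                             time_tol=1.0, mag_tol=0.5))   # [0, 4]: two peak-shaped matches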

    Provider-Controlled Bandwidth Management for HTTP-based Video Delivery

    Over the past few years, a revolution in video delivery technology has taken place as mobile viewers and over-the-top (OTT) distribution paradigms have significantly changed the landscape of video delivery services. For decades, high quality video was only available in the home via linear television or physical media. Though Web-based services brought video to desktop and laptop computers, the dominance of proprietary delivery protocols and codecs inhibited research efforts. The recent emergence of HTTP adaptive streaming protocols has prompted a re-evaluation of legacy video delivery paradigms and introduced new questions as to the scalability and manageability of OTT video delivery. This dissertation addresses the question of how to enable content and network service providers to monitor and manage large numbers of HTTP adaptive streaming clients in an OTT environment. Our early work focused on demonstrating the viability of server-side pacing schemes to produce an HTTP-based streaming server. We also investigated the ability of client-side pacing schemes to work with both commodity HTTP servers and our HTTP streaming server. Continuing our client-side pacing research, we developed our own client-side data proxy architecture, which was implemented on a variety of mobile devices and operating systems. We used the portable client architecture as a platform for investigating different rate adaptation schemes and algorithms. We then concentrated on evaluating the network impact of multiple adaptive bitrate clients competing for limited network resources, and on developing schemes for enforcing fair access to network resources. The main contribution of this dissertation is the definition of segment-level client and network techniques for enforcing class of service (CoS) differentiation between OTT HTTP adaptive streaming clients. We developed a segment-level network proxy architecture which works transparently with adaptive bitrate clients through the use of segment replacement. We also defined a segment-level rate adaptation algorithm which uses download aborts to enforce CoS differentiation across distributed independent clients. The segment-level abstraction more accurately models application-network interactions and highlights the difference between segment-level and packet-level time scales. Our segment-level CoS enforcement techniques provide a foundation for creating scalable managed OTT video delivery services.
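
    The segment-level ideas above, choosing a bitrate rung from a class-of-service share of the estimated throughput and aborting a segment download that will not finish in time, can be sketched as follows. This is a schematic illustration only, not the dissertation's proxy architecture or adaptation algorithm; the bitrate ladder, segment duration and fetch_chunk callback are hypothetical.

        import time

        BITRATES_KBPS = [400, 1000, 2500, 5000]    # hypothetical adaptive-bitrate ladder
        SEGMENT_SECONDS = 4                        # media duration per segment

        def pick_bitrate(throughput_kbps, cos_share=1.0):
            """Choose the highest rung that fits within this client's
            class-of-service share of the estimated throughput."""
            budget = throughput_kbps * cos_share
            fitting = [b for b in BITRATES_KBPS if b <= budget]
            return fitting[-1] if fitting else BITRATES_KBPS[0]

        def download_segment(fetch_chunk, segment_bits, deadline_s):
            """Fetch a segment chunk by chunk; abort (return None) as soon as the
            projected completion time exceeds deadline_s, so the caller can
            re-request the segment at a lower rung."""
            got = 0
            start = time.monotonic()
            while got < segment_bits:
                got += fetch_chunk()                           # bits received this chunk
                elapsed = time.monotonic() - start
                projected = elapsed * segment_bits / max(got, 1)
                if projected > deadline_s:
                    return None                                # download abort -> switch down
            return time.monotonic() - start

        # Simulated use: half of an 8 Mbps link is this client's CoS share.
        rate_kbps = pick_bitrate(throughput_kbps=8000, cos_share=0.5)        # -> 2500
        chunks = iter([500_000] * 20)                                        # simulated chunk sizes (bits)
        elapsed = download_segment(lambda: next(chunks),
                                   segment_bits=rate_kbps * 1000 * SEGMENT_SECONDS,
                                   deadline_s=SEGMENT_SECONDS)
        print(rate_kbps, elapsed)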

    On the development of slime mould morphological, intracellular and heterotic computing devices

    The use of live biological substrates in the fabrication of unconventional computing (UC) devices is steadily transcending the barriers between science fiction and reality, but efforts in this direction are impeded by ethical considerations, the field’s restrictively broad multidisciplinarity and our incomplete knowledge of fundamental biological processes. As such, very few functional prototypes of biological UC devices have been produced to date. This thesis aims to demonstrate the computational polymorphism and polyfunctionality of a chosen biological substrate, the slime mould Physarum polycephalum (an arguably ‘simple’ single-celled organism), and how these properties can be harnessed to create laboratory prototypes of functionally useful biological UC devices. Computing devices utilising live slime mould as their key constituent element can be developed into a) heterotic, or hybrid, devices, which are based on electrical recognition of slime mould behaviour via machine-organism interfaces, b) whole-organism-scale morphological processors, whose output is the organism’s morphological adaptation to environmental stimuli (input), and c) intracellular processors wherein data are represented by energetic signalling events mediated by the cytoskeleton, a nano-scale protein network. It is demonstrated that each category of device is capable of implementing logic and, furthermore, that specific applications may be engineered for each class, such as image processing applications for morphological processors and biosensors in the case of heterotic devices. The results presented are supported by a range of computer modelling experiments using cellular automata and multi-agent modelling. We conclude that P. polycephalum is a polymorphic UC substrate insofar as it can process multimodal sensory input, and polyfunctional in its demonstrable ability to undertake a variety of computing problems. Furthermore, our results are highly applicable to the study of other living UC substrates and will inform future work in UC, biosensing, and biomedicine.
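
    As a hint of what the multi-agent modelling mentioned above can look like, the toy sketch below implements a generic agent-based trail-following model loosely inspired by Physarum simulations: agents sense a chemoattractant trail ahead of them, turn towards stronger deposits, move, and lay down more trail, which slowly evaporates. Grid size, agent count and all parameters are arbitrary, and this is not the modelling framework used in the thesis.

        import math
        import random

        SIZE = 64                                 # toy lattice with periodic boundaries
        trail = [[0.0] * SIZE for _ in range(SIZE)]
        agents = [{"x": random.uniform(0, SIZE), "y": random.uniform(0, SIZE),
                   "h": random.uniform(0, 2 * math.pi)} for _ in range(200)]

        def sense(agent, offset, dist=3.0):
            """Read the trail value at a sensor placed ahead of the agent."""
            ang = agent["h"] + offset
            x = int(agent["x"] + dist * math.cos(ang)) % SIZE
            y = int(agent["y"] + dist * math.sin(ang)) % SIZE
            return trail[y][x]

        def step(turn=math.radians(45), sensor=math.radians(45), speed=1.0, decay=0.9):
            for a in agents:
                left, ahead, right = sense(a, -sensor), sense(a, 0.0), sense(a, sensor)
                if left > ahead and left >= right:            # steer towards the strongest trail
                    a["h"] -= turn
                elif right > ahead and right > left:
                    a["h"] += turn
                a["x"] = (a["x"] + speed * math.cos(a["h"])) % SIZE
                a["y"] = (a["y"] + speed * math.sin(a["h"])) % SIZE
                trail[int(a["y"])][int(a["x"])] += 1.0        # deposit chemoattractant
            for row in trail:                                 # trail evaporation
                for i in range(len(row)):
                    row[i] *= decay

        for _ in range(100):
            step()
        print(max(max(row) for row in trail))                 # trail peaks mark emergent paths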

    Serial and parallel kernelization of Multiple Hitting Set parameterized by the Dilworth number, implemented on the GPU

    The NP-hard Multiple Hitting Set problem asks for a minimum-cardinality set that intersects each set in a given input collection a given number of times. Generalizing a well-known data reduction algorithm due to Weihe, we show a problem kernel for Multiple Hitting Set parameterized by the Dilworth number, a graph parameter introduced by Foldes and Hammer in 1978, yet seemingly unexplored so far in the context of parameterized complexity theory. Using matrix multiplication, we speed up the algorithm to quadratic sequential time and logarithmic parallel time. We experimentally evaluate our algorithms. By implementing our algorithm on GPUs, we show the feasibility of realizing kernelization algorithms on SIMD (Single Instruction, Multiple Data) architectures. Comment: Added experiments on one more data set
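
    For context, the classical data reduction due to Weihe for (single) Hitting Set, which the paper generalizes, consists of two domination rules that can be sketched in a few lines of Python. The sketch below is a plain fixpoint loop, not the paper's Dilworth-number kernel, its matrix-multiplication speedup, or the GPU implementation.

        def weihe_reduce(sets):
            """Weihe-style domination rules for (single) Hitting Set, applied to a
            fixpoint: drop a set that is a superset of another set (hitting the
            smaller set already hits it), and drop an element whose occurrences are
            contained in another element's occurrences (the other element is never
            a worse choice for a hitting set)."""
            sets = [set(s) for s in sets]
            changed = True
            while changed:
                changed = False
                # Rule 1: set domination -- remove supersets (and duplicates).
                keep = []
                for i, s in enumerate(sets):
                    dominated = any(i != j and t <= s and (t < s or j < i)
                                    for j, t in enumerate(sets))
                    if dominated:
                        changed = True
                    else:
                        keep.append(s)
                sets = keep
                # Rule 2: element domination -- remove dominated elements.
                elems = set().union(*sets) if sets else set()
                occ = {e: frozenset(i for i, s in enumerate(sets) if e in s) for e in elems}
                for e in elems:
                    if any(f != e and occ[e] <= occ[f] and (occ[e] < occ[f] or f < e)
                           for f in elems):
                        for s in sets:
                            s.discard(e)
                        changed = True
            return sets

        print(weihe_reduce([{1, 2}, {1, 2, 3}, {2, 4}]))   # -> [{2}]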

    Clustering Arabic Tweets for Sentiment Analysis

    The focus of this study is to evaluate the impact of linguistic preprocessing and similarity functions on the clustering of Arabic Twitter tweets. The experiments apply an optimized version of the standard K-Means algorithm to assign tweets to positive and negative categories. The results show that root-based stemming has a significant advantage over light stemming in all settings. The Averaged Kullback-Leibler Divergence similarity function clearly outperforms the Cosine, Pearson Correlation, Jaccard Coefficient and Euclidean functions. The combination of the Averaged Kullback-Leibler Divergence and root-based stemming achieved the highest purity of 0.764, while the second-best purity was 0.719. These results are important because they run contrary to the behaviour observed for normal-sized documents, where, in many information retrieval applications, light stemming performs better than root-based stemming and the Cosine function is commonly used.
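
    For reference, one commonly used formulation of the averaged Kullback-Leibler divergence between the term-probability distributions of two documents (here, tweets) is sketched below; the exact variant and preprocessing used in the study may differ, and the toy vocabulary and probabilities are made up.

        import math

        def avg_kl_divergence(p, q, eps=1e-12):
            """Averaged KL divergence between two term-probability distributions.
            Per term t: mix m_t = w1*p_t + w2*q_t with w1 = p_t/(p_t+q_t) and
            w2 = q_t/(p_t+q_t), then sum w1*p_t*log(p_t/m_t) + w2*q_t*log(q_t/m_t).
            Smaller values mean more similar distributions."""
            total = 0.0
            for pt, qt in zip(p, q):
                pt, qt = pt + eps, qt + eps               # smoothing avoids log(0)
                w1, w2 = pt / (pt + qt), qt / (pt + qt)
                mt = w1 * pt + w2 * qt
                total += w1 * pt * math.log(pt / mt) + w2 * qt * math.log(qt / mt)
            return total

        # Toy example: term distributions of two tweets over a shared vocabulary.
        tweet_a = [0.5, 0.3, 0.2, 0.0]
        tweet_b = [0.4, 0.4, 0.1, 0.1]
        print(avg_kl_divergence(tweet_a, tweet_b))        # smaller = more similar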
