150 research outputs found

    Study Of Statistical Models For Route Prediction Algorithms In VANET

    Get PDF
    Vehicle-to-vehicle communication is a concept that has been studied extensively in recent years. Vehicles equipped with devices capable of short-range wireless connectivity can form a particular kind of mobile ad-hoc network, called a Vehicular Ad-hoc NETwork (VANET). The users of a VANET, whether drivers or passengers, can be provided with useful information and a wide range of services. Route prediction is the missing piece in several proposed ideas for intelligent vehicles. In this paper, we study algorithms that predict a vehicle's entire route as it is driven. Such predictions are useful for warning the driver about upcoming traffic hazards or informing them about upcoming points of interest, including advertising. This paper describes route prediction algorithms using the Markov model, the Hidden Markov Model (HMM), and the variable-order Markov model (VMM). Keywords: VANET, MANET, ITS, GPS, HMM, VMM, PST
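    The simplest of the models this abstract names is the plain Markov model, which predicts the next road segment from transition counts observed in past trips. A minimal sketch of that idea (the segment IDs and class name here are illustrative assumptions, not from the paper):

    ```python
    from collections import defaultdict

    class MarkovRoutePredictor:
        """First-order Markov model over road segments.

        Counts observed transitions between consecutive segments in
        training trips, then predicts the most frequent successor of
        the current segment.
        """

        def __init__(self):
            # segment -> {next_segment: count}
            self.transitions = defaultdict(lambda: defaultdict(int))

        def train(self, route):
            # route: ordered list of road-segment IDs from one observed trip
            for current, nxt in zip(route, route[1:]):
                self.transitions[current][nxt] += 1

        def predict_next(self, segment):
            # Return the most frequently observed successor, or None
            # if this segment was never seen in training.
            candidates = self.transitions.get(segment)
            if not candidates:
                return None
            return max(candidates, key=candidates.get)
    ```

    The HMM and VMM approaches in the paper generalize this by, respectively, treating the route as a latent state sequence and conditioning on variable-length segment histories.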

    A Deep Context Grammatical Model For Authorship Attribution

    Get PDF
    We define a variable-order Markov model, representing a Probabilistic Context Free Grammar, built from the sentence-level, delexicalized parse of source texts generated by a standard lexicalized parser, which we apply to the authorship attribution task. First, we motivate this model in the context of previous research on syntactic features in the area, outlining some of the general strengths and limitations of the overall approach. Next we describe the procedure for building syntactic models for each author based on training cases. We then outline the attribution process – assigning authorship to the model which yields the highest probability for the given test case. We demonstrate the model's efficacy for authorship attribution over different Markov orders and compare it against syntactic features trained by a linear kernel SVM. We find that the model performs somewhat less successfully than the SVM over similar features. In the conclusion, we outline how we plan to employ the model for syntactic evaluation of literary texts.

    PPM-Decay: A computational model of auditory prediction with memory decay

    Get PDF
    Statistical learning and probabilistic prediction are fundamental processes in auditory cognition. A prominent computational model of these processes is Prediction by Partial Matching (PPM), a variable-order Markov model that learns by internalizing n-grams from training sequences. However, PPM has limitations as a cognitive model: in particular, it has a perfect memory that weights all historic observations equally, which is inconsistent with the memory capacity constraints and recency effects observed in human cognition. We address these limitations with PPM-Decay, a new variant of PPM that introduces a customizable memory decay kernel. In three studies—one with artificially generated sequences, one with chord sequences from Western music, and one with new behavioral data from an auditory pattern detection experiment—we show how this decay kernel improves the model's predictive performance for sequences whose underlying statistics change over time, and enables the model to capture effects of memory constraints on auditory pattern detection. The resulting model is available in our new open-source R package, ppm (https://github.com/pmcharrison/ppm).
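    The core idea—n-gram counts whose weight decays with time since observation—can be illustrated with a toy sketch. This is not the authors' implementation (their model lives in the R package `ppm`, and their decay kernel is more elaborate); the exponential half-life parameterization below is an illustrative assumption:

    ```python
    from collections import defaultdict

    class DecayingNGramModel:
        """Toy n-gram model whose counts decay exponentially with the
        number of events elapsed since each observation, loosely
        inspired by the PPM-Decay idea of weighting recent context
        more heavily than distant history.
        """

        def __init__(self, order=2, half_life=10.0):
            self.order = order
            self.half_life = half_life    # decay half-life, in events
            self.time = 0                 # global event counter
            # context tuple -> {symbol: (weight, last_update_time)}
            self.counts = defaultdict(dict)

        def _decay(self, weight, last_t):
            elapsed = self.time - last_t
            return weight * 0.5 ** (elapsed / self.half_life)

        def observe(self, sequence):
            for i, symbol in enumerate(sequence):
                context = tuple(sequence[max(0, i - self.order):i])
                w, t = self.counts[context].get(symbol, (0.0, self.time))
                # Decay the stored weight up to now, then add this observation.
                self.counts[context][symbol] = (self._decay(w, t) + 1.0, self.time)
                self.time += 1

        def probability(self, context, symbol):
            entries = self.counts.get(tuple(context), {})
            decayed = {s: self._decay(w, t) for s, (w, t) in entries.items()}
            total = sum(decayed.values())
            return decayed.get(symbol, 0.0) / total if total else 0.0
    ```

    Because old counts shrink, the model's predictions track the recent statistics of a sequence rather than its full history—the property the paper argues a cognitively plausible model needs.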

    Maximum entropy models capture melodic styles

    Full text link
    We introduce a Maximum Entropy model able to capture the statistics of melodies in music. The model can be used to generate new melodies that emulate the style of the musical corpus which was used to train it. Instead of using the n-body interactions of (n-1)-order Markov models, traditionally used in automatic music generation, we use a k-nearest-neighbour model with pairwise interactions only. In that way, we keep the number of parameters low and avoid the over-fitting problems typical of Markov models. We show that long-range musical phrases don't need to be explicitly enforced using high-order Markov interactions, but can instead emerge from multiple, competing, pairwise interactions. We validate our Maximum Entropy model by contrasting how much the generated sequences capture the style of the original corpus without plagiarizing it. To this end we use a data-compression approach to discriminate the levels of borrowing and innovation featured by the artificial sequences. The results show that our modelling scheme outperforms both fixed-order and variable-order Markov models. This shows that, despite being based only on pairwise interactions, this Maximum Entropy scheme opens the possibility to generate musically sensible alterations of the original phrases, providing a way to generate innovation.

    Bayesian analysis of variable-order, reversible Markov chains

    Full text link
    We define a conjugate prior for the reversible Markov chain of order r. The prior arises from a partially exchangeable reinforced random walk, in the same way that the Beta distribution arises from the exchangeable Pólya urn. An extension to variable-order Markov chains is also derived. We show the utility of this prior in testing the order and estimating the parameters of a reversible Markov model. Published in the Annals of Statistics (http://dx.doi.org/10.1214/10-AOS857) by the Institute of Mathematical Statistics (http://www.imstat.org/aos/).

    Breaking the habit: measuring and predicting departures from routine in individual human mobility

    No full text
    Researchers studying daily life mobility patterns have recently shown that humans are typically highly predictable in their movements. However, no existing work has examined the boundaries of this predictability, where human behaviour transitions temporarily from routine patterns to highly unpredictable states. To address this shortcoming, we tackle two interrelated challenges. First, we develop a novel information-theoretic metric, called instantaneous entropy, to analyse an individual's mobility patterns and identify temporary departures from routine. Second, to predict such departures in the future, we propose the first Bayesian framework that explicitly models breaks from routine, showing that it outperforms current state-of-the-art predictors.
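    The intuition behind an entropy-based routine metric can be illustrated with a simplified, hypothetical stand-in: Shannon entropy computed over a sliding window of visited locations, where spikes flag moments when behaviour departs from routine. The paper's actual "instantaneous entropy" metric is defined differently; this sketch only conveys the general idea:

    ```python
    import math
    from collections import Counter

    def windowed_entropy(locations, window=20):
        """Shannon entropy (bits) of location visits over a sliding window.

        Low values indicate routine (few locations dominate the window);
        spikes indicate temporary departures from routine.
        """
        entropies = []
        for i in range(window, len(locations) + 1):
            counts = Counter(locations[i - window:i])
            total = sum(counts.values())
            h = -sum((c / total) * math.log2(c / total)
                     for c in counts.values())
            entropies.append(h)
        return entropies
    ```

    For a perfectly regular home/work alternation the windowed entropy stays flat at 1 bit, while a burst of novel locations pushes it upward—the kind of transition the paper's Bayesian framework aims to predict in advance.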