
    Analysing Temporal Relations – Beyond Windows, Frames and Predicates

    This article proposes an approach that relies on the standard operators of relational algebra (including grouping and aggregation) for processing complex events without requiring window specifications. In this way the approach can process complex event queries of the kind encountered in applications such as emergency management in metro networks. This article presents Temporal Stream Algebra (TSA), which combines the operators of relational algebra with an analysis of temporal relations at compile time. This analysis determines which relational algebra queries can be evaluated against data streams, i.e. the analysis is able to distinguish valid from invalid stream queries. Furthermore, the analysis derives functions similar to the pass, propagation and keep invariants in Tucker et al.'s "Exploiting Punctuation Semantics in Continuous Data Streams". These functions enable the incremental evaluation of TSA queries, the propagation of punctuations, and garbage collection. The evaluation of TSA queries combines bulk-wise and out-of-order processing, which makes it tolerant to workload bursts as they typically occur in emergency management. The approach has been conceived for efficiently processing complex event queries on top of a relational database system. It has been deployed and tested on MonetDB.
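
    The pass/propagation/keep idea described above can be illustrated with a small sketch. The following Python fragment (illustrative only; the class name, interface and grouping logic are assumptions, not the paper's implementation) shows how an incremental count-by-key operator can buffer partial aggregates, emit them once a punctuation closes their timestamps, and garbage-collect the corresponding state.

```python
# Hypothetical sketch of punctuation-driven incremental aggregation,
# in the spirit of TSA's pass/propagation/keep functions (names and
# structure are illustrative, not the paper's implementation).
from collections import defaultdict

class IncrementalCountByKey:
    """Counts events per key; a punctuation with timestamp t guarantees
    no further events with timestamp <= t arrive, so finished groups can
    be emitted and their state garbage-collected."""

    def __init__(self):
        self.state = defaultdict(lambda: defaultdict(int))  # t -> key -> count

    def on_event(self, timestamp, key):
        # "keep": buffer partial aggregates until a punctuation arrives.
        self.state[timestamp][key] += 1

    def on_punctuation(self, watermark):
        # "pass": emit results whose timestamps are closed by the punctuation.
        ready = sorted(t for t in self.state if t <= watermark)
        results = [(t, dict(self.state[t])) for t in ready]
        # garbage collection: drop state that can no longer change.
        for t in ready:
            del self.state[t]
        # "propagation": the punctuation itself is forwarded downstream.
        return results, watermark


op = IncrementalCountByKey()
op.on_event(1, "lineA"); op.on_event(1, "lineB"); op.on_event(2, "lineA")
print(op.on_punctuation(1))  # ([(1, {'lineA': 1, 'lineB': 1})], 1)
```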

    Balancing Global Exploration and Local-connectivity Exploitation with Rapidly-exploring Random disjointed-Trees

    Sampling efficiency in a highly constrained environment has long been a major challenge for sampling-based planners. In this work, we propose Rapidly-exploring Random disjointed-Trees* (RRdT*), an incremental optimal multi-query planner. RRdT* uses multiple disjointed trees to exploit the local connectivity of spaces via Markov Chain random sampling, which utilises neighbourhood information derived from previous successful and failed samples. To balance local exploitation, RRdT* actively explores unseen global spaces when local-connectivity exploitation is unsuccessful. The active trade-off between local exploitation and global exploration is formulated as a multi-armed bandit problem. We argue that the active balancing of global exploration and local exploitation is the key to improving sample efficiency in sampling-based motion planners. We provide rigorous proofs of completeness and optimal convergence for this novel approach. Furthermore, we demonstrate experimentally the effectiveness of RRdT*'s locally exploring trees in granting improved visibility for planning. Consequently, RRdT* outperforms existing state-of-the-art incremental planners, especially in highly constrained environments. Comment: Submitted to IEEE International Conference on Robotics and Automation (ICRA) 201
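
    As a rough illustration of the bandit formulation mentioned above, the sketch below implements a generic UCB1 arm-selection loop in Python, where one arm stands for global exploration and the remaining arms for growing individual disjointed trees. The arm set, reward definition and class name are hypothetical stand-ins, not RRdT*'s actual policy.

```python
# Illustrative UCB1 bandit for choosing which disjointed tree (or the
# global sampler) to grow next; the reward model and arm set are
# hypothetical stand-ins for RRdT*'s actual formulation.
import math
import random

class TreeBandit:
    def __init__(self, num_trees):
        # arm 0 = global uniform exploration, arms 1..num_trees = local trees
        self.counts = [0] * (num_trees + 1)
        self.values = [0.0] * (num_trees + 1)

    def select_arm(self):
        total = sum(self.counts)
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm  # play each arm once before applying UCB
        ucb = [v + math.sqrt(2 * math.log(total) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, reward):
        # incremental mean of observed rewards (e.g. 1 if the sampled
        # extension was collision-free, 0 otherwise)
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


bandit = TreeBandit(num_trees=3)
for _ in range(100):
    arm = bandit.select_arm()
    reward = 1.0 if random.random() < (0.2 + 0.2 * arm) else 0.0  # fake environment
    bandit.update(arm, reward)
print(bandit.counts)  # the more successful "trees" get pulled more often
```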

    Temporal Stream Algebra

    Data stream management systems (DSMS) so far focus on event queries and hardly consider combined queries over both data from event streams and data from a database. However, applications like emergency management require such combined data stream and database queries. Further requirements are the simultaneous use of multiple timestamps following different time lines and semantics, expressive temporal relations between multiple timestamps, and flexible negation, grouping and aggregation which can be controlled, i.e. started and stopped, by events and are not limited to fixed-size time windows. Current DSMS hardly address these requirements. This article proposes Temporal Stream Algebra (TSA) so as to meet the aforementioned requirements. Temporal streams are a common abstraction of data streams and database relations; the operators of TSA are generalizations of the usual operators of Relational Algebra. An in-depth analysis of temporal relations guarantees that valid TSA expressions are non-blocking, i.e. can be evaluated incrementally. In this respect TSA differs significantly from previous algebraic approaches which use specialized operators to prevent blocking expressions on a "syntactical" level.
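
    To make the "common abstraction" point concrete, here is a minimal Python sketch in which both a stored relation and a live event stream are exposed as iterators of timestamped tuples, so the same generalized operators (a union-style merge and a selection) can consume either. The function names and interface are assumptions for illustration, not TSA's actual operator definitions.

```python
# A minimal sketch of the "temporal stream" abstraction: both a stored
# database relation and a live event stream are exposed as iterators of
# timestamped tuples, so the same (generalized relational) operator can
# consume either. Names are illustrative, not TSA's actual interface.
import heapq

def relation_as_temporal_stream(rows):
    """A finite database relation whose tuples carry a timestamp."""
    yield from sorted(rows, key=lambda r: r[0])   # (timestamp, payload)

def merge_temporal_streams(*streams):
    """Timestamp-ordered merge: a generalized union over temporal streams."""
    yield from heapq.merge(*streams, key=lambda r: r[0])

def temporal_select(stream, predicate):
    """Generalized selection; non-blocking, so it works on unbounded input."""
    for ts, payload in stream:
        if predicate(payload):
            yield ts, payload

stored = relation_as_temporal_stream([(2, "sensor-B"), (1, "sensor-A")])
live = iter([(3, "alarm"), (5, "alarm")])            # could be unbounded
merged = merge_temporal_streams(stored, live)
print(list(temporal_select(merged, lambda p: p != "sensor-B")))
# [(1, 'sensor-A'), (3, 'alarm'), (5, 'alarm')]
```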

    Practical and Efficient Split Decomposition via Graph-Labelled Trees

    Split decomposition of graphs was introduced by Cunningham (under the name join decomposition) as a generalization of the modular decomposition. This paper undertakes an investigation into the algorithmic properties of split decomposition. We do so in the context of graph-labelled trees (GLTs), a new combinatorial object designed to simplify its consideration. GLTs are used to derive an incremental characterization of split decomposition, with a simple combinatorial description, and to explore its properties with respect to Lexicographic Breadth-First Search (LBFS). Applying the incremental characterization to an LBFS ordering results in a split decomposition algorithm that runs in time O((n+m)α(n+m)), where α is the inverse Ackermann function, whose value is smaller than 4 for any practical graph. Compared to Dahlhaus' linear-time split decomposition algorithm [Dahlhaus'00], which does not rely on an incremental construction, our algorithm is just as fast in all but the asymptotic sense, and full implementation details are given in this paper. Also, our algorithm extends to circle graph recognition, whereas no such extension is known for Dahlhaus' algorithm. The companion paper [Gioan et al.] uses our algorithm to derive the first sub-quadratic circle graph recognition algorithm.
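
    Since the algorithm is driven by an LBFS ordering, a generic LBFS via partition refinement is sketched below in Python; this is the standard textbook procedure, not the paper's full GLT-based split decomposition algorithm.

```python
# Lexicographic Breadth-First Search (LBFS) via partition refinement; the
# paper processes vertices in such an ordering to build the split
# decomposition incrementally. This is a generic LBFS, not the paper's
# GLT-based algorithm.
def lbfs_order(adj):
    """adj: dict mapping vertex -> set of neighbours. Returns an LBFS ordering."""
    classes = [list(adj)]          # ordered partition; first class has priority
    order = []
    while classes:
        head = classes[0]
        v = head.pop()             # any vertex of the first class works
        if not head:
            classes.pop(0)
        order.append(v)
        refined = []
        for cls in classes:
            inside = [u for u in cls if u in adj[v]]
            outside = [u for u in cls if u not in adj[v]]
            # neighbours of v move in front of non-neighbours within the class
            if inside:
                refined.append(inside)
            if outside:
                refined.append(outside)
        classes = refined
    return order

graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(lbfs_order(graph))   # [4, 3, 2, 1] -- one valid LBFS ordering of this graph
```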

    Partition strategies for incremental Mini-Bucket

    Get PDF
    Probabilistic graphical models such as Markov random fields and Bayesian networks provide powerful frameworks for knowledge representation and reasoning over models with large numbers of variables. Unfortunately, exact inference problems on graphical models are generally NP-hard, which has led to significant interest in approximate inference algorithms. Incremental mini-bucket is a framework for approximate inference that provides upper and lower bounds on the exact partition function by starting from a model with completely relaxed constraints, i.e. with the smallest possible regions, and incrementally adding larger regions to the approximation. Current approximate inference algorithms provide tight upper bounds on the exact partition function but loose or trivial lower bounds. This project focuses on researching partitioning strategies that improve the lower bounds obtained with mini-bucket elimination, working within the framework of incremental mini-bucket. We start from the idea that variables that are highly correlated should be reasoned about together, and we develop a strategy for region selection based on that idea. We implement the strategy and explore ways to improve it, and finally we measure the results obtained using the strategy and compare them to several baselines. We find that our strategy obtains tighter lower bounds than both of our baselines. We also consider and rule out two possible hypotheses that could explain the improvement.
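
    As a toy illustration of the correlation-guided region selection idea, the Python sketch below ranks candidate regions by the average absolute pairwise correlation of their variables and greedily keeps the strongest ones under a size limit (the i-bound). The scoring function, data and parameters are hypothetical; the project's actual strategy operates on the model's factors inside the incremental mini-bucket framework.

```python
# Toy correlation-guided region selection: rank candidate regions (sets
# of variables) by the strength of pairwise correlation among their
# variables and add the strongest ones first, subject to a size limit
# (the i-bound). Data and scoring are hypothetical stand-ins.
from itertools import combinations

def region_score(region, corr):
    """Average absolute pairwise correlation of the variables in the region."""
    pairs = list(combinations(sorted(region), 2))
    return sum(abs(corr[p]) for p in pairs) / len(pairs)

def select_regions(variables, corr, ibound, budget):
    """Greedily pick the `budget` highest-scoring regions of size <= ibound."""
    candidates = [frozenset(c)
                  for size in range(2, ibound + 1)
                  for c in combinations(variables, size)]
    candidates.sort(key=lambda r: region_score(r, corr), reverse=True)
    return candidates[:budget]

# toy pairwise correlations between 4 variables
corr = {(0, 1): 0.9, (0, 2): 0.1, (0, 3): 0.2,
        (1, 2): 0.8, (1, 3): 0.1, (2, 3): 0.05}
print(select_regions([0, 1, 2, 3], corr, ibound=3, budget=2))
# [frozenset({0, 1}), frozenset({1, 2})] -- the two most correlated pairs
```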