
    Isomorphism Checking for Symmetry Reduction

    In this paper, we show how isomorphism checking can be used as an effective technique for symmetry reduction. Reduced state spaces are equivalent to the original ones under a strong notion of bisimilarity which preserves the multiplicity of outgoing transitions, and therefore also preserves stochastic temporal logics. We have implemented this in a setting where states are arbitrary graphs. Since no efficiently computable canonical representation is known for arbitrary graphs modulo isomorphism, we define an isomorphism-predicting hash function on the basis of an existing partition refinement algorithm. As an example, we report a factorial state space reduction on a model of an ad-hoc network connectivity protocol.
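The idea of an isomorphism-predicting hash built on partition refinement can be sketched with colour refinement (Weisfeiler-Leman style); the abstract does not specify the paper's exact algorithm, so the rounds parameter and hashing scheme below are illustrative assumptions:

```python
from collections import Counter

def iso_hash(adj, rounds=3):
    """Isomorphism-invariant hash of an undirected graph via colour
    refinement (a partition refinement in Weisfeiler-Leman style).
    `adj` maps each node to the set of its neighbours.
    Isomorphic graphs always get equal hashes; equal hashes only
    *predict* isomorphism (collisions are possible)."""
    colour = {v: len(adj[v]) for v in adj}  # initial colours: degrees
    for _ in range(rounds):
        # refine: combine own colour with the sorted neighbour colours
        colour = {v: hash((colour[v],
                           tuple(sorted(colour[u] for u in adj[v]))))
                  for v in adj}
    # the multiset of final colours is invariant under relabelling
    return hash(tuple(sorted(Counter(colour.values()).items())))
```

Because the hash depends only on the multiset of refined colours, any relabelling of the nodes leaves it unchanged.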

    A Temporal Web Ontology Language

    The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies on the Semantic Web. In this paper, we present a temporal extension of the very expressive fragment SHIN(D) of the OWL-DL language, resulting in the tOWL language. Through a layered approach we introduce three extensions: i) Concrete Domains, which allows the representation of restrictions using concrete-domain binary predicates; ii) Temporal Representation, which introduces timepoints, relations between timepoints, intervals, and Allen’s 13 interval relations into the language; and iii) TimeSlices/Fluents, which implements a perdurantist view on individuals and allows for the representation of complex temporal aspects, such as process state transitions. We illustrate the expressiveness of the newly introduced language by providing a TBox representation of Leveraged Buy-Out (LBO) processes in financial applications and an ABox representation of one specific LBO.
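Allen's 13 interval relations, which the Temporal Representation layer builds on, can be classified for closed intervals with a small decision procedure (the relation names and tuple encoding below are conventional, not taken from tOWL itself):

```python
def allen_relation(i, j):
    """Classify the Allen relation between intervals i=(s1,e1), j=(s2,e2),
    covering all 13 cases: before/after, meets/met-by, equal,
    starts/started-by, finishes/finished-by, during/contains,
    overlaps/overlapped-by."""
    (s1, e1), (s2, e2) = i, j
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equal"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"
```

The earlier branches exclude disjoint and endpoint-touching cases, so the final line only ever sees genuinely overlapping intervals.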

    Hierarchical recurrent neural encoder for video representation with application to captioning

    © 2016 IEEE. Recently, deep learning approaches, especially deep Convolutional Neural Networks (ConvNets), have achieved high accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation has become a fundamental problem for video content analysis. In this paper, we propose a new approach, the Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information in videos. Compared to recent video representation inference approaches, this paper makes three contributions. First, HRNE is able to efficiently exploit video temporal structure over a longer range by reducing the length of the input information flow and compositing multiple consecutive inputs at a higher level. Second, computation is significantly reduced while more non-linearity is attained. Third, HRNE is able to uncover temporal transitions between frame chunks at different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning, where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state of the art on video captioning benchmarks.
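The hierarchical idea (encode fixed-length chunks of frames, then encode the chunk summaries) can be sketched in plain Python; mean pooling stands in for the recurrent cell, and the chunk size is an illustrative assumption, not HRNE's actual configuration:

```python
def encode(seq):
    """Stand-in for a recurrent encoder: here just the element-wise mean
    of a list of vectors. A real HRNE uses recurrent units at each layer."""
    n = len(seq)
    return [sum(vec[i] for vec in seq) / n for i in range(len(seq[0]))]

def hrne(frames, chunk=4):
    """Hierarchical encoding: summarize chunks at the lower layer, then
    encode the summaries at the upper layer, shortening the information
    flow from len(frames) steps to about len(frames)/chunk + chunk steps."""
    chunks = [frames[i:i + chunk] for i in range(0, len(frames), chunk)]
    summaries = [encode(c) for c in chunks]   # lower layer: per-chunk codes
    return encode(summaries)                  # upper layer: video-level code
```

The shortened flow is what lets the upper layer see long-range structure in few steps.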

    Verifying Temporal Regular Properties of Abstractions of Term Rewriting Systems

    Tree automaton completion is an algorithm for proving safety properties of systems that can be modeled by a term rewriting system. This representation and verification technique works well for proving properties of infinite systems such as cryptographic protocols and, more recently, Java bytecode programs. The algorithm computes a tree automaton that represents a (regular) over-approximation of the set of terms reachable by rewriting from the initial terms. This approach is limited by the lack of information about the rewriting relation between terms: terms related by rewriting fall into the same equivalence class, i.e. they are recognized by the same state of the tree automaton. Our objective is to produce an automaton embedding an abstraction of the rewriting relation sufficient to prove temporal properties of the term rewriting system. We propose to extend the algorithm to produce an automaton with more equivalence classes, distinguishing a term or a subterm from its successors w.r.t. rewriting. While ground transitions are used to recognize equivalence classes of terms, epsilon-transitions represent the rewriting relation between terms. From the completed automaton, it is possible to automatically build a Kripke structure abstracting the rewriting sequence. States of the Kripke structure are states of the tree automaton, and the transition relation is given by the set of epsilon-transitions. States of the Kripke structure are labelled by the set of terms recognized using ground transitions. On this Kripke structure, we define Regular Linear Temporal Logic (R-LTL) for expressing properties; such properties can then be checked using standard model-checking algorithms. The only difference between LTL and R-LTL is that predicates are replaced by regular sets of acceptable terms.
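The construction the abstract describes - Kripke states from automaton states, transitions from epsilon-transitions, labels from ground transitions - can be sketched directly; the flat pair encoding of transitions and the self-loop on terminal states are simplifying assumptions:

```python
def kripke_from_automaton(ground, epsilon):
    """Build a Kripke structure abstracting the rewriting sequence.
    `ground`  : list of (term, state) pairs  -- ground transitions term -> state
    `epsilon` : list of (src, dst) pairs     -- epsilon-transitions (rewriting)
    Returns (states, transition relation, labelling)."""
    states = {q for _, q in ground} | {q for e in epsilon for q in e}
    # each Kripke state is labelled by the terms its automaton state recognizes
    labels = {q: frozenset(t for t, s in ground if s == q) for q in states}
    trans = set(epsilon)
    # model checking needs a total transition relation: give states with no
    # outgoing epsilon-transition a self-loop (an assumption of this sketch)
    for q in states:
        if not any(src == q for src, _ in trans):
            trans.add((q, q))
    return states, trans, labels
```

R-LTL predicates would then be evaluated against the term sets in `labels` rather than atomic propositions.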

    TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation

    Session-based recommendation nowadays plays a vital role in many websites, aiming to predict users' actions based on anonymous sessions. Many studies model a session as a sequence or a graph by investigating temporal transitions of items within the session. However, these methods compress a session into one fixed representation vector without considering the target items to be predicted. The fixed vector restricts the representation ability of the recommender model, given the diversity of target items and users' interests. In this paper, we propose a novel target attentive graph neural network (TAGNN) model for session-based recommendation. In TAGNN, target-aware attention adaptively activates different user interests with respect to varied target items. The learned interest representation vector varies with different target items, greatly improving the expressiveness of the model. Moreover, TAGNN harnesses the power of graph neural networks to capture rich item transitions in sessions. Comprehensive experiments conducted on real-world datasets demonstrate its superiority over state-of-the-art methods. Comment: 5 pages, accepted to SIGIR 2020, authors' version.
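The core target-aware attention idea - a session representation that varies with the candidate target item - can be sketched with plain dot-product softmax attention; the embeddings are toy values and the GNN item-transition encoder that TAGNN runs beforehand is omitted:

```python
import math

def target_attention(session, targets):
    """For each candidate target item, attend over the session's item
    embeddings, so the user-interest vector differs per target
    (the essence of target-aware attention; scoring function assumed)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    reps = []
    for t in targets:
        scores = [math.exp(dot(v, t)) for v in session]   # affinity to target
        z = sum(scores)
        weights = [s / z for s in scores]                 # softmax
        reps.append([sum(w * v[i] for w, v in zip(weights, session))
                     for i in range(len(t))])
    return reps
```

A fixed session vector would be identical for every target; here each target re-weights the session items.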

    Integrating Planning and Scheduling: A Constraint-based Approach

    Automated decision making is one of the important problems of Artificial Intelligence (AI). Planning and scheduling are two sub-fields of AI that study automated decision making. The main focus of planning is on general representations of actions, causal reasoning among actions, and domain-independent solving strategies. Scheduling generally optimizes problems with complex temporal and resource constraints but simpler causal relations between actions. However, some problems have both planning characteristics (causal constraints) and scheduling characteristics (temporal and resource constraints), with strong interactions between these constraints. An integrated approach is needed to solve this class of problems efficiently. The main contribution of this thesis is an integrated constraint-based planning and scheduling approach that can model and solve problems with both planning and scheduling characteristics. In our representation, problems are described using a multi-valued state-variable planning language with explicit representation of different types of resources, and a new action model in which each action is represented by a set of transitions. This action-transition model simplifies the representation of actions with delayed effects, effects with different durations, and complex temporal and resource constraints such as time windows, deadline goals, and sequence-dependent setup times. Constraint-based techniques have been successfully applied to scheduling problems; therefore, to solve a combined planning/scheduling problem we compile it into a CSP. This compilation is bounded by the number of action occurrences. The constraint model is based on the notion of “support” for each type of transition. It can be viewed as a system of CSPs, one for each state variable and resource, synchronized by a simple temporal network over action start times.
Central to our constraint model is the explicit representation and maintenance of precedence constraints between transitions on the same state variable or resource. We propose a branching scheme for solving the CSP based on establishing supports for transitions, which imply precedence constraints. Furthermore, we propose new propagation and inference techniques that infer precedence relations from temporal and mutex constraints, and infer tighter temporal bounds from the precedence constraints. The distinguishing feature of these techniques is that they consider not only the transitions and actions included in the plan but also actions and transitions not yet included in or excluded from the plan. We conclude the thesis with a modeling case study of a complex satellite problem domain to demonstrate the effectiveness of our representation. This domain has action choices that are tightly coupled with temporal and resource constraints. We show that most of the complexities of this problem can be expressed in our representation in a simple and intuitive way.
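The simple temporal network that synchronizes the per-variable CSPs can be checked for consistency with shortest paths on the distance graph; the event indexing and edge encoding below are illustrative, not the thesis's actual data structures:

```python
def stn_consistent(n, constraints):
    """Simple temporal network over events 0..n-1.
    `constraints` is a list of (i, j, w) meaning  t_j - t_i <= w.
    Runs Floyd-Warshall on the distance graph; a negative cycle
    (d[i][i] < 0) means the temporal constraints are inconsistent."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in constraints:
        d[i][j] = min(d[i][j], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    consistent = all(d[i][i] >= 0 for i in range(n))
    return consistent, d   # d also gives the tightest implied bounds
```

The all-pairs matrix `d` is what propagation would consult to tighten the temporal bounds mentioned in the abstract.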

    Charting the Realms of Mesoscale Cloud Organisation using Unsupervised Learning

    Quantifying the driving mechanisms of mesoscale shallow cloud organisation, and its effect on Earth's energy budget, remains difficult, partly because quantifying the atmosphere's organisational state through objective means remains challenging. We present the first map of the full continuum of convective organisation states by extracting the manifold within an unsupervised neural network's internal representation. On the manifold, distinct organisational regimes defined in prior work sit as waymarkers in this continuum. Compositing reanalysis and observations onto the manifold shows wind speed and water-vapour concentration to be key environmental characteristics varying with organisation. We show, for the first time, that mesoscale shallow cloud organisation produces ±1.4% variations in albedo in addition to variations from cloud-fraction changes alone. We further demonstrate how the manifold's continuum representation captures the temporal evolution of organisation. By enabling the study of states and transitions in organisation (in simulations and observations), the presented technique paves the way for better representation of shallow clouds in simulations of Earth's future climate.

    Deep Latent State Space Models for Time-Series Generation

    Methods based on ordinary differential equations (ODEs) are widely used to build generative models of time-series. In addition to the high computational overhead of explicitly computing the hidden-state recurrence, existing ODE-based models fall short in learning sequence data with sharp transitions - common in many real-world systems - due to numerical challenges during optimization. In this work, we propose LS4, a generative model for sequences with latent variables evolving according to a state space ODE to increase modeling capacity. Inspired by recent deep state space models (S4), we achieve speedups by leveraging a convolutional representation of LS4 which bypasses the explicit evaluation of hidden states. We show that LS4 significantly outperforms previous continuous-time generative models in terms of marginal distribution, classification, and prediction scores on real-world datasets in the Monash Forecasting Repository, and is capable of modeling highly stochastic data with sharp temporal transitions. LS4 sets the state of the art for continuous-time latent generative models, with significant improvements in mean squared error and tighter variational lower bounds on irregularly-sampled datasets, while also being 100× faster than other baselines on long sequences.
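The convolutional trick that lets S4-style models bypass explicit hidden-state evaluation can be illustrated for a scalar discrete-time linear state space model (scalar parameters are a simplifying assumption; S4/LS4 use structured state matrices and a continuous-time formulation):

```python
def ssm_recurrent(a, b, c, u):
    """y_k = c*x_k with x_{k+1} = a*x_k + b*u_k, x_0 = 0:
    the explicit (and sequential) hidden-state recurrence."""
    x, ys = 0.0, []
    for uk in u:
        ys.append(c * x)
        x = a * x + b * uk
    return ys

def ssm_convolutional(a, b, c, u):
    """The same input-output map computed as a convolution with the
    precomputed kernel K = (0, c*b, c*a*b, c*a^2*b, ...), which never
    materializes the hidden states x_k."""
    n = len(u)
    kernel = [0.0] + [c * (a ** i) * b for i in range(n - 1)]
    return [sum(kernel[k - j] * u[j] for j in range(k + 1)) for k in range(n)]
```

Unrolling the recurrence gives y_k = sum_{j<k} c·a^(k-1-j)·b·u_j, which is exactly the convolution above; the convolutional form is what admits fast parallel evaluation.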