
    Hybrid SAT-Based Consistency Checking Algorithms for Simple Temporal Networks with Decisions

    A Simple Temporal Network (STN) consists of time points modeling temporal events and constraints modeling the minimal and maximal temporal distance between them. A Simple Temporal Network with Decisions (STND) extends an STN by adding decision time points to model temporal plans with decisions. A decision time point is a special kind of time point that, once executed, allows a truth value to be decided for an associated Boolean proposition. Furthermore, STNDs label time points and constraints with conjunctions of literals stating for which scenarios (i.e., complete truth value assignments to the propositions) they are relevant. Thus, an STND models a family of STNs, each obtained as a projection of the initial STND onto a scenario. An STND is consistent if there exists a consistent scenario (i.e., a scenario such that the corresponding STN projection is consistent). Recently, a hybrid SAT-based consistency checking algorithm (HSCC) was proposed to check the consistency of an STND. Unfortunately, that approach lacks experimental evaluation and does not allow for the synthesis of all consistent scenarios. In this paper, we propose an incremental HSCC algorithm for STNDs that (i) is faster than the previous one and (ii) allows for the synthesis of all consistent scenarios and related early execution schedules (offline temporal planning). Then, we carry out an experimental evaluation with KAPPA, a tool that we developed for STNDs. Finally, we prove that STNDs and disjunctive temporal networks (DTNs) are equivalent.
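    A brief sketch of the scenario-projection view described above (illustrative only, not the paper's incremental HSCC algorithm or the KAPPA tool): enumerate truth assignments to the decision propositions, project the labeled constraints onto each scenario, and check each resulting STN for consistency with Floyd-Warshall. The data layout and function names are assumptions made for this sketch; the brute-force enumeration is exponential, which is exactly what the SAT-based approach is designed to avoid.

```python
# Illustrative sketch of STND consistency by scenario enumeration (brute force),
# not the incremental hybrid SAT-based (HSCC) algorithm described in the paper.
from itertools import product

INF = float("inf")

def stn_consistent(n, edges):
    """Floyd-Warshall on the distance graph; consistent iff no negative cycle."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:            # edge u -> v with weight w encodes  t_v - t_u <= w
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

def consistent_scenarios(n, labeled_edges, propositions):
    """labeled_edges: (u, v, w, label) where label is a dict {prop: bool} that
    must agree with a scenario for the constraint to apply in that scenario."""
    found = []
    for values in product([False, True], repeat=len(propositions)):
        scenario = dict(zip(propositions, values))
        projection = [(u, v, w) for u, v, w, lbl in labeled_edges
                      if all(scenario[p] == b for p, b in lbl.items())]
        if stn_consistent(n, projection):
            found.append(scenario)
    return found   # the STND is consistent iff this list is non-empty
```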

    A Regularized Graph Layout Framework for Dynamic Network Visualization

    Many real-world networks, including social and information networks, are dynamic structures that evolve over time. Such dynamic networks are typically visualized using a sequence of static graph layouts. In addition to providing a visual representation of the network structure at each time step, the sequence should preserve the mental map between layouts of consecutive time steps to allow a human to interpret the temporal evolution of the network. In this paper, we propose a framework for dynamic network visualization in the on-line setting where only present and past graph snapshots are available to create the present layout. The proposed framework creates regularized graph layouts by augmenting the cost function of a static graph layout algorithm with a grouping penalty, which discourages nodes from deviating too far from other nodes belonging to the same group, and a temporal penalty, which discourages large node movements between consecutive time steps. The penalties increase the stability of the layout sequence, thus preserving the mental map. We introduce two dynamic layout algorithms within the proposed framework, namely dynamic multidimensional scaling (DMDS) and dynamic graph Laplacian layout (DGLL). We apply these algorithms on several data sets to illustrate the importance of both grouping and temporal regularization for producing interpretable visualizations of dynamic networks.
    Comment: To appear in Data Mining and Knowledge Discovery, supporting material (animations and MATLAB toolbox) available at http://tbayes.eecs.umich.edu/xukevin/visualization_dmkd_201
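    A minimal sketch of the regularization idea (assumptions of the sketch: a plain MDS stress cost, gradient descent, and weights alpha and beta; the paper's DMDS and DGLL algorithms use their own cost functions and solvers):

```python
import numpy as np

def regularized_layout(D, groups, prev_pos, alpha=1.0, beta=1.0,
                       dim=2, iters=500, lr=0.01, seed=0):
    """Gradient descent on: MDS stress + alpha * grouping penalty + beta * temporal penalty.
    D        : (n, n) target graph distances for the current snapshot
    groups   : length-n array of group ids
    prev_pos : (n, dim) layout of the previous time step (None for the first step)
    """
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    n = D.shape[0]
    X = prev_pos.copy() if prev_pos is not None else rng.normal(size=(n, dim))
    for _ in range(iters):
        diff = X[:, None, :] - X[None, :, :]                    # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        # gradient of the stress  sum_ij (dist_ij - D_ij)^2
        grad = (4 * ((dist - D) / dist)[:, :, None] * diff).sum(axis=1)
        # grouping penalty: pull nodes toward their group centroid
        # (centroids treated as fixed within one iteration)
        centroids = np.array([X[groups == g].mean(axis=0) for g in groups])
        grad += 2 * alpha * (X - centroids)
        # temporal penalty: pull nodes toward their previous positions
        if prev_pos is not None:
            grad += 2 * beta * (X - prev_pos)
        X -= lr * grad
    return X
```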

    Generalized Rank Pooling for Activity Recognition

    Most popular deep models for action recognition split video sequences into short sub-sequences consisting of a few frames; frame-based features are then pooled for recognizing the activity. Usually, this pooling step discards the temporal order of the frames, which could otherwise be used for better recognition. Towards this end, we propose a novel pooling method, generalized rank pooling (GRP), that takes as input features from the intermediate layers of a CNN trained on tiny sub-sequences, and produces as output the parameters of a subspace which (i) provides a low-rank approximation to the features and (ii) preserves their temporal order. We propose to use these parameters as a compact representation for the video sequence, which is then used in a classification setup. We formulate an objective for computing this subspace as a Riemannian optimization problem on the Grassmann manifold, and propose an efficient conjugate gradient scheme for solving it. Experiments on several activity recognition datasets show that our scheme leads to state-of-the-art performance.
    Comment: Accepted at IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 201
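    The subspace idea can be illustrated with a simplified sketch: pool a clip's frame features into an orthonormal basis of their dominant directions. This covers only the low-rank approximation part; GRP additionally enforces the temporal ordering of the projections via Riemannian optimization on the Grassmann manifold, which is not reproduced here. The function names and the choice of a plain SVD are assumptions of this sketch.

```python
import numpy as np

def subspace_pool(frame_features, k=5):
    """Pool a clip into the top-k principal subspace of its frame features.
    frame_features : (T, d) array, one CNN feature vector per frame.
    Returns a (d, k) orthonormal basis used as the clip representation.
    NOTE: only the low-rank part of GRP; the paper also constrains the
    projections to preserve the temporal order of the frames.
    """
    X = frame_features - frame_features.mean(axis=0)   # center the features
    # right singular vectors span the dominant feature-space directions
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T                                     # (d, k) orthonormal basis

def grassmann_distance(U1, U2):
    """Compare two clip representations via principal angles between subspaces."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(theta)
```

    Such subspace descriptors could then feed a downstream classifier, for instance through a Grassmann kernel built from the distance above.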

    Checking Dynamic Consistency of Conditional Hyper Temporal Networks via Mean Payoff Games (Hardness and (pseudo) Singly-Exponential Time Algorithm)

    In this work we introduce the Conditional Hyper Temporal Network (CHyTN) model, a natural extension and generalization of both the Conditional Simple Temporal Network (CSTN) and the Hyper Temporal Network (HTN) models. Our contributions are as follows. We show that deciding whether a given CSTN or CHyTN is dynamically consistent is coNP-hard. We then prove that deciding whether a given CHyTN is dynamically consistent is PSPACE-hard, provided that the input instances are allowed to include both multi-head and multi-tail hyperarcs. In light of this, we continue our study by focusing on CHyTNs that allow only multi-head or only multi-tail hyperarcs, and we offer the first deterministic (pseudo) singly-exponential time algorithm for checking the dynamic consistency of such CHyTNs, which also produces a dynamic execution strategy whenever the input CHyTN is dynamically consistent. Since CSTNs are a special case of CHyTNs, this provides, as a byproduct, the first sound-and-complete (pseudo) singly-exponential time algorithm for checking dynamic consistency in CSTNs. The proposed algorithm is based on a novel connection between CSTNs/CHyTNs and Mean Payoff Games (MPGs); the presentation of this connection is mediated by the HTN model. In order to analyze the algorithm, we introduce a refined notion of dynamic consistency, named ε-dynamic-consistency, and present a sharp lower-bounding analysis on the critical value of the reaction time ε̂ at which a CSTN/CHyTN transits from being, to not being, dynamically consistent. The proof technique introduced in this analysis of ε̂ is applicable more generally when dealing with linear difference constraints that include strict inequalities.
    Comment: arXiv admin note: text overlap with arXiv:1505.0082
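    The reduction itself is beyond a short snippet, but the target object, a Mean Payoff Game, is easy to illustrate. The sketch below is the classic Zwick-Paterson value iteration, which approximates the mean payoff each player can secure from every vertex; deciding whether these values are non-negative is the kind of question the dynamic-consistency check is reduced to. The graph encoding and parameters are assumptions of this sketch, not the paper's construction.

```python
def mpg_values(n, edges, owner, steps):
    """Zwick-Paterson value iteration for a Mean Payoff Game.
    n     : number of vertices (every vertex must have at least one outgoing arc)
    edges : list of (u, v, w) meaning arc u -> v with integer payoff w
    owner : owner[u] is 'max' or 'min', the player who moves at vertex u
    steps : number of iterations; v_k(u)/k approaches the game value of u
            (within 2*n*W/steps of it, where W is the largest |w|)
    """
    out = [[] for _ in range(n)]
    for u, v, w in edges:
        out[u].append((v, w))
    v_prev = [0] * n
    for _ in range(steps):
        v_cur = []
        for u in range(n):
            vals = [w + v_prev[v] for v, w in out[u]]
            v_cur.append(max(vals) if owner[u] == 'max' else min(vals))
        v_prev = v_cur
    return [v / steps for v in v_prev]   # approximate mean payoff per vertex
```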

    Linear Optimal Power Flow Using Cycle Flows

    Linear optimal power flow (LOPF) algorithms use a linearization of the alternating current (AC) load flow equations to optimize generator dispatch in a network subject to the loading constraints of the network branches. Common algorithms use the voltage angles at the buses as optimization variables, but alternatives can be computationally advantageous. In this article we provide a review of existing methods and describe a new formulation that expresses the loading constraints directly in terms of the flows themselves, using a decomposition of the network graph into a spanning tree and closed cycles. We provide a comprehensive study of the computational performance of the various formulations, in settings that include computationally challenging applications such as multi-period LOPF with storage dispatch and generation capacity expansion. We show that the new formulation of the LOPF solves up to 7 times faster than the angle formulation using a commercial linear programming solver, while another existing cycle-based formulation solves up to 20 times faster, with an average speed-up factor of 3 for the standard networks considered here. If generation capacities are also optimized, the average speed-up rises to a factor of 12, reaching a factor of 213 in one instance. The speed-up is largest for networks with many buses and decentralized generators throughout the network, which is highly relevant given the rise of distributed renewable generation and the computational challenge of operation and planning in such networks.
    Comment: 11 pages, 5 figures; version 2 includes results for generation capacity optimization; version 3 is the final accepted journal version
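    The cycle decomposition can be illustrated on a toy triangle network (made-up data, not the paper's formulation or test cases): in the linearized model, the branch flows are fixed by Kirchhoff's current law at the buses together with Kirchhoff's voltage law around each independent cycle, so flows and cycle variables can replace voltage angles as unknowns.

```python
import numpy as np

# Toy 3-bus triangle: lines 0->1, 1->2, 0->2 with reactances x.
K = np.array([[ 1,  0,  1],    # bus-line incidence matrix (rows: buses, cols: lines)
              [-1,  1,  0],
              [ 0, -1, -1]], dtype=float)
x = np.array([0.1, 0.1, 0.1])  # line reactances (p.u.)
C = np.array([1.0, 1.0, -1.0]) # the single independent cycle 0->1->2->0, with line orientations
p = np.array([1.0, 0.0, -1.0]) # net power injections at the buses (sum to zero)

# Kirchhoff's current law: K f = p  (one redundant bus equation dropped),
# Kirchhoff's voltage law: sum over the cycle of x_l * f_l = 0.
A = np.vstack([K[:-1], (C * x)[None, :]])
b = np.concatenate([p[:-1], [0.0]])
f = np.linalg.solve(A, b)
print(f)   # about [0.333, 0.333, 0.667]: 1/3 on the two-line path, 2/3 on the direct line
```

    In an LOPF these flow variables, expressed through tree and cycle flows, would enter a linear program together with generator dispatch and branch limits; the snippet only shows why the cycle constraints reproduce the physical flows.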

    Dynamic Consistency of Conditional Simple Temporal Networks via Mean Payoff Games: a Singly-Exponential Time DC-Checking

    The Conditional Simple Temporal Network (CSTN) is a constraint-based graph formalism for conditional temporal planning. It offers a more flexible formalism than the equivalent CSTP model of Tsamardinos, Vidal and Pollack, from which it was derived mainly as a sound formalization. Three notions of consistency arise for CSTNs and CSTPs: weak, strong, and dynamic. Dynamic consistency is the most interesting notion, but it is also the most challenging, and it was conjectured to be hard to assess. Tsamardinos, Vidal and Pollack gave a doubly-exponential time algorithm for deciding whether a CSTN is dynamically consistent and for producing, in the positive case, a dynamic execution strategy of exponential size. In the present work we offer a proof that deciding whether a CSTN is dynamically consistent is coNP-hard and provide the first singly-exponential time algorithm for this problem, also producing a dynamic execution strategy whenever the input CSTN is dynamically consistent. The algorithm is based on a novel connection with Mean Payoff Games, a family of two-player combinatorial games on graphs well known for having applications in model-checking and formal verification. The presentation of this connection is mediated by the Hyper Temporal Network model, a tractable generalization of Simple Temporal Networks whose consistency checking is equivalent to determining Mean Payoff Games. In order to analyze the algorithm we introduce a refined notion of dynamic consistency, named ε-dynamic-consistency, and present a sharp lower-bounding analysis on the critical value of the reaction time ε̂ at which the CSTN transits from being, to not being, dynamically consistent. The proof technique introduced in this analysis of ε̂ is applicable more generally when dealing with linear difference constraints that include strict inequalities.
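    The closing remark about strict inequalities can be illustrated with a small sketch (a deliberate simplification: a fixed eps is assumed here, whereas the paper derives a sharp critical reaction-time value ε̂): strict difference constraints are tightened by eps, and the resulting distance graph is checked for negative cycles with Bellman-Ford.

```python
# Sketch: handle strict difference constraints  t_v - t_u < w  by replacing them
# with  t_v - t_u <= w - eps  for a small eps > 0, then run a standard
# negative-cycle check (Bellman-Ford) on the distance graph.
def consistent_with_strict(n, constraints, eps=1e-3):
    """constraints: (u, v, w, strict) meaning t_v - t_u <= w (or < w if strict)."""
    edges = [(u, v, (w - eps) if strict else w) for u, v, w, strict in constraints]
    dist = [0.0] * n                      # virtual source at distance 0 to every node
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more pass: any further improvement means a negative cycle -> inconsistent
    return not any(dist[u] + w < dist[v] for u, v, w in edges)
```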

    Decentralised Control of Adaptive Sampling in Wireless Sensor Networks

    The efficient allocation of the limited energy resources of a wireless sensor network in a way that maximises the information value of the data collected is a significant research challenge. Within this context, this paper concentrates on adaptive sampling as a means of focusing a sensor’s energy consumption on obtaining the most important data. Specifically, we develop a principled information metric based upon Fisher information and Gaussian process regression that allows the information content of a sensor’s observations to be expressed. We then use this metric to derive three novel decentralised control algorithms for information-based adaptive sampling which represent different trade-offs between computational cost and optimality. These algorithms are evaluated in the context of a deployed sensor network in the domain of flood monitoring. The most computationally efficient of the three is shown to increase the value of information gathered by approximately 83%, 27%, and 8% per day compared to benchmarks that sample in a naive non-adaptive manner, in a uniform non-adaptive manner, and using a state-of-the-art adaptive sampling heuristic (USAC), respectively. Moreover, our algorithm collects information whose total value is approximately 75% of the optimal solution (which requires an exponential, and thus impractical, amount of time to compute).
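    The paper's metric combines Fisher information with Gaussian process regression; a simplified, related idea is sketched below: greedily choose the sample times with the largest GP posterior variance. The RBF kernel, noise level, and greedy loop are assumptions of this sketch, not the paper's three decentralised algorithms.

```python
import numpy as np

def rbf(a, b, ell=2.0, sf=1.0):
    """Squared-exponential covariance between two vectors of sample times."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def posterior_variance(sampled_t, candidate_t, noise=0.1):
    """GP posterior variance at candidate times given already-sampled times."""
    Kss = rbf(candidate_t, candidate_t)
    if len(sampled_t) == 0:
        return np.diag(Kss)               # prior variance before any sampling
    K = rbf(sampled_t, sampled_t) + noise**2 * np.eye(len(sampled_t))
    Ks = rbf(candidate_t, sampled_t)
    return np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))

def greedy_schedule(candidate_t, budget, noise=0.1):
    """Greedily pick `budget` sample times that most reduce predictive uncertainty."""
    chosen, remaining = [], list(candidate_t)
    while len(chosen) < budget and remaining:
        var = posterior_variance(np.array(chosen), np.array(remaining), noise)
        chosen.append(remaining.pop(int(np.argmax(var))))
    return sorted(chosen)
```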