Loop-Free Backpressure Routing Using Link-Reversal Algorithms
The backpressure routing policy is known to be a throughput optimal policy that supports any feasible traffic demand in data networks, but may have poor delay performance when packets traverse loops in the network. In this paper, we study loop-free backpressure routing policies that forward packets along directed acyclic graphs (DAGs) to avoid the looping problem. These policies use link reversal algorithms to improve the DAGs in order to support any achievable traffic demand.
For a network with a single commodity, we show that a DAG that supports a given traffic demand can be found after a finite number of iterations of the link-reversal process. We use this to develop a joint link-reversal and backpressure routing policy, called the loop-free backpressure (LFBP) algorithm. This algorithm forwards packets on the DAG, while the DAG is dynamically updated based on the growth of the queue backlogs. We show by simulations that such a DAG-based policy improves the delay over the classical backpressure routing policy. We also propose a multicommodity version of the LFBP algorithm, and via simulation we show that its delay performance is better than that of backpressure. National Science Foundation (U.S.) (Grant CNS-1116209); United States. Office of Naval Research (Grant N00014-12-1-0064)
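The core backpressure decision the abstract builds on can be sketched in a few lines: a node forwards to the permitted neighbor with the largest positive queue-backlog differential. This is a minimal single-commodity sketch under assumed names and a simplified queue model, not the paper's LFBP implementation.

```python
# Minimal single-commodity backpressure sketch (illustrative; the function
# name and the dict-based queue model are assumptions, not the paper's code).

def backpressure_next_hop(node, queues, out_neighbors):
    """Pick the out-neighbor with the largest positive backlog differential.

    queues:        dict node -> queue length
    out_neighbors: dict node -> list of permitted next hops (e.g. DAG edges)
    Returns the chosen neighbor, or None if no differential is positive,
    in which case the packet waits rather than being transmitted uselessly.
    """
    best, best_diff = None, 0
    for nbr in out_neighbors.get(node, []):
        diff = queues[node] - queues[nbr]
        if diff > best_diff:
            best, best_diff = nbr, diff
    return best

queues = {"a": 5, "b": 2, "c": 4}
dag = {"a": ["b", "c"]}
# a -> b has differential 3, a -> c has differential 1, so b is chosen.
assert backpressure_next_hop("a", queues, dag) == "b"
```

Restricting `out_neighbors` to the edges of a DAG is exactly what makes the policy loop-free: packets can only move along the acyclic orientation.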
Analysis of Link Reversal Routing Algorithms for Mobile Ad Hoc Networks
Link reversal (LR) algorithms provide a simple mechanism for routing in communication networks whose topology changes frequently, such as mobile ad hoc networks. An LR algorithm routes by imposing a direction on each network link such that the resulting graph is a destination-oriented directed acyclic graph (DAG). Whenever a node loses its routes to the destination, it reacts by reversing some (or all) of its incident links. This survey presents the worst-case performance analysis of LR algorithms from the excellent work of Costas Busch and Srikanta Tirthapura (SIAM J. on Computing, 35(2):305-326, 2005). The LR algorithms are studied in terms of work (number of node reversals) and time needed until the algorithm stabilizes to a state in which all the routes are reestablished. The full reversal algorithm and the partial reversal algorithm are considered.
• The full reversal algorithm requires O(n²) work and time, where n is the number of nodes that have lost routes to the destination. This bound is tight in the worst case.
• The partial reversal algorithm requires O(n·a_r* + n²) work and time, where a_r* is a non-negative integral function of the initial state of the network. Further, the partial reversal algorithm requires Ω(n·a_r* + n²) work and time.
• There is an inherent lower bound of Ω(n²) on the worst-case performance of LR algorithms. Therefore, surprisingly, the full reversal algorithm is asymptotically optimal in the worst case, while the partial reversal algorithm is not, since a_r* can be arbitrarily larger than n.
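The full reversal rule analyzed above is short enough to sketch directly: in each round, every sink other than the destination reverses all of its incident links. This is a round-based toy model with assumed names, not the survey's formal machinery.

```python
# A sketch of one round of the full reversal algorithm (assumed names;
# simplified model where the graph is a dict of directed out-edges).

def full_reversal_round(out_edges, dest):
    """One round: every sink other than dest reverses all incident links.

    out_edges: dict node -> set of nodes its links point to
    Mutates out_edges in place; returns the set of nodes that reversed.
    """
    nodes = set(out_edges)
    sinks = {v for v in nodes if v != dest and not out_edges[v]}
    for v in sinks:
        # Reverse every link currently pointing into v.
        for u in nodes:
            if v in out_edges[u]:
                out_edges[u].discard(v)
                out_edges[v].add(u)
    return sinks

# Tiny example: b lost its route (no outgoing links), so it reverses its
# one incident link; the graph becomes destination-oriented toward d again.
g = {"a": {"b", "d"}, "b": set(), "d": set()}
assert full_reversal_round(g, "d") == {"b"}
assert g == {"a": {"d"}, "b": {"a"}, "d": set()}
```

Iterating this round until no sink remains (other than the destination) is what incurs the O(n²) work bound cited in the abstract.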
Partial Reversal Acyclicity
Partial Reversal (PR) is a link reversal algorithm which ensures that the underlying graph structure is destination-oriented and acyclic. These properties of PR make it useful in routing protocols and in algorithms for solving leader election and mutual exclusion. While proofs exist to establish the acyclicity property of PR, they rely on assigning labels to either the nodes or the edges in the graph. In this work we present a simpler, direct proof of the acyclicity property of partial reversal without using any external or dynamic labeling mechanism. First, we provide a simple variant of the PR algorithm and show that it maintains acyclicity. Next, we present a binary relation which maps the original PR algorithm to the new algorithm, and finally, we conclude that the acyclicity proof applies to the original PR algorithm as well.
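For contrast with full reversal, the partial reversal step can be sketched in its classical list-based formulation (Gafni-Bertsekas style): a sink reverses only the links to neighbors that have not reversed toward it since its own last reversal. The names and round-based model here are assumptions for illustration, not the variant introduced in this paper.

```python
# A sketch of one partial-reversal step in the list-based formulation
# (Gafni-Bertsekas style); names and the simplified model are assumptions.

def partial_reversal_step(v, in_nbrs, out_nbrs, reversed_toward):
    """Sink v reverses links only to neighbors that have NOT reversed
    toward it since v's own last reversal.

    in_nbrs:         set of neighbors with links pointing at v
    out_nbrs:        set of neighbors v points at (empty means v is a sink)
    reversed_toward: neighbors that reversed toward v since v last reversed
    Returns the new (in_nbrs, out_nbrs) for v.
    """
    assert not out_nbrs, "only sinks reverse"
    to_reverse = in_nbrs - reversed_toward
    if not to_reverse:             # every neighbor already reversed toward v:
        to_reverse = set(in_nbrs)  # fall back to reversing all links
    return in_nbrs - to_reverse, to_reverse

# v has in-neighbors {a, b}; a reversed toward v earlier, so only the
# link to b is reversed.
new_in, new_out = partial_reversal_step("v", {"a", "b"}, set(), {"a"})
assert new_in == {"a"} and new_out == {"b"}
```

The acyclicity question the paper addresses is exactly why this rule is subtle: unlike full reversal, a partial reversal leaves some incoming links in place, so it is not obvious that no directed cycle can ever form.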
Searching for Bayesian Network Structures in the Space of Restricted Acyclic Partially Directed Graphs
Although many algorithms have been designed to construct Bayesian network
structures using different approaches and principles, they all employ only two
methods: those based on independence criteria, and those based on a scoring
function and a search procedure (although some methods combine the two). Within
the score+search paradigm, the dominant approach uses local search methods in
the space of directed acyclic graphs (DAGs), where the usual choices for
defining the elementary modifications (local changes) that can be applied are
arc addition, arc deletion, and arc reversal. In this paper, we propose a new
local search method that uses a different search space, and which takes account
of the concept of equivalence between network structures: restricted acyclic
partially directed graphs (RPDAGs). In this way, the number of different
configurations of the search space is reduced, thus improving efficiency.
Moreover, although the final result must necessarily be a local optimum given
the nature of the search method, the topology of the new search space, which
avoids making early decisions about the directions of the arcs, may help to
find better local optima than those obtained by searching in the DAG space.
Detailed results of the evaluation of the proposed search method on several
test problems, including the well-known Alarm Monitoring System, are also
presented.
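The classical DAG-space neighborhood the abstract contrasts against can be sketched concretely: from a current DAG, the candidate moves are single arc additions, deletions, and reversals, each checked for acyclicity. This is a generic sketch of that baseline search space (helper names are assumptions), not the RPDAG operators proposed in the paper.

```python
# Sketch of the classical DAG-space neighborhood (arc addition, deletion,
# reversal) that RPDAG-based search refines; helper names are assumptions.
from itertools import permutations

def creates_cycle(arcs, x, y):
    """Would adding arc x -> y close a directed cycle (i.e., does y reach x)?"""
    stack, seen = [y], set()
    while stack:
        v = stack.pop()
        if v == x:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(b for a, b in arcs if a == v)
    return False

def dag_neighbors(nodes, arcs):
    """All DAGs one local change away: delete, reverse, or add one arc."""
    arcs = set(arcs)
    out = []
    for a in arcs:                                   # deletions (always legal)
        out.append(arcs - {a})
    for x, y in arcs:                                # reversals
        rest = arcs - {(x, y)}
        if not creates_cycle(rest, y, x):
            out.append(rest | {(y, x)})
    for x, y in permutations(nodes, 2):              # additions
        if (x, y) not in arcs and not creates_cycle(arcs, x, y):
            out.append(arcs | {(x, y)})
    return out

# With one arc x -> y, the legal moves are deleting it or reversing it;
# adding y -> x directly is rejected because it would create a cycle.
ns = dag_neighbors(["x", "y"], {("x", "y")})
assert len(ns) == 2 and {("y", "x")} in ns
```

Because many distinct DAGs in this neighborhood encode the same set of independencies, searching over equivalence-aware structures such as RPDAGs shrinks the space, which is the efficiency gain the abstract claims.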
The use of real time digital simulation and hardware in the loop to de-risk novel control algorithms
Low power demonstrators are commonly used to validate novel control algorithms. However, the response of the demonstrator to network transients and faults is often unexplored. The importance of this work has, in the past, justified facilities such as the T45 Shore Integration Test Facility (SITF) at the Electric Ship Technology Demonstrator (ESTD). This paper presents the use of real time digital simulation and hardware in the loop to de-risk an innovative control algorithm with respect to network transients and faults. A novel feature of the study is the modelling of events at the power electronics level (time steps of circa 2 µs) and the system level (time steps of circa 50 µs)
Policy Recognition in the Abstract Hidden Markov Model
In this paper, we present a method for recognising an agent's behaviour in
dynamic, noisy, uncertain domains, and across multiple levels of abstraction.
We term this problem on-line plan recognition under uncertainty and view it
generally as probabilistic inference on the stochastic process representing the
execution of the agent's plan. Our contributions in this paper are twofold. In
terms of probabilistic inference, we introduce the Abstract Hidden Markov Model
(AHMM), a novel type of stochastic process, provide its dynamic Bayesian
network (DBN) structure and analyse the properties of this network. We then
describe an application of the Rao-Blackwellised Particle Filter to the AHMM
which allows us to construct an efficient, hybrid inference method for this
model. In terms of plan recognition, we propose a novel plan recognition
framework based on the AHMM as the plan execution model. The Rao-Blackwellised
hybrid inference for AHMM can take advantage of the independence properties
inherent in a model of plan execution, leading to an algorithm for online
probabilistic plan recognition that scales well with the number of levels in
the plan hierarchy. This illustrates that while stochastic models for plan
execution can be complex, they exhibit special structures which, if exploited,
can lead to efficient plan recognition algorithms. We demonstrate the
usefulness of the AHMM framework via a behaviour recognition system in a
complex spatial environment using distributed video surveillance data
Average-case analysis of perfect sorting by reversals (Journal Version)
Perfect sorting by reversals, a problem originating in computational
genomics, is the process of sorting a signed permutation to either the identity
or to the reversed identity permutation, by a sequence of reversals that do not
break any common interval. B\'erard et al. (2007) make use of strong interval
trees to describe an algorithm for sorting signed permutations by reversals.
Combinatorial properties of this family of trees are essential to the algorithm
analysis. Here, we use the expected value of certain tree parameters to prove
that the average run-time of the algorithm is, at worst, polynomial, and
additionally, for sufficiently long permutations, the sorting algorithm runs in
polynomial time with probability one. Furthermore, our analysis of the subclass
of commuting scenarios yields precise results on the average length of a
reversal, and the average number of reversals.
Comment: A preliminary version of this work appeared in the proceedings of
Combinatorial Pattern Matching (CPM) 2009. See arXiv:0901.2847; Discrete
Mathematics, Algorithms and Applications, vol. 3(3), 201
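The elementary operation being counted above is the signed reversal: reversing a segment of a signed permutation while flipping the sign of each element. This is a generic sketch of that operation (names are assumptions), not the paper's strong-interval-tree algorithm.

```python
# A reversal on a signed permutation: reverse a segment and negate its
# elements (a generic sketch of the operation the analysis counts).

def reversal(perm, i, j):
    """Return perm with the segment perm[i..j] (inclusive) reversed and
    each element in it sign-flipped."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

# Sorting (-2, -1, +3) to the identity with a single reversal of the
# first two elements:
assert reversal([-2, -1, 3], 0, 1) == [1, 2, 3]
```

Perfect sorting adds the constraint that each such reversal must preserve every common interval of the permutation, which is where the strong interval tree enters the analysis.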