3 research outputs found

    GODDeS: Globally ε-Optimal Routing Via Distributed Decision-theoretic Self-organization

    This paper introduces GODDeS: a fully distributed, self-organizing, decision-theoretic routing algorithm designed to effectively exploit high-quality paths in lossy ad-hoc wireless environments, typically with a large number of nodes. The routing problem is modeled as an optimal control problem for a decentralized Markov Decision Process, with links characterized by locally known packet drop probabilities that either remain constant on average or change slowly. The equivalence of this optimization problem to performance maximization of an explicitly constructed probabilistic automaton allows us to effectively apply the theory of quantitative measures of probabilistic regular languages, and to design a distributed, highly efficient solution approach that attempts to minimize source-to-sink drop probabilities across the network. Theoretical results provide rigorous guarantees on global performance, showing that the algorithm achieves near-global optimality in polynomial time. It is also argued that GODDeS is significantly congestion-aware and exploits multi-path routes optimally. The theoretical development is supported by high-fidelity network simulations.
    Comment: 14 pages, 6 figures. This is a preliminary pre-print; the full version has been submitted for review elsewhere.
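    To make the optimization objective concrete, here is a minimal sketch, not the GODDeS algorithm itself: a Bellman-style relaxation that computes each node's best end-to-end delivery probability to the sink from locally known link drop probabilities. The toy topology, node names, and greedy single-next-hop choice are illustrative assumptions; GODDeS instead works through the language-measure machinery described above and optimizes multi-path splits.

```python
# Minimal sketch (not the GODDeS algorithm itself): a Bellman-style
# relaxation that maximizes source-to-sink delivery probability on a
# lossy network, using only each node's locally known link drop
# probabilities. The topology and node names are illustrative.

# drop[u][v] = probability that a packet sent on link u -> v is lost
drop = {
    "A": {"B": 0.1, "C": 0.4},
    "B": {"C": 0.2, "SINK": 0.3},
    "C": {"SINK": 0.05},
    "SINK": {},
}

def delivery_probabilities(drop, sink="SINK", sweeps=50):
    """Iteratively estimate each node's best probability of delivering a
    packet to the sink; next_hop records the greedy forwarding choice."""
    p = {u: 0.0 for u in drop}
    p[sink] = 1.0
    next_hop = {}
    for _ in range(sweeps):
        for u, links in drop.items():
            if u == sink:
                continue
            for v, d in links.items():
                cand = (1.0 - d) * p[v]   # survive the link, then succeed from v
                if cand > p[u]:
                    p[u], next_hop[u] = cand, v
    return p, next_hop

if __name__ == "__main__":
    p, route = delivery_probabilities(drop)
    print(p)      # p["A"] is A's best end-to-end delivery probability
    print(route)  # greedy next hops; GODDeS itself optimizes multi-path splits
```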

    Formal-language-theoretic Optimal Path Planning For Accommodation of Amortized Uncertainties and Dynamic Effects

    We report a globally optimal approach to robotic path planning under uncertainty, based on the theory of quantitative measures of formal languages. A significant generalization of the language-measure-theoretic path planning algorithm ν* is presented that explicitly accounts for average dynamic uncertainties and estimation errors in plan execution. The notion of the navigation automaton is generalized to include probabilistic uncontrollable transitions, which account for uncertainties by modeling, and planning for, probabilistic deviations from the computed policy in the course of execution. The planning problem is solved by casting it as a performance maximization problem for probabilistic finite state automata. In essence, we solve the following optimization problem: compute the navigation policy that maximizes the probability of reaching the goal while simultaneously minimizing the probability of hitting an obstacle. Key novelties of the proposed approach include the modeling of uncertainties via uncontrollable transitions, and the solution of the ensuing optimization problem using a highly efficient, search-free combinatorial approach to maximizing quantitative measures of probabilistic regular languages. Applicability of the algorithm to various models of robot navigation is shown, with experimental validation on a two-wheeled mobile robotic platform (SEGWAY RMP 200) in a laboratory environment.
    Comment: Submitted for review for possible publication elsewhere; a journal reference will be added when available.
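    The stated objective, maximizing the probability of reaching the goal while minimizing the probability of hitting an obstacle under probabilistic deviations from the commanded move, can be illustrated with a short value-iteration sketch on a toy grid. This is not the search-free ν* language-measure computation of the paper; the grid, the slip rate, and the absorbing-obstacle model are assumptions made purely for illustration.

```python
# Minimal sketch of the planning objective in the abstract: maximize the
# probability of reaching the goal while avoiding obstacles on a grid
# "navigation automaton" whose uncontrollable transitions model execution
# uncertainty. Plain value iteration is used here, not the search-free
# language-measure computation (nu*); grid, slip rate and labels are
# illustrative assumptions.

GRID = ["....G",
        ".##..",
        ".....",
        "S.#.."]          # S start, G goal, # obstacle
SLIP = 0.2                # probability the commanded move deviates

ROWS, COLS = len(GRID), len(GRID[0])
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def cell(r, c):
    return GRID[r][c] if 0 <= r < ROWS and 0 <= c < COLS else None

def step(r, c, dr, dc):
    nr, nc = r + dr, c + dc
    return (nr, nc) if cell(nr, nc) is not None else (r, c)  # off-grid: stay put

# v[(r, c)] = probability of reaching G from (r, c) under the best policy.
# The goal is absorbing with value 1; obstacle cells are absorbing failures
# with value 0, so maximizing v also minimizes the chance of hitting one.
v = {(r, c): 0.0 for r in range(ROWS) for c in range(COLS)}

for _ in range(200):                      # value-iteration sweeps
    for (r, c) in v:
        if cell(r, c) in ("G", "#"):
            v[(r, c)] = 1.0 if cell(r, c) == "G" else 0.0
            continue
        best = 0.0
        for (dr, dc) in MOVES:            # intended (controllable) action
            val = (1 - SLIP) * v[step(r, c, dr, dc)]
            # uncontrollable deviation: uniformly random neighbouring move
            val += SLIP * sum(v[step(r, c, ur, uc)] for ur, uc in MOVES) / len(MOVES)
            best = max(best, val)
        v[(r, c)] = best

start = next((r, c) for r in range(ROWS) for c in range(COLS) if cell(r, c) == "S")
print(v[start])   # probability of reaching G from S under the best policy
```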

    Data Smashing

    Investigation of the underlying physics or biology from empirical data requires a quantifiable notion of similarity: when do two observed data sets indicate nearly identical generating processes, and when do they not? The discriminating characteristics to look for in data are often determined by heuristics designed by experts, e.g., distinct shapes of "folded" lightcurves may be used as "features" to classify variable stars, while determination of pathological brain states might require a Fourier analysis of brainwave activity. Finding good features is non-trivial. Here, we propose a universal solution to this problem: we delineate a principle for quantifying similarity between sources of arbitrary data streams, without a priori knowledge, features, or training. We uncover an algebraic structure on a space of symbolic models for quantized data, and show that such stochastic generators may be added and uniquely inverted, and that a model and its inverse always sum to the generator of flat white noise. Therefore, every data stream has an anti-stream: data generated by the inverse model. Similarity between two streams, then, is the degree to which one, when summed with the other's anti-stream, mutually annihilates all statistical structure to noise. We call this data smashing. We present diverse applications, including disambiguation of brainwaves pertaining to epileptic seizures, detection of anomalous cardiac rhythms, and classification of astronomical objects from raw photometry. In our examples, the data smashing principle, without access to any domain knowledge, meets or exceeds the performance of specialized algorithms tuned by domain experts.
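    As a rough illustration of the measurement step implied above, the sketch below quantifies how far a quantized symbol stream deviates from flat white noise using block-frequency statistics, alongside a simple symbol-wise rule for summing two streams. The anti-stream construction itself (inverting the hidden generator) is the paper's central contribution and is not reproduced here; the function names and the particular deviation metric are illustrative assumptions, not the authors' exact algorithm.

```python
# Simplified illustration of "annihilation to flat white noise": measure
# how far a symbol stream's block statistics are from an i.i.d. uniform
# source, and combine two streams symbol-wise. The anti-stream generator
# from the paper is NOT implemented here; names and metric are assumptions.

from collections import Counter
from itertools import product

def sum_streams(s1, s2):
    """Combine two symbol streams: keep a symbol where both streams agree,
    discard the position otherwise (a simple symbol-wise summation rule)."""
    return [a for a, b in zip(s1, s2) if a == b]

def deviation_from_fwn(stream, alphabet, max_block=3):
    """Deviation of empirical block frequencies from those of flat white
    noise (i.i.d. uniform symbols). A value near 0 means no detectable
    statistical structure remains."""
    dev = 0.0
    for L in range(1, max_block + 1):
        blocks = [tuple(stream[i:i + L]) for i in range(len(stream) - L + 1)]
        counts = Counter(blocks)
        total = len(blocks)
        uniform = 1.0 / len(alphabet) ** L
        for block in product(alphabet, repeat=L):
            dev = max(dev, abs(counts[block] / total - uniform))
    return dev

if __name__ == "__main__":
    import random
    random.seed(0)
    noise = [random.choice("01") for _ in range(20000)]     # flat white noise
    biased = [random.choice("0001") for _ in range(20000)]  # structured source
    print(deviation_from_fwn(noise, "01"))    # close to 0
    print(deviation_from_fwn(biased, "01"))   # clearly larger: residual structure

# Usage idea: if `anti` were the anti-stream of source B, a small value of
# deviation_from_fwn(sum_streams(stream_a, anti), alphabet) would indicate
# that A and B hide nearly identical generating processes.
```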