
    Convergence Thresholds of Newton's Method for Monotone Polynomial Equations

    Monotone systems of polynomial equations (MSPEs) are systems of fixed-point equations X_1 = f_1(X_1, ..., X_n), ..., X_n = f_n(X_1, ..., X_n) where each f_i is a polynomial with positive real coefficients. The question of computing the least non-negative solution of a given MSPE X = f(X) arises naturally in the analysis of stochastic models such as stochastic context-free grammars, probabilistic pushdown automata, and back-button processes. Etessami and Yannakakis have recently adapted Newton's iterative method to MSPEs. In a previous paper we have proved the existence of a threshold k_f for strongly connected MSPEs, such that after k_f iterations of Newton's method each new iteration computes at least 1 new bit of the solution. However, the proof was purely existential. In this paper we give an upper bound for k_f as a function of the minimal component of the least fixed point mu f of f(X). Using this result we show that k_f is at most single exponential resp. linear for strongly connected MSPEs derived from probabilistic pushdown automata resp. from back-button processes. Further, we prove the existence of a threshold for arbitrary MSPEs after which each new iteration computes at least 1/(w*2^h) new bits of the solution, where w and h are the width and height of the DAG of strongly connected components. Comment: version 2 deposited February 29, after the end of the STACS conference. Two minor mistakes corrected.
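
    For readers unfamiliar with the method, the following is a minimal sketch (not the paper's implementation) of the Newton iteration studied here, run on a hypothetical two-variable strongly connected MSPE; starting from 0, each step solves a linear system involving the Jacobian f'.

        # Newton's method for the hypothetical MSPE
        #   X = 0.3*X*Y + 0.4,   Y = 0.2*X*X + 0.3*Y + 0.1
        # The iterates converge monotonically to the least non-negative solution.
        import numpy as np

        def f(v):
            x, y = v
            return np.array([0.3*x*y + 0.4, 0.2*x*x + 0.3*y + 0.1])

        def jac(v):
            x, y = v
            return np.array([[0.3*y, 0.3*x],
                             [0.4*x, 0.3]])

        nu = np.zeros(2)                       # Newton's method starts at 0
        for k in range(10):
            # nu^(k+1) = nu^(k) + (I - f'(nu^(k)))^-1 (f(nu^(k)) - nu^(k))
            nu = nu + np.linalg.solve(np.eye(2) - jac(nu), f(nu) - nu)
            print(k + 1, nu)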

    Computing the Least Fixed Point of Positive Polynomial Systems

    We consider equation systems of the form X_1 = f_1(X_1, ..., X_n), ..., X_n = f_n(X_1, ..., X_n) where f_1, ..., f_n are polynomials with positive real coefficients. In vector form we denote such an equation system by X = f(X) and call f a system of positive polynomials, or SPP for short. Equation systems of this kind appear naturally in the analysis of stochastic models like stochastic context-free grammars (with numerous applications to natural language processing and computational biology), probabilistic programs with procedures, web-surfing models with back buttons, and branching processes. The least nonnegative solution mu f of an SPP equation X = f(X) is of central interest for these models. Etessami and Yannakakis have suggested a particular version of Newton's method to approximate mu f. We extend a result of Etessami and Yannakakis and show that Newton's method starting at 0 always converges to mu f. We obtain lower bounds on the convergence speed of the method. For so-called strongly connected SPPs we prove the existence of a threshold k_f such that for every i >= 0 the (k_f + i)-th iteration of Newton's method has at least i valid bits of mu f. The proof yields an explicit bound for k_f depending only on syntactic parameters of f. We further show that for arbitrary SPP equations Newton's method still converges linearly: there are k_f >= 0 and alpha_f > 0 such that for every i >= 0 the (k_f + alpha_f·i)-th iteration of Newton's method has at least i valid bits of mu f. The proof yields an explicit bound for alpha_f; the bound is exponential in the number of equations, but we also show that it is essentially optimal. Constructing a bound for k_f is still an open problem. Finally, we also provide a geometric interpretation of Newton's method for SPPs. Comment: This is a technical report that goes along with an article to appear in SIAM Journal on Computing.
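
    A concrete one-dimensional illustration (a standard toy case, not taken from this abstract): the SPP f(X) = X^2/2 + 1/2 has least fixed point mu f = 1, and Newton's method started at 0 halves the remaining error in every step, i.e. it gains exactly one valid bit per iteration.

        # Newton's method on the scalar SPP equation X = X**2/2 + 1/2 (mu f = 1).
        # Applied to g(X) = f(X) - X, one step from x lands at x + (1 - x)/2, so
        # the error 1 - x is exactly halved: one new valid bit per iteration.
        x = 0.0
        for k in range(1, 11):
            x = x - (x*x/2 + 0.5 - x) / (x - 1.0)   # Newton step for g(X) = f(X) - X
            print(k, x, 1.0 - x)                    # the error equals 2**(-k)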

    RePBubLik: Reducing the Polarized Bubble Radius with Link Insertions

    The topology of the hyperlink graph among pages expressing different opinions may influence the exposure of readers to diverse content. Structural bias may trap a reader in a polarized bubble with no access to other opinions. We model readers' behavior as random walks. A node is in a polarized bubble if the expected length of a random walk from it to a page of different opinion is large. The structural bias of a graph is the sum of the radii of highly polarized bubbles. We study the problem of decreasing the structural bias through edge insertions. Healing all nodes with high polarized bubble radius is hard to approximate within a logarithmic factor, so we focus on finding the best k edges to insert to maximally reduce the structural bias. We present RePBubLik, an algorithm that leverages a variant of the random walk closeness centrality to select the edges to insert. RePBubLik obtains, under mild conditions, a constant-factor approximation. It reduces the structural bias faster than existing edge-recommendation methods, including some designed to reduce the polarization of a graph.
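
    As a toy illustration of the quantity involved (a sketch under assumed definitions, not the RePBubLik code): if each node carries one of two opinions, the expected number of random-walk steps from a node to the first node of the other opinion can be computed by solving a standard linear system, and nodes where this value is large form the polarized bubbles described above.

        # Expected steps from each "red" node (colour 0) of a tiny hypothetical
        # hyperlink graph to the first "blue" node (colour 1): solve (I - Q) h = 1,
        # where Q is the walk's transition matrix restricted to the red nodes.
        import numpy as np

        P = np.array([[0.0, 1.0, 0.0, 0.0],      # row-stochastic transition matrix
                      [0.5, 0.0, 0.5, 0.0],
                      [0.0, 0.5, 0.0, 0.5],
                      [0.0, 0.0, 1.0, 0.0]])
        colour = np.array([0, 0, 1, 1])

        red = np.where(colour == 0)[0]
        Q = P[np.ix_(red, red)]
        h = np.linalg.solve(np.eye(len(red)) - Q, np.ones(len(red)))
        print(dict(zip(red.tolist(), h)))        # expected hitting times: {0: 4.0, 1: 3.0}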

    The effect of the back button in a random walk: application for pagerank

    Theoretical analysis of the Web graph is often used to improve the efficiency of search engines. The PageRank algorithm, proposed by Page, Brin et al., is used by the Google search engine to improve the results of queries. The purpose of this article is to describe an enhanced version of the algorithm using a realistic model of the back button. We introduce a limited history stack model (the back button cannot be clicked more than m times in a row), and show that when m = 1, the computation of this Back PageRank can be as fast as that of a standard PageRank.
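
    The exact computation is what the article is about; purely for intuition, here is a toy Monte Carlo sketch of a surfer with a one-slot history stack (m = 1, so the back button cannot be pressed twice in a row). The small graph and the probabilities p_back and p_jump are hypothetical.

        # Random surfer with a one-slot back button on a hypothetical 4-page graph.
        import random
        from collections import Counter

        links = {0: [1, 2], 1: [2], 2: [0, 3], 3: [0]}
        p_back, p_jump, steps = 0.2, 0.15, 200_000

        page, prev, visits = 0, None, Counter()
        for _ in range(steps):
            visits[page] += 1
            if prev is not None and random.random() < p_back:
                page, prev = prev, None                         # press back; stack now empty
            elif random.random() < p_jump:
                page, prev = random.randrange(4), None          # random jump, history discarded
            else:
                page, prev = random.choice(links[page]), page   # follow a random outlink

        total = sum(visits.values())
        print({p: round(v / total, 3) for p, v in sorted(visits.items())})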

    Polynomial Time Algorithms for Multi-Type Branching Processes and Stochastic Context-Free Grammars

    We show that one can approximate the least fixed point solution of a multivariate system of monotone probabilistic polynomial equations in time polynomial in both the encoding size of the system of equations and in log(1/ε), where ε > 0 is the desired additive error bound of the solution. (The model of computation is the standard Turing machine model.) We use this result to resolve several open problems regarding the computational complexity of computing key quantities associated with some classic and heavily studied stochastic processes, including multi-type branching processes and stochastic context-free grammars.
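
    As an example of the kind of quantity covered by this result (a toy sketch, not the paper's algorithm): the extinction probabilities of a hypothetical two-type branching process are the least non-negative solution of a monotone probabilistic polynomial system, which plain fixed-point (Kleene) iteration approaches monotonically from below; the paper's contribution is a method with a provably polynomial running time in the Turing model.

        # Extinction probabilities (q_A, q_B) of a hypothetical two-type branching
        # process: type A has children (A, B) w.p. 0.6 and none w.p. 0.4,
        # type B has children (B, B) w.p. 0.7 and none w.p. 0.3.  They are the
        # least non-negative solution of
        #     q_A = 0.6*q_A*q_B + 0.4,     q_B = 0.7*q_B*q_B + 0.3
        qa, qb = 0.0, 0.0
        for _ in range(200):
            qa, qb = 0.6*qa*qb + 0.4, 0.7*qb*qb + 0.3
        print(round(qa, 6), round(qb, 6))    # approx. 0.538462 and 0.428571 (= 3/7)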

    Recursive Stochastic Games with Positive Rewards

    We study the complexity of a class of Markov decision processes and, more generally, stochastic games, called 1-exit Recursive Markov Decision Processes (1-RMDPs) and 1-exit Recursive Simple Stochastic Games (1-RSSGs) with strictly positive rewards. These are a class of finitely presented countable-state zero-sum stochastic games with a total expected reward objective. They subsume standard finite-state MDPs and Condon’s simple stochastic games and correspond to optimization and game versions of several classic stochastic models, with rewards. Such stochastic models arise naturally as models of probabilistic procedural programs with recursion, and the problems we address are motivated by the goal of analyzing the optimal/pessimal expected running time in such a setting. We give polynomial time algorithms for 1-exit Recursive Markov Decision Processes (1-RMDPs) with positive rewards. Specifically, we show that the exact optimal value of both maximizing and minimizing 1-RMDPs with positive rewards can be computed in polynomial time (this value may be ∞). For two-player 1-RSSGs with positive rewards, we prove a “stackless and memoryless” determinacy result, and show that deciding whether the game value is at least a given value r is in NP ∩ coNP. We also prove that a simultaneous strategy improvement algorithm converges to the value and optimal strategies for these stochastic games. We observe that 1-RSSG positive reward games are “harder” than finite-state SSGs in several senses.

    Recursive Probabilistic Models: efficient analysis and implementation

    This thesis examines Recursive Markov Chains (RMCs), their natural extensions, and their connection to other models. RMCs can model in a natural way probabilistic procedural programs and other systems that involve recursion and probability. An RMC is a set of ordinary finite-state Markov Chains that are allowed to call each other recursively, and it describes an ordinary Markov Chain with a potentially infinite, but countable, state space. RMCs generalize in a precise sense several well-studied probabilistic models from other domains, such as natural language processing (Stochastic Context-Free Grammars), population dynamics (Multi-Type Branching Processes), and queueing theory (Quasi-Birth-Death processes, QBDs). In addition, RMCs can be extended to a controlled version called Recursive Markov Decision Processes (RMDPs) and also a game version referred to as Recursive (Simple) Stochastic Games (RSSGs). For analyzing RMCs, RMDPs, and RSSGs we devised highly optimized numerical algorithms and implemented them in a tool called PReMo (Probabilistic Recursive Models analyzer). PReMo allows computation of the termination probability and expected termination time of RMCs and QBDs, and of a restricted subset of RMDPs and RSSGs. The input models are described by the user in specifically designed simple input languages. Furthermore, in order to analyze the worst and best expected running time of probabilistic recursive programs, we study models of RMDPs and RSSGs with positive rewards assigned to each of their transitions and provide new upper and lower complexity bounds for their analysis. We also establish some new connections between our models and models studied in queueing theory. Specifically, we show that (discrete time) QBDs can be described as a special subclass of RMCs, and that Tree-like QBDs, which are a generalization of QBDs, are equivalent to RMCs in a precise sense. We also prove that for a given QBD we can compute (in the unit cost RAM model) an approximation of its termination probabilities within i bits of precision in time polynomial in the size of the QBD and linear in i. Specifically, we show that we can do this using a decomposed Newton’s method.
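
    A minimal reading of the "decomposed" idea mentioned above (a sketch under assumptions, not PReMo's implementation): split the equation system along the strongly connected components of its variable-dependency graph, solve the bottom components first, and substitute their values into the components above.

        # Hypothetical system with two SCCs:  X = 0.5*X*Y + 0.2  (upper SCC {X})
        #                                     Y = 0.6*Y*Y + 0.3  (bottom SCC {Y})
        def newton_1d(f, df, x=0.0, iters=60):
            """Newton's method for the scalar fixed-point equation x = f(x), from 0."""
            for _ in range(iters):
                x = x - (f(x) - x) / (df(x) - 1.0)
            return x

        y = newton_1d(lambda v: 0.6*v*v + 0.3, lambda v: 1.2*v)    # bottom SCC first
        x = newton_1d(lambda v: 0.5*v*y + 0.2, lambda v: 0.5*y)    # then the upper SCC, with y fixed
        print(x, y)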

    Recursive Stochastic Games with Positive Rewards

    We first show that in such games (1-RSSGs with strictly positive rewards) both players have deterministic “stackless and memoryless” optimal strategies. We then provide polynomial-time algorithms for computing the exact optimal expected reward (which may be infinite, but is otherwise rational), and optimal strategies, for both the maximizing and minimizing single-player versions of the game, i.e., for (1-exit) Recursive Markov Decision Processes (1-RMDPs). It follows that the quantitative decision problem for positive reward 1-RSSGs is in NP ∩ coNP. We show that Condon's well-known quantitative termination problem for finite-state simple stochastic games (SSGs), which she showed to be in NP ∩ coNP, reduces to a special case of the reward problem for 1-RSSGs, namely, deciding whether the value is ∞. By contrast, for finite-state SSGs with strictly positive rewards, deciding if the expected reward value is ∞ is solvable in P-time. We also show that there is a simultaneous strategy improvement algorithm that converges in a finite number of steps to the value and optimal strategies of a 1-RSSG with positive rewards.
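
    The recursive setting is the paper's subject; as a much simpler illustration of the objective involved (total expected reward with strictly positive rewards), here is a value-iteration sketch for an ordinary finite-state MDP, the special case these models subsume. The tiny MDP below is hypothetical and is not related to the paper's algorithms.

        # Total expected reward for a hypothetical 2-state MDP with an absorbing
        # exit state of value 0.  Value iteration from 0 converges monotonically.
        #   s0: action a -> reward 2, go to s1;   action b -> reward 1, exit
        #   s1: action a -> reward 3, go to s0 w.p. 0.5, exit w.p. 0.5
        v0, v1 = 0.0, 0.0
        for _ in range(200):
            v0, v1 = max(2.0 + v1, 1.0), 3.0 + 0.5 * v0
        print(v0, v1)    # converges to 10 and 8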

    Networked Occupancy Sensor System

    Energy is often wasted by systems that provide services such as lighting, heating, air conditioning, and ventilation. If these services were intelligently controlled, significant improvements in energy conservation would be possible. A system comprising room sensors, a database, and a webserver was designed, constructed, and implemented over the course of this project. The sensors report occupancy, light status, and temperature. Real-time room data is available via the webserver and is archived in the database. The system is networked via Ethernet and powered using the Power over Ethernet (IEEE 802.3af) standard.