
    Nonnegative Rank Measures and Monotone Algebraic Branching Programs

    Inspired by Nisan's characterization of noncommutative complexity (Nisan 1991), we study different notions of nonnegative rank, the associated complexity measures, and their link with monotone computations. In particular, we answer in the negative an open question of Nisan asking whether nonnegative rank characterizes monotone noncommutative complexity for algebraic branching programs. We also prove a rather tight lower bound for the computation of elementary symmetric polynomials by algebraic branching programs in the monotone setting or, equivalently, in the homogeneous syntactically multilinear setting.
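
    For orientation (this is the classical matrix notion that the rank measures studied here refine; it is not a definition taken from the paper itself): the nonnegative rank of an entrywise nonnegative matrix M is the smallest number of nonnegative rank-one terms summing to M,

    \[
    \operatorname{rank}_{+}(M) \;=\; \min\Bigl\{\, r : M = \sum_{i=1}^{r} u_i v_i^{\top},\ u_i \in \mathbb{R}_{\ge 0}^{m},\ v_i \in \mathbb{R}_{\ge 0}^{n} \,\Bigr\},
    \]

    so that rank(M) <= rank_+(M) always holds; quantities of this kind are the natural candidates for monotone (subtraction-free) lower bounds, which is precisely the link examined above.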

    Algebraic Branching Programs, Border Complexity, and Tangent Spaces

    Nisan showed in 1991 that the width of a smallest noncommutative single-(source,sink) algebraic branching program (ABP) computing a noncommutative polynomial is given by the ranks of specific matrices. This means that the set of noncommutative polynomials with ABP width complexity at most k is Zariski-closed, an important property in geometric complexity theory. It follows that approximations cannot help to reduce the required ABP width. Forbes mentioned that this result would probably break when going from single-(source,sink) ABPs to trace ABPs. We prove that this is correct. Moreover, we study the commutative monotone setting and prove a result similar to Nisan's, but concerning the analytic closure. We observe the same behavior here: the set of polynomials with ABP width complexity at most k is closed for single-(source,sink) ABPs and not closed for trace ABPs. The proofs reveal an intriguing connection between tangent spaces and the vector space of flows on the ABP. We close with additional observations on VQP and the closure of VNP which allow us to establish a separation between the two classes.
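
    For context, Nisan's characterization (recalled here in its standard form; the notation is ours) can be stated as follows: for a homogeneous noncommutative polynomial f of degree d, let M_k(f) be the matrix whose rows are indexed by words of length k, whose columns are indexed by words of length d-k, and whose (u,v) entry is the coefficient of the word uv in f. Then, for optimal single-(source,sink) ABPs,

    \[
    \mathrm{width}_k(f) \;=\; \operatorname{rank} M_k(f)
    \qquad\text{and}\qquad
    \mathrm{size}(f) \;=\; \sum_{k=0}^{d} \operatorname{rank} M_k(f).
    \]

    Since "rank at most r" is defined by the vanishing of all (r+1)x(r+1) minors, it is a Zariski-closed condition, which is why the single-(source,sink) width classes are closed; the results above show that the trace-ABP model does not retain this property.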

    Variety Membership Testing in Algebraic Complexity Theory

    In this thesis, we study some of the central problems in algebraic complexity theory through the lens of the variety membership testing problem. In the first part, we investigate whether separations between algebraic complexity classes can be phrased as instances of the variety membership testing problem. For this, we compare some complexity classes with their closures. We show that monotone commutative single-(source, sink) ABPs are closed. Further, we prove that multi-(source, sink) ABPs are not closed in either the monotone commutative or the noncommutative setting; however, the corresponding complexity classes are closed in all these settings. Next, we observe a separation between the complexity class VQP and the closure of VNP. In the second part, we cover the blackbox polynomial identity testing (PIT) problem and the rank computation problem for symbolic matrices, both of which can be phrased as instances of the variety membership testing problem. For blackbox PIT, we give a randomized polynomial-time algorithm that uses a number of random bits matching the information-theoretic lower bound, differing from it only in lower-order terms. For the rank computation problem, we give a deterministic polynomial-time approximation scheme (PTAS) when the degrees of the entries of the matrices are bounded by a constant. Finally, we show NP-hardness of two problems on 3-tensors, both of which are instances of the variety membership testing problem: the first is the orbit closure containment problem for the action of GL_k x GL_m x GL_n on 3-tensors, and the second is to decide whether the slice rank of a given 3-tensor is at most r.
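
    As a point of reference for the last result (the definition is standard, not specific to this thesis): a 3-tensor T over a field has slice rank 1 if it factors as a vector in one of its three modes times a matrix in the remaining two, and the slice rank of T is the minimal number of such terms,

    \[
    \operatorname{sr}(T) \;=\; \min\Bigl\{\, r : T = \sum_{i=1}^{r} T_i,\ \ T_i(x,y,z) \in \{\, a_i(x) B_i(y,z),\ a_i(y) B_i(x,z),\ a_i(z) B_i(x,y) \,\} \Bigr\}.
    \]

    Deciding whether sr(T) <= r for a given 3-tensor T and bound r is the second NP-hard problem mentioned above.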

    IST Austria Technical Report

    We consider the problem of expected cost analysis over nondeterministic probabilistic programs, which aims at automated methods for analyzing the resource usage of such programs. Previous approaches for this problem could only handle nonnegative bounded costs. However, in many scenarios, such as queuing networks or the analysis of cryptocurrency protocols, both positive and negative costs are necessary and the costs are unbounded as well. In this work, we present a sound and efficient approach to obtain polynomial bounds on the expected accumulated cost of nondeterministic probabilistic programs. Our approach can handle (a) general positive and negative costs with bounded updates to variables, and (b) nonnegative costs with general updates to variables. We show that several natural examples which could not be handled by previous approaches are captured in our framework. Moreover, our approach leads to an efficient polynomial-time algorithm, whereas no previous approach for cost analysis of probabilistic programs could guarantee polynomial runtime. Finally, we show the effectiveness of our approach by presenting experimental results on a variety of programs, motivated by real-world applications, for which we efficiently synthesize tight resource-usage bounds.
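
    A minimal sketch of the style of certificate behind such bounds, stated here only for the simpler case of nonnegative costs and in our own notation (the contribution described above is precisely to extend this kind of reasoning to signed costs with bounded updates, under suitable side conditions): suppose a function phi maps program states to nonnegative reals and satisfies

    \[
    \varphi(s) \;\ge\; c(s) \;+\; \sup_{\text{nondeterministic choices}} \mathbb{E}\bigl[\varphi(s')\bigr]
    \qquad \text{for every non-terminal state } s,
    \]

    where c(s) is the cost incurred at s and s' is the probabilistic successor state. Then the expected accumulated cost from any initial state s_0 is at most phi(s_0). Restricting the search to polynomial phi of bounded degree is the usual way such a proof rule is turned into an automated, template-based synthesis procedure.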

    Ranking and Repulsing Supermartingales for Reachability in Probabilistic Programs

    Computing reachability probabilities is a fundamental problem in the analysis of probabilistic programs. This paper aims at a comprehensive and comparative account of various martingale-based methods for over- and under-approximating reachability probabilities. Building on existing works that stretch across different communities (formal verification, control theory, etc.), we offer a unifying account. In particular, we emphasize the role of order-theoretic fixed points, a classic topic in computer science, in the analysis of probabilistic programs. This perspective also leads us to two new martingale-based techniques. We give rigorous proofs of their soundness and completeness. We also make an experimental comparison using our implementation of template-based synthesis algorithms for those martingales.
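
    One representative certificate from this family, in the over-approximation direction (a sketch in our notation, not a statement of the paper's new techniques): a nonnegative supermartingale that is at least 1 on the target set bounds the reachability probability from above. Concretely, if eta maps states to nonnegative reals and

    \[
    \eta(s) \,\ge\, 1 \ \ \text{for } s \in T,
    \qquad
    \eta(s) \,\ge\, \mathbb{E}\bigl[\eta(s')\bigr] \ \ \text{for } s \notin T,
    \quad\Longrightarrow\quad
    \Pr\bigl[\text{reach } T \text{ from } s_0\bigr] \,\le\, \eta(s_0),
    \]

    by the optional-stopping (Ville-type) inequality for nonnegative supermartingales. The reachability probability itself arises as the least fixed point of a Bellman-style operator, which is the order-theoretic perspective the paper builds on.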

    Contributions to the moment-SOS approach in global polynomial optimization

    Polynomial optimization is concerned with problems (P): f* = min { f(x) : x in K }, where f is a real polynomial in n variables x = (x_1, ..., x_n) and K is a basic semi-algebraic subset of R^n, i.e., a set defined by finitely many polynomial inequality constraints, K = { x in R^n : g_j(x) <= 0, j = 1, ..., m }. This subfield of optimization has emerged over the last decade from the combination of two ingredients: powerful results from real algebraic geometry, and semidefinite programming, which makes it possible to exploit them. The outcome is a general methodology (the "moment-SOS" approach) that approximates the global optimum of (P) as closely as desired by solving a hierarchy of convex relaxations. However, each relaxation is a semidefinite program whose size grows with the rank in the hierarchy, so, given the current state of semidefinite solvers, the methodology is limited to problems (P) of modest size unless symmetries or sparsity are present in the definition of (P). This thesis therefore addresses the question: can the moment-SOS methodology still be used to help solve (P) when only a few relaxations of the hierarchy (or even a single one) can be solved, and if so, how? We provide two contributions.

    * In a first contribution we consider MINLP problems on a box B = [x_L, x_U] of R^n and propose a moment-SOS approach to construct polynomial convex underestimators of the objective f (when it is nonconvex) and of -g_j whenever the polynomial g_j in the constraint g_j(x) <= 0 is not concave. Such (relatively large) problems are usually solved by branch-and-bound methods, and for obvious efficiency reasons one must quickly compute, at each node of the search tree, a lower bound on the global optimum; this is done with convex relaxations obtained by replacing f (and the nonconcave g_j) with convex underestimators. The novelty with respect to previous work is that we compute a polynomial convex underestimator p of f that directly minimizes the natural tightness criterion, namely the L1 norm of f - h on B, over all convex polynomials h of fixed degree d; earlier constructions do not take this criterion into account directly. The moment-SOS approach turns out to be well suited to this task, and numerical experiments on a sample of non-trivial examples show that p outperforms the classical alpha-BB underestimators and their variants, not only with respect to the tightness score but also in terms of the lower bounds obtained by minimizing p (instead of f) on B. Similar improvements occur when the moment-SOS underestimator replaces the alpha-BB one in refinements of the alpha-BB method.
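
    A compact way to state the optimization problem behind this first contribution (a sketch consistent with the description above; d and the box B are as in the text): among convex polynomials h of degree at most d that underestimate f on B, pick the one closest to f in L1 norm. Since every feasible h satisfies h <= f on B, the L1 distance reduces to the integral of f - h, which is linear in the coefficients of h:

    \[
    \min_{h \in \mathbb{R}[x]_{\le d}} \ \int_{B} \bigl(f(x) - h(x)\bigr)\,dx
    \quad \text{s.t.} \quad
    f - h \ \ge\ 0 \ \text{on } B,
    \qquad
    \nabla^{2} h(x) \ \succeq\ 0 \ \text{for all } x \in B.
    \]

    Replacing the two positivity constraints by SOS certificates of bounded degree turns this into a semidefinite program of the moment-SOS type; the moments of the Lebesgue measure on a box are available in closed form, which makes the linear objective explicitly computable.
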
    * In a second contribution we propose an algorithm that uses an optimal solution of a (slightly modified) semidefinite relaxation of the moment-SOS hierarchy, say the one of order k, to provide a feasible solution for the initial problem (P), with no rounding procedure. This idea has already been exploited for some 0/1 combinatorial problems, sometimes with remarkable performance guarantees (e.g., for MAXCUT). We treat the first variable x_1 of x = (x_1, ..., x_n) as a parameter ranging over a bounded interval Y_1 of R, so that f* = min { J(y) : y in Y_1 } with the optimal-value function J(y) := inf { f(x) : x in K, x_1 = y }. This reduces the original n-dimensional problem (P) to an equivalent one-dimensional problem on an interval; of course, determining J exactly is even harder than solving (P), since one now has to determine a function rather than a point of R^n, an infinite-dimensional problem. The idea is instead to approximate J on Y_1 by a univariate polynomial p of fixed degree d, which can be computed by solving a semidefinite relaxation associated with the parametric optimization problem. The degree d governs the size of that relaxation; the larger d, the better p approximates J (one may show that p converges to J in a strong sense on Y_1 as d increases), but the relaxation also becomes harder to solve, so in practice d is kept small. Once p has been computed, one computes a global minimizer x_1* of p on Y_1 (an easy convex univariate problem) and fixes x_1 = x_1*. The process is iterated on x_2 (taken as a parameter in an interval Y_2), then on x_3, and so on, until a complete point x* of R^n has been computed. Since x* is in general not feasible, it is used as a starting point of a local optimization procedure that returns a final feasible point in K. When K is convex, the following variant is implemented: x_2* is computed with x_1 fixed at x_1*, then x_3* with x_1 and x_2 fixed at x_1* and x_2*, and so on, so that the resulting point x* is feasible, i.e., x* in K. The same variant applies to 0/1 programs for which feasibility is easy to check, e.g., MAXCUT, k-CLUSTER or 0/1-KNAPSACK. The experimental results obtained on many examples are very encouraging and promising.
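
    Schematically, one pass of this second contribution reads as follows (our compact restatement of the procedure described above):

    \[
    J_1(y) := \inf\{\, f(x) : x \in K,\ x_1 = y \,\},\ \ y \in Y_1;
    \qquad
    p_1 \approx J_1 \ \text{on } Y_1 \ \ (\deg p_1 = d, \ \text{via one SDP relaxation});
    \qquad
    x_1^{*} \in \arg\min_{y \in Y_1} p_1(y),
    \]

    after which x_2 is treated as the parameter (with x_1 fixed at x_1^* in the convex and 0/1 variants) and so on, until all n coordinates have been assigned; a local solver is then run from the resulting point whenever it is not already feasible.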