114 research outputs found

    La sfortuna di Brecht

    Storytelling, Memory, Theatre

    In the history of Western civilisation, the spread of writing, and later of the book, obviously did not entirely replace oral culture and communication; rather, it produced a dialectical relationship between the two. In particular, memory gradually shifted away from the human mind, which tended to limit itself to recalling the notions needed to locate facts stored in documents, books and, more recently, audiovisual recordings and electronic databases. The article foregrounds the most important aspects of this process through a series of especially significant examples of the relationship between words and the other means of expression and communication available to the human body.

    Be greedy and learn: efficient and certified algorithms for parametrized optimal control problems

    We consider parametrized linear-quadratic optimal control problems and provide their online-efficient solutions by combining greedy reduced basis methods and machine learning algorithms. To this end, we first extend the greedy control algorithm, which builds a reduced basis for the manifold of optimal final-time adjoint states, to the setting where the objective functional consists of a penalty term measuring the deviation from a desired state and a term describing the control energy. Afterwards, we apply machine learning surrogates to accelerate the online evaluation of the reduced model. The error estimates proven for the greedy procedure are further transferred to the machine learning models and thus allow for efficient a posteriori error certification. We discuss the computational costs of all considered methods in detail and show, by means of two numerical examples, the tremendous potential of the proposed methodology.

    Generalized Conditional Gradient with Augmented Lagrangian for Composite Minimization

    In this paper we propose a splitting scheme, which we call the CGALP algorithm, hybridizing the generalized conditional gradient with a proximal step, for minimizing the sum of three proper, convex and lower-semicontinuous functions in real Hilbert spaces. The minimization is subject to an affine constraint, which in particular allows composite problems (sums of more than three functions) to be handled separately via the usual product-space technique. While classical conditional gradient methods require Lipschitz continuity of the gradient of the differentiable part of the objective, CGALP needs only differentiability (on an appropriate subset), hence circumventing the intricate question of Lipschitz continuity of gradients. For the two remaining functions in the objective, we do not require any additional regularity assumption. The second function, possibly nonsmooth, is assumed simple, i.e., the associated proximal mapping is easily computable. For the third function, again possibly nonsmooth, we only assume that its domain is weakly compact and that a linearly perturbed minimization oracle is accessible. In particular, this last function can be chosen as the indicator of a nonempty bounded closed convex set, in order to deal with additional constraints. Finally, the affine constraint is addressed by the augmented Lagrangian approach. Our analysis is carried out for a wide choice of algorithm parameters satisfying so-called "open loop" rules. As main results, under mild conditions, we show asymptotic feasibility with respect to the affine constraint, boundedness of the dual multipliers, and convergence of the Lagrangian values to the saddle-point optimal value. We also provide (subsequential) rates of convergence for both the feasibility gap and the Lagrangian values.

    In this work, we propose a splitting scheme for nonsmooth optimization, hybridizing the conditional gradient with a proximal step, which we call CGALP, to minimize a sum of proper, closed, convex functions over a compact subset of R^n. The minimization is moreover subject to an affine constraint, handled by an augmented Lagrangian, which in particular allows composite problems involving several functions to be treated via a product-space technique. Some of the functions may be nonsmooth, provided their proximal operator is simple to compute. Our analysis and convergence guarantees hold for a wide choice of "open loop" parameters. As main results, we show asymptotic feasibility of the primal variable, convergence of every subsequence to a solution of the primal problem, convergence of the dual variable to a solution of the dual problem, and convergence of the Lagrangian values. Convergence rates are also provided. Implications and illustrations of the algorithm in data processing are discussed.
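
As a rough illustration of the main ingredients (a linear minimization oracle, an "open loop" step size, and augmented Lagrangian multiplier updates), here is a toy sketch on a hypothetical problem: minimizing a quadratic over a Euclidean ball subject to an affine constraint. It omits the proximal step and the full CGALP analysis; all problem data and parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, radius = 20, 5, 5.0
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = A @ (0.3 * c)               # ensure a feasible point exists in the ball

def lmo_ball(g, r=radius):
    """Linear minimization oracle for the Euclidean ball of radius r."""
    ng = np.linalg.norm(g)
    return np.zeros_like(g) if ng == 0 else -r * g / ng

x = np.zeros(n)                 # primal iterate
lam = np.zeros(m)               # dual multipliers
rho = 1.0                       # augmented Lagrangian penalty
for k in range(2000):
    # Gradient of the smooth part of the augmented Lagrangian:
    # 0.5||x - c||^2 + lam^T (Ax - b) + (rho/2)||Ax - b||^2
    grad = (x - c) + A.T @ lam + rho * A.T @ (A @ x - b)
    s = lmo_ball(grad)                       # conditional gradient direction
    gamma = 2.0 / (k + 2.0)                  # "open loop" step size rule
    x = x + gamma * (s - x)
    lam = lam + rho * gamma * (A @ x - b)    # damped multiplier ascent

print(np.linalg.norm(A @ x - b))             # feasibility gap
```

The iterate stays in the ball by construction (it is always a convex combination of ball points), while the multiplier updates drive the feasibility gap down, mirroring the asymptotic feasibility result stated in the abstract.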

    Regularization Properties of Dual Subgradient Flow

    Dual gradient descent combined with early stopping is an efficient alternative to the Tikhonov variational approach when the regularizer is strongly convex. However, in many relevant applications it is crucial to deal with regularizers that are only convex. In this setting the dual problem is nonsmooth, and dual gradient descent cannot be used. In this paper, we study the regularization properties of a dual subgradient flow, and we show that the proposed procedure achieves the same recovery accuracy as penalization methods while being more efficient from the computational perspective.
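
The regularizing effect of early stopping that the abstract builds on can already be seen in the classical Landweber (primal gradient) iteration. The sketch below uses that simpler stand-in, not the dual subgradient flow studied in the paper; the ill-posed operator, noise level, and step size are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
# Synthetic ill-posed operator with rapidly decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / (np.arange(1, n + 1) ** 2)
A = U @ np.diag(s) @ V.T

x_true = V[:, 0] + 0.5 * V[:, 1]          # mostly low-frequency content
y = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy data

tau = 0.9 / s[0] ** 2                     # step size below 1 / ||A||^2
x = np.zeros(n)
errors = []
for k in range(5000):
    x = x - tau * A.T @ (A @ x - y)       # Landweber / gradient step
    errors.append(np.linalg.norm(x - x_true))

best = int(np.argmin(errors))
print(best, errors[best], errors[-1])
```

The reconstruction error typically decreases and then increases again as noise is amplified (semi-convergence), so stopping the flow early acts as the regularization, with the stopping index playing the role of the penalization parameter.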

    Gradient conditionnel généralisé et lagrangien augmenté pour la minimisation composite

    National audience

    In this work, we propose a splitting scheme for nonsmooth optimization, hybridizing the conditional gradient with a proximal step, which we call CGALP, to minimize a sum of proper, closed, convex functions over a compact subset of R^n. The minimization is moreover subject to an affine constraint, handled by an augmented Lagrangian, which in particular allows composite problems involving several functions to be treated via a product-space technique. Some of the functions may be nonsmooth, provided their proximal operator is simple to compute. Our analysis and convergence guarantees hold for a wide choice of open-loop parameters. As main results, we show asymptotic feasibility of the primal variable, convergence of every subsequence to a solution of the primal problem, convergence of the dual variable to a solution of the dual problem, and convergence of the Lagrangian values. Convergence rates are also provided. Implications and illustrations of the algorithm in data processing are discussed.

    Zeroth order optimization with orthogonal random directions

    We propose and analyze a randomized zeroth-order approach based on approximating the exact gradient by finite differences computed in a set of orthogonal random directions that changes with each iteration. A number of previously proposed methods are recovered as special cases, including spherical smoothing, coordinate descent, and discretized gradient descent. Our main contribution is proving convergence guarantees as well as convergence rates under different parameter choices and assumptions. In particular, we consider convex objectives, but also possibly non-convex objectives satisfying the Polyak-Łojasiewicz (PL) condition. Theoretical results are complemented and illustrated by numerical experiments.
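
A minimal sketch of the finite-difference estimator with orthogonal random directions, obtained here via a QR factorization of a Gaussian matrix; function names, step sizes, and parameters are illustrative, not the paper's exact scheme.

```python
import numpy as np

def orthogonal_zo_gradient(f, x, h=1e-6, num_dirs=None, rng=None):
    """Central finite-difference gradient estimate of f at x along
    orthogonal random directions (columns of Q from a QR factorization
    of a Gaussian matrix, redrawn at each call)."""
    if rng is None:
        rng = np.random.default_rng()
    n = x.size
    k = num_dirs if num_dirs is not None else n
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    g = np.zeros(n)
    for i in range(k):
        d = Q[:, i]
        g += (f(x + h * d) - f(x - h * d)) / (2 * h) * d
    return g

# Usage: minimize a convex quadratic using only function evaluations.
f = lambda x: 0.5 * np.sum(x ** 2)
x = np.ones(5)
rng = np.random.default_rng(3)
for _ in range(200):
    g = orthogonal_zo_gradient(f, x, rng=rng)
    x -= 0.5 * g
print(f(x))
```

With `num_dirs` equal to the full dimension the directions form an orthonormal basis and the estimate approximates the exact gradient; with fewer directions one recovers cheaper randomized variants, which is the regime the convergence rates in the paper address.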