
    Expansions of the real field by open sets: definability versus interpretability

    An open set $U$ of real numbers is produced such that the expansion $(\mathbb{R},+,\cdot,U)$ of the real field by $U$ defines a Borel isomorph of $(\mathbb{R},+,\cdot,\mathbb{N})$ but does not define $\mathbb{N}$. It follows that $(\mathbb{R},+,\cdot,U)$ defines sets in every level of the projective hierarchy but does not define all projective sets. This result is elaborated in various ways involving geometric measure theory and work over o-minimal expansions of $(\mathbb{R},+,\cdot)$. In particular, there is a Cantor subset $K$ of $\mathbb{R}$ such that for every exponentially bounded o-minimal expansion $M$ of $(\mathbb{R},+,\cdot)$, every subset of $\mathbb{R}$ definable in $(M,K)$ either has interior or is Hausdorff null.

    Learning filter functions in regularisers by minimising quotients

    Learning approaches have recently become very popular in the field of inverse problems. A large variety of methods has been established in recent years, ranging from bi-level learning to high-dimensional machine learning techniques. Most learning approaches, however, only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In this paper, we follow up on the idea of learning parametrised regularisation functions by quotient minimisation, as established in [3]. We extend the model therein to include higher-dimensional filter functions to be learned, and allow for fit and misfit training data consisting of multiple functions. We first present results that resemble the behaviour of well-established derivative-based sparse regularisers, such as total variation or higher-order total variation, in one dimension. Our second and main contribution is the introduction of novel families of non-derivative-based regularisers. This is accomplished by learning favourable scales and geometric properties while at the same time avoiding unfavourable ones.
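
    A minimal numerical sketch of the quotient-minimisation idea (my own toy setup, not the authors' model): piecewise-constant signals serve as fit data, random noise as misfit data, the regulariser is the $\ell^1$ norm of the response of a short filter $h$, and the quotient of the two responses is minimised over unit-norm filters with a derivative-free search. All names and parameters below are illustrative assumptions.

        # Toy sketch: learn a 1-D filter h by minimising
        #   sum_i ||h * u_i_fit||_1  /  sum_j ||h * u_j_misfit||_1
        # over unit-norm filters, so that R(u) = ||h * u||_1 is small on
        # favourable signals and large on unfavourable ones.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n = 128
        t = np.linspace(0.0, 1.0, n)

        # favourable ("fit") signals: piecewise constant; unfavourable ("misfit"): noise
        fit_data = [np.where(t < c, 0.0, 1.0) for c in (0.3, 0.5, 0.7)]
        misfit_data = [rng.standard_normal(n) for _ in range(3)]

        def quotient(h):
            h = h / (np.linalg.norm(h) + 1e-12)   # the quotient depends only on the direction of h
            num = sum(np.abs(np.convolve(u, h, mode="valid")).sum() for u in fit_data)
            den = sum(np.abs(np.convolve(u, h, mode="valid")).sum() for u in misfit_data)
            return num / (den + 1e-12)

        res = minimize(quotient, rng.standard_normal(5), method="Nelder-Mead")
        h_learned = res.x / np.linalg.norm(res.x)
        print("learned filter:", np.round(h_learned, 3))

    With this kind of data the learned filter tends to act like a finite-difference stencil, i.e. a derivative-based regulariser of the sort the paper takes as its point of departure before moving to non-derivative-based families.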

    Denjoy-Carleman differentiable perturbation of polynomials and unbounded operators

    Let $t\mapsto A(t)$, $t\in T$, be a $C^M$-mapping whose values are unbounded operators with compact resolvents and common domain of definition, and which are self-adjoint or normal. Here $C^M$ stands for $C^\omega$ (real analytic), a quasianalytic or non-quasianalytic Denjoy-Carleman class, $C^\infty$, or a Hölder continuity class $C^{0,\alpha}$. The parameter domain $T$ is either $\mathbb{R}$, $\mathbb{R}^n$, or an infinite-dimensional convenient vector space. We prove and review results on the $C^M$-dependence on $t$ of the eigenvalues and eigenvectors of $A(t)$.
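
    A finite-dimensional toy illustration of the kind of dependence in question (my own example, not taken from the paper): along the real-analytic curve of symmetric matrices $A(t)=\begin{pmatrix}1+t & t\\ t & 1-t\end{pmatrix}$ the eigenvalues admit the real-analytic parametrisation $1\pm\sqrt{2}\,t$, whereas the increasingly ordered eigenvalues $1\pm\sqrt{2}\,|t|$ returned by a numerical solver are merely continuous at the crossing $t=0$.

        # Toy analogue (illustration only): compare the ordered eigenvalues of a
        # real-analytic curve of symmetric matrices with a smooth choice of branches.
        import numpy as np

        def A(t):
            return np.array([[1.0 + t, t],
                             [t, 1.0 - t]])

        for t in np.linspace(-1.0, 1.0, 5):
            ordered = np.linalg.eigvalsh(A(t))                            # increasing order: kinked at t = 0
            analytic = (1.0 - np.sqrt(2.0) * t, 1.0 + np.sqrt(2.0) * t)   # smooth branches
            print(f"t = {t:+.2f}  ordered = {np.round(ordered, 3)}  analytic = {np.round(analytic, 3)}")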

    Upper and Lower Bounds on Sizes of Finite Bisimulations of Pfaffian Dynamical Systems

    In this paper we study a class of dynamical systems defined by Pfaffian maps. It is a subclass of o-minimal dynamical systems which captures rich continuous dynamics and yet can be studied using finite bisimulations. The existence of finite bisimulations for o-minimal dynamical and hybrid systems has been shown by several authors; see e.g. Brihaye et al. (2004), Davoren (1999), Lafferriere et al. (2000). The next natural question is how the sizes of such bisimulations can be bounded. A first step in this direction was taken by Korovina et al. (2004), where a double exponential upper bound was shown for Pfaffian dynamical and hybrid systems. In the present paper we improve this to a single exponential upper bound. Moreover, we show that this bound is tight in general, by exhibiting a parameterized class of systems on which the exponential bound is attained. The bounds provide a basis for designing efficient algorithms for computing bisimulations and for solving reachability and motion planning problems.
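
    To make the notion of a finite bisimulation concrete, here is a generic, naive partition-refinement sketch for a finite transition system (an illustration of the concept only; the bisimulations in the paper come from the o-minimal/Pfaffian structure of continuous systems, not from this procedure, and all names below are my own).

        # Naive computation of the coarsest bisimulation of a finite transition system
        # that refines a given initial partition (e.g. a partition by observable regions).
        def coarsest_bisimulation(trans, initial_blocks):
            """trans maps a state to its set of successors; initial_blocks partitions the state set."""
            partition = [frozenset(b) for b in initial_blocks]
            changed = True
            while changed:
                changed = False
                new_partition = []
                for block in partition:
                    # group the states of `block` by the set of blocks they can reach in one step
                    signature = {}
                    for s in block:
                        sig = frozenset(i for i, b in enumerate(partition) if trans.get(s, set()) & b)
                        signature.setdefault(sig, set()).add(s)
                    pieces = [frozenset(p) for p in signature.values()]
                    if len(pieces) > 1:
                        changed = True
                    new_partition.extend(pieces)
                partition = new_partition
            return partition

        # tiny example: four states, initially partitioned by a two-valued observation
        trans = {1: {2}, 2: {3}, 3: {3}, 4: {3}}
        print(coarsest_bisimulation(trans, [{1, 2, 4}, {3}]))
        # -> three blocks; states 2 and 4 are bisimilar, state 1 is not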

    From error bounds to the complexity of first-order descent methods for convex functions

    This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (such as functions with Hölderian growth). A counterexample shows that the equivalence is no longer true for extremely flat functions. This fact reveals the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods. As a byproduct of our study, we also provide new results for the globalization of KL inequalities in the convex framework. Our main results inaugurate a simple methodology: derive an error bound, compute the desingularizing function whenever possible, identify the essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated through projection methods for feasibility problems, and through the famous iterative shrinkage-thresholding algorithm (ISTA), for which we show that the complexity bound is of the form $O(q^{k})$, where the constituents of the bound depend only on error bound constants obtained for an arbitrary least squares objective with $\ell^1$ regularization.
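
    For concreteness, a textbook sketch of ISTA applied to the $\ell^1$-regularised least-squares objective mentioned at the end of the abstract, i.e. $\min_x \tfrac12\|Ax-b\|_2^2+\lambda\|x\|_1$, with the usual fixed step $1/L$ where $L=\|A\|_2^2$ (this is the standard scheme the complexity bound refers to, not a reproduction of the paper's analysis; the data below are synthetic).

        # Standard ISTA: gradient step on the smooth least-squares part followed by
        # soft-thresholding (the proximal map of the l1 term).
        import numpy as np

        def soft_threshold(v, tau):
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        def ista(A, b, lam, n_iter=500):
            L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)               # gradient of 0.5*||Ax - b||^2
                x = soft_threshold(x - grad / L, lam / L)
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[:5] = 1.0
        x_hat = ista(A, A @ x_true, lam=0.1)
        print("coefficients above 1e-3:", np.count_nonzero(np.abs(x_hat) > 1e-3))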

    On the stable equilibrium points of gradient systems

    This paper studies the relations between the local minima of a cost function f and the stable equilibria of the gradient descent flow of f. In particular, it is shown that, under the assumption that f is real analytic, local minimality is necessary and sufficient for stability. Under the weaker assumption that f is infinitely continuously differentiable ($C^\infty$), local minimality is neither necessary nor sufficient for stability.
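
    A small numerical illustration of the real-analytic case (my own toy example, not from the paper): $f(x,y)=x^4+y^2$ is real analytic with a local minimum at the origin, so the origin is a stable equilibrium of the gradient flow $\dot p=-\nabla f(p)$; trajectories started at small perturbations stay close to the origin and approach it.

        # Integrate the gradient flow p' = -grad f(p) with forward Euler for
        # f(x, y) = x**4 + y**2 and watch nearby trajectories approach the origin.
        import numpy as np

        def grad_f(p):
            x, y = p
            return np.array([4.0 * x**3, 2.0 * y])

        def gradient_flow(p0, dt=1e-2, n_steps=20000):
            p = np.array(p0, dtype=float)
            for _ in range(n_steps):
                p = p - dt * grad_f(p)                 # forward Euler step
            return p

        for p0 in ([0.1, 0.1], [-0.2, 0.05], [0.3, -0.3]):
            print(p0, "->", np.round(gradient_flow(p0), 4))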