
    A Generalized Newton Method for Subgradient Systems

    This paper proposes and develops a new Newton-type algorithm to solve subdifferential inclusions defined by subgradients of extended-real-valued prox-regular functions. The proposed algorithm is formulated in terms of the second-order subdifferential of such functions, which enjoys extensive calculus rules and can be efficiently computed for broad classes of extended-real-valued functions. Based on this and on metric regularity and subregularity properties of subgradient mappings, we establish verifiable conditions ensuring well-posedness of the proposed algorithm and its local superlinear convergence. The obtained results are also new for the class of equations defined by continuously differentiable functions with Lipschitzian derivatives ($\mathcal{C}^{1,1}$ functions), which is the underlying case of our consideration. The developed algorithm for prox-regular functions is formulated in terms of proximal mappings related to Moreau envelopes. Besides numerous illustrative examples and comparisons with known algorithms for $\mathcal{C}^{1,1}$ functions and generalized equations, the paper presents applications of the proposed algorithm to the practically important class of Lasso problems arising in statistics and machine learning. Comment: 35 pages
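
    The following is a minimal numerical sketch, not the paper's algorithm: it applies a textbook semismooth Newton step to the proximal-gradient fixed-point residual of a Lasso problem, which is one concrete way a generalized Newton method meets the $\mathcal{C}^{1,1}$/prox-regular setting described above. The data names (A, b, lam) and the step size t are assumptions of this sketch.

    # Illustrative sketch only, not the method developed in the paper.
    # Semismooth Newton on the Lasso fixed-point residual
    #   G(x) = x - prox_{t*lam*||.||_1}(x - t*A^T(Ax - b)).
    import numpy as np

    def soft_threshold(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def semismooth_newton_lasso(A, b, lam, t=None, iters=50, tol=1e-10):
        n = A.shape[1]
        if t is None:
            t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L with L = ||A||_2^2
        x = np.zeros(n)
        for _ in range(iters):
            z = x - t * (A.T @ (A @ x - b))          # forward (gradient) point
            G = x - soft_threshold(z, t * lam)       # residual; x solves the Lasso iff G(x) = 0
            if np.linalg.norm(G) < tol:
                break
            d = (np.abs(z) > t * lam).astype(float)  # diagonal of a generalized Jacobian of the prox
            J = np.eye(n) - d[:, None] * (np.eye(n) - t * (A.T @ A))
            x = x + np.linalg.solve(J, -G)           # pure Newton step, no globalization
        return x

    Near a solution, and when the active columns of A are well conditioned, this local scheme converges quickly; a practical version would add a line search or fall back to proximal-gradient steps when J is close to singular.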

    More Than 1700 Years of Word Equations

    Geometry and Diophantine equations have been ever-present in mathematics. Diophantus of Alexandria was born in the 3rd century (as far as we know), but a systematic mathematical study of word equations began only in the 20th century. So, the title of the present article does not seem to be justified at all. However, a linear Diophantine equation can be viewed as a special case of a system of word equations over a unary alphabet, and, more importantly, a word equation can be viewed as a special case of a Diophantine equation. Hence, the problem WordEquations: "Is a given word equation solvable?" is intimately related to Hilbert's 10th problem on the solvability of Diophantine equations. This became clear to the Russian school of mathematics at the latest in the mid-1960s, after which a systematic study of that relation began. Here, we review some recent developments which led to an amazingly simple decision procedure for WordEquations, and to the description of the set of all solutions as an EDT0L language. Comment: The paper will appear as an invited address in the LNCS proceedings of CAI 2015, Stuttgart, Germany, September 1-4, 2015
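
    As a toy illustration of the unary-alphabet reduction (not taken from the article): over the alphabet {a} with unknowns x and y, concatenation only records length, so the word equation xxa = yaaa is solvable exactly when the linear Diophantine equation 2|x| + 1 = |y| + 3 has a solution in non-negative integers; |x| = 1 and |y| = 0 works, i.e. x = a and y the empty word, and both sides become aaa. Conversely, encoding words as numbers turns a word equation into a Diophantine equation, which is the link to Hilbert's 10th problem mentioned above.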

    Groups with context-free co-word problem

    The class of co-context-free groups is studied. A co-context-free group is defined as one whose co-word problem (the complement of its word problem) is context-free. This class is larger than the subclass of context-free groups, being closed under the taking of finite direct products, restricted standard wreath products with context-free top groups, and passing to finitely generated subgroups and finite index overgroups. No other examples of co-context-free groups are known. It is proved that the only examples amongst polycyclic groups or the Baumslag–Solitar groups are virtually abelian. This is done by proving that languages with certain purely arithmetical properties cannot be context-free; this result may be of independent interest.
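
    As a hedged illustration of the definition (not an example from the paper): for the infinite cyclic group Z with generator a (inverse written A), the co-word problem is the set of words over {a, A} whose exponent sum is nonzero. Such words can be accepted by keeping one non-negative counter plus a sign state, i.e. by a one-counter (hence pushdown) automaton, which is one way to see that this language is context-free. The Python sketch below simulates that counter; the function name and letter conventions are choices of this sketch.

    # Illustrative sketch: decide membership in the co-word problem of Z,
    # i.e. whether a word over {a, A} has nonzero exponent sum, using only
    # a sign state and a non-negative counter (one-counter automaton style).
    def in_coword_problem_of_Z(word: str) -> bool:
        sign, counter = 0, 0                 # sign in {-1, 0, +1}, counter >= 0
        for letter in word:
            step = {'a': +1, 'A': -1}[letter]
            value = sign * counter + step    # apply +/-1 to the signed count
            sign = (value > 0) - (value < 0)
            counter = abs(value)
        return counter != 0                  # nonzero exponent sum: not the identity

    # e.g. in_coword_problem_of_Z("aAa") is True  (the word represents a != 1 in Z)
    #      in_coword_problem_of_Z("aAaA") is False (the word represents the identity)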

    Unification in the union of disjoint equational theories: combining decision procedures

    Get PDF
    Most of the work on the combination of unification algorithms for the union of disjoint equational theories has been restricted to algorithms which compute finite complete sets of unifiers. Thus the developed combination methods usually cannot be used to combine decision procedures, i.e., algorithms which just decide solvability of unification problems without computing unifiers. In this paper we describe a combination algorithm for decision procedures which works for arbitrary equational theories, provided that solvability of so-called unification problems with constant restrictions (a slight generalization of unification problems with constants) is decidable for these theories. As a consequence of this new method, we can for example show that general A-unifiability, i.e., solvability of A-unification problems with free function symbols, is decidable. Here A stands for the equational theory of one associative function symbol. Our method can also be used to combine algorithms which compute finite complete sets of unifiers. Manfred Schmidt-Schauß's combination result, until now the most general result in this direction, can be obtained as a consequence of this fact. We also get the new result that unification in the union of disjoint equational theories is finitary if general unification, i.e., unification of terms with additional free function symbols, is finitary in the single theories.
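
    To make the central notion concrete, here is a toy instance (not from the paper): a unification problem with constant restrictions is a unification problem with free constants together with a specification of which constants may not occur in the value assigned to which variables. In the syntactic (free) theory, the problem {x =? f(y, a)} with the restriction "a must not occur in the image of y" is solved by {x -> f(b, a), y -> b}, which respects the restriction, whereas {x -> f(a, a), y -> a} solves the problem but violates it. The combination method assumes that each component theory comes with a procedure deciding solvability of such restricted problems.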

    On the existence of optimal multi-valued decoders and their accuracy bounds for undersampled inverse problems

    Undersampled inverse problems occur everywhere in the sciences, including medical imaging, radar, and astronomy, yielding underdetermined linear or non-linear reconstruction problems. There is now a myriad of techniques to design decoders that can tackle such problems, ranging from optimization-based approaches, such as compressed sensing, to deep learning (DL), and variants in between the two. The variety of methods calls for a unifying approach to determine the existence of optimal decoders and fundamental accuracy bounds, in order to facilitate a theoretical and empirical understanding of the performance of existing and future methods. Such a theory must allow for both single-valued and multi-valued decoders, as underdetermined inverse problems typically have multiple solutions. Indeed, multi-valued decoders arise due to non-uniqueness of minimizers in optimization problems, such as in compressed sensing, and for DL-based decoders such as generative adversarial, diffusion, and ensemble models. In this work we provide a framework for assessing the lowest possible reconstruction accuracy in terms of worst- and average-case errors. These universal bounds depend only on the measurement model $F$, the model class $\mathcal{M}_1 \subseteq \mathcal{X}$, and the noise model $\mathcal{E}$. For linear $F$ these bounds depend on its kernel, and in the non-linear case the concept of kernel is generalized for undersampled settings. Additionally, we provide multi-valued variational solutions that obtain the lowest possible reconstruction error.
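
    The following toy computation is a sketch under strong simplifying assumptions (a finite model class, a linear F, Euclidean norms), not the paper's framework: it evaluates the standard "half-diameter" lower bound on the worst-case error of any decoder, which captures the role played by the kernel of F, since two model elements whose difference F cannot see (up to noise) force every decoder to err on at least one of them. The function and variable names are choices of this sketch.

    # Illustrative sketch only: half-diameter lower bound on the worst-case
    # reconstruction error of ANY decoder for a linear forward map F, a finite
    # toy model class M1, and measurement noise of size at most eps.
    import numpy as np

    def worst_case_error_lower_bound(F, M1, eps=0.0):
        bound = 0.0
        for i, x in enumerate(M1):
            for y in M1[i + 1:]:
                if np.linalg.norm(F @ x - F @ y) <= 2 * eps:   # F cannot tell x and y apart
                    bound = max(bound, 0.5 * np.linalg.norm(x - y))
        return bound

    # Toy usage: one measurement of two unknowns; the kernel of F contains the
    # difference of the first two model elements, so the bound is positive.
    F = np.array([[1.0, 1.0]])
    M1 = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 1.0])]
    print(worst_case_error_lower_bound(F, M1))    # prints 0.5 * sqrt(2), about 0.707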