
    Inexact Fixed-Point Proximity Algorithms for Nonsmooth Convex Optimization

    The aim of this dissertation is to develop efficient inexact fixed-point proximity algorithms with guaranteed convergence for nonsmooth convex optimization problems encountered in data science. Nonsmooth convex optimization is one of the core methodologies in data science for acquiring knowledge from real-world data, with wide applications in fields including signal/image processing, machine learning and distributed computing. In particular, in image reconstruction, compressed sensing and sparse machine learning, either the objective functions or the constraints of the underlying optimization problems are nondifferentiable. Hence, traditional methods such as the gradient descent method and Newton's method are not applicable, since gradients of the objective functions or the constraints do not exist. Fixed-point proximity algorithms, built from subdifferentials of the objective function, were developed to address these challenges. The theory of nonexpansive averaged operators has been successfully employed in the existing analysis of exact/inexact fixed-point proximity algorithms for nonsmooth convex optimization. However, this framework imposes restrictive constraints on the algorithm formulation, which slow down convergence and conceal relations between different algorithms. In this work, we characterize the solutions of convex optimization problems as fixed points of certain operators, and then adopt the matrix splitting technique to obtain a framework of fully implicit fixed-point proximity algorithms. This yields a new class of quasiaveraged operators, which extends the class of nonexpansive averaged operators. The framework covers and generalizes most of the existing popular algorithms for nonsmooth convex optimization. To deal with the implicitness of this framework, we draw inspiration from Schur's lemma on the uniform boundedness of infinite matrices and propose a framework of inexact fixed-point iterations of quasiaveraged operators, which generalizes the inexact iterations of nonexpansive averaged operators. Combining the frameworks of inexact fixed-point iterations and implicit fixed-point proximity algorithms leads to a framework of inexact fixed-point proximity algorithms, which further extends existing methods for nonsmooth convex optimization. Numerical experiments on image deblurring problems demonstrate the advantages of inexact fixed-point proximity algorithms over existing explicit algorithms.
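
    The fixed-point characterization at the heart of such algorithms can be illustrated with a minimal, self-contained sketch. The example below uses the explicit proximal-gradient (ISTA) iteration for an l1-regularized least-squares problem, where the proximity operator has a closed form; the function names and the ISTA choice are illustrative assumptions, not the dissertation's implicit/inexact framework.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        # Proximity operator of t*||.||_1; the closed form that makes
        # proximity algorithms practical for this nonsmooth term.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_gradient_l1(A, b, lam, step, iters=500):
        # Solves min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterating the map
        #   T(x) = prox_{step*lam*||.||_1}(x - step * A^T (A x - b)),
        # whose fixed points are exactly the minimizers: the fixed-point
        # characterization the abstract refers to. Requires step < 1/||A^T A||.
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)                         # smooth part
            x = soft_threshold(x - step * grad, step * lam)  # proximity step
        return x
    ```

    An inexact variant would replace the exact proximity step with an approximation whose error is driven to zero across iterations; controlling that error while preserving convergence is what the dissertation's framework addresses.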

    The Asymmetric Maximum Margin Bias of Quasi-Homogeneous Neural Networks

    In this work, we explore the maximum-margin bias of quasi-homogeneous neural networks trained with gradient flow on an exponential loss, past a point of separability. We introduce the class of quasi-homogeneous models, which is expressive enough to describe nearly all neural networks with homogeneous activations, even those with biases, residual connections, and normalization layers, while structured enough to enable geometric analysis of its gradient dynamics. Using this analysis, we generalize the existing results on the maximum-margin bias of homogeneous networks to this richer class of models. We find that gradient flow implicitly favors a subset of the parameters, unlike in the case of a homogeneous model, where all parameters are treated equally. We demonstrate through simple examples how this strong favoritism toward minimizing an asymmetric norm can degrade the robustness of quasi-homogeneous models. On the other hand, we conjecture that this norm minimization discards, when possible, unnecessary higher-order parameters, reducing the model to a sparser parameterization. Lastly, by applying our theorem to sufficiently expressive neural networks with normalization layers, we reveal a universal mechanism behind the empirical phenomenon of Neural Collapse.
    Comment: 33 pages, 5 figures
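
    For readers unfamiliar with the terminology, the contrast between homogeneity and quasi-homogeneity can be sketched as follows; the per-parameter-group exponents written here follow the abstract's description, and the exact convention is the paper's.

    ```latex
    % Homogeneous model of degree L: a single scaling law for every parameter,
    %   f(\alpha\theta;\, x) = \alpha^{L} f(\theta;\, x)  for all  \alpha > 0.
    % Quasi-homogeneous model (sketch): each parameter group \theta_i carries
    % its own exponent q_i, so different groups scale differently:
    \[
      f\bigl(\alpha^{q_1}\theta_1, \dots, \alpha^{q_m}\theta_m;\, x\bigr)
      = \alpha^{d}\, f(\theta_1, \dots, \theta_m;\, x),
      \qquad \alpha > 0.
    \]
    % Unequal exponents are what break the symmetry: the implicit bias of
    % gradient flow then minimizes a norm that weights parameter groups
    % asymmetrically, as the abstract describes.
    ```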

    COARSE-EMOA: An indicator-based evolutionary algorithm for solving equality constrained multi-objective optimization problems

    Many real-world applications involve several conflicting objectives that must be optimized simultaneously. Moreover, these problems may impose constraints that restrict the decision variable space. Evolutionary Algorithms (EAs) are capable of tackling Multi-objective Optimization Problems (MOPs). However, they struggle to accurately approximate feasible solutions when the problem includes equality constraints, because EAs have difficulty finding and keeping solutions that lie exactly on the constraint boundaries. Here, we present an indicator-based evolutionary multi-objective optimization algorithm (EMOA) for tackling Equality Constrained MOPs (ECMOPs). In our proposal, we adopt an artificially constructed reference set that closely resembles the feasible Pareto front of an ECMOP to calculate the Inverted Generational Distance (IGD) of a population, which is then used as a density estimator. An empirical study over a set of benchmark problems, each containing at least one equality constraint, was performed to test the capabilities of our proposed COnstrAined Reference SEt - EMOA (COARSE-EMOA). Our results are compared to those obtained by six other EMOAs. As shown, COARSE-EMOA can properly approximate feasible solutions by guiding the search with an artificially constructed set that approximates the feasible Pareto front of the given problem.
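
    As a rough illustration of how an IGD value over a reference set can drive selection in an indicator-based EMOA, consider the sketch below. The reference-set construction is COARSE-EMOA's contribution and is not reproduced here; the array shapes and helper names are assumptions for illustration.

    ```python
    import numpy as np

    def igd(reference_set, population_objs):
        # Inverted Generational Distance: average distance from each reference
        # point (rows of reference_set) to its nearest member of the population
        # in objective space (smaller is better).
        dists = np.linalg.norm(
            reference_set[:, None, :] - population_objs[None, :, :], axis=2)
        return dists.min(axis=1).mean()

    def worst_contributor(reference_set, population_objs):
        # A typical indicator-based environmental-selection step: discard the
        # individual whose removal increases the population's IGD the least.
        base = igd(reference_set, population_objs)
        increases = [
            igd(reference_set, np.delete(population_objs, i, axis=0)) - base
            for i in range(len(population_objs))
        ]
        return int(np.argmin(increases))
    ```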