    Neurodynamic Optimization: towards Nonconvexity

    A neurodynamic optimization approach to constrained pseudoconvex optimization

    Guo, Zhishan. M.Phil. thesis, Chinese University of Hong Kong, 2011. Includes bibliographical references; abstracts in English and Chinese. Contents: 1. Introduction (Constrained Pseudoconvex Optimization; Recurrent Neural Networks; Thesis Organization); 2. Literature Review (Pseudoconvex Optimization; Recurrent Neural Networks); 3. Model Description and Convergence Analysis (Model Descriptions; Global Convergence); 4. Numerical Examples (Gaussian Optimization; Quadratic Fractional Programming; Nonlinear Convex Programming); 5. Real-time Data Reconciliation (Introduction; Theoretical Analysis and Performance Measurement; Examples); 6. Real-time Portfolio Optimization (Introduction; Model Description; Theoretical Analysis; Illustrative Examples); 7. Conclusions and Future Work (Concluding Remarks; Future Work); A. Publication List; Bibliography.
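
    For context, a minimal sketch of the kind of one-layer recurrent (neurodynamic) model the thesis studies: the classic projection neural network dx/dt = -x + P_Omega(x - grad f(x)), simulated by forward Euler on a pseudoconvex Gaussian-type objective (echoing the Gaussian optimization examples in Chapter 4). The box constraint, objective, and step size are illustrative assumptions, not the thesis's exact model.

    ```python
    import numpy as np

    def project_box(x, lo, hi):
        # P_Omega: projection onto the box constraint set Omega = [lo, hi]^n.
        return np.clip(x, lo, hi)

    def neurodynamic_solve(grad, x0, lo, hi, dt=1e-2, steps=30_000):
        # Forward-Euler simulation of dx/dt = -x + P_Omega(x - grad f(x)).
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            x = x + dt * (-x + project_box(x - grad(x), lo, hi))
        return x

    # Pseudoconvex example: minimize f(x) = -exp(-||x||^2) over the box [0.5, 2]^2,
    # with grad f(x) = 2 x exp(-||x||^2); the state settles at the constrained
    # minimizer (0.5, 0.5).
    grad_f = lambda x: 2.0 * x * np.exp(-np.dot(x, x))
    print(neurodynamic_solve(grad_f, np.array([1.5, 1.5]), 0.5, 2.0))
    ```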

    Self-adaptive algorithms for quasiconvex programming and applications to machine learning

    For a broad class of nonconvex programming problems over an unbounded constraint set, we provide a self-adaptive step-size strategy that requires no line-search techniques, and we establish the convergence of a generic approach under mild assumptions. Specifically, the objective function need not be convex. Unlike descent line-search algorithms, the method does not require a known Lipschitz constant to determine the initial step size. Its crucial feature is the steady reduction of the step size until a certain condition is fulfilled. In particular, it yields a new gradient projection approach for optimization problems over an unbounded constraint set. The correctness of the proposed method is verified by preliminary results on several computational examples. To demonstrate its effectiveness on large-scale problems, we apply it to machine learning experiments such as supervised feature selection, multivariable logistic regression, and neural networks for classification.
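
    A minimal sketch of the kind of self-adaptive rule the abstract describes, assuming a simple projection oracle for the (possibly unbounded) feasible set and a local Lipschitz-style test as the adaptivity condition; the paper's exact condition may differ.

    ```python
    import numpy as np

    def adaptive_gradient_projection(grad, project, x0, step0=1.0, shrink=0.5,
                                     theta=0.9, tol=1e-8, max_iter=10_000):
        # Gradient projection with no known Lipschitz constant: instead of a
        # line search, the step size is steadily reduced until a condition holds.
        x, step = np.asarray(x0, dtype=float), step0
        for _ in range(max_iter):
            g = grad(x)
            while True:
                x_new = project(x - step * g)
                dx, dg = x_new - x, grad(x_new) - g
                # Accept the step only if it is short relative to the observed
                # gradient change (a stand-in for the paper's condition).
                if np.linalg.norm(dx) < tol or \
                   step * np.linalg.norm(dg) <= theta * np.linalg.norm(dx):
                    break
                step *= shrink
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x

    # Toy usage: minimize ||x - 3||^2 over the (unbounded) nonnegative orthant.
    sol = adaptive_gradient_projection(lambda x: 2.0 * (x - 3.0),
                                       lambda x: np.maximum(x, 0.0),
                                       x0=np.zeros(4))
    print(sol)  # approaches [3, 3, 3, 3]
    ```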

    A neurodynamic approach for a class of pseudoconvex semivectorial bilevel optimization problem

    The article proposes an exact approach for finding the global solution of a nonconvex semivectorial bilevel optimization problem in which the objective functions at both levels are pseudoconvex and the constraints are quasiconvex. Owing to its nonconvexity, this problem is challenging, yet it is attracting increasing interest because of its practical applications. The algorithm combines monotonic optimization with a recent neurodynamic approach: the solution set of the lower-level problem is inner-approximated by copolyblocks in the outcome space, and the upper-level problem is then solved by branch-and-bound, where computing the bounds reduces to pseudoconvex programming problems solved with the neurodynamic method. The algorithm's convergence is proved, and computational experiments demonstrate the accuracy of the proposed approach.
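
    A schematic of the branch-and-bound layer described above, assuming axis-aligned boxes and pluggable bound oracles; in the paper the bounds come from pseudoconvex programs solved by the neurodynamic method, which this sketch abstracts away. The toy usage exploits coordinate-wise monotonicity, in the spirit of monotonic optimization.

    ```python
    import heapq
    import itertools
    import numpy as np

    def branch_and_bound(lower_bound, upper_bound, lo, hi, tol=1e-4, max_nodes=10_000):
        # Best-first branch-and-bound over axis-aligned boxes [lo, hi].
        tie = itertools.count()     # tiebreaker so the heap never compares arrays
        best = upper_bound(lo, hi)  # incumbent objective value
        heap = [(lower_bound(lo, hi), next(tie), lo, hi)]
        for _ in range(max_nodes):
            if not heap:
                break
            lb, _, lo, hi = heapq.heappop(heap)
            if lb >= best - tol:
                break               # most promising box cannot improve the incumbent
            i = int(np.argmax(hi - lo))   # branch: split the longest edge
            mid = 0.5 * (lo[i] + hi[i])
            mask = np.arange(lo.size) == i
            for lo_c, hi_c in ((lo, np.where(mask, mid, hi)),
                               (np.where(mask, mid, lo), hi)):
                best = min(best, upper_bound(lo_c, hi_c))
                lb_c = lower_bound(lo_c, hi_c)
                if lb_c < best - tol:     # bound: keep only boxes that may improve
                    heapq.heappush(heap, (lb_c, next(tie), lo_c, hi_c))
        return best

    # Toy usage: f is coordinate-wise increasing, so f(lo) / f(hi) bound it on a box.
    f = lambda x: float(np.sum(np.exp(x)))
    print(branch_and_bound(lambda lo, hi: f(lo), lambda lo, hi: f(hi),
                           np.array([-1.0, -1.0]), np.array([1.0, 1.0])))  # ~2/e
    ```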

    A Framework for Controllable Pareto Front Learning with Completed Scalarization Functions and its Applications

    Pareto Front Learning (PFL) was recently introduced as an efficient method for approximating the entire Pareto front, the set of all optimal solutions to a Multi-Objective Optimization (MOO) problem. In previous work, however, the mapping between a preference vector and a Pareto optimal solution remained ambiguous, rendering the results unreliable. This study demonstrates the convergence and completion properties of solving MOO with pseudoconvex scalarization functions and combines them with a hypernetwork to obtain a comprehensive framework for PFL, called Controllable Pareto Front Learning. Extensive experiments demonstrate that our approach is highly accurate and significantly less computationally expensive than prior methods in terms of inference time. (Comment: under review at the Neural Networks journal.)
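
    A minimal sketch of the hypernetwork idea, assuming a toy two-objective problem, a weighted Chebyshev scalarization, and a small PyTorch MLP; the paper's completed pseudoconvex scalarization functions and architecture are richer than this.

    ```python
    import torch
    import torch.nn as nn

    def objectives(x):
        # Toy two-objective problem (assumed for illustration).
        f1 = ((x - 1.0) ** 2).mean(dim=-1)
        f2 = ((x + 1.0) ** 2).mean(dim=-1)
        return torch.stack([f1, f2], dim=-1)

    class Hypernet(nn.Module):
        # Maps a preference vector r (on the simplex) to a candidate solution x,
        # so one trained network covers the whole Pareto front.
        def __init__(self, n_obj=2, n_var=10, hidden=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_obj, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_var))
        def forward(self, r):
            return self.net(r)

    def chebyshev(f_vals, r):
        # Weighted Chebyshev scalarization: max_i r_i * f_i (ideal point at 0).
        return (r * f_vals).max(dim=-1).values

    model = Hypernet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(2000):
        r = torch.distributions.Dirichlet(torch.ones(2)).sample((128,))
        loss = chebyshev(objectives(model(r)), r).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # At inference, any preference vector maps directly to a front candidate.
    x = model(torch.tensor([[0.3, 0.7]]))
    ```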

    Robust Linear Neural Network for Constrained Quadratic Optimization

    Based on the properties of the projection operator under box constraints, and using methods of convex analysis, this paper proposes three robust linear systems for solving a class of quadratic optimization problems. Using the linear matrix inequality (LMI) technique, eigenvalue perturbation theory, the Lyapunov-Razumikhin method, and LaSalle's invariance principle, stability criteria for the related models are established. Compared with previous criteria derived in the literature cited herein, the criteria established in this paper are less conservative and more practicable. Finally, a numerical simulation example and an application to a compressed sensing problem illustrate the validity of the established criteria.
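
    A minimal sketch of the box-constraint projection operator and a projection-network simulation for a toy QP, assuming a strictly convex quadratic objective; the paper's three robust linear systems and their LMI-based stability criteria are not reproduced here.

    ```python
    import numpy as np

    # Toy box-constrained QP: minimize 0.5 x'Qx + c'x subject to lo <= x <= hi.
    Q = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
    c = np.array([-1.0, 1.0])
    lo, hi = -1.0, 1.0

    def P(x):
        # Projection operator under the box constraint (the key building block).
        return np.clip(x, lo, hi)

    # Forward-Euler simulation of the projection network
    # dx/dt = -x + P(x - (Qx + c)).
    x, dt = np.zeros(2), 1e-2
    for _ in range(20_000):
        x = x + dt * (-x + P(x - (Q @ x + c)))
    print(x)  # approx. [0.75, -1.0], the KKT point of this QP
    ```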

    A Recurrent Neural Network for Solving a Class of General Variational Inequalities
