30 research outputs found

    A neurodynamic approach for a class of pseudoconvex semivectorial bilevel optimization problem

    The article proposes an exact approach for finding the global solution of a nonconvex semivectorial bilevel optimization problem in which the objective functions at both levels are pseudoconvex and the constraints are quasiconvex. Owing to its nonconvexity the problem is challenging, yet it attracts growing interest because of its practical applications. The algorithm combines monotonic optimization with a recent neurodynamic approach: the solution set of the lower-level problem is inner-approximated by copolyblocks in the outcome space, and the upper-level problem is then solved by a branch-and-bound method. Computing the bounds reduces to pseudoconvex programming problems, which are solved with the neurodynamic method. Convergence of the algorithm is proved, and computational experiments demonstrate the accuracy of the proposed approach.
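    The abstract does not spell out the neurodynamic model used for the bound-computing subproblems. The sketch below only illustrates the general idea with a standard projection neural network, dx/dt = P_C(x - grad f(x)) - x, integrated by forward Euler; for a pseudoconvex f over a convex set C its equilibria are global minimizers. The box constraint, the quadratic-fractional objective, and all constants are illustrative assumptions, not the paper's setup.

        import numpy as np

        def project_box(x, lo, hi):
            # Projection onto the box {x : lo <= x <= hi}; a simple stand-in for
            # the convex feasible set C of the pseudoconvex subproblem (assumption).
            return np.clip(x, lo, hi)

        def neurodynamic_solve(grad_f, x0, lo, hi, dt=1e-2, steps=20000, tol=1e-9):
            # Forward-Euler integration of the projection neural network
            #   dx/dt = P_C(x - grad_f(x)) - x.
            # For pseudoconvex f on a convex set C, equilibria are global minimizers.
            x = np.asarray(x0, dtype=float)
            for _ in range(steps):
                dx = project_box(x - grad_f(x), lo, hi) - x
                if np.linalg.norm(dx) < tol:
                    break
                x = x + dt * dx
            return x

        # Illustrative pseudoconvex objective (quadratic-fractional):
        #   f(x) = (x'Qx + 1) / (a'x + 1), pseudoconvex where a'x + 1 > 0.
        Q = np.array([[2.0, 0.5], [0.5, 1.0]])
        a = np.array([1.0, 1.0])

        def grad_f(x):
            num, den = x @ Q @ x + 1.0, a @ x + 1.0
            return 2.0 * (Q @ x) / den - num * a / den ** 2

        print(neurodynamic_solve(grad_f, x0=[2.0, 2.0], lo=0.1, hi=5.0))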

    Self-adaptive algorithms for quasiconvex programming and applications to machine learning

    For a broad class of nonconvex programming problems on an unbounded constraint set, we propose a self-adaptive step-size strategy that avoids line-search techniques, and we establish convergence of the generic scheme under mild assumptions; in particular, the objective function need not be convex. Unlike descent line-search algorithms, the method does not require a known Lipschitz constant to determine the initial step size. Its crucial feature is the steady reduction of the step size until a certain condition is fulfilled. In particular, it yields a new gradient projection approach for optimization problems with an unbounded constraint set. The correctness of the proposed method is verified by preliminary results on computational examples. To demonstrate its effectiveness on large-scale problems, we apply it to machine learning tasks such as supervised feature selection, multivariable logistic regression, and neural networks for classification.
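    The exact adaptive rule is not given in the abstract. The following sketch shows one common way to realize a projected-gradient step size that is only ever reduced, with no per-iteration line search and no global Lipschitz constant; the specific curvature-based update of the step size and the toy quasiconvex problem are assumptions made for illustration.

        import numpy as np

        def self_adaptive_projected_gradient(grad_f, project, x0, lam0=1.0, mu=0.9,
                                             max_iter=10000, tol=1e-9):
            # Projected gradient with a self-adaptive, non-increasing step size.
            # The step is shrunk from a local curvature estimate rather than a
            # line search or a known Lipschitz constant; the particular rule
            #   lam <- min(lam, mu * ||x_new - x|| / ||g_new - g||)
            # is an assumption used here for illustration.
            x = np.asarray(x0, dtype=float)
            lam, g = lam0, grad_f(x)
            for _ in range(max_iter):
                x_new = project(x - lam * g)
                if np.linalg.norm(x_new - x) < tol:
                    return x_new
                g_new = grad_f(x_new)
                denom = np.linalg.norm(g_new - g)
                if denom > 0.0:
                    lam = min(lam, mu * np.linalg.norm(x_new - x) / denom)
                x, g = x_new, g_new
            return x

        # Toy quasiconvex problem on an unbounded constraint set (the nonnegative
        # orthant): f(x) = ||x - c||^2 / (1 + ||x - c||^2).
        c = np.array([1.0, -2.0])
        grad_f = lambda x: 2.0 * (x - c) / (1.0 + (x - c) @ (x - c)) ** 2
        project = lambda z: np.maximum(z, 0.0)
        print(self_adaptive_projected_gradient(grad_f, project, x0=np.array([5.0, 5.0])))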

    Model Building and Optimization Analysis of MDF Continuous Hot-Pressing Process by Neural Network

    We propose a one-layer neural network for solving a class of constrained optimization problems arising from the MDF continuous hot-pressing process. The objective function is the sum of a nonsmooth convex function and a smooth nonconvex pseudoconvex function, and the feasible set consists of two parts: a closed convex subset of R^n and a set defined by a class of smooth convex functions. Using smoothing techniques, projection, a penalty function, and a regularization term, the proposed network is modeled by a differential equation that can be implemented easily. Without any additional conditions, we prove the global existence of the solutions of the proposed neural network for any initial point in the closed convex subset. We show that any accumulation point of the solutions of the proposed neural network is not only a feasible point but also an optimal solution of the considered optimization problem, even though the objective function is not convex. Numerical experiments on the MDF hot-pressing process, including model building and parameter optimization, are carried out on a real data set and indicate the good performance of the proposed neural network in applications.
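    To make the construction concrete, the sketch below assembles a projection network of the kind the abstract describes: the nonsmooth convex term is handled by smoothing, the smooth convex inequality constraints by an exterior penalty, and the closed convex set by projection, with the resulting differential equation integrated by forward Euler. Every concrete choice (Huber-type smoothing, quadratic penalty, box set, the quadratic-fractional pseudoconvex term in the example) is an assumption, not the paper's exact model.

        import numpy as np

        def smoothed_l1_grad(x, mu=1e-3):
            # Gradient of a Huber-type smoothing of sum_i |x_i|; the smoothing
            # stands in for the nonsmooth convex part of the objective.
            return np.where(np.abs(x) <= mu, x / mu, np.sign(x))

        def penalty_grad(x, g_list, grad_g_list, sigma=10.0):
            # Gradient of the exterior penalty sigma * sum_i max(0, g_i(x))^2 for
            # the smooth convex inequality constraints g_i(x) <= 0.
            out = np.zeros_like(x)
            for g, dg in zip(g_list, grad_g_list):
                out += 2.0 * sigma * max(0.0, g(x)) * dg(x)
            return out

        def one_layer_network(grad_pseudo, project, x0, g_list, grad_g_list,
                              dt=1e-3, steps=50000):
            # Forward-Euler integration of a one-layer projection network
            #   dx/dt = P_Omega(x - grad_pseudo(x) - smoothed_l1_grad(x)
            #                     - penalty_grad(x)) - x,
            # combining projection, smoothing and penalty terms as in the abstract.
            x = np.asarray(x0, dtype=float)
            for _ in range(steps):
                v = (x - grad_pseudo(x) - smoothed_l1_grad(x)
                     - penalty_grad(x, g_list, grad_g_list))
                x = x + dt * (project(v) - x)
            return x

        # Tiny instance: pseudoconvex quadratic-fractional term, Omega = [0, 2]^2,
        # and one smooth convex constraint x1 + x2 - 2 <= 0.
        def grad_pseudo(x):
            num, den = x @ x + 1.0, x.sum() + 3.0
            return 2.0 * x / den - num * np.ones(2) / den ** 2

        project = lambda z: np.clip(z, 0.0, 2.0)
        g_list = [lambda x: x[0] + x[1] - 2.0]
        grad_g_list = [lambda x: np.array([1.0, 1.0])]
        print(one_layer_network(grad_pseudo, project, [1.5, 1.5], g_list, grad_g_list))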

    A neurodynamic optimization approach to constrained pseudoconvex optimization.

    Guo, Zhishan. Thesis (M.Phil.), Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 71-82). Abstracts in English and Chinese.
    Contents: Abstract; Acknowledgement; Chapter 1 Introduction (1.1 Constrained Pseudoconvex Optimization; 1.2 Recurrent Neural Networks; 1.3 Thesis Organization); Chapter 2 Literature Review (2.1 Pseudoconvex Optimization; 2.2 Recurrent Neural Networks); Chapter 3 Model Description and Convergence Analysis (3.1 Model Descriptions; 3.2 Global Convergence); Chapter 4 Numerical Examples (4.1 Gaussian Optimization; 4.2 Quadratic Fractional Programming; 4.3 Nonlinear Convex Programming); Chapter 5 Real-time Data Reconciliation (5.1 Introduction; 5.2 Theoretical Analysis and Performance Measurement; 5.3 Examples); Chapter 6 Real-time Portfolio Optimization (6.1 Introduction; 6.2 Model Description; 6.3 Theoretical Analysis; 6.4 Illustrative Examples); Chapter 7 Conclusions and Future Works (7.1 Concluding Remarks; 7.2 Future Works); Appendix A Publication List; Bibliography.

    Successive Convex Approximation Algorithms for Sparse Signal Estimation with Nonconvex Regularizations

    In this paper, we propose a successive convex approximation framework for sparse optimization in which the nonsmooth regularization function in the objective is nonconvex and can be written as the difference of two convex functions. The framework is based on a nontrivial combination of the majorization-minimization framework and a successive convex approximation framework previously proposed in the literature for convex regularization functions. The proposed framework has several attractive features: i) flexibility, as different choices of the approximate function lead to different types of algorithms; ii) fast convergence, as the problem structure can be better exploited by a proper choice of the approximate function and the step size is computed by line search; iii) low complexity, as the approximate function is convex and the line search is carried out over a differentiable function; and iv) guaranteed convergence to a stationary point. We demonstrate these features in two example applications in subspace learning, namely the network anomaly detection problem and the sparse subspace clustering problem. Customizing the proposed framework with best-response-type approximations, we obtain soft-thresholding algorithms with exact line search in which all elements of the unknown parameter are updated in parallel according to closed-form expressions. The attractive features of the proposed algorithms are illustrated numerically. (Comment: submitted to the IEEE Journal of Selected Topics in Signal Processing, special issue on Robust Subspace Learning.)
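    As a minimal illustration of a parallel soft-thresholding update with exact line search, the sketch below treats the convex special case 0.5*||Ax - b||^2 + lam*||x||_1; the exact line search is taken over a differentiable upper bound of the objective, so the step is available in closed form. The nonconvex difference-of-convex regularizers treated in the paper are not handled here, and the synthetic data are assumptions.

        import numpy as np

        def soft(z, t):
            # Elementwise soft-thresholding operator.
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def sca_lasso(A, b, lam, iters=200):
            # Parallel soft-thresholding best response with exact line search for
            # the convex problem 0.5*||Ax - b||^2 + lam*||x||_1 (a simplified
            # sketch of the framework's flavor; every component is updated in
            # parallel, and the line-search step has a closed form).
            m, n = A.shape
            x = np.zeros(n)
            d = np.sum(A * A, axis=0)            # per-coordinate curvature ||a_i||^2
            r = A @ x - b
            for _ in range(iters):
                grad = A.T @ r
                Bx = soft(x - grad / d, lam / d)  # parallel best response, closed form
                dx = Bx - x
                Adx = A @ dx
                # exact line search over a differentiable upper bound (quadratic in gamma)
                num = -(dx @ grad + lam * (np.sum(np.abs(Bx)) - np.sum(np.abs(x))))
                den = Adx @ Adx
                gamma = np.clip(num / den, 0.0, 1.0) if den > 0.0 else 0.0
                x = x + gamma * dx
                r = r + gamma * Adx
            return x

        # Small synthetic sparse-recovery example (assumed data).
        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 300))
        x_true = np.zeros(300); x_true[:5] = rng.standard_normal(5)
        b = A @ x_true + 0.01 * rng.standard_normal(100)
        print(np.nonzero(np.abs(sca_lasso(A, b, lam=0.5)) > 1e-3)[0])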

    Recurrent neural networks with fixed time convergence for linear and quadratic programming

    In this paper, a new class of recurrent neural networks for solving linear and quadratic programs is presented. Their design is cast as a sliding mode control problem in which the network structure is based on the Karush-Kuhn-Tucker (KKT) optimality conditions, with the KKT multipliers treated as control inputs implemented with fixed-time stabilizing terms instead of the commonly used activation functions. The main feature of the proposed networks is therefore a fixed convergence time to the solution: the time in which a network converges to the optimal solution is independent of its initial conditions. Simulations show the feasibility of the approach.
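    A minimal sketch of the fixed-time idea is given below for an equality-constrained quadratic program, where the KKT conditions form a linear system and each residual component is driven to zero by the fixed-time stabilizing term k1*|e|^alpha*sign(e) + k2*|e|^beta*sign(e), so the settling time is bounded independently of the initial point. The sliding-mode treatment of inequality constraints from the paper is not reproduced; the equality-only setting, the gains, and the example data are assumptions.

        import numpy as np

        def fixed_time_qp(Q, c, A, b, dt=1e-3, T=2.0, k1=5.0, k2=5.0,
                          alpha=0.5, beta=1.5):
            # Fixed-time recurrent network sketch for the equality-constrained QP
            #   min 0.5 x'Qx + c'x  s.t.  Ax = b.
            # The KKT conditions give the linear system M z = q with z = (x, nu);
            # choosing dz/dt = -M^{-1} u(e) with e = Mz - q makes each residual
            # component obey the scalar fixed-time ODE de/dt = -u(e).
            n, m = Q.shape[0], A.shape[0]
            M = np.block([[Q, A.T], [A, np.zeros((m, m))]])
            q = np.concatenate([-c, b])
            Minv = np.linalg.inv(M)
            z = np.zeros(n + m)
            for _ in range(int(T / dt)):
                e = M @ z - q                       # KKT residual
                u = (k1 * np.sign(e) * np.abs(e) ** alpha
                     + k2 * np.sign(e) * np.abs(e) ** beta)
                z = z - dt * (Minv @ u)
            return z[:n]

        # Example: minimize 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1 -> x* = (0.5, 0.5)
        Q = np.eye(2); c = np.zeros(2)
        A = np.array([[1.0, 1.0]]); b = np.array([1.0])
        print(fixed_time_qp(Q, c, A, b))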