
    A neurodynamic approach for a class of pseudoconvex semivectorial bilevel optimization problem

    The article proposes an exact approach to finding the global solution of a nonconvex semivectorial bilevel optimization problem in which the objective functions at both levels are pseudoconvex and the constraints are quasiconvex. Although its nonconvexity makes the problem challenging, it is attracting growing interest because of its practical applications. The algorithm combines monotonic optimization with a recent neurodynamic approach: the solution set of the lower-level problem is inner-approximated by copolyblocks in the outcome space, and the upper-level problem is then solved with a branch-and-bound method. Computing the bounds reduces to pseudoconvex programming problems, which are solved by the neurodynamic method. The algorithm's convergence is proved, and computational experiments demonstrate the accuracy of the proposed approach.
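The neurodynamic subproblems mentioned above are recurrent networks modeled as ordinary differential equations. As an illustration only (a generic projection neural network, not the article's algorithm), the sketch below integrates the dynamics dx/dt = -x + P_Omega(x - grad f(x)) to minimize a pseudoconvex fractional function over a box; the objective, box bounds, and step sizes are all illustrative assumptions.

```python
import numpy as np

def grad_f(x):
    # f(x) = (||x||^2 + 1) / (x1 + x2) is pseudoconvex on the
    # positive orthant; quotient-rule gradient
    s = x.sum()
    return (2 * x * s - (x @ x + 1)) / s**2

def project_box(x, lo, hi):
    # Projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def projection_neural_network(x0, lo, hi, dt=1e-2, steps=20000):
    # Euler-discretized projection dynamical system:
    #   dx/dt = -x + P_Omega(x - grad f(x))
    # Its equilibria are exactly the stationary points of f over the box.
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x + project_box(x - grad_f(x), lo, hi))
    return x

x_star = projection_neural_network(np.array([2.5, 1.2]), 0.5, 3.0)
print(x_star)   # ≈ [0.7071, 0.7071], the interior minimizer
```

For this instance the minimizer x1 = x2 = 1/sqrt(2) lies in the interior of the box, so the network settles at a zero of the gradient.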

    Model Building and Optimization Analysis of MDF Continuous Hot-Pressing Process by Neural Network

    We propose a one-layer neural network for solving a class of constrained optimization problems arising from the MDF continuous hot-pressing process. The objective function is the sum of a nonsmooth convex function and a smooth nonconvex pseudoconvex function, and the feasible set consists of two parts: a closed convex subset of Rn and a set defined by a class of smooth convex functions. Using smoothing techniques, projection, a penalty function, and a regularization term, the proposed network is modeled by a differential equation that can be implemented easily. Without any further conditions, we prove the global existence of the solutions of the proposed neural network for any initial point in the closed convex subset. We show that any accumulation point of the solutions is not only a feasible point but also an optimal solution of the considered optimization problem, even though the objective function is not convex. Numerical experiments on the MDF hot-pressing process, including model building and parameter optimization, are carried out on a real data set and indicate the good performance of the proposed neural network in applications.
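The smoothing-plus-penalty construction can be illustrated with a minimal sketch (not the paper's network): a weighted l1 term is smoothed as sqrt(t^2 + mu^2), an equality constraint is enforced by a quadratic penalty, and the resulting network is an explicit gradient flow. The problem instance and the values of mu, rho, and the step size are illustrative assumptions.

```python
import numpy as np

def smoothed_l1_grad(x, w, mu):
    # Gradient of sum_i w_i * sqrt(x_i^2 + mu^2), a smooth
    # approximation of the weighted l1 norm (smoothing technique)
    return w * x / np.sqrt(x**2 + mu**2)

def penalty_network(x0, w, mu=1e-2, rho=100.0, dt=1e-3, steps=50000):
    # One-layer network: gradient flow on the smoothed objective plus
    # a quadratic penalty enforcing the constraint x1 + x2 = 1
    x = x0.copy()
    for _ in range(steps):
        grad = smoothed_l1_grad(x, w, mu) + rho * (x.sum() - 1.0)
        x = x - dt * grad
    return x

# minimize |x1| + 2|x2|  subject to  x1 + x2 = 1  ->  optimum (1, 0)
x_star = penalty_network(np.array([0.3, 0.7]), w=np.array([1.0, 2.0]))
print(x_star)   # close to (1, 0); exact up to the smoothing/penalty bias
```

With finite mu and rho the equilibrium is slightly biased away from (1, 0); driving mu to 0 and rho to infinity recovers the exact solution, which is what the paper's regularization analysis makes rigorous.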

    A neurodynamic optimization approach to constrained pseudoconvex optimization.

    Guo, Zhishan. Thesis (M.Phil.), Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 71-82). Abstracts in English and Chinese. Contents: 1. Introduction (Constrained Pseudoconvex Optimization; Recurrent Neural Networks; Thesis Organization); 2. Literature Review (Pseudoconvex Optimization; Recurrent Neural Networks); 3. Model Description and Convergence Analysis (Model Descriptions; Global Convergence); 4. Numerical Examples (Gaussian Optimization; Quadratic Fractional Programming; Nonlinear Convex Programming); 5. Real-time Data Reconciliation (Introduction; Theoretical Analysis and Performance Measurement; Examples); 6. Real-time Portfolio Optimization (Introduction; Model Description; Theoretical Analysis; Illustrative Examples); 7. Conclusions and Future Works (Concluding Remarks; Future Works); A. Publication List; Bibliography.

    On multiobjective optimization from the nonsmooth perspective

    Practical applications usually have a multiobjective nature rather than a single objective to optimize. A multiobjective problem cannot be solved as such with a single-objective solver; on the other hand, optimizing only one objective may lead to arbitrarily bad solutions with respect to the other objectives. Therefore, special techniques for multiobjective optimization are vital. In addition to their multiobjective nature, many real-life problems have a nonsmooth (i.e. not continuously differentiable) structure. Unfortunately, many smooth (i.e. continuously differentiable) methods rely on gradient-based information that is not available for nonsmooth problems. Since both of these characteristics are relevant for applications, we focus here on nonsmooth multiobjective optimization. As a research topic, nonsmooth multiobjective optimization has attracted only limited attention, while the fields of nonsmooth single-objective and smooth multiobjective optimization have each attained considerably greater interest. This dissertation covers parts of nonsmooth multiobjective optimization in terms of theory, methodology, and application. Bundle methods are widely considered effective and reliable solvers for single-objective nonsmooth optimization; therefore, we investigate the use of the bundle idea in the multiobjective framework with three different methods. The first generalizes the single-objective proximal bundle method to nonconvex constrained multiobjective problems. The second adopts ideas from the classical steepest descent method for the convex unconstrained multiobjective case. The third is designed for constrained multiobjective problems in which both the objectives and the constraints can be represented as a difference of convex (DC) functions. Besides the bundle idea, all three methods are descent methods, meaning that they produce better values for every objective at each iteration. Furthermore, all of them utilize the improvement function, either directly or indirectly. Notably, none of these methods uses scalarization in the traditional sense, by which we refer to techniques transforming a multiobjective problem into a single-objective one. As scalarization plays an important role in multiobjective optimization, we present one special family of achievement scalarizing functions as a representative of this category. In general, achievement scalarizing functions suit the interactive framework well; thus, we propose an interactive method using our special family of achievement scalarizing functions. This method also utilizes the above-mentioned descent methods as tools to illustrate the range of optimal solutions. Finally, the interactive method is used to solve practical case studies on scheduling the final disposal of spent nuclear fuel in Finland.
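The achievement scalarizing functions mentioned above can be illustrated with a small sketch (a generic Wierzbicki-type function, not the dissertation's special family): minimizing the scalarized value steers the search toward a reference point, while a small augmentation term excludes weakly Pareto optimal points. The bi-objective toy problem, weights, and parameter values are illustrative assumptions.

```python
import numpy as np

def objectives(x):
    # Two competing objectives; their Pareto set is x in [-1, 1]
    return np.array([(x - 1.0)**2, (x + 1.0)**2])

def achievement_sf(f, ref, w, rho=1e-4):
    # Achievement scalarizing function: the max term measures the
    # worst weighted deviation from the reference point ref, and the
    # rho-augmentation rules out weakly Pareto optimal minimizers
    d = w * (f - ref)
    return d.max() + rho * d.sum()

grid = np.linspace(-2.0, 2.0, 4001)
ref = np.array([0.0, 0.0])          # aspiration levels for (f1, f2)
w = np.array([1.0, 1.0])            # equal weights -> compromise solution
vals = [achievement_sf(objectives(x), ref, w) for x in grid]
x_best = grid[int(np.argmin(vals))]
print(x_best)   # symmetric reference and weights -> x ≈ 0
```

Changing the reference point or the weights moves the returned solution along the Pareto set, which is exactly how the interactive method lets a decision maker explore the trade-offs.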

    A Framework for Controllable Pareto Front Learning with Completed Scalarization Functions and its Applications

    Pareto Front Learning (PFL) was recently introduced as an efficient method for approximating the entire Pareto front, the set of all optimal solutions of a Multi-Objective Optimization (MOO) problem. In previous work, the mapping between a preference vector and a Pareto optimal solution remained ambiguous, which makes the results hard to control. This study establishes convergence and completeness results for solving MOO with pseudoconvex scalarization functions and combines them with a hypernetwork to offer a comprehensive framework for PFL, called Controllable Pareto Front Learning. Extensive experiments demonstrate that our approach is highly accurate and significantly less computationally expensive than prior methods in terms of inference time. Under review at the Neural Networks journal.
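The preference-to-solution mapping that the hypernetwork amortizes can be illustrated with a plain weighted Chebyshev scalarization (an illustrative stand-in, not the paper's completed scalarization functions): each positive preference vector is mapped to a Pareto optimal solution of a toy bi-objective problem by direct grid minimization. The test problem and grid are illustrative assumptions.

```python
import numpy as np

def f(x):
    # Bi-objective test problem; its Pareto set is x in [0, 1]
    return np.array([x**2, (x - 1.0)**2])

def chebyshev(x, pref):
    # Weighted Chebyshev scalarization: for any positive preference
    # vector its minimizers are (weakly) Pareto optimal
    return (pref * f(x)).max()

grid = np.linspace(-0.5, 1.5, 2001)
mapping = {}
for pref in ([0.8, 0.2], [0.5, 0.5], [0.2, 0.8]):
    w = np.array(pref)
    vals = [chebyshev(x, w) for x in grid]
    mapping[tuple(pref)] = grid[int(np.argmin(vals))]
    print(pref, "->", round(mapping[tuple(pref)], 3))
```

Sweeping the preference vector traces the Pareto set; a hypernetwork replaces the inner minimization with a single forward pass, which is the source of the inference-time savings reported above.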

    Recurrent neural networks with fixed time convergence for linear and quadratic programming

    In this paper, a new class of recurrent neural networks for solving linear and quadratic programs is presented. Their design is cast as a sliding-mode control problem: the network structure is based on the Karush-Kuhn-Tucker (KKT) optimality conditions, with the KKT multipliers treated as control inputs implemented through fixed-time stabilizing terms instead of the commonly used activation functions. The main feature of the proposed network is therefore its fixed convergence time: the time in which the network converges to the optimal solution is independent of the initial conditions. Simulations show the feasibility of the approach.
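A simplified sketch of the idea (a gradient-type KKT-residual network with fixed-time terms, not the paper's exact sliding-mode design): for an equality-constrained quadratic program the KKT conditions form a linear system K z = d, and the residual is driven to zero through a fixed-time stabilizing function phi. The gains, powers, and test problem are illustrative assumptions.

```python
import numpy as np

def phi(s, k1=1.0, k2=1.0):
    # Fixed-time stabilizing term used in place of a classical
    # activation function: the |s|^0.5 power yields finite-time
    # convergence and the |s|^1.5 power bounds the settling time
    # uniformly over all initial conditions
    return k1 * np.sqrt(np.abs(s)) * np.sign(s) + k2 * np.abs(s)**1.5 * np.sign(s)

def fixed_time_qp(Q, c, A, b, dt=1e-3, steps=50000):
    # KKT conditions of  min 1/2 x'Qx - c'x  s.t.  Ax = b  give the
    # symmetric linear system K z = d with z = (x, lambda); the network
    # z' = -K phi(Kz - d) makes V = sum_i Phi(r_i) decrease, so the
    # residual r = Kz - d is driven to zero
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    d = np.concatenate([c, b])
    z = np.zeros(n + m)
    for _ in range(steps):
        z = z - dt * K @ phi(K @ z - d)
    return z[:n], z[n:]

# min x1^2 + x2^2  s.t.  x1 + x2 = 2  ->  x* = (1, 1), multiplier -2
Q = 2.0 * np.eye(2); c = np.zeros(2)
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
x_star, lam_star = fixed_time_qp(Q, c, A, b)
print(x_star, lam_star)   # ≈ [1, 1] and [-2]
```

Since K is symmetric and nonsingular here, V decreases along trajectories until the KKT residual vanishes; the fractional powers in phi are what make the settling time independent of where the network starts.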