5 research outputs found

    Nonsmooth and derivative-free optimization based hybrid methods and applications

    "In this thesis, we develop hybrid methods for solving global and in particular, nonsmooth optimization problems. Hybrid methods are becoming more popular in global optimization since they allow to apply powerful smooth optimization techniques to solve global optimization problems. Such methods are able to efficiently solve global optimization problems with large number of variables. To date global search algorithms have been mainly applied to improve global search properties of the local search methods (including smooth optimization algorithms). In this thesis we apply rather different strategy to design hybrid methods. We use local search algorithms to improve the efficiency of global search methods. The thesis consists of two parts. In the first part we describe hybrid algorithms and in the second part we consider their various applications." -- taken from Abstract.Operational Research and Cybernetic

    Hyperbolic smoothing in nonsmooth optimization and applications

    Nonsmooth nonconvex optimization problems arise in many applications, including economics, business and data mining. In these applications objective functions are not necessarily differentiable or convex. Many algorithms have been proposed over the past three decades to solve such problems, yet the development of efficient algorithms for this class of problems remains a challenging task. The subgradient method is one of the simplest methods developed for solving these problems; its convergence has been proved only for convex objective functions. This method involves no subproblems, neither for finding search directions nor for computing step lengths, which are fixed in advance. Bundle methods and their various modifications are among the most efficient methods for solving nonsmooth optimization problems. These methods solve a quadratic programming subproblem to find search directions. The size of this subproblem may grow significantly with the number of variables, which makes bundle-type methods unsuitable for large-scale nonsmooth optimization problems. Moreover, the implementation of bundle-type methods, which requires quadratic programming solvers, is not as easy as that of subgradient methods. It is therefore beneficial to develop algorithms for nonsmooth nonconvex optimization that are easy to implement and more efficient than the subgradient methods.

    In this thesis, we develop two new algorithms for solving nonsmooth nonconvex optimization problems based on the hyperbolic smoothing technique and apply them to the pumping cost minimization problem in water distribution. The first algorithm is designed for solving finite minimax problems. In order to apply hyperbolic smoothing we reformulate the objective function of the minimax problem and study the relationship between the original and reformulated problems. We also study the main properties of the hyperbolic smoothing function. Based on these results an algorithm for solving the finite minimax problem is proposed and implemented in GAMS. We present preliminary results of numerical experiments on well-known nonsmooth optimization test problems, and we compare the proposed algorithm with an algorithm that uses the exponential smoothing function as well as with an algorithm based on a nonlinear programming reformulation of the finite minimax problem.

    The second algorithm demonstrates how smooth optimization methods can be applied to solve general nonsmooth (nonconvex) optimization problems. We compute subgradients from a neighborhood of the current point and define a system of linear inequalities using these subgradients. Search directions are computed by solving this system, which is done by reducing it to the minimization of a convex piecewise linear function over the unit ball. The hyperbolic smoothing function is then applied to approximate this minimization problem by a sequence of smooth problems, which are solved by smooth optimization methods. Such an approach makes it possible to apply powerful smooth optimization algorithms to nonsmooth optimization problems and extends smoothing techniques to general nonsmooth nonconvex optimization problems. The convergence of the algorithm based on this approach is studied. The proposed algorithm was implemented in Fortran 95. Preliminary results of numerical experiments are reported, and the proposed algorithm is compared with five other nonsmooth optimization algorithms. We also implement the algorithm in GAMS and compare it with GAMS solvers on the results of numerical experiments.
    Doctor of Philosophy
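    To make the smoothing idea concrete: the hyperbolic smoothing of the maximum of two numbers a and b is (a + b + sqrt((a - b)^2 + tau^2)) / 2, which tends to max(a, b) as the smoothing parameter tau goes to zero. Below is a minimal Python sketch of a smoothing scheme for finite minimax problems in this spirit; the solver choice, the schedule of tau values and the test functions are illustrative assumptions, not the GAMS or Fortran 95 implementations reported above.

        from functools import reduce
        import numpy as np
        from scipy.optimize import minimize

        def smax(a, b, tau):
            """Hyperbolic smoothing of max(a, b); exact in the limit tau -> 0."""
            return 0.5 * (a + b + np.sqrt((a - b) ** 2 + tau ** 2))

        def smooth_minimax(fs, x0, taus=(1.0, 0.1, 0.01, 1e-4)):
            """Approximate min_x max_i f_i(x) by a sequence of smooth problems."""
            x = np.asarray(x0, dtype=float)
            for tau in taus:  # drive the smoothing parameter to zero
                obj = lambda z, t=tau: reduce(lambda a, b: smax(a, b, t),
                                              (f(z) for f in fs))
                x = minimize(obj, x, method="BFGS").x  # solve the smooth subproblem
            return x

        # Usage: a classical minimax test problem with three component functions.
        fs = [lambda x: x[0] ** 2 + x[1] ** 4,
              lambda x: (2 - x[0]) ** 2 + (2 - x[1]) ** 2,
              lambda x: 2 * np.exp(x[1] - x[0])]
        x_star = smooth_minimax(fs, [1.0, 1.0])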

    Learning Bayesian networks based on optimization approaches

    Learning accurate classifiers from preclassified data is a very active research topic in machine learning and artificial intelligence. There are numerous classifier paradigms, among which Bayesian Networks are very effective and well known in domains with uncertainty. Bayesian Networks are widely used representation frameworks for reasoning with probabilistic information. These models use graphs to capture dependence and independence relationships between feature variables, allowing a concise representation of the knowledge as well as efficient graph-based query processing algorithms. This representation is defined by two components: structure learning and parameter learning. The structure of the model is a directed acyclic graph in which the nodes correspond to the feature variables in the domain and the arcs (edges) represent causal relationships between them. A directed edge relates the variables so that the variable corresponding to the terminal node (child) is conditioned on the variable corresponding to the initial node (parent). Parameter learning determines the probabilities and conditional probabilities based on prior information or past experience; these are stored in conditional probability tables. Once the network structure is constructed, probabilistic inference can be performed to predict the outcome of some variables based on observations of others. However, structure learning is a complex problem, since the number of candidate structures grows exponentially with the number of feature variables.

    This thesis is devoted to the development of methods for learning structures and parameters of Bayesian Networks. Different models based on optimization techniques are introduced to construct an optimal structure of a Bayesian Network. These models also improve the Naive Bayes structure through new algorithms that alleviate its independence assumptions. We present various models to learn the parameters of Bayesian Networks; in particular, we propose optimization models for Naive Bayes and Tree Augmented Naive Bayes with different objective functions. To solve the corresponding optimization problems, we develop new optimization algorithms. Local optimization methods are introduced based on a combination of the gradient and Newton methods; the proposed methods are proved to be globally convergent with superlinear convergence rates. As a global search we use the global optimization method AGOP, implemented in the open software library GANSO, in combination with the proposed local methods. The main contributions of this thesis are therefore (a) new algorithms for learning an optimal structure of a Bayesian Network; (b) new models for learning the parameters of Bayesian Networks with given structures; and (c) new optimization algorithms for optimizing the proposed models in (a) and (b). To validate the proposed methods, we conduct experiments on a number of real-world problems. Print version is available at: http://library.federation.edu.au/record=b1804607~S4
    Doctor of Philosophy
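    For concreteness, the sketch below shows the factorization the Naive Bayes classifier rests on, namely P(c | x) proportional to P(c) * prod_j P(x_j | c), with parameters estimated from counts. This is a textbook construction with Laplace smoothing, given only to fix ideas; it is not one of the optimization models proposed in the thesis.

        import numpy as np

        def fit_naive_bayes(X, y, alpha=1.0):
            """Estimate P(c) and P(x_j = v | c) from discrete data (Laplace smoothing alpha)."""
            classes = np.unique(y)
            priors = {c: np.mean(y == c) for c in classes}
            cond = {}  # cond[(j, c)][v] = P(feature j takes value v | class c)
            for j in range(X.shape[1]):
                values = np.unique(X[:, j])
                for c in classes:
                    col = X[y == c, j]
                    cond[(j, c)] = {v: (np.sum(col == v) + alpha)
                                       / (len(col) + alpha * len(values))
                                    for v in values}
            return priors, cond

        def predict(x, priors, cond):
            """argmax_c log P(c) + sum_j log P(x_j | c): the independence assumption."""
            score = {c: np.log(p) + sum(np.log(cond[(j, c)].get(v, 1e-9))
                                        for j, v in enumerate(x))
                     for c, p in priors.items()}
            return max(score, key=score.get)

        # Usage: two binary features, binary class labels.
        X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])
        y = np.array([0, 1, 0, 1])
        priors, cond = fit_naive_bayes(X, y)
        label = predict([1, 1], priors, cond)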

    A feasible second order bundle algorithm for nonsmooth, nonconvex optimization problems with inequality constraints and its application to certificates of infeasibility

    This thesis extends the SQP approach of the well-known bundle-Newton method for nonsmooth unconstrained minimization to a feasible second-order bundle algorithm for nonsmooth, nonconvex optimization problems with inequality constraints. Instead of using a penalty function, a filter or an improvement function to deal with the constraints, the search direction is determined by solving a convex quadratically constrained quadratic program in order to obtain good iteration points. Moreover, we investigate several versions of the search direction problem, we justify the applicability of this approach numerically by comparing the effectiveness of different solvers for the computation of the search direction, and we show global convergence of the method under certain assumptions. Furthermore, we present an important application of nonsmooth optimization to constraint satisfaction problems: we introduce a certificate of infeasibility for finding exclusion boxes by solving a linearly constrained nonsmooth optimization problem. Additionally, the constructed certificate can be used to enlarge an exclusion box by solving a nonlinearly constrained nonsmooth optimization problem. Finally, the good performance of the second-order bundle algorithm is demonstrated by comparison with test results of other solvers on examples from the Hock-Schittkowski collection, on custom examples that arise in the context of finding exclusion boxes for constraint satisfaction problems, and on higher-dimensional piecewise quadratic examples.
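    In generic bundle notation, a search-direction subproblem of the kind described (a convex quadratically constrained quadratic program) takes roughly the following form; this is a hedged sketch built from standard bundle quantities, and the thesis's exact subproblem may differ in its details.

        \begin{aligned}
        \min_{d \in \mathbb{R}^n,\; v \in \mathbb{R}} \quad & v + \tfrac{1}{2}\, d^\top G\, d \\
        \text{s.t.} \quad & -\alpha_j + g_j^\top d + \tfrac{1}{2}\, d^\top G_j\, d \le v,
          \qquad j \in J_{\mathrm{obj}}, \\
        & c(x) - A_k + \hat{g}_k^\top d + \tfrac{1}{2}\, d^\top \hat{G}_k\, d \le 0,
          \qquad k \in J_{\mathrm{con}},
        \end{aligned}

    where the g_j and \hat{g}_k are subgradients of the objective and the constraint collected in the bundle, \alpha_j and A_k are the corresponding linearization errors, and G, G_j, \hat{G}_k carry positive semidefinite second-order information. Keeping the quadratically modeled constraint cuts inside the subproblem, rather than moving them into a penalty term, is what allows the iterates to remain feasible.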

    A quasisecant method for solving a system of nonsmooth equations

    In this paper, the solution of nonsmooth equations is studied. We first transform the problem into an equivalent nonsmooth optimization problem and then the quasisecant method is introduced to solve it. Some nonsmooth equations that have arisen from bilevel programming problems are solved by our proposed method. The numerical results show the effectiveness and efficiency of our proposed method.
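    A minimal sketch of the reformulation step, with a made-up nonsmooth system and a generic derivative-free stand-in for the local solver (the quasisecant method itself is not available in standard libraries):

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical nonsmooth system F(x) = 0 with absolute-value terms.
        def F(x):
            return np.array([abs(x[0]) + x[1] - 1.0,
                             x[0] - abs(x[1])])

        # Equivalent nonsmooth optimization problem: minimize f(x) = max_i |F_i(x)|.
        # A point x* solves the system exactly when f(x*) = 0.
        f = lambda x: np.max(np.abs(F(x)))

        # Stand-in solver (the paper applies the quasisecant method instead).
        res = minimize(f, x0=np.array([2.0, 2.0]), method="Nelder-Mead")
        # res.x approximates a root when res.fun is numerically zero.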