98 research outputs found

    Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC constraints

    Full text link
    This paper investigates the relation between sequential convex programming (SCP), as defined e.g. in [24], and DC (difference of two convex functions) programming. We first present an SCP algorithm for solving nonlinear optimization problems with DC constraints and prove its convergence. We then combine the proposed algorithm with a relaxation technique to handle inconsistent linearizations. Numerical tests are performed to investigate the behaviour of this class of algorithms.
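    As a hedged illustration of the kind of subproblem such a method solves (notation assumed here, not taken from the paper): for DC constraints $g_i(x) - h_i(x) \le 0$ with $g_i, h_i$ convex and a convex objective $f$, the SCP step at the current iterate $x^k$ keeps the convex part and linearizes the concave part,
    \[
    \min_{x}\; f(x) \quad \text{s.t.}\quad g_i(x) - h_i(x^k) - \nabla h_i(x^k)^\top (x - x^k) \le 0, \qquad i = 1,\dots,m.
    \]
    Because the linearization underestimates the convex function $h_i$, the feasible set of this subproblem is contained in the original one; the relaxation technique mentioned above would then introduce penalized slack variables for the case where the linearized constraints become inconsistent.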

    A globally convergent difference-of-convex algorithmic framework and application to log-determinant optimization problems

    Full text link
    The difference-of-convex algorithm (DCA) is a conceptually simple method for the minimization of (possibly) nonconvex functions that can be expressed as the difference of two convex functions. At each iteration, DCA constructs a global overestimator of the objective and solves the resulting convex subproblem. Despite this conceptual simplicity, the theoretical understanding and the algorithmic framework of DCA still need further investigation. In this paper, global convergence of DCA at a linear rate is established under an extended Polyak--{\L}ojasiewicz condition. The proposed condition holds for a class of DC programs with a bounded, closed, and convex constraint set, for which global convergence of DCA is not covered by existing analyses. Moreover, the DCProx computational framework is proposed, in which the DCA subproblems are solved by a primal--dual proximal algorithm with Bregman distances. With a suitable choice of Bregman distances, DCProx has simple update rules with cheap per-iteration complexity. As an application, DCA is applied to several fundamental problems in network information theory, for which no existing numerical methods are able to compute the global optimum. For these problems, our analysis proves the global convergence of DCA, and, more importantly, DCProx solves the DCA subproblems efficiently. Numerical experiments are conducted to verify the efficiency of DCProx.
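    A minimal Python sketch of the generic DCA iteration (this is not DCProx; the toy decomposition and all names below are illustrative assumptions): to minimize $f = g - h$ with $g, h$ convex, each step takes a subgradient $y^k \in \partial h(x^k)$ and minimizes the convex surrogate $g(x) - h(x^k) - \langle y^k, x - x^k\rangle \ge f(x)$, which is tight at $x^k$ and, up to constants, equals $g(x) - \langle y^k, x\rangle$.

        # DCA sketch on the toy problem f(x) = 0.5*||x - b||^2 - lam*||x||_1,
        # with g(x) = 0.5*||x - b||^2 and h(x) = lam*||x||_1 (both convex).
        # The convex subproblem min_x g(x) - <y, x> has the closed form x = b + y.
        import numpy as np

        def dca_toy(b, lam=0.5, iters=20):
            x = np.zeros_like(b, dtype=float)
            for _ in range(iters):
                y = lam * np.sign(x)   # subgradient of h at x (sign(0) = 0 is valid)
                x = b + y              # minimizer of the convex surrogate
            return x

        print(dca_toy(np.array([0.3, -2.0, 1.2])))   # settles at b + lam*sign(b) here

    Because each surrogate overestimates $f$ and agrees with it at $x^k$, the objective value is non-increasing along the iterates.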

    A recursively feasible and convergent Sequential Convex Programming procedure to solve non-convex problems with linear equality constraints

    Get PDF
    A computationally efficient method to solve non-convex programming problems with linear equality constraints is presented. The proposed method is based on a recursively feasible and descending sequential convex programming procedure that is proven to converge to a locally optimal solution. Assuming that the first convex problem in the sequence is feasible, these properties are obtained by convexifying the non-convex cost and inequality constraints with inner-convex approximations. Additionally, a computationally efficient method is introduced to obtain inner-convex approximations based on Taylor series expansions. These Taylor-based inner-convex approximations provide the overall algorithm with a quadratic rate of convergence. The proposed method is capable of solving problems of practical interest in real time. This is illustrated with a numerical simulation of an aerial vehicle trajectory optimization problem on commercial off-the-shelf embedded computers.
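    As a hedged sketch of what a Taylor-based inner-convex approximation can look like (a generic quadratic majorization; not necessarily the paper's exact construction): if a non-convex constraint $c(x) \le 0$ satisfies $\nabla^2 c(x) \preceq L I$ with $L \ge 0$ on the region of interest, then
    \[
    \tilde c(x; x^k) \;=\; c(x^k) + \nabla c(x^k)^\top (x - x^k) + \tfrac{L}{2}\,\lVert x - x^k \rVert_2^2 \;\ge\; c(x),
    \]
    so $\{x : \tilde c(x; x^k) \le 0\} \subseteq \{x : c(x) \le 0\}$ and every subproblem solution stays feasible for the original problem, which is the recursive-feasibility mechanism described above.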

    Rank-Two Beamforming and Power Allocation in Multicasting Relay Networks

    Full text link
    In this paper, we propose a novel single-group multicasting relay beamforming scheme. We assume a source that transmits common messages via multiple amplify-and-forward relays to multiple destinations. To increase the number of degrees of freedom in the beamforming design, the relays process two received signals jointly and transmit the Alamouti space-time block code over two different beams. Furthermore, in contrast to the existing relay multicasting scheme in the literature, we take into account the direct links from the source to the destinations. We aim to maximize the lowest received quality-of-service by choosing the proper relay weights and the ideal distribution of the power resources in the network. To solve the corresponding optimization problem, we propose an iterative algorithm that solves sequences of convex approximations of the original non-convex optimization problem. Simulation results demonstrate significant performance improvements of the proposed methods compared with the existing relay multicasting scheme in the literature and with an algorithm based on the popular semidefinite relaxation technique.
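    A hedged sketch of the max-min fairness design behind such schemes (assumed generic notation; the paper's SINR expressions additionally account for the Alamouti-coded relay beams and the direct links):
    \[
    \max_{\mathbf{w}_1,\,\mathbf{w}_2,\,p}\;\; \min_{m = 1,\dots,M}\; \mathrm{SINR}_m(\mathbf{w}_1, \mathbf{w}_2, p)
    \quad \text{s.t.}\quad \text{source and per-relay power constraints},
    \]
    where $\mathbf{w}_1, \mathbf{w}_2$ are the weight vectors of the two beams and $p$ collects the power allocation; the non-convex SINR constraints are then replaced by convex approximations around the current iterate, and the resulting convex problems are solved in sequence.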

    DC Programming and DCA for Nonconvex Optimization / Global Optimization in Mixed-Integer Variables (Codes and Applications)

    Get PDF
    Based on the theoretical and algorithmic tools of DC programming and DCA, the research in this thesis focuses on local and global approaches for nonconvex optimization and global mixed-integer optimization. The thesis consists of five chapters. The first chapter presents the fundamentals of DC programming and DCA, together with Branch-and-Bound (B&B) techniques for global optimization (using DC relaxation to compute lower bounds on the optimal value); it also includes results on the exact penalty technique for mixed-integer programming. The second chapter is devoted to a DCA method for solving an NP-hard class of nonconvex nonlinear mixed-integer programs. These nonconvex problems are first reformulated as DC programs via penalty techniques in DC programming, so that the resulting DC programs can be solved effectively by suitably adapted DCA and B&B algorithms. As a first application in financial optimization, we model the portfolio selection problem under concave transaction costs and apply DCA and B&B to solve it.
    In the next chapter we study two models of the problem of minimizing nonconvex, discontinuous transaction costs in portfolio selection: the first is a DC program obtained by approximating the objective function of the original problem by a polyhedral DC function, and the second is an equivalent mixed 0-1 DC program; we present DCA, B&B, and a combined DCA-B&B algorithm for their solution. Chapter 4 studies the exact solution of the multi-objective mixed zero-one linear programming problem and presents two practical applications of the proposed method. The last chapter addresses two challenging problems: the linear least squares problem in bounded integer variables and Nonnegative Matrix Factorization (NMF). NMF is particularly important because of its many and diverse applications, while important applications of the former are found in telecommunications. Numerical simulations show the robustness, speed (and hence scalability), performance, and globality of DCA in comparison with existing methods.
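    As a sketch of the standard exact-penalty device used to bring mixed 0-1 programs into DC form (my notation; the thesis develops the precise conditions): the binary requirement $x \in \{0,1\}^n$ is written as $x \in [0,1]^n$ together with
    \[
    p(x) \;=\; \sum_{i=1}^{n} x_i\,(1 - x_i) \;\le\; 0,
    \]
    where $p$ is concave and non-negative on $[0,1]^n$. For a sufficiently large penalty parameter $t > 0$, minimizing $f(x) + t\,p(x)$ over the relaxed feasible set is a DC program whose solutions coincide, under suitable assumptions, with those of the original mixed 0-1 problem; DCA and the DC-relaxation bounds for B&B can then be applied to this reformulation.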

    Local convergence of a sequential quadratic programming method for a class of nonsmooth nonconvex objectives

    Full text link
    A sequential quadratic programming (SQP) algorithm is designed for nonsmooth optimization problems with upper-C^2 objective functions. Upper-C^2 functions are locally equivalent to difference-of-convex (DC) functions with smooth convex parts. They arise naturally in many applications, such as certain classes of solutions to parametric optimization problems, e.g., the recourse function in stochastic programming, and projections onto closed sets. The proposed algorithm conducts a line search and adopts an exact penalty merit function. The potential inconsistency due to the linearization of constraints is addressed through relaxation, similar to that of Sl_1QP. We show that the algorithm is globally convergent under reasonable assumptions. Moreover, we study the local convergence behavior of the algorithm under additional Kurdyka-{\L}ojasiewicz (KL) assumptions, which have been applied to many nonsmooth optimization problems. Due to the nonconvex nature of the problems, a special potential function is used to analyze local convergence. We show that, under acceptable assumptions, upper bounds on the local convergence behavior can be established. Additionally, we show that for a large class of optimization problems with upper-C^2 objectives, the corresponding potential functions are indeed KL functions. Numerical experiments are performed on a power grid optimization problem that is consistent with the assumptions and analysis in this paper.
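    A hedged sketch of the standard Sl_1QP-type construction this refers to (assumed smooth notation; the paper's version additionally handles the nonsmooth upper-C^2 objective): with inequality constraints $c_i(x) \le 0$, the exact penalty merit function is
    \[
    \phi_\mu(x) \;=\; f(x) + \mu \sum_i \max\bigl(0,\, c_i(x)\bigr),
    \]
    and each iteration computes a step $d$ from a subproblem in which the linearized constraint violation is penalized rather than enforced, so the subproblem is always feasible:
    \[
    \min_{d}\;\; \nabla f(x^k)^\top d + \tfrac12\, d^\top H_k\, d + \mu \sum_i \max\bigl(0,\; c_i(x^k) + \nabla c_i(x^k)^\top d\bigr),
    \]
    followed by a line search on $\phi_\mu$.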

    Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference

    Full text link
    We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems. The core of our method is a very efficient bounding procedure, which combines scalable semidefinite programming (SDP) with a cutting-plane method for finding violated constraints. To further speed up the computation, several strategies are exploited, including model reduction, warm starts, and removal of inactive constraints. We analyze the performance of the proposed method under different settings and demonstrate that it either outperforms or performs on par with state-of-the-art approaches. In particular, when the connectivity is dense or when the relative magnitudes of the unary costs are low, we achieve the best reported results. Experiments show that the proposed algorithm achieves better approximations than state-of-the-art methods within a variety of time budgets on challenging non-submodular MAP-MRF inference problems.
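    For context, a hedged sketch of the kind of SDP bound commonly used for pairwise binary MAP-MRF problems (assumed notation; not necessarily the paper's exact bounding procedure): with labels $x \in \{-1,1\}^n$ and the unary and pairwise costs collected into a symmetric matrix $\bar W$ acting on the lifted variable $X \approx \binom{1}{x}\binom{1}{x}^{\!\top}$, the relaxation is
    \[
    \min_{X}\;\; \langle \bar W,\, X \rangle
    \quad\text{s.t.}\quad \operatorname{diag}(X) = \mathbf{1},\quad X \succeq 0,
    \]
    and the cutting-plane step adds violated linear inequalities (e.g. triangle or cycle inequalities on the entries of $X$) to tighten this bound inside the branch-and-cut tree.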