
    Guest Editorial: Nonlinear Optimization of Communication Systems

    Linear programming and other classical optimization techniques have found important applications in communication systems for many decades. Recently, there has been a surge of research activity that applies the latest developments in nonlinear optimization to a much wider scope of problems in the analysis and design of communication systems. These activities involve every "layer" of the protocol stack and the principles of the layered network architecture itself, and have made intellectual and practical impacts significantly beyond the frameworks for optimization of communication systems established in the early 1990s. These recent results are driven by new demands in communications and networking, as well as by new tools emerging from optimization theory. Such tools include powerful theories and highly efficient computational algorithms for nonlinear convex optimization, together with global solution methods and relaxation techniques for nonconvex optimization.
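    As a concrete illustration of the kind of nonlinear convex program this line of work builds on, the sketch below solves a small network utility maximization problem with logarithmic utilities. It is a minimal example, not taken from the editorial: the routing matrix, link capacities, and the use of the cvxpy modeling library are all assumptions made for illustration.

```python
# Minimal sketch (assumed setup): 3 flows share 2 links; maximize the sum of
# log utilities subject to link-capacity constraints, a canonical convex
# formulation in the networking-optimization literature.
import numpy as np
import cvxpy as cp

R = np.array([[1, 1, 0],      # R[l, s] = 1 if flow s uses link l
              [0, 1, 1]])
c = np.array([1.0, 2.0])      # link capacities

x = cp.Variable(3, pos=True)  # source rates
problem = cp.Problem(cp.Maximize(cp.sum(cp.log(x))), [R @ x <= c])
problem.solve()
print("proportionally fair rates:", x.value)
```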

    Global Deterministic Optimization with Artificial Neural Networks Embedded

    Artificial neural networks (ANNs) are used in various applications for data-driven black-box modeling and subsequent optimization. Herein, we present an efficient method for deterministic global optimization of ANN-embedded optimization problems. The proposed method is based on relaxations of algorithms using McCormick relaxations in a reduced space [SIOPT, 20 (2009), pp. 573-601], including the convex and concave envelopes of the nonlinear activation functions of ANNs. The optimization problem is solved using our in-house deterministic global solver MAiNGO. The performance of the proposed method is demonstrated on four optimization examples: an illustrative function, a fermentation process, a compressor plant, and a chemical process optimization. The results show that the computational solution time is favorable compared to that of the general-purpose global optimization solver BARON. (J Optim Theory Appl, 2018)
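    To make the term "McCormick relaxations" concrete, the following toy function evaluates the standard McCormick under- and overestimators of a single bilinear term w = x*y on a box. It is only a sketch of the basic building block; the paper's method propagates such convex and concave relaxations (including envelopes of the activation functions) through an entire ANN inside MAiNGO, which this snippet does not attempt to do.

```python
# McCormick envelope of the bilinear term w = x*y on the box
# [xL, xU] x [yL, yU], evaluated at a point (x, y).
def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Return a convex underestimator and a concave overestimator of w = x*y."""
    lower = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    upper = min(xU * y + x * yL - xU * yL,
                xL * y + x * yU - xL * yU)
    return lower, upper

# Example: bounds on w = x*y at (0.5, -1.0) with x in [0, 1], y in [-2, 2];
# the true value -0.5 lies between the returned bounds (-1.0, 0.0).
print(mccormick_bounds(0.5, -1.0, 0.0, 1.0, -2.0, 2.0))
```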

    Compute Faster and Learn Better: Model-based Nonconvex Optimization for Machine Learning

    Nonconvex optimization naturally arises in many machine learning problems. Machine learning researchers exploit various nonconvex formulations to gain modeling flexibility, estimation robustness, adaptivity, and computational scalability. Although classical computational complexity theory has shown that solving nonconvex optimization problems is generally NP-hard in the worst case, practitioners have proposed numerous heuristic optimization algorithms that achieve outstanding empirical performance in real-world applications. To bridge this gap between practice and theory, we propose a new generation of model-based optimization algorithms and theory that incorporate statistical thinking into modern optimization. In particular, when designing practical computational algorithms, we take the underlying statistical models into consideration. Our algorithms exploit hidden geometric structures behind many nonconvex optimization problems and can obtain global optima with the desired statistical properties in polynomial time with high probability.
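    As a hedged illustration of a nonconvex problem whose hidden geometric structure lets a simple first-order method reach a global optimum, the sketch below runs plain gradient descent on rank-one matrix factorization. The problem, dimensions, and step size are assumptions for illustration and do not reproduce the thesis's algorithms.

```python
# Nonconvex but benign landscape: min_u ||M - u u^T||_F^2 for a rank-1 matrix M.
# Gradient descent from a random start typically recovers u up to sign.
import numpy as np

rng = np.random.default_rng(0)
u_true = rng.normal(size=5)
M = np.outer(u_true, u_true)             # ground-truth rank-1 matrix

u = rng.normal(size=5)                   # random initialization
step = 0.01
for _ in range(2000):
    grad = 4 * (np.outer(u, u) - M) @ u  # gradient of ||M - u u^T||_F^2
    u -= step * grad

print("recovery error:", min(np.linalg.norm(u - u_true),
                             np.linalg.norm(u + u_true)))
```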

    Simulation-based Methods for Stochastic Control and Global Optimization

    Ideas from stochastic control have found applications in a variety of areas. A subclass of problems with parameterized policies (including some stochastic impulse control problems) has received significant attention recently because of emerging applications in engineering, management, and mathematical finance. However, explicit solutions for this type of stochastic control problem exist only for special cases, and effective numerical methods are relatively rare. Deriving efficient stochastic derivative estimators for payoff functions with discontinuities, which arise in many problems of practical interest, is very challenging. Global optimization problems are extremely hard to solve due to the typically multimodal objective functions. With the increasing availability of computing power and memory, the merging of simulation and optimization techniques has been developing rapidly. Developing new, efficient simulation-based optimization algorithms for solving stochastic control and global optimization problems is the primary goal of this thesis.

    First, we develop a new simulation-based optimization algorithm to solve a stochastic control problem with a parameterized policy that arises in the setting of dynamic pricing and inventory control. We consider a joint dynamic pricing and inventory control problem with continuous stochastic demand and model it as a stochastic control problem. An explicit solution is given when a special demand model is considered. For general demand models with a parameterized policy, we develop a new simulation-based method to solve this stochastic control problem. We prove the convergence of the algorithm and show its effectiveness through numerical experiments.

    In the second part of this thesis, we focus on estimating the derivatives of a class of discontinuous payoff functions, for which existing methods are either not valid or not efficient. We derive a new unbiased stochastic derivative estimator for performance functions containing indicator functions. One important feature of this estimator is that it can be computed from a single sample path or simulation, whereas existing estimators in the literature require additional simulations.

    Finally, we propose a new framework for solving global optimization problems by establishing a connection with evolutionary games, and show that a particular equilibrium set of the evolutionary game is asymptotically stable. Based on this connection, we propose a Model-based Evolutionary Optimization (MEO) algorithm, which uses probabilistic models to generate new candidate solutions and dynamics from evolutionary game theory to govern the evolution of the probabilistic models. MEO gives new insight into the mechanism of model updating in model-based global optimization algorithms from the perspective of evolutionary game theory. Furthermore, it opens the door to developing new algorithms that draw on various learning algorithms and analysis techniques from evolutionary game theory.
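    For a flavor of model-based global optimization, the sketch below runs a cross-entropy-style loop that repeatedly samples candidates from a Gaussian model and refits the model to the best samples. This is not the MEO algorithm (whose model update follows evolutionary-game dynamics); the test function, sample size, and elite fraction are assumptions made for illustration.

```python
# Cross-entropy-style model-based search on a multimodal test function.
import numpy as np

def objective(x):                    # Rastrigin-like function (assumption);
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)  # global min at 0

rng = np.random.default_rng(1)
mean, std = np.zeros(2) + 3.0, np.ones(2) * 2.0   # initial Gaussian model
for _ in range(50):
    samples = rng.normal(mean, std, size=(200, 2))            # sample from model
    elite = samples[np.argsort(objective(samples))[:20]]      # keep best 10%
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # refit model
print("estimated minimizer:", mean)
```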