
    Optimal Orchestration of Virtual Network Functions

    The emergence of Network Functions Virtualization (NFV) brings a set of novel algorithmic challenges to the operation of communication networks. NFV introduces volatility into the management of network functions, which can be dynamically orchestrated, i.e., placed, resized, etc. Virtual Network Functions (VNFs) can belong to VNF chains, where nodes in a chain can serve multiple demands coming from the network edges. In this paper, we formally define the VNF placement and routing (VNF-PR) problem, proposing a versatile linear programming formulation that is able to accommodate specific features and constraints of NFV infrastructures, and that is substantially different from existing virtual network embedding formulations in the state of the art. We also design a math-heuristic able to scale with multiple objectives and large instances. Through extensive simulations, we draw conclusions on the trade-off achievable between classical traffic engineering (TE) and NFV infrastructure efficiency goals, evaluating both Internet access and Virtual Private Network (VPN) demands. We also quantitatively compare the performance of our VNF-PR heuristic with the classical Virtual Network Embedding (VNE) approach proposed for NFV orchestration, showing the computational differences and how our approach can provide a more stable, closer-to-optimum solution.
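    As an illustration of the placement side of such formulations, the following is a minimal sketch of a VNF placement ILP in Python with PuLP. It is not the paper's VNF-PR model (which also covers routing, chaining, and TE objectives); the nodes, VNFs, capacities and requirements are invented for the example.

```python
# Minimal sketch of a VNF placement ILP -- illustrative only, not the
# paper's VNF-PR formulation. Node capacities, VNF requirements, and the
# objective (minimize active nodes) are assumptions made for the example.
import pulp

nodes = ["n1", "n2", "n3"]               # NFV infrastructure nodes (assumed)
vnfs = ["firewall", "nat"]               # VNFs to place (assumed)
cpu_cap = {"n1": 4, "n2": 2, "n3": 2}    # CPU units per node (assumed)
cpu_req = {"firewall": 2, "nat": 1}      # CPU units per VNF (assumed)

prob = pulp.LpProblem("vnf_placement", pulp.LpMinimize)

# x[f][n] = 1 if VNF f is placed on node n
x = pulp.LpVariable.dicts("x", (vnfs, nodes), cat="Binary")
# y[n] = 1 if node n hosts at least one VNF (we pay per active node)
y = pulp.LpVariable.dicts("y", nodes, cat="Binary")

prob += pulp.lpSum(y[n] for n in nodes)              # minimize active nodes

for f in vnfs:                                       # place each VNF exactly once
    prob += pulp.lpSum(x[f][n] for n in nodes) == 1
for n in nodes:                                      # respect node CPU capacity
    prob += pulp.lpSum(cpu_req[f] * x[f][n] for f in vnfs) <= cpu_cap[n] * y[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for f in vnfs:
    for n in nodes:
        if x[f][n].value() == 1:
            print(f, "->", n)
```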

    A hybrid approach to constrained global optimization

    In this paper, we propose a novel hybrid global optimization method to solve constrained optimization problems. An exact penalty function is first applied to approximate the original constrained optimization problem by a sequence of optimization problems with bound constraints. To solve each of these box-constrained optimization problems, two hybrid methods are introduced, where two different strategies are used to combine limited memory BFGS (L-BFGS) with Greedy Diffusion Search (GDS). The convergence of the two hybrid methods is also addressed. To evaluate the effectiveness of the proposed algorithm, 18 box-constrained and 4 general constrained problems from the literature are tested. The numerical results show that our proposed hybrid algorithm obtains more accurate solutions than the methods it is compared against.
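    To make the penalty idea concrete, here is a minimal sketch of the exact-penalty-plus-L-BFGS loop in Python with SciPy. The objective, constraint and penalty weights are illustrative assumptions, and the paper's Greedy Diffusion Search component is omitted; note also that the l1 exact penalty is nonsmooth, which is one reason to hybridize a quasi-Newton method with a derivative-free search as the paper does.

```python
# Sketch of the exact-penalty idea: approximate a constrained problem by a
# sequence of bound-constrained ones, each solved with L-BFGS-B. Generic
# illustration only -- not the paper's algorithm (the GDS component is
# omitted, and the example problem below is invented).
import numpy as np
from scipy.optimize import minimize

def f(x):                        # objective (illustrative)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def g(x):                        # inequality constraint g(x) <= 0 (illustrative)
    return x[0] + x[1] - 3.0

def penalized(x, rho):
    # Exact l1 penalty: for sufficiently large rho, minimizers of the
    # penalized bound-constrained problem solve the original problem.
    return f(x) + rho * max(0.0, g(x))

bounds = [(0.0, 4.0), (0.0, 4.0)]
x0 = np.array([2.0, 2.0])
for rho in [1.0, 10.0, 100.0]:   # increase the penalty weight across the sequence
    res = minimize(penalized, x0, args=(rho,), method="L-BFGS-B", bounds=bounds)
    x0 = res.x                   # warm-start the next subproblem
print(x0, f(x0), g(x0))
```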

    Arbitrarily tight αBB underestimators of general non-linear functions over sub-optimal domains

    In this paper we explore the construction of arbitrarily tight αBB relaxations of C² general non-linear non-convex functions. We illustrate the theoretical challenges of building such relaxations by deriving conditions under which it is possible for an αBB underestimator to provide exact bounds. We subsequently propose a methodology to build αBB underestimators which may be arbitrarily tight (i.e., the maximum separation distance between the original function and its underestimator is arbitrarily close to 0) in some domains that do not include the global solution (defined in the text as “sub-optimal”), assuming exact eigenvalue calculations are possible. This is achieved using a transformation of the original function into a μ-subenergy function and the derivation of αBB underestimators for the new function. We prove that this transformation results in a number of desirable bounding properties in certain domains. These theoretical results are validated in computational test cases where approximations of the tightest possible μ-subenergy underestimators, derived using sampling, are compared to similarly derived approximations of the tightest possible classical αBB underestimators. Our tests show that μ-subenergy underestimators produce much tighter bounds, and succeed in fathoming nodes which are impossible to fathom using classical αBB.
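    For reference, the classical αBB underestimator the paper builds on has the closed form L(x) = f(x) + α Σᵢ (xᵢᴸ − xᵢ)(xᵢᵁ − xᵢ), with maximum separation distance (α/4) Σᵢ (xᵢᵁ − xᵢᴸ)², which is why tighter α values (and the paper's μ-subenergy transform) matter. The sketch below illustrates this on an invented one-dimensional function where a valid α is available in closed form; it does not implement the μ-subenergy transformation.

```python
# Classical alphaBB underestimator on a box [xL, xU]:
#   L(x) = f(x) + alpha * (xL - x) * (xU - x)
# The added term is nonpositive on the box, so L <= f there, and for
# alpha >= max(0, -min f''(x) / 2) over the box, L is convex.
# Illustrative 1-D example with a closed-form Hessian bound.
import numpy as np

def f(x):                        # non-convex test function (illustrative)
    return np.sin(3.0 * x) + 0.1 * x**2

# f''(x) = -9 sin(3x) + 0.2 >= -8.8 everywhere, so alpha = 4.4 suffices.
alpha = 4.4
xL, xU = 0.0, 2.0

def underestimator(x):
    return f(x) + alpha * (xL - x) * (xU - x)

xs = np.linspace(xL, xU, 201)
gap = f(xs) - underestimator(xs)
print("max separation:", gap.max(),
      "theoretical bound:", alpha / 4 * (xU - xL) ** 2)
```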

    Global Optimization by Differential Evolution and Particle Swarm Methods: Evaluation on Some Benchmark Functions

    In this paper we compare the performance of the Differential Evolution (DE) and the Repulsive Particle Swarm (RPS) methods of global optimization. To this end, seventy test functions have been chosen. Among these test functions, some are new while others are well known in the literature; some are unimodal, others multi-modal; some are small in dimension (number of variables, x in f(x)), while others are large in dimension; some are algebraic polynomial equations, while others are transcendental, etc. FORTRAN programs of DE and RPS have been appended. Among the 70 functions, a few have been run for small as well as large dimensions, so in total 73 optimization exercises have been carried out. DE has succeeded in 63 cases while RPS has succeeded in 55 cases. In almost all cases, DE has converged faster and given much more accurate results. The convergence of RPS is much slower, even under less stringent accuracy requirements. Some test functions have been hard for both methods. These are: Zero-Sum (30D), Perm#1, Perm#2, Power and Bukin functions, Weierstrass, and Michalewicz functions. From what we find, one cannot reach a definite conclusion that DE performs better or worse than RPS; neither can claim supremacy over the other. Each one faltered in some cases; each one succeeded in others. However, DE is unquestionably faster, more accurate and more frequently successful than RPS. It may be argued, nevertheless, that an alternative choice of adjustable parameters could have yielded better results for either method, and the protagonists of either method could suggest as much. Our purpose is not to side with one or the other. We simply want to highlight that in certain cases they both succeed, in certain other cases they both fail, and each one has some selective preference for particular types of surfaces. What is needed is to identify the structures and surfaces that suit a particular method best, and to find criteria to classify the problems that suit (or do not suit) a particular method. This classification will highlight the comparative advantages of using a particular method for dealing with a particular class of problems.
    Keywords: Global optimization; Stochastic search; Repulsive particle swarm; Differential Evolution; Clustering algorithm; Simulated annealing; Genetic algorithm; Tabu search; Ant Colony algorithm; Monte Carlo method; Box algorithm; Nelder-Mead; Nonlinear programming; FORTRAN computer program; local optima; Benchmark; test functions
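    For readers unfamiliar with the general DE scheme benchmarked here, the following is a minimal DE/rand/1/bin sketch in Python (the programs appended to the paper are in FORTRAN). The values of F and CR are common defaults, not the paper's settings.

```python
# Minimal Differential Evolution (DE/rand/1/bin) sketch -- illustrative of
# the general scheme, not a reproduction of the paper's FORTRAN program.
import numpy as np

def de(func, bounds, pop_size=30, F=0.8, CR=0.9, iters=500, rng=None):
    rng = rng or np.random.default_rng(0)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.apply_along_axis(func, 1, pop)
    for _ in range(iters):
        for i in range(pop_size):
            # mutation: combine three distinct random members (rand/1)
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover: take each coordinate from the mutant with prob CR
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # guarantee at least one coordinate
            trial = np.where(cross, mutant, pop[i])
            ft = func(trial)
            if ft <= fit[i]:                    # greedy (Pareto-improving) replacement
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]

# Example on the 5-D sphere function:
x, fx = de(lambda x: np.sum(x**2), [(-5, 5)] * 5)
print(x, fx)
```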

    A Multi-Layer Line Search Method to Improve the Initialization of Optimization Algorithms

    We introduce a novel metaheuristic methodology to improve the initialization of a given deterministic or stochastic optimization algorithm. Our objective is to improve the performance of the considered algorithm, called the core optimization algorithm, by reducing its number of cost function evaluations, by increasing its success rate and by boosting the precision of its results. In our approach, the core optimization is considered as a sub-optimization problem for a multi-layer line search method. The approach is presented and implemented for various particular core optimization algorithms: Steepest Descent, Heavy-Ball, Genetic Algorithm, Differential Evolution and Controlled Random Search. We validate our methodology by considering a set of low and high dimensional benchmark problems (i.e., problems of dimension between 2 and 1000). The results are compared to those obtained with the core optimization algorithms alone and with two additional global optimization methods (Direct Tabu Search and Continuous Greedy Randomized Adaptive Search), which also aim at improving the initial condition for the core algorithms. The numerical results seem to indicate that our approach improves the performance of the core optimization algorithms and allows us to generate algorithms more efficient than the other optimization methods studied here. A Matlab optimization package called “Global Optimization Platform” (GOP), implementing the algorithms presented here, has been developed and can be downloaded at: http://www.mat.ucm.es/momat/software.ht
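    The abstract does not detail the multi-layer construction, so the following is only a heavily simplified sketch of the underlying idea: spend a small evaluation budget searching along random lines through the domain, then hand the best point found to the core optimizer as its initial condition. All function names and parameters below are illustrative assumptions, not the paper's method.

```python
# Heavily simplified sketch of line-search-based initialization: sample
# along a few random segments in the box, keep the best point, then run a
# core optimizer from it. The paper's multi-layer scheme is more elaborate
# and is NOT reproduced here; this only illustrates the general idea.
import numpy as np
from scipy.optimize import minimize

def line_search_init(func, lo, hi, n_lines=10, n_samples=25, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best_x, best_f = None, np.inf
    for _ in range(n_lines):
        # pick a random segment inside the box and sample along it
        p = lo + rng.random(lo.size) * (hi - lo)
        q = lo + rng.random(lo.size) * (hi - lo)
        for t in np.linspace(0.0, 1.0, n_samples):
            x = (1 - t) * p + t * q
            fx = func(x)
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x

# Feed the improved initial point to a core optimizer (L-BFGS-B here,
# standing in for the paper's Steepest Descent / Heavy-Ball cores):
rosen = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
x0 = line_search_init(rosen, [-5, -5], [5, 5])
print(minimize(rosen, x0, method="L-BFGS-B", bounds=[(-5, 5)] * 2).x)
```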

    Performance of Differential Evolution and Particle Swarm Methods on Some Relatively Harder Multi-modal Benchmark Functions

    This paper aims at comparing the performance of the Differential Evolution (DE) and the Repulsive Particle Swarm (RPS) methods of global optimization. To this end, some relatively difficult test functions have been chosen. Among these test functions, some are new while others are well known in the literature. We use the DE method with the exponential crossover scheme as well as with no crossover (only probabilistic replacement). Our findings suggest that DE (with the exponential crossover scheme) mostly fails to find the optimum for the functions under study. It does succeed for some functions (Perm#2, Zero-sum) in very small dimensions, but begins to falter as soon as the dimension is increased; for the DCS function, it works well up to dimension 5. When we use no crossover (only probabilistic replacement) we obtain better results for several of the functions under study; for the Perm#1, Perm#2, Zero-sum, Kowalik, Hougen and Power-sum functions, the advantage is remarkable. With or without crossover, DE falters when the optimand function has some element of randomness, as indicated by the functions Yao-Liu#7, Fletcher-Powell, and “New function #2”. DE has no problems optimizing “New function #1”, but “New function #2” proves to be a hard nut to crack. RPS, however, performs much better on such stochastic functions. When the Fletcher-Powell function is optimized with a non-stochastic c vector, DE works fine, but as soon as c is stochastic, it becomes unstable. Thus, it may be observed that introducing stochasticity into the decision variables (or simply adding it to the function, as in Yao-Liu#7) interferes with the fundamentals of DE, which works through the attainment of a better and better (in the sense of Pareto improvement) population at each successive iteration. The paper concludes: (1) for different types of problems, different schemes of crossover (including none) may be suitable or unsuitable; (2) stochasticity entering into the optimand function may make DE unstable, while RPS may still function well.
    Keywords: Differential Evolution; Repulsive Particle Swarm; Global optimization; non-convex functions; Fortran; computer program; benchmark; test; Stochastic functions; Fletcher-Powell; Kowalik; Hougen; Power-sum; Perm; Zero-sum; New functions; Bukin function
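    Since the exponential crossover scheme is central to this comparison, here is a minimal sketch of that operator in Python (the "no crossover" variant simply adopts the mutant vector directly, keeping only the greedy replacement step). Parameter values are illustrative, not the paper's settings.

```python
# Sketch of DE's exponential crossover: starting at a random coordinate,
# copy a contiguous run of mutant coordinates whose length is geometrically
# distributed with parameter CR, wrapping around the vector. Illustrative
# of the standard operator, not of the paper's exact implementation.
import numpy as np

def exponential_crossover(target, mutant, CR, rng):
    dim = target.size
    trial = target.copy()
    k = rng.integers(dim)        # random starting coordinate
    trial[k] = mutant[k]         # at least one coordinate always crosses over
    j = (k + 1) % dim
    while j != k and rng.random() < CR:
        trial[j] = mutant[j]     # extend the contiguous run from the mutant
        j = (j + 1) % dim
    return trial

rng = np.random.default_rng(0)
t = exponential_crossover(np.zeros(8), np.ones(8), CR=0.7, rng=rng)
print(t)                         # a contiguous (wrapped) block of ones
```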