
    Differential Evolution for Multiobjective Portfolio Optimization

    Financial portfolio optimization is a challenging problem. First, the problem is multiobjective (i.e., minimize risk and maximize profit) and the objective functions are often multimodal and non-smooth (e.g., value at risk). Second, managers often have to face real-world constraints, which are typically non-linear. Hence, conventional optimization techniques, such as quadratic programming, cannot be used. Stochastic search heuristics can be an attractive alternative. In this paper, we propose a new multiobjective algorithm for portfolio optimization: DEMPO - Differential Evolution for Multiobjective Portfolio Optimization. The main advantage of this new algorithm is its generality, i.e., the ability to tackle a portfolio optimization task as it is, without simplifications. Our empirical results show the capability of our approach to obtain highly accurate results in very reasonable runtime, in comparison with quadratic programming and another state-of-the-art search heuristic, the so-called NSGA-II.
    Keywords: Portfolio Optimization, Multiobjective, Real-world Constraints, Value at Risk, Expected Shortfall, Differential Evolution
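
    The following is a minimal sketch of differential evolution applied to a risk/return portfolio problem. It is not the DEMPO algorithm from the paper: the two objectives are scalarized with an assumed risk-aversion weight, and the return and covariance data are made up purely for illustration.

```python
# Minimal sketch of DE/rand/1/bin for a long-only portfolio.
# Assumptions (not from the paper): scalarized objective, synthetic data.
import numpy as np

rng = np.random.default_rng(0)

n_assets = 5
mu = rng.uniform(0.02, 0.12, n_assets)        # hypothetical expected returns
A = rng.normal(size=(n_assets, n_assets))
cov = A @ A.T / n_assets                      # hypothetical covariance matrix

def repair(w):
    """Project weights onto the simplex (long-only, fully invested)."""
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def fitness(w, risk_aversion=3.0):
    """Scalarized objective: risk_aversion * variance - expected return."""
    return risk_aversion * w @ cov @ w - mu @ w

pop_size, F, CR, generations = 30, 0.7, 0.9, 200
pop = np.array([repair(rng.random(n_assets)) for _ in range(pop_size)])
fit = np.array([fitness(w) for w in pop])

for _ in range(generations):
    for i in range(pop_size):
        idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                      # differential mutation
        cross = rng.random(n_assets) < CR
        cross[rng.integers(n_assets)] = True          # at least one gene crosses
        trial = repair(np.where(cross, mutant, pop[i]))
        f_trial = fitness(trial)
        if f_trial <= fit[i]:                         # greedy selection
            pop[i], fit[i] = trial, f_trial

best = pop[np.argmin(fit)]
print("best weights:", np.round(best, 3), "expected return:", round(mu @ best, 4))
```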

    On the use of biased-randomized algorithms for solving non-smooth optimization problems

    Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, it is frequent that some deadlines cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., they can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost will be nonlinear and even noncontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus employing short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
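
    The sketch below illustrates the biased-randomization idea in its simplest form: instead of always taking the greedy-best candidate, a constructive heuristic samples a position in the sorted candidate list from a skewed (here geometric) distribution, so good candidates remain favoured while repeated runs diversify. The nearest-neighbour tour builder and the beta value are illustrative assumptions, not taken from the paper.

```python
# Biased-randomized constructive heuristic with multi-start (illustrative).
import math
import random

def geometric_pick(n, beta=0.3, rng=random):
    """Return an index in [0, n) with P(k) proportional to (1 - beta)^k."""
    k = int(math.log(1.0 - rng.random()) / math.log(1.0 - beta))
    return min(k, n - 1)

def biased_nearest_neighbour(dist, beta=0.3, seed=None):
    """Build a tour by choosing among the closest unvisited nodes, biased to the best."""
    rng = random.Random(seed)
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        current = tour[-1]
        candidates = sorted(unvisited, key=lambda j: dist[current][j])
        tour.append(candidates[geometric_pick(len(candidates), beta, rng)])
        unvisited.remove(tour[-1])
    return tour

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

if __name__ == "__main__":
    rng = random.Random(42)
    pts = [(rng.random(), rng.random()) for _ in range(20)]
    dist = [[math.dist(p, q) for q in pts] for p in pts]
    # Multi-start: each independent run explores a different promising region,
    # and the runs could be executed in parallel.
    best = min((biased_nearest_neighbour(dist, seed=s) for s in range(100)),
               key=lambda t: tour_length(dist, t))
    print("best tour length:", round(tour_length(dist, best), 3))
```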

    A similarity-based cooperative co-evolutionary algorithm for dynamic interval multi-objective optimization problems

    Dynamic interval multi-objective optimization problems (DI-MOPs) are very common in real-world applications. However, few evolutionary algorithms are suitable for tackling DI-MOPs to date. A framework of dynamic interval multi-objective cooperative co-evolutionary optimization based on interval similarity is presented in this paper to handle DI-MOPs. In the framework, a strategy for decomposing decision variables is first proposed, through which all the decision variables are divided into two groups according to the interval similarity between each decision variable and the interval parameters. Following that, two sub-populations are utilized to cooperatively optimize the decision variables in the two groups. Furthermore, two response strategies, i.e., a strategy based on the change intensity and a random mutation strategy, are employed to rapidly track the changing Pareto front of the optimization problem. The proposed algorithm is applied to eight benchmark optimization instances as well as a multi-period portfolio selection problem and compared with five state-of-the-art evolutionary algorithms. The experimental results reveal that the proposed algorithm is very competitive on most optimization instances.
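
    To make the cooperative co-evolution scheme concrete, here is a minimal sketch of the underlying idea: the decision variables are split into two groups, each group is evolved by its own sub-population, and a candidate is evaluated by pairing it with the current best of the other group. The fixed variable split and the toy sphere objective are assumptions for illustration only; the interval-similarity decomposition, the interval objectives, and the response strategies from the paper are not reproduced.

```python
# Two-group cooperative co-evolution on a toy objective (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
dim = 10
group_a, group_b = np.arange(0, 5), np.arange(5, 10)   # assumed fixed split

def objective(x):
    return float(np.sum(x ** 2))                        # toy single objective

def evolve(pop, partner_best, group, sigma=0.1):
    """One (mu+lambda)-style step: mutate, evaluate against the partner's best, keep the best."""
    children = pop + rng.normal(0.0, sigma, pop.shape)
    merged = np.vstack([pop, children])
    full = np.tile(partner_best, (len(merged), 1))      # complete each candidate
    full[:, group] = merged
    scores = np.array([objective(x) for x in full])
    return merged[np.argsort(scores)[: len(pop)]]

pop_a = rng.uniform(-5, 5, (20, len(group_a)))
pop_b = rng.uniform(-5, 5, (20, len(group_b)))
context = rng.uniform(-5, 5, dim)                       # shared context vector

for _ in range(100):
    pop_a = evolve(pop_a, context, group_a)
    context[group_a] = pop_a[0]                         # best of sub-population A
    pop_b = evolve(pop_b, context, group_b)
    context[group_b] = pop_b[0]                         # best of sub-population B

print("best value:", round(objective(context), 6))
```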

    Review of Metaheuristics and Generalized Evolutionary Walk Algorithm

    Metaheuristic algorithms are often nature-inspired, and they are becoming very powerful in solving global optimization problems. More than a dozen major metaheuristic algorithms have been developed over the last three decades, and there exist even more variants and hybrids of metaheuristics. This paper intends to provide an overview of nature-inspired metaheuristic algorithms, from a brief history to their applications. We try to analyze the main components of these algorithms and how and why they work. Then, we intend to provide a unified view of metaheuristics by proposing a generalized evolutionary walk algorithm (GEWA). Finally, we discuss some of the important open questions.
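
    The abstract does not give GEWA's exact formulation, so the following is only a generic "random walk plus selection" skeleton of the kind many metaheuristics share: a global random step explores the search space, a local walk around the incumbent exploits it, and elitist acceptance keeps the best solution found.

```python
# Generic random-walk metaheuristic skeleton (illustrative, not GEWA itself).
import numpy as np

rng = np.random.default_rng(7)

def rastrigin(x):
    """Common multimodal test function for global optimization."""
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def evolutionary_walk(f, dim=5, steps=20000, step_size=0.5, p_global=0.2,
                      lower=-5.12, upper=5.12):
    best = rng.uniform(lower, upper, dim)
    best_val = f(best)
    for _ in range(steps):
        if rng.random() < p_global:
            cand = rng.uniform(lower, upper, dim)                    # global exploration
        else:
            cand = np.clip(best + rng.normal(0, step_size, dim),
                           lower, upper)                             # local random walk
        val = f(cand)
        if val < best_val:                                           # elitist selection
            best, best_val = cand, val
    return best, best_val

best, val = evolutionary_walk(rastrigin)
print("best value found:", round(val, 4))
```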