    Guaranteeing highly robust weakly efficient solutions for uncertain multi-objective convex programs

    This paper deals with uncertain multi-objective convex programming problems, where the data of the objective function or the constraints or both are allowed to be uncertain within specified uncertainty sets. We present sufficient conditions for the existence of highly robust weakly efficient solutions, that is, robust feasible solutions which are weakly efficient for any possible instance of the objective function within a specified uncertainty set. This is done by way of estimating the radius of highly robust weak efficiency under linearly distributed uncertainty of the objective functions. In the particular case of robust quadratic multi-objective programs, we show that these sufficient conditions can be expressed in terms of the original data of the problem, extending and improving the corresponding results in the literature for robust multi-objective linear programs under ball uncertainty. This research was partially supported by the Australian Research Council, Discovery Project DP120100467, and the MINECO of Spain and ERDF of EU, Grants MTM2014-59179-C2-1-P and ECO2016-77200-P.
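
    A rough formalization of the solution concept described above, in illustrative notation assumed here rather than quoted from the paper: for objectives f_i(x, u_i) with uncertain data u_i and a robust feasible set X, a point is highly robust weakly efficient if it remains weakly efficient for every realization of the objective uncertainty.

        % Illustrative LaTeX sketch (notation assumed, not the paper's):
        % robust feasible set under constraint uncertainty v_j \in \mathcal{V}_j
        \[
        X := \{\, x \in \mathbb{R}^n : g_j(x, v_j) \le 0 \ \ \forall v_j \in \mathcal{V}_j,\ j = 1, \dots, m \,\}.
        \]
        % x^* \in X is highly robust weakly efficient if, for every objective scenario
        % u = (u_1, \dots, u_p) \in \mathcal{U}_1 \times \dots \times \mathcal{U}_p,
        % no robust feasible point strictly improves all objectives at once:
        \[
        \nexists\, x \in X \ \text{with}\ f_i(x, u_i) < f_i(x^*, u_i) \ \ \text{for all } i = 1, \dots, p.
        \]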

    Notes on the value function approach to multiobjective bilevel optimization

    This paper is concerned with the value function approach to multiobjective bilevel optimization, which exploits a lower level frontier-type mapping in order to replace the hierarchical model of two interdependent multiobjective optimization problems by a single-level multiobjective optimization problem. As a starting point, different value-function-type reformulations are suggested and their relations are discussed. Here, we focus on the situations where the lower level problem is solved up to efficiency or weak efficiency, and an intermediate solution concept is suggested as well. We study the graph-closedness of the associated efficiency-type and frontier-type mappings. These findings are then used for two purposes. First, we investigate existence results in multiobjective bilevel optimization. Second, for the derivation of necessary optimality conditions via the value function approach, one inherently has to differentiate frontier-type mappings in a generalized sense. Here, we are concerned with the computation of upper coderivative estimates for the frontier-type mapping associated with the setting where the lower level problem is solved up to weak efficiency. We proceed in two ways, relying, on the one hand, on a weak domination property and, on the other hand, on a scalarization approach. Throughout the paper, illustrative examples visualize our findings, the necessity of crucial assumptions, and some flaws in the related literature.
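
    The following is a schematic version of the value-function-type reformulation sketched in the abstract; the symbols are assumptions made for illustration, not the paper's exact notation.

        % Lower level multiobjective problem, parameterized by the upper level variable x:
        %   \min_y f(x, y) \quad \text{s.t.} \quad y \in Y(x),
        % with S_w(x) denoting its weakly efficient solutions and the frontier-type mapping
        %   \Phi_w(x) := f(x, S_w(x)) = \{\, f(x, y) : y \in S_w(x) \,\}.
        % Single-level (value-function-type) reformulation of the bilevel model:
        \[
        \min_{x, y}\ F(x, y) \quad \text{s.t.} \quad x \in X,\ \ y \in Y(x),\ \ f(x, y) \in \Phi_w(x),
        \]
        % which replaces the hierarchical constraint y \in S_w(x).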

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, optimization of feedforward neural networks (FNNs) has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, and so on. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain well-generalizing FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to keep pace with the present information-processing era.
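
    As an illustration of the metaheuristic viewpoint discussed in the abstract, the sketch below trains the weights of a small FNN with a simple (1+lambda) evolution strategy instead of backpropagation; the toy data, architecture, and hyperparameters are assumptions chosen for demonstration, not taken from the article.

        # Minimal sketch: derivative-free (1+lambda) evolution strategy for FNN weights.
        import numpy as np

        rng = np.random.default_rng(0)

        # Toy regression task: approximate y = sin(x) on [-3, 3].
        X = np.linspace(-3, 3, 64).reshape(-1, 1)
        y = np.sin(X)

        def init_params(n_in=1, n_hidden=16, n_out=1):
            # One hidden layer with tanh activation.
            return {
                "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
                "b1": np.zeros(n_hidden),
                "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)),
                "b2": np.zeros(n_out),
            }

        def forward(params, X):
            h = np.tanh(X @ params["W1"] + params["b1"])
            return h @ params["W2"] + params["b2"]

        def mse(params):
            return float(np.mean((forward(params, X) - y) ** 2))

        def mutate(params, sigma):
            # Gaussian perturbation of every weight matrix and bias vector.
            return {k: v + rng.normal(0.0, sigma, v.shape) for k, v in params.items()}

        def evolve(generations=300, offspring=20, sigma=0.1):
            best = init_params()
            best_fit = mse(best)
            for _ in range(generations):
                for _ in range(offspring):
                    child = mutate(best, sigma)
                    fit = mse(child)
                    if fit < best_fit:          # greedy (1+lambda) selection
                        best, best_fit = child, fit
            return best, best_fit

        if __name__ == "__main__":
            _, err = evolve()
            print(f"final training MSE: {err:.4f}")

    In this sketch the only feedback signal is the fitness (training error), so the same loop applies to non-differentiable activations or discrete architecture choices, which is the main appeal of metaheuristics over gradient descent.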

    Optimization in Industrial Systems


    Stochastic measures of financial markets efficiency and integration

    The notion of integration of different financial markets is often related to the absence of cross-market arbitrage opportunities. Under the appropriate assumptions and in the absence of cross-market arbitrage opportunities, a risk-neutral probability measure, shared by both markets, must exist. Some authors have used this fact to derive integration measures when the markets do not share any pricing rule, but always in static (or one-period) asset pricing models. The purpose of this paper is to extend these notions to a more general context. This is accomplished by introducing a methodology that may be applied in any intertemporal dynamic asset pricing model and without special assumptions on the stochastic processes of asset prices. The integration measures introduced here are then stochastic processes testing different relative arbitrage profits and depending on the state of nature and on the date. The measures are introduced in a single financial market. When this market is not a global market formed from different ones, the measures simply test the degree of market efficiency. Transaction costs can be incorporated in our model, so one can measure efficiency and integration in models with frictions. The main results are also interesting from a mathematical point of view, since some topics of Operational Research are involved. We provide a procedure to solve a vector optimization problem with a nondifferentiable objective function and prove some properties about its sensitivity.
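
    For context, the standard link invoked in the abstract between absence of arbitrage and risk-neutral pricing can be written as follows; this is a generic textbook formulation, not the paper's specific construction.

        % Under suitable assumptions, absence of (cross-market) arbitrage is equivalent to
        % the existence of a risk-neutral measure Q, equivalent to P, under which the
        % discounted price process S is a martingale:
        \[
        \exists\, Q \sim P \ \text{such that} \ \ S_t = \mathbb{E}_Q\!\left[\, S_T \mid \mathcal{F}_t \,\right]
        \quad \text{for all } 0 \le t \le T.
        \]

    The paper's integration and efficiency measures can then be read as state- and date-dependent quantifications, via relative arbitrage profits, of how far a (combined) market is from admitting such a shared measure.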