
    The determination of the least distance to the strongly efficient frontier in Data Envelopment Analysis oriented models: modelling and computational aspects

    Determining the least distance to the efficient frontier for estimating technical inefficiency, with the consequent determination of closest targets, has been one of the relevant issues in recent Data Envelopment Analysis literature. This new paradigm contrasts with traditional approaches, which yield furthest targets. Several techniques have been proposed to implement the new paradigm. One group of techniques is based on identifying all the efficient faces of the polyhedral production possibility set and is therefore associated with solving an NP-hard problem. A second group proposes different models and particular algorithms that avoid the explicit identification of all these faces. These techniques have been applied with varying degrees of success; nonetheless, the new paradigm remains unsatisfactory and incomplete to a certain extent. One open challenge is measuring technical inefficiency in the context of oriented models, i.e., models that aim at changing inputs or outputs but not both. In this paper, we show that existing techniques for determining the least distance without explicitly identifying the frontier structure, developed for graph measures that change inputs and outputs at the same time, do not work for oriented models. Consequently, a new methodology that handles these situations satisfactorily is proposed. Finally, the new approach is empirically checked using a recent PISA database consisting of 902 schools.
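    The closest-versus-furthest target distinction can be illustrated with a toy sketch. Note the simplification: a dominance-based (FDH-style) frontier stands in for DEA's convex polyhedral frontier, and the single-input single-output data are hypothetical, not from the paper.

```python
# Toy illustration of closest vs furthest efficient targets, using a
# dominance-based (FDH-style) frontier instead of DEA's convex hull.
# All (input, output) data below are hypothetical.

def dominates(a, b):
    """Unit a dominates b if it uses no more input and yields no less
    output, with at least one strict improvement."""
    (xa, ya), (xb, yb) = a, b
    return xa <= xb and ya >= yb and (xa < xb or ya > yb)

units = [(2, 4), (3, 6), (5, 7), (4, 5), (6, 6), (8, 7)]

# Efficient units: those not dominated by any other unit.
efficient = [u for u in units if not any(dominates(v, u) for v in units)]

def targets(u):
    """Closest and furthest dominating efficient targets (L1 distance)."""
    doms = [e for e in efficient if dominates(e, u)]
    if not doms:
        return u, u  # u is already efficient
    dist = lambda e: abs(e[0] - u[0]) + abs(e[1] - u[1])
    return min(doms, key=dist), max(doms, key=dist)

closest, furthest = targets((6, 6))
```

    For the inefficient unit (6, 6), the closest target (5, 7) requires far smaller adjustments than the furthest one (3, 6), which is the intuition behind the least-distance paradigm.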

    Medición de la eficiencia y la productividad: Aspectos computacionales [Measuring efficiency and productivity: computational aspects]

    Programa de Doctorado en Economía (DECiDE). The purpose of efficiency and productivity analysis is to evaluate whether a company or public institution (in general, any decision-making unit) uses its available resources (inputs) in the optimal way, generating the largest possible amount of outputs. Several mathematical models for these calculations have been proposed in the specialized literature, all based on Mathematical Programming and some, in particular, on Mixed Integer Linear Programming (MILP). These problems combine continuous and discrete variables in the same mathematical model, together with numerous constraints that depend on the nature of the problem; features that can make the resolution process difficult. Moreover, in practice these problems tend to be combinatorial (NP-hard). This work focuses on a field within Operations Research called Data Envelopment Analysis (DEA), whose main objective is the estimation of production frontiers and the measurement of productive efficiency. Different optimization models from this field are put to the test in this thesis from a purely computational perspective, solved through different techniques, both exact and approximate, and analyzed for performance and difficulty. The main objective of this work is not the development and modeling of new DEA problems, but how to obtain optimal solutions in reasonable time for certain combinatorial problems: since they are NP-hard, the difficulty of finding optimal solutions, especially quickly, grows with problem size.
    The focus is therefore on the study and design of approximation techniques, known in the literature as metaheuristics, closely linked to Machine Learning and Artificial Intelligence methodologies. In addition to these methodologies, based on learning from and improving the solutions obtained, parallelization techniques have been incorporated, capable of efficiently reducing the time needed to obtain optimal solutions to complex problems.
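    The metaheuristic idea, trading guaranteed optimality for speed on NP-hard instances, can be sketched in a few lines. This is a generic bit-flip local search on a hypothetical 0/1 knapsack instance, not any of the thesis's DEA-specific algorithms.

```python
# Minimal local-search metaheuristic on a toy 0/1 knapsack instance,
# illustrating the approximate-solution idea; instance data and the
# iteration budget are hypothetical, not taken from the thesis.
import random

values  = [10, 7, 4, 9, 6]
weights = [ 5, 4, 2, 6, 3]
CAP = 10

def score(sol):
    """Total value of a selection, or -1 if it exceeds the capacity."""
    w = sum(wi for wi, s in zip(weights, sol) if s)
    if w > CAP:
        return -1  # infeasible
    return sum(vi for vi, s in zip(values, sol) if s)

def local_search(iters=500, seed=0):
    """Repeatedly flip one random item in/out, keeping non-worsening moves."""
    rng = random.Random(seed)
    best = [0] * len(values)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(len(cand))] ^= 1
        if score(cand) >= score(best):
            best = cand
    return best, score(best)

sol, val = local_search()
```

    The search only guarantees a feasible, locally good selection; escaping local optima is exactly what the more sophisticated metaheuristics (and their parallel variants) discussed above are designed for.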

    Synthesis, Interdiction, and Protection of Layered Networks

    This research developed the foundation, theory, and framework for a set of analysis techniques to assist decision makers in analyzing questions about the synthesis, interdiction, and protection of infrastructure networks. This includes extension of traditional network interdiction to directly model nodal interdiction; new techniques to identify potential targets in social networks based on extensions of shortest-path network interdiction; extension of traditional network interdiction to layered network formulations; and models and techniques to design robust layered networks while considering trade-offs with cost. These approaches identify the maximum protection/disruption possible across layered networks with limited resources; find the most robust layered network design possible given budget limitations while ensuring that demands are met; include traditional social network analysis; and incorporate new techniques to model the interdiction of nodes and edges throughout the formulations. In addition, the importance and effects of multiple optimal solutions for these (and similar) models are investigated. All the models developed are demonstrated on notional examples and were tested on a range of sample problem sets.
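    The core shortest-path interdiction question can be made concrete by brute force on a tiny instance: an interdictor removes a budgeted number of arcs so as to maximize the evader's shortest path. The graph and budget below are hypothetical; realistic instances require the MILP formulations this work extends.

```python
# Brute-force shortest-path interdiction on a tiny directed graph:
# remove k arcs to maximize the evader's shortest s-t distance.
# Graph and budget are illustrative only.
import heapq
from itertools import combinations

arcs = {('s','a'): 2, ('s','b'): 4, ('a','b'): 1, ('a','t'): 7, ('b','t'): 3}

def shortest(removed):
    """Dijkstra over the surviving arcs; inf if t becomes unreachable."""
    dist = {'s': 0}
    pq = [(0, 's')]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for (x, y), w in arcs.items():
            if x == u and (x, y) not in removed and d + w < dist.get(y, float('inf')):
                dist[y] = d + w
                heapq.heappush(pq, (d + w, y))
    return dist.get('t', float('inf'))

def interdict(k):
    """Enumerate every k-arc removal and keep the most disruptive one."""
    return max(combinations(arcs, k), key=lambda r: shortest(set(r)))

best = interdict(1)
```

    Here the unimpeded shortest path costs 6; removing the single arc ('b','t') forces the evader onto a path of cost 9, more than any other single removal achieves.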

    Progressively interactive evolutionary multiobjective optimization

    A complete optimization procedure for a multi-objective problem essentially comprises search and decision making. Depending on how the two tasks are integrated, algorithms can be classified into various categories. A 'decision making after search' approach, common among evolutionary multi-objective optimization algorithms, requires producing all possible alternatives before a decision can be taken. Given the intricacies of producing the entire Pareto front, this is not a wise approach for many-objective problems; for such problems, the most preferred point on the front should be the target. In this study we propose and evaluate algorithms in which search and decision making work in tandem, with the most preferred solution as the outcome. For the two tasks to work simultaneously, interaction between the decision maker and the algorithm is necessary; the algorithm therefore accepts preference information from the decision maker periodically and progresses towards the most preferred point. Two progressively interactive procedures are suggested in the dissertation that can be integrated with any existing evolutionary multi-objective optimization algorithm, improving its effectiveness on many-objective problems by enabling it to accept preference information at intermediate steps. A number of many-objective unconstrained as well as constrained problems have been successfully solved using the procedures. One of the less explored and difficult domains, bilevel multi-objective optimization, has also been targeted and a solution methodology proposed. Initially, the bilevel multi-objective optimization problem is solved by developing a hybrid bilevel evolutionary multi-objective optimization algorithm.
    Thereafter, the progressively interactive procedure is incorporated into the algorithm, leading to increased accuracy and savings in computational cost. The efficacy of using a progressively interactive approach for solving difficult multi-objective problems is thereby further justified.
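    The interaction pattern, search that is periodically refocused by the decision maker's preference information, can be sketched with a (1+1)-style search guided by an achievement scalarizing function around a reference point. The bi-objective problem, reference points, and parameters are illustrative, not the dissertation's procedures.

```python
# Sketch of a progressively interactive step: a simple evolutionary search
# on a two-objective problem, periodically re-scalarized around the
# decision maker's reference point. Problem and numbers are hypothetical.
import random

def f(x):
    """Two objectives to minimize; the Pareto set is x in [0, 2]."""
    return (x * x, (x - 2) ** 2)

def asf(x, ref):
    """Achievement scalarizing function around reference point `ref`."""
    return max(fi - zi for fi, zi in zip(f(x), ref))

def interactive_search(refs, gens=500, seed=1):
    """One mutation-based search; `refs` are the reference points the
    decision maker supplies at successive interaction stages."""
    rng = random.Random(seed)
    x = rng.uniform(0, 2)
    for ref in refs:                      # each interaction refocuses the search
        for _ in range(gens):
            cand = min(max(x + rng.gauss(0, 0.1), 0.0), 2.0)
            if asf(cand, ref) <= asf(x, ref):
                x = cand
    return x

# DM first aims near one extreme, then revises towards a compromise point.
x_star = interactive_search([(0.0, 4.0), (1.0, 1.0)])
```

    After the second interaction the search settles near x = 1, the Pareto-optimal solution best matching the final reference point, without ever approximating the whole front.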

    Transmission and interconnection planning in power systems: Contributions to investment under uncertainty and cross-border cost allocation.

    Electricity transmission network investments play a key role in the integration of power systems in the European Union. Given the magnitude of investment costs, their irreversibility, and their impact on the overall development of a region, accounting for uncertainties and for the involvement of multiple parties in the decision process allows for improved and more robust investment decisions. Even though the creation of the internal energy market requires attention to flexibility and strategic decision making, the existing literature and practitioners have not given these topics proper attention. Using portfolios of real options, we present two stochastic mixed integer linear programming models for transmission network expansion planning. We study the importance of explicitly addressing uncertainties, the option to postpone decisions, and other sources of flexibility in the design of transmission networks. In a case study based on the Azores archipelago we show how renewables penetration can be increased by introducing contingency planning into the decision process under generation capacity uncertainty. We also present a two-party Nash-Coase bargaining model for transmission capacity investment, and illustrate optimal fair-share cost allocation policies with a case study based on the Iberian market. Lastly, we develop a new model that considers both interconnection expansion planning under uncertainty and cross-border cost allocation, based on portfolios of real options and Nash-Coase bargaining; the model is illustrated using Iberian transmission and market data.
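    The value of the option to postpone an irreversible investment reduces to simple arithmetic in a two-scenario sketch. All costs, probabilities, and payoffs below are hypothetical; the real models are multi-period stochastic MILPs.

```python
# Minimal numeric illustration of the option to postpone a transmission
# investment: commit now, or wait one period, observe demand, then decide.
# All numbers are hypothetical.

COST = 100                                           # irreversible cost
scenarios = {'high': (0.5, 180), 'low': (0.5, 60)}   # prob, gross payoff

# Invest now: committed before demand is known.
v_now = sum(p * payoff for p, payoff in scenarios.values()) - COST

# Wait: invest only in scenarios where the payoff covers the cost.
v_wait = sum(p * max(payoff - COST, 0) for p, payoff in scenarios.values())

option_value = v_wait - v_now
```

    Waiting avoids the loss in the low-demand scenario, so the flexible policy (value 40) beats immediate commitment (value 20); that difference is the option value that deterministic expansion planning ignores.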

    A contribution to support decision making in energy/water supply chain optimisation

    The pursuit of process sustainability forces enterprises to change their operations. In addition, industrial globalization implies a very dynamic market that, among other effects, intensifies competition between enterprises. Efficient control of their key performance indicators, including profitability, cost reduction, demand satisfaction, and the environmental impact associated with the development of new products, is therefore a significant challenge. All of these indicators can be controlled efficiently through supply chain management. Companies thus work towards optimizing their individual operations in competitive environments, taking advantage of the flexibility provided by the virtually nonexistent restrictions of the world market. This is achieved by coordinating resource flows across all the entities and echelons of the system network. Such coordination, however, is significantly complicated by the presence of uncertainty, and even more so when seeking a win-win outcome. The purpose of this thesis is to extend current decision-making strategies to expedite these tasks in industrial processes. The contribution is based on the development of efficient mathematical models that coordinate large amounts of information, synchronizing production and distribution tasks in terms of economic, environmental, and social criteria. The thesis starts by presenting an overview of the requirements of sustainable production processes, describing and analyzing the methods and tools currently used and identifying the most relevant open issues, all within the framework of the Process Systems Engineering literature.
    The second part of the thesis focuses on stress-testing current multi-objective solution strategies. It first explores how the profitability of the supply chain can be enhanced by simultaneously considering multiple objectives under demand uncertainty. In particular, solution frameworks are proposed in which different multi-criteria decision-making strategies are combined with stochastic approaches. Additional performance indicators (including financial and operational ones) are included in the same solution framework to evaluate its capabilities. The framework is also applied to decentralized supply chain problems to explore its capacity to produce solutions that improve the performance of each supply chain entity simultaneously. As a result, a new generalized mathematical formulation that integrates many performance indicators of the production process within a supply chain is solved efficiently. The third part of the thesis extends the proposed framework to uncertainty management. Different types and sources of uncertainty (e.g., external and internal) are considered through the implementation of preventive approaches. This part also explores solution strategies that efficiently select the number of scenarios used to represent the uncertain conditions, and the importance and effect of each uncertainty source on process performance is analyzed in detail using surrogate models that support sensitivity analysis of those uncertainties.
    The fourth part of the thesis integrates the above multi-objective and uncertainty approaches for the optimization of a sustainable supply chain. Besides integrating different solution approaches, this part also integrates hierarchical decision levels, exploiting mathematical models that assess the consequences of simultaneously considering design and planning decisions in centralized and decentralized supply chains. Finally, the last part of the thesis provides conclusions and further work to be developed.
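    The scenario-based multi-objective selection described above can be sketched by sweeping weights over two expected criteria. The plans, scenarios, and numbers below are illustrative, not from the thesis, and a weighted sum is only one of the multi-criteria strategies it combines.

```python
# Sketch of scenario-based multi-objective selection: expected cost and
# expected environmental impact combined by a weighted sum, swept over
# weights to trace the non-dominated plans. All data are hypothetical.

probs = [0.6, 0.4]                       # demand-scenario probabilities
plans = {                                # per-scenario (cost, impact)
    'A': [(100, 30), (140, 34)],
    'B': [(120, 20), (150, 22)],
    'C': [(130, 28), (170, 33)],
}

def expected(plan):
    """Probability-weighted (cost, impact) of a plan across scenarios."""
    cs = sum(p * c for p, (c, _) in zip(probs, plans[plan]))
    es = sum(p * e for p, (_, e) in zip(probs, plans[plan]))
    return cs, es

def weight_sweep(steps=11):
    """Plans selected by some cost/impact weighting; dominated plans never win."""
    chosen = set()
    for i in range(steps):
        w = i / (steps - 1)
        chosen.add(min(plans, key=lambda k: w * expected(k)[0]
                                            + (1 - w) * expected(k)[1]))
    return chosen

frontier = weight_sweep()
```

    Plan C is dominated (plan B is cheaper and cleaner in expectation), so no weighting ever selects it; the sweep recovers only the trade-off plans A and B.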

    Multimodel Operability Framework for Design of Modular and Intensified Energy Systems

    In this dissertation, a novel operability framework is introduced for the process design of modular and intensified energy systems that are challenged by complexity and highly constrained environments. Previously developed process operability approaches are reviewed and further developed in terms of theory, application, and software infrastructure. An optimization-based multilayer operability framework is introduced for process design of nonlinear energy systems. In the first layer of this framework, a mixed-integer linear programming (MILP)-based iterative algorithm considers the minimization of footprint and achievement of process intensification targets. Then, in the second layer, an operability analysis is performed to incorporate key features of optimality and feasibility accounting for the system achievability and flexibility. The outcome of this framework consists of a set of modular designs, considering both the aspects of size and process operability. For this study and throughout this dissertation, the nonlinear system is represented by multiple linearized models, which results in lower computational expense and more efficient quantification of operability regions. A systematic techno-economic analysis framework is also proposed for costing intensified modular systems. Conventional costing techniques are extended to allow estimation of capital and operating costs of modular units. Economy of learning concepts are included to consider the effect of experience curves on purchase costs. Profitability measures are scaled with respect to production of a chemical of interest for comparison with plants of traditional scale. Scenarios in which the modular technology presents break-even or further reduction in cost when compared to the traditional process are identified as a result. A framework for the development of process operability algorithms is provided as a software infrastructure outcome. 
    Code generated from the developed approaches is included in an open-source platform that gives researchers from academia and industry access to the algorithms; the platform serves the dissemination and future improvement of process operability algorithms and methods. To show the versatility and efficacy of the developed approaches, a variety of applications are considered: a membrane reactor for direct methane aromatization conversion to hydrogen and benzene (DMA-MR), the classical shower problem in process operability, a power plant cycling application for power generation with penetration of renewable energy sources, and a newly developed modular hydrogen unit. Applications to DMA-MR subsystems demonstrate use of the multilayer framework to find a region of modular design candidates, which are then ranked according to an operability index. The most operable design is determined and contrasted with the design that is optimal for process intensification in terms of footprint minimization, showing that optimality at fixed nominal operations does not necessarily ensure the best system operability. For the modular hydrogen unit, the developed process operability framework provides guidelines for obtaining modular designs that are highly integrated and flexible with respect to disturbances in inlet natural gas composition. The modular hydrogen unit is also used to demonstrate the proposed techno-economic analysis framework. A comparison with a benchmark conventional steam methane reforming plant shows that the modular hydrogen unit can benefit from the economy of learning. An assembled modular steam methane reforming plant is used to map the decrease in natural gas price that would be needed for the plant to break even against traditional technologies. Scenarios with low natural gas prices allow break-even cost for both individual hydrogen units and the assembled modular plant.
    The economy of learning must produce a reduction of 40% or less in capital cost when the natural gas price is under 0.02 US$/Sm3. This result suggests that the synthesized modular hydrogen process has the potential to be economically feasible under these conditions. The developed tools can be used to accelerate the deployment and manufacturing of standardized modular energy systems.
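    The economy-of-learning effect on modular unit costs can be sketched with Wright's law: cost falls by a fixed fraction with each doubling of cumulative production. The first-module cost and learning rate below are hypothetical; only the 40% reduction target echoes the break-even discussion above.

```python
# Back-of-envelope sketch of learning-curve cost reduction for modular
# units (Wright's law). C1 and the 15% learning rate are hypothetical.
import math

C1 = 1.0e6          # cost of the first module (US$)
RETENTION = 0.85    # each doubling of output retains 85% of unit cost
b = math.log2(RETENTION)

def unit_cost(n):
    """Cost of the n-th module produced: C1 * n**b."""
    return C1 * n ** b

def average_cost(n):
    """Average cost over the first n modules."""
    return sum(unit_cost(i) for i in range(1, n + 1)) / n

# How many modules until unit cost drops 40% (cf. the break-even target)?
target = 0.60 * C1
n = 1
while unit_cost(n) > target:
    n += 1
```

    Under these assumed numbers the 40% reduction in unit capital cost is first reached at the 9th module, which is the kind of volume argument behind modular break-even against stick-built plants.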

    Solving Two-Level Optimization Problems with Applications to Robust Design and Energy Markets

    This dissertation provides efficient techniques to solve two-level optimization problems. Three specific types of problems are considered. The first is robust optimization, which has direct applications to engineering design. Traditionally, robust optimization problems have been solved using an inner-outer structure, which can be computationally expensive. This dissertation provides a method to decompose and solve this two-level structure using a modified Benders decomposition. The gradient-based technique is applicable to robust optimization problems with quasiconvex constraints and provides approximate solutions to problems with nonlinear constraints. The second type comprises mathematical and equilibrium programs with equilibrium constraints. Their two-level structure is simplified using Schur's decomposition and reformulation schemes for absolute-value functions; the resulting formulations are applicable to game theory problems in operations research and economics. The third type is discretely-constrained mixed linear complementarity problems, which are first formulated as two-level mathematical programs with equilibrium constraints and then solved using the aforementioned technique. The techniques for all three problems reduce the two-level structure to a single level, which yields numerical and application insights, and they greatly reduce the computational effort of solving these problems. Finally, a host of numerical examples verifies the approaches. Diverse applications to economics, operations research, and engineering design motivate the relevance of the novel methods developed in this dissertation.
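    The inner-outer structure and its decomposition can be sketched on a toy min-max problem. This is the generic master/worst-case-subproblem cutting loop, not the dissertation's modified Benders method; the objective, uncertainty set, and grid-search master are all illustrative simplifications.

```python
# Toy cutting-plane (Benders-style) loop for a robust min-max problem:
#   min over x in [0, 3] of  max over u in U of  u*x + (1 - u)**2.
# The master is solved by grid search to stay dependency-free; the point
# is the structure: master + worst-case subproblem + accumulated cuts.

U = [-1.0, 0.5, 2.0]                      # finite uncertainty set
X = [i / 100 for i in range(301)]         # decision grid over [0, 3]

def f(x, u):
    return u * x + (1 - u) ** 2

def solve_robust():
    cuts = [U[0]]                          # start with a single scenario
    while True:
        # Master: best decision against the cuts accumulated so far.
        x = min(X, key=lambda x: max(f(x, u) for u in cuts))
        # Subproblem: worst-case scenario for that decision.
        worst = max(U, key=lambda u: f(x, u))
        if worst in cuts:                  # no new cut -> converged
            return x, f(x, worst)
        cuts.append(worst)

x_rob, val = solve_robust()
```

    On this instance the loop converges after adding a single cut, never enumerating the full inner problem at every outer iterate, which is the computational saving the decomposition targets.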