
    Water Resources Decision Making Under Uncertainty

    Uncertainty is in part about variability in the physical characteristics of water resources systems, but it is also about ambiguity (Simonovic, 2009). Both variability and ambiguity are associated with a lack of clarity arising from the behaviour of system components, a lack of data, detail, and structure for framing water resources management problems, the working assumptions used to consider those problems, known and unknown sources of bias, and ignorance about how much effort is worth expending to clarify the management situation. Climate change, addressed in this research project (CFCAS, 2008), is another important source of uncertainty that contributes to variability in the input variables for water resources management. This report presents a set of examples that illustrate (a) probabilistic and (b) fuzzy set approaches for solving various water resources management problems. Its main goal is to demonstrate how the information provided to water resources decision makers can be improved by tools that incorporate risk and uncertainty. The uncertainty associated with water resources decision-making problems is quantified using probabilistic and fuzzy set approaches, and selected examples illustrate the application of probabilistic and fuzzy simulation, optimization, and multi-objective analysis to water resources design, planning, and operations. The examples include dike design, sewer pipe design, optimal operation of a single-purpose reservoir, and planning of a multi-purpose reservoir system. The demonstrated probabilistic and fuzzy tools can easily be adapted to many other water resources decision-making problems.
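The probabilistic side of the dike-design example can be sketched with a simple Monte Carlo exceedance analysis; the Gumbel distribution, its parameters, and the target exceedance probability below are illustrative assumptions, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed Gumbel model for annual maximum water levels (m);
# the location/scale parameters are invented for illustration.
loc, scale = 5.0, 0.8
levels = rng.gumbel(loc, scale, size=100_000)

def dike_height(levels, exceedance_prob):
    """Smallest crest height whose annual exceedance probability
    stays at or below the target (0.01 = 1-in-100-year level)."""
    return float(np.quantile(levels, 1.0 - exceedance_prob))

h100 = dike_height(levels, 0.01)  # 1-in-100-year design height
```

The same simulated ensemble can feed a fuzzy analysis by replacing the crisp quantile with a membership function over crest heights.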

    Data Imputation Using Differential Dependency and Fuzzy Multi-Objective Linear Programming

    Missing or incomplete data is a serious problem when collecting and analyzing data for forecasting, estimation, and decision making. Because data quality strongly affects machine learning and its results, in most cases imputing missing data is more appropriate than ignoring it. Missing data imputation is often based on the equality, similarity, or distance of neighbors, and researchers take different approaches to neighbors' equality or similarity; every approach has its advantages and limitations. Instead of equality, some researchers use inequalities together with a few relationships or similarity rules. In this thesis, after recalling some basic imputation methods, we discuss data imputation based on differential dependencies (DDs). DDs are conditional rules in which the closeness of the values of a pair of tuples in one attribute indicates the closeness of the values of those tuples in another attribute. Considering these rules, a few candidate rows are created for each incomplete row and placed in the candidate set for that row. From each set, one row is then selected such that the selections are not incompatible with each other; these selections are made by an integer linear programming (ILP) model. In this thesis, we first propose an algorithm to generate DDs. Then, to improve on previous approaches and increase the imputation rate, we suggest a fuzzy relaxation that allows a small violation of the DDs. Finally, we propose a multi-objective fuzzy linear programming model that increases the imputation rate while decreasing the total violation. A variety of datasets from Kaggle are used to support our approach.
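The DD-based candidate-generation step can be sketched as follows. The attributes, thresholds, and toy rows are hypothetical; in the thesis the candidate sets would then be passed to the ILP/fuzzy LP model for a mutually compatible selection.

```python
# A DD of the form: |t[lhs] - u[lhs]| <= lhs_eps  =>  |t[rhs] - u[rhs]| <= rhs_eps.
# Toy data: impute the missing salary of the last row.
rows = [
    {"age": 30, "salary": 3000},
    {"age": 31, "salary": 3100},
    {"age": 50, "salary": 7000},
    {"age": 32, "salary": None},  # incomplete row
]

def dd_candidates(rows, target, lhs, rhs, lhs_eps, rhs_eps):
    """Collect rhs values of neighbors close to `target` on lhs,
    keeping only values compatible (within rhs_eps) with every
    such neighbor -- these form the candidate set for the ILP."""
    neighbors = [r[rhs] for r in rows
                 if r is not target and r[rhs] is not None
                 and abs(r[lhs] - target[lhs]) <= lhs_eps]
    if not neighbors:
        return []
    lo = max(v - rhs_eps for v in neighbors)
    hi = min(v + rhs_eps for v in neighbors)
    return [v for v in neighbors if lo <= v <= hi]

cands = dd_candidates(rows, rows[3], "age", "salary", lhs_eps=2, rhs_eps=200)
```

The fuzzy relaxation described above would soften the hard `rhs_eps` cutoff into a membership degree, so near-violations are penalized rather than excluded.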

    The Applicability of Multiple MCDM Techniques for Implementation in the Priority of Road Maintenance

    Prioritizing road maintenance can be viewed as a process influenced by decision-makers with varying decision-making power; each decision-maker may have their own view and judgment depending on their function and responsibilities. Determining the priority of road maintenance can therefore be treated as an MCDM process, and a difficult one, involving uncertainty, qualitative criteria, and possible causal relationships between decision criteria. This paper examines the applicability of multiple MCDM techniques for assessing the priority of road maintenance by adapting them to this sector. Road maintenance prioritization problems subject to internal uncertainty caused by imprecise human judgments are reviewed and investigated, along with the most popular theories and methods in group MCDM for representing uncertain information, deriving weights for decision criteria, examining causal relationships, and ranking alternatives. Based on the strengths and weaknesses identified, the study concluded that fuzzy set theory is the most appropriate tool for modeling uncertain information. In addition, the methods most commonly employed in the literature to explore correlations between decision criteria were examined, and it is concluded that the fuzzy best-worst method may be utilized in this research. The fuzzy VIKOR approach is likely the best method for ranking the decision alternatives.
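The VIKOR ranking step can be sketched in crisp form in a few lines (the paper favours the fuzzy extension, which replaces the crisp scores below with fuzzy numbers before defuzzifying). The road sections, criterion scores, and weights are illustrative.

```python
import numpy as np

# Rows: candidate road sections; columns: benefit criteria
# (illustrative scores, higher is better).
scores = np.array([
    [7.0, 5.0, 8.0],
    [6.0, 9.0, 4.0],
    [8.0, 6.0, 6.0],
])
w = np.array([0.5, 0.3, 0.2])  # assumed criterion weights

def vikor(scores, w, v=0.5):
    """Crisp VIKOR: group utility S, individual regret R, and the
    compromise index Q (v balances S against R). Lower Q is better."""
    best, worst = scores.max(axis=0), scores.min(axis=0)
    norm = w * (best - scores) / (best - worst)
    S, R = norm.sum(axis=1), norm.max(axis=1)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return np.argsort(Q)  # section indices, best (lowest Q) first

ranking = vikor(scores, w)
```

A full application would also check VIKOR's acceptable-advantage and acceptable-stability conditions before declaring a single compromise solution.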

    Multi-objective optimisation under deep uncertainty

    Most decisions in real-life problems must be made without complete knowledge of their consequences. Furthermore, in some of these problems the probabilities and/or the number of different outcomes are also unknown (so-called deep uncertainty), so probability-based approaches such as stochastic programming cannot address them. On the other hand, involving various stakeholders with different (possibly conflicting) criteria brings additional complexity. The main aim and primary motivation of this thesis is to deal with deep uncertainty in Multi-Criteria Decision-Making (MCDM) problems, especially long-term decision-making processes such as strategic planning. To this end, we first introduce a two-stage scenario-based structure for dealing with deep uncertainty in Multi-Objective Optimisation (MOO)/MCDM problems. The proposed method extends the concept of two-stage stochastic programming with recourse to handle deep uncertainty through scenario planning rather than statistical expectation. In this research, scenarios are used as a dimension of preference (a component of what we term the meta-criteria) to avoid problems relating to the assessment and use of probabilities under deep uncertainty. Such scenario-based thinking involves a multi-objective representation of performance under different future conditions as an alternative to expectation, which fits naturally into the broader multi-objective problem context. To aggregate the objectives of the problem, the Generalised Goal Programming (GGP) approach is used; because it can handle large numbers of objective functions/criteria, GGP is particularly useful in the proposed framework.
    Identifying the goals for each criterion is the only action the Decision Maker (DM) needs to take, without investigating the trade-offs between different criteria. Moreover, the proposed two-stage framework has been expanded to a three-stage structure and a moving-horizon concept to handle deep uncertainty in more complex problems, such as strategic planning. As strategic planning problems deal with more than two stages and real processes are continuous, more scenarios will continuously unfold, which may or may not be periodic. "Stages", in this study, are artificial constructs to structure thinking about an indefinite future. Suitable lengths of the planning window and stages in the proposed methodology are also investigated. Philosophically, the two-stage structure always plans one step ahead, while the three-stage structure considers the conditions and consequences of two upcoming steps in advance, which fits well with our primary objective: ignoring the long-term consequences of decisions, as well as likely conditions, would not be a robust strategic approach. Therefore, by utilising the three-stage structure, we may generally expect a more robust decision than with a two-stage representation. Modelling time preferences in multi-stage problems has also been introduced to solve the fundamental problem of comparability between the two proposed methodologies, which have different time horizons, as the two-stage model is ignorant of the third stage. This concept is applied through differential weighting in the models: importance weights are primarily used to make the two- and three-stage models directly comparable, and only secondarily as a measure of risk preference. Differential weighting can also encode further preferences in the model and lead it to generate more preferred solutions.
    Expanding the proposed structure to problems with more than three stages, which usually have too many meta-scenarios, may lead to a computationally expensive model that cannot easily be solved, if at all. Moreover, extending the planning horizon too far will not yield an exact plan, as nothing in nature is predictable to this level of detail and we are always surprised by new events. Therefore, beyond the expensive computation of a multi-stage structure with more than three stages, defining plausible scenarios for distant stages is not sensible and may even be impossible. Moving-horizon models in a T-stage planning window have therefore been introduced. To run and evaluate the proposed two- and three-stage moving-horizon frameworks over longer planning horizons, we would need to identify all plausible meta-scenarios; under the assumption of deep uncertainty, this identification is almost impossible. Even with a finite set of plausible meta-scenarios, comparing and computing the results across all of them is hardly possible, because the size of the model grows exponentially with the length of the planning horizon. Furthermore, analysis of the solutions requires hundreds or thousands of multi-objective comparisons that are not easily conceivable, if at all. These issues motivated a Simulation-Optimisation study to simulate a reasonable number of meta-scenarios and enable evaluation, comparison, and analysis of the proposed methods on problems with a T-stage planning horizon. In this Simulation-Optimisation study, we start by setting the current scenario, the scenario faced at the beginning of the period. The optimisation model is then run to obtain the first-stage decisions, which can be implemented immediately. Thereafter, the next scenario is randomly generated using Monte Carlo simulation methods.
    Under deep uncertainty, we do not have enough knowledge about the likelihood of plausible scenarios, nor about the probability space; therefore, to simulate deep uncertainty, we use no scenario likelihoods in the decision models. Two- and three-stage Simulation-Optimisation algorithms are also proposed. A comparison of these algorithms shows that the solutions of the two-stage moving-horizon model are feasible in the three-stage model, and that the optimal solution of the three-stage moving-horizon model is not dominated by any solution of the two-stage model; hence, the three-stage model must achieve goal attainment at least as good as the two-stage moving-horizon model. Accordingly, the three-stage moving-horizon model evaluates the optimal solution of the corresponding two-stage model against the other feasible solutions; if it selects anything else, that choice must be better in goal achievement, more robust in some future scenarios, or both. However, the cost of this superiority must be considered, as it may lead to a computationally expensive problem, and the efficiency of applying this structure needs to be established. Using the three-stage structure obviously brings more complexity and computation than the two-stage approach. It is also shown that the solutions of the three-stage model would be preferred to those of the two-stage model under most circumstances. By the "efficiency" of the three-stage framework, we therefore mean whether utilising this approach and its solutions is worth the additional complexity and computation. The experiments in this study show that the three-stage model has advantages under most circumstances (meta-scenarios), but the gains are quite modest.
    This is frequently observed when comparing the methods on problems with a short (say, fewer than five stages) planning window. Nevertheless, analysis of the length of the planning horizon and its effect on the solutions indicates that the three-stage models are more efficient over longer periods, because the differences between the solutions of the two structures grow with each iteration of the moving-horizon algorithms. Moreover, during the long-term calculations, the two-stage algorithm failed to find the optimal solutions in some iterations, while the three-stage algorithm found the optimal value in all cases. Thus, for planning horizons with more than ten stages, the efficiency of the three-stage model may be worth the expense of its complexity and computation. Nevertheless, if the DM prefers not to use the three-stage structure because of its complexity and/or computational cost, the two-stage moving-horizon model can still provide reasonable solutions, although they might not be as good as those generated by the three-stage framework. Finally, to examine the power of the proposed methodology in real cases, the two-stage structure was applied in the sugarcane industry to analyse the whole infrastructure of the sugar and bioethanol Supply Chain (SC), optimising economic (maximum profit), environmental (minimum CO₂), and social (maximum job creation) benefits under six key uncertainties: sugarcane yield, ethanol and refined sugar demands and prices, and the exchange rate. Moreover, one of the critical design questions, namely the optimal number, technologies, and location(s) of the ethanol plant(s), was also addressed.
    The general model for the strategic planning of sugar-bioethanol supply chains under deep uncertainty was formulated and examined in a case study based on the South African sugar industry. The problem is formulated as a scenario-based mixed-integer two-stage multi-objective optimisation problem and solved using the Generalised Goal Programming approach. To sum up, the proposed methodology is, to the best of our knowledge, a novel approach that can successfully handle deep uncertainty in MCDM/MOO problems with both short- and long-term planning horizons, and it is generic enough to be used in any MCDM problem under deep uncertainty. In this thesis, however, the proposed structure was applied only to linear programming (LP) problems; non-linear problems would be an important direction for future research, and different solution methods may need to be examined to solve them. Moreover, many other real-world optimisation and decision-making applications could be used to examine the proposed method in the future.
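The scenario-as-meta-criterion idea can be illustrated with a toy first-stage decision scored by its goal deviation under every plausible scenario and aggregated minimax-style, in the spirit of Chebyshev goal programming. The scenarios, goals, and costs below are invented for illustration and are not from the thesis; note that no scenario probabilities appear anywhere.

```python
import numpy as np

demand = np.array([5.0, 8.0, 12.0])  # one demand level per plausible scenario
unit_cost = 0.2                      # cost per unit of first-stage capacity
budget_goal = 2.0                    # goal: spend at most this much

candidates = np.linspace(0.0, 15.0, 151)  # candidate first-stage decisions

# Goal deviations: unmet demand in each scenario, and budget overrun.
shortfall = np.maximum(demand[None, :] - candidates[:, None], 0.0)
cost_dev = np.maximum(unit_cost * candidates - budget_goal, 0.0)

# Chebyshev (minimax) aggregation over the per-scenario demand goals
# and the budget goal -- scenarios act as extra objectives here:
worst = np.maximum(shortfall.max(axis=1), cost_dev)
best_x = candidates[np.argmin(worst)]
```

The grid search stands in for the LP solve; in the thesis the same trade-off is resolved by the GGP formulation, with the second-stage (recourse) decisions per scenario as additional variables.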

    GIS-based approach for optimization of onshore wind park infrastructure alignment in Finland

    Wind power is a rapidly developing, low-emission form of energy production. In Finland, the official objective is to increase wind power capacity from the current 1 005 MW up to 3 500–4 000 MW by 2025. By the end of April 2015, the total capacity of all wind power projects being planned in Finland had surpassed 11 000 MW. As the number of projects in Finland is at a record high, an increasing amount of infrastructure is also being planned and constructed. Traditionally, these planning operations are conducted using manual, labor-intensive work methods that are prone to subjectivity. This study introduces a GIS-based methodology for determining optimal paths to support the planning of onshore wind park infrastructure alignment in the Nordanå-Lövböle wind park on the island of Kemiönsaari in Southwest Finland. The methodology uses a least-cost path (LCP) algorithm to search for optimal paths within a high-resolution real-world terrain dataset derived from airborne lidar scanning, and planning data is used to provide a realistic planning framework for the analysis. To produce realistic results, the physiographic and planning datasets are standardized and weighted according to qualitative suitability assessments using methods and practices from multi-criteria evaluation (MCE). The results are presented as scenarios corresponding to various planning objectives. Finally, the methodology is documented using tools of Business Process Management (BPM). The results show that the methodology can effectively search for and identify extensive, 20 to 35 kilometer long networks of paths that correspond to given optimization objectives in the study area. The use of high-resolution terrain data produces a more objective and more detailed path alignment plan.
    This study demonstrates that the presented methodology can be practically applied to support a wind power infrastructure alignment planning process. The six-phase structure of the methodology allows straightforward incorporation of different optimization objectives, and the methodology responds well to combining quantitative and qualitative data. Additionally, the careful documentation shows how the methodology can be evaluated and developed as a business process. This thesis also shows that more research on algorithm-based, more objective methods for planning infrastructure alignment is desirable, as technological development has only recently started to realize the potential of these computational methods.
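The LCP step can be sketched with Dijkstra's algorithm on a tiny cost raster; real analyses run this over lidar-derived terrain grids whose cell costs come from the MCE weighting, and the grid below is purely illustrative.

```python
import heapq

# Illustrative 4x4 cost raster: each value is the cost of moving
# INTO that cell (e.g. an MCE-weighted terrain suitability score).
grid = [
    [1, 1, 5, 5],
    [5, 1, 5, 1],
    [5, 1, 1, 1],
    [5, 5, 5, 1],
]

def least_cost(grid, start, goal):
    """Dijkstra over 4-connected cells; returns the minimum
    accumulated cost of a path from start to goal."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

cost = least_cost(grid, (0, 0), (3, 3))
```

Here the cheapest route threads through the low-cost cells rather than cutting straight across, which is exactly the behaviour exploited for road and cabling alignment.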

    Machine Learning in Aerodynamic Shape Optimization

    Machine learning (ML) has been increasingly used to aid aerodynamic shape optimization (ASO), thanks to the availability of aerodynamic data and continued developments in deep learning. We review the applications of ML in ASO to date and provide a perspective on the state of the art and future directions. We first introduce conventional ASO and its current challenges. Next, we introduce ML fundamentals and detail the ML algorithms that have been successful in ASO. We then review ML applications to ASO in three respects: compact geometric design spaces, fast aerodynamic analysis, and efficient optimization architectures. In addition to providing a comprehensive summary of the research, we comment on the practicality and effectiveness of the developed methods. We show how cutting-edge ML approaches can benefit ASO and address challenging demands such as interactive design optimization. Practical large-scale design optimizations remain a challenge because of the high cost of ML training. Further research on coupling ML model construction with prior experience and knowledge, such as physics-informed ML, is recommended for solving large-scale ASO problems.
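One of the reviewed roles of ML, surrogate-based fast aerodynamic analysis, can be sketched as follows: fit a cheap model to a handful of "expensive" solver samples, then optimize on the surrogate. The quadratic drag function below is an invented stand-in for a CFD solver, and the single design variable is illustrative.

```python
import numpy as np

def expensive_drag(x):
    """Stand-in for an expensive CFD evaluation of a drag
    coefficient over one design variable (invented model)."""
    return 0.02 + 0.5 * (x - 0.3) ** 2

# Design-of-experiments samples at which the "solver" is run once.
samples = np.linspace(-1.0, 1.0, 7)
coeffs = np.polyfit(samples, expensive_drag(samples), deg=2)
surrogate = np.poly1d(coeffs)  # cheap polynomial surrogate

# Optimize on the surrogate (dense grid search for the sketch);
# every query here avoids another solver call.
grid = np.linspace(-1.0, 1.0, 2001)
x_best = grid[np.argmin(surrogate(grid))]
```

Production ASO surrogates use richer models (Gaussian processes, neural networks) over many design variables and typically refine the sample set adaptively near the optimum, but the train-then-query pattern is the same.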