2,746 research outputs found

    Uncertain Multi-Criteria Optimization Problems

    Most real-world search and optimization problems naturally involve multiple criteria as objectives. Generally, symmetry, asymmetry, and anti-symmetry are basic characteristics of the binary relations used when modeling optimization problems. Moreover, the notion of symmetry has appeared in many articles about the uncertainty theories that are employed in multi-criteria problems. Different solutions may produce trade-offs (conflicting scenarios) among the objectives: a solution that is better with respect to one objective may compromise the others. Many factors must be considered to address such problems in multidisciplinary research, which is critical for the overall sustainability of human development and activity. In this regard, decision-making theory has been the subject of intense research activity in recent decades owing to its wide applications in different areas, and it has become an important means of providing real-time solutions to problems under uncertainty. Theories available in the existing literature, such as probability theory, fuzzy set theory, type-2 fuzzy set theory, rough set theory, and uncertainty theory, deal with such uncertainties. Nevertheless, the uncertain multi-criteria characteristics of such problems have not yet been explored in depth, and much remains to be achieved in this direction. Hence, different mathematical models of real-life multi-criteria optimization problems can be developed in various uncertain frameworks, with special emphasis on optimization problems.

    A systematic review on multi-criteria group decision-making methods based on weights: analysis and classification scheme

    Interest in group decision-making (GDM) has increased prominently over the last decade. Access to global databases, sophisticated sensors that can obtain multiple inputs, and complex problems requiring opinions from several experts have all driven interest in data aggregation. Consequently, the field has been widely studied from several viewpoints and multiple approaches have been proposed. Nevertheless, there is a lack of a general framework. Moreover, this problem is exacerbated in the case of experts' weighting methods, one of the most widely used techniques for dealing with multiple-source aggregation. This lack of a general classification scheme, or of a guide to assist expert knowledge, leads to ambiguity or misreading for readers, who may be overwhelmed by the large amount of unclassified information currently available. To remedy this situation, a general GDM framework is presented that divides and classifies all data aggregation techniques, focusing on and expanding the classification of experts' weighting methods in terms of analysis type by carrying out an in-depth literature review. Results are not only classified but also analysed and discussed with regard to multiple characteristics, such as the MCDM methods in which they are applied, the type of data used, the ideal solutions considered, or when they are applied. Furthermore, general requirements such as initial influence or component-division considerations supplement this analysis. As a result, this paper provides not only a general classification scheme and a detailed analysis of experts' weighting methods but also a road map for researchers working on GDM topics and a guide for experts who use these methods. Furthermore, six significant contributions for future research pathways are provided in the conclusions. The first author acknowledges support from the Spanish Ministry of Universities [grant number FPU18/01471]. The second and third authors wish to acknowledge their support from the Serra Hunter programme. Finally, this work was supported by the Catalan agency AGAUR through its research group support program (2017SGR00227). This research is part of the R&D project IAQ4EDU, reference no. PID2020-117366RB-I00, funded by MCIN/AEI/10.13039/501100011033.

    Dominance intensity measure within fuzzy weight oriented MAUT: an application

    We introduce a dominance intensity measuring method to derive a ranking of alternatives for dealing with incomplete information in multi-criteria decision-making problems, on the basis of multi-attribute utility theory (MAUT) and fuzzy set theory. We consider the situation where there is imprecision concerning decision-makers' preferences, and imprecise weights are represented by trapezoidal fuzzy numbers. The proposed method is based on the dominance values between pairs of alternatives. These values can be computed by linear programming, as an additive multi-attribute utility model is used to rate the alternatives. Dominance values are then transformed into dominance intensity measures, which are used to rank the alternatives under consideration. Distances between fuzzy numbers based on the generalization of the left and right fuzzy numbers are utilized to account for fuzzy weights. An example concerning the selection of intervention strategies to restore an aquatic ecosystem contaminated by radionuclides illustrates the approach. Monte Carlo simulation techniques have been used to show that the proposed method performs well for different imprecision levels in terms of a hit ratio and a rank-order correlation measure.
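The pairwise dominance computation described above can be illustrated with a simplified sketch. Here the trapezoidal fuzzy weights are replaced by crisp weight intervals, and the linear program that minimises the utility difference is approximated by random sampling over feasible weight vectors; the function names and the net-intensity aggregation rule are illustrative assumptions, not the paper's exact formulation.

```python
import random

def dominance_values(utils, weight_bounds, samples=2000, seed=0):
    """Approximate pairwise dominance D[i][j] = min over feasible weights of
    U_i - U_j, here via random sampling instead of linear programming.
    utils[i][k] is the utility of alternative i on criterion k."""
    rng = random.Random(seed)
    n = len(utils)
    D = [[float("inf")] * n for _ in range(n)]
    for _ in range(samples):
        w = [rng.uniform(lo, hi) for lo, hi in weight_bounds]
        s = sum(w)
        w = [x / s for x in w]  # normalise so weights sum to 1 (a simplification)
        u = [sum(wk * a[k] for k, wk in enumerate(w)) for a in utils]
        for i in range(n):
            for j in range(n):
                if i != j:
                    D[i][j] = min(D[i][j], u[i] - u[j])
    return D

def dominance_intensity_ranking(D):
    """Rank alternatives by net dominance intensity: the dominance each
    alternative exerts over the others minus the dominance it suffers."""
    n = len(D)
    score = [sum(max(D[i][j], 0.0) for j in range(n) if j != i)
             - sum(max(D[j][i], 0.0) for j in range(n) if j != i)
             for i in range(n)]
    order = sorted(range(n), key=lambda i: -score[i])
    return order, score
```

An alternative that is better than another on every criterion obtains a strictly positive dominance value against it regardless of the sampled weights, which is what drives the ranking.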

    Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches

    In today's competitive business environment, a firm's ability to make the correct critical decisions can translate into a great competitive advantage. Most of these critical real-world decisions involve optimizing not only multiple objectives simultaneously, but also conflicting objectives, where improving one objective may degrade the performance of one or more of the others. Traditional approaches to solving multiobjective optimization problems typically scalarize the multiple objectives into a single objective, transforming the original multiobjective formulation into a single-objective optimization problem with a single solution. The drawbacks of these traditional approaches, however, have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, as is the case in real-world environments. Moreover, in stochastic environments a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, black-box objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial.
This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging fast to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments, and SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
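The notion of Pareto dominance that underlies both FPGA and SPGA can be stated compactly. The following is a minimal sketch for the deterministic, minimisation case only; SPGA's new stochastic dominance concepts extend this with noise handling that is not shown here:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised):
    a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a population down to its nondominated set, i.e. the current
    approximation of the Pareto optimal front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, in a bi-objective minimisation problem the point (3, 3) is dominated by (2, 2), while (1, 5), (2, 2), and (5, 1) are mutually nondominated trade-offs.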

    Stochastic multiple attribute decision making with Pythagorean hesitant fuzzy set based on regret theory

    The objective of this paper is to present an extended approach to the stochastic multi-attribute decision-making problem. The novelty of this study is that it considers the regret behavior of decision-makers under a Pythagorean hesitant fuzzy environment. First, the group satisfaction degree of the decision-making matrices is used to account for the different preferences of the decision-makers. Second, nonlinear programming models for different situations are provided to compute the weights of the attributes. Then, based on regret theory, a regret value matrix and a rejoice value matrix are constructed. Furthermore, the feasibility and superiority of the developed approach are demonstrated by an illustrative example of selecting an air fighter. Finally, a comparative analysis with other methods shows the advantages of the proposed method.
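The regret/rejoice construction can be sketched in a simplified crisp form. A common choice in the regret theory literature is the concave regret-rejoice function R(Δu) = 1 − e^(−δΔu); the aversion coefficient δ and the use of the best alternative as the reference point are illustrative assumptions here, and the Pythagorean hesitant fuzzy arithmetic of the paper is omitted:

```python
import math

def regret_value(u_x, u_ref, delta=0.3):
    """Regret-rejoice term R(du) = 1 - exp(-delta * du) with du = u_x - u_ref;
    negative (regret) when the alternative's utility falls below the reference,
    positive (rejoice) when it exceeds it."""
    return 1.0 - math.exp(-delta * (u_x - u_ref))

def perceived_utilities(utilities, delta=0.3):
    """Overall perceived utility of each alternative: its own utility plus the
    regret term relative to the best utility among all alternatives."""
    u_best = max(utilities)
    return [u + regret_value(u, u_best, delta) for u in utilities]
```

Because the regret function is concave, falling short of the reference hurts more than an equal gain helps, which is the bounded-rationality effect the paper exploits.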

    Developing collaborative planning support tools for optimised farming in Western Australia

    Land-use (farm) planning is a highly complex and dynamic process. A land-use plan can be optimal at one point in time, but its currency can change quickly due to the dynamic nature of the variables driving the land-use decision-making process. These include external drivers such as weather and produce markets, which also interact with the biophysical processes and management activities of crop production. The active environment of an annual farm planning process can be envisioned as cone-like. At the beginning of the sowing year, the number of options open to the manager is huge, although uncertainty is high due to the inability to foresee future weather and market conditions. As the production year reveals itself, the uncertainties around weather and markets resolve, as does the impact of weather and management activities on future production levels. This restricts the number of alternative management options available to the farm manager. Moreover, every decision made, such as the crop type sown in a paddock, constrains the range of management activities possible in that paddock for the rest of the growing season. This research has developed a prototype Land-use Decision Support System (LUDSS) to aid farm managers in their tactical farm management decision-making. The prototype applies an innovative approach that mimics the way in which a farm manager and/or consultant would search for optimal solutions at a whole-farm level. The model captures the range of possible management activities available to the manager and the impact that both external (to the farm) and internal drivers have on crop production and the environment. It also captures the risk and uncertainty found in the decision space. The developed prototype is based on a Multiple Objective Decision-Making (MODM) a posteriori approach incorporating an exhaustive search method. The objective set used for the model is maximising profit and minimising environmental impact. Pareto optimisation theory was chosen as the method to select the optimal solution, and a Monte Carlo simulator is integrated into the prototype to incorporate the dynamic nature of the farm decision-making process. The prototype has a user-friendly front and back end to allow farmers to input data, drive the application, and extract information easily.
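The combination of exhaustive search, Pareto filtering, and Monte Carlo simulation described above can be sketched as follows. The plan encoding, the profit and impact functions, and the noise model are purely hypothetical stand-ins for the LUDSS simulator:

```python
import random
import statistics

def evaluate_plan(plan, n_draws=200, seed=1):
    """Toy stand-in for the prototype's simulator: mean profit over random
    'weather' draws, plus a deterministic environmental impact score.
    The plan is a tuple of per-paddock cropping intensities (illustrative)."""
    rng = random.Random(seed)
    profits = [sum(c * (1 + rng.gauss(0, 0.1)) for c in plan)
               for _ in range(n_draws)]
    impact = sum(c * c for c in plan)  # hypothetical: impact grows with intensity
    return statistics.mean(profits), impact

def pareto_front(plans):
    """Exhaustive search: keep every plan that no other plan beats on both
    objectives (maximise profit, minimise impact)."""
    scored = [(p, *evaluate_plan(p)) for p in plans]
    front = []
    for p, prof, imp in scored:
        if not any(q_prof >= prof and q_imp <= imp and (q_prof > prof or q_imp < imp)
                   for _, q_prof, q_imp in scored):
            front.append(p)
    return front
```

Exhaustive enumeration is only viable because the plan space in the prototype is finite; the Monte Carlo averaging is what lets uncertain weather enter the otherwise deterministic search.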

    Probabilistic hesitant fuzzy multiple attribute decision-making based on regret theory for the evaluation of venture capital projects

    The selection of venture capital investment projects is one of the most important decision-making activities for venture capitalists. Due to the complexity of the investment market and people's limited cognition, most venture capital investment decision problems are highly uncertain, and venture capitalists are often boundedly rational under uncertainty. To address such problems, this article presents an approach to probabilistic hesitant fuzzy multiple attribute decision-making based on regret theory. Firstly, when the information on the occurrence probabilities of the elements in a probabilistic hesitant fuzzy element (PHFE) is unknown or partially known, two different mathematical programming models, based on water-filling theory and the maximum entropy principle, are provided to handle these complex situations. Secondly, to capture the psychological behaviours of venture capitalists, regret theory is utilised to solve the problem of selecting venture capital investment projects. Finally, a comparative analysis with existing approaches is conducted to demonstrate the feasibility and applicability of the proposed method.
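The maximum entropy principle mentioned above has a simple closed form when the only constraint is that the occurrence probabilities sum to one: entropy is maximised by spreading the unassigned mass uniformly over the unknown entries. A minimal sketch of that special case (the paper's water-filling model and any ordering constraints on the probabilities are not represented):

```python
def complete_probabilities(partial):
    """Fill in the unknown occurrence probabilities (None entries) of a
    probabilistic hesitant fuzzy element. With only the sum-to-one constraint,
    maximum entropy spreads the remaining mass uniformly over the unknowns."""
    known = sum(p for p in partial if p is not None)
    unknown = [i for i, p in enumerate(partial) if p is None]
    if not unknown:
        return list(partial)
    fill = (1.0 - known) / len(unknown)
    return [fill if p is None else p for p in partial]
```

For instance, if one element is known to occur with probability 0.5 and two others are unspecified, each unknown receives 0.25.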

    A Hybrid Tabu/Scatter Search Algorithm for Simulation-Based Optimization of Multi-Objective Runway Operations Scheduling

    As air traffic continues to increase, air traffic flow management is becoming more challenging for utilizing airport capacity effectively and efficiently without compromising safety, environmental, and economic requirements. Since runways are often the primary limiting factor in airport capacity, runway operations scheduling emerges as an important problem to be solved to alleviate flight delays and air traffic congestion while reducing unnecessary fuel consumption and negative environmental impacts. However, even a moderately sized real-life runway operations scheduling problem tends to be too complex to be solved by analytical methods, as all mathematical models for this problem are strongly NP-hard due to the combinatorial nature of the problem. Therefore, it is only possible to solve the practical runway operations scheduling problem by making a large number of simplifications and assumptions in a deterministic context. As a result, most analytical models proposed in the literature suffer from too much abstraction, avoid uncertainties and, in turn, have little applicability in practice. On the other hand, simulation-based methods have the capability to characterize complex and stochastic real-life runway operations in detail, and to cope with several constraints and stakeholders' preferences, which are commonly considered important factors in practice. This dissertation proposes a simulation-based optimization (SbO) approach for the multi-objective runway operations scheduling problem. The SbO approach utilizes a discrete-event simulation model to account for uncertain conditions, and an optimization component to find the best known Pareto set of solutions. The approach explicitly considers uncertainty to decrease the real operational cost of runway operations, as well as fairness among aircraft, as part of the optimization process.
Due to the problem's large, complex, and unstructured search space, a hybrid Tabu/Scatter Search algorithm is developed that finds solutions by using an elitist strategy to preserve nondominated solutions, a dynamic update mechanism to produce high-quality solutions, and a rebuilding strategy to promote solution diversity. The proposed algorithm is applied to bi-objective (i.e., maximizing runway utilization and fairness) runway operations schedule optimization as the optimization component of the SbO framework, where the developed simulation model acts as an external function evaluator. To the best of our knowledge, this is the first SbO approach that explicitly considers uncertainties in the development of schedules for runway operations and that considers fairness as a secondary objective. In addition, computational experiments are conducted using real-life datasets for a major US airport to demonstrate that the proposed approach is effective and computationally tractable in a practical sense. In the experimental design, a statistical design of experiments method is employed to analyze the impact of parameters on the simulation as well as on the optimization component's performance, and to identify appropriate parameter levels. The results show that the implementation of the proposed SbO approach provides operational benefits when compared to First-Come-First-Served (FCFS) and deterministic approaches, without compromising schedule fairness. It is also shown that the proposed algorithm is capable of generating a set of solutions that represent the inherent trade-offs between the objectives considered. The proposed decision-making algorithm could be used as part of decision support tools to aid air traffic controllers in solving the real-life runway operations scheduling problem.

    Observation of temporary accommodation for construction workers according to the code of practice for temporary construction site workers amenities and accommodation (MS2593:2015) in Johor, Malaysia

    The Malaysian government is currently improving the quality of workers' temporary accommodation through the introduction of MS2593:2015 (Code of Practice for Temporary Site Workers Amenities and Accommodation) in 2015. This is in line with the initiative in the Construction Industry Transformation Programme (2016-2020) to increase the quality and well-being of construction workers in Malaysia. Thus, to gauge how far current practice in temporary accommodation complies with the guideline, this paper presents observations of such accommodation against the elements of Section 3 of MS2593:2015. A total of seventeen (17) temporary accommodations provided by Grade 6 and Grade 7 contractors in Johor were selected and assessed. The results disclosed that most of the temporary accommodations did not comply with the guideline: only thirteen (13) out of fifty-eight (58) elements recorded full compliance (100%), and the lowest compliance percentage (5.9%) was found in Section 3.12 (Signage). In a nutshell, given the significant compliance gap between current temporary accommodation practices and MS2593:2015, a holistic initiative needs to be put in place for the guideline to be worthwhile.

    INCORPORATING TRAVEL TIME RELIABILITY INTO TRANSPORTATION NETWORK MODELING

    Travel time reliability is deemed one of the most important factors affecting travelers' route choice decisions. However, existing practices mostly consider average travel time only. This dissertation establishes a methodological framework to overcome this limitation. Semi-standard deviation is first proposed as the measure of reliability to quantify the risk under uncertain conditions on the network. This measure accounts only for travel times that exceed a pre-specified benchmark, which offers a better behavioral interpretation and theoretical foundation than some currently used measures such as standard deviation and the probability of on-time arrival. Two path-finding models are then developed by integrating both average travel time and semi-standard deviation. The single-objective model minimizes the weighted sum of average travel time and semi-standard deviation, while the multi-objective model treats them as separate objectives and seeks to minimize them simultaneously. The multi-objective formulation is preferred to the single-objective model because it eliminates the need for prior knowledge of reliability ratios and offers the additional benefit of providing multiple attractive paths for the traveler's further decision-making. A sampling-based approach using archived travel time data is applied to derive the path semi-standard deviation. This approach provides a practical workaround for the fact that the measure cannot be derived analytically in exact form: the correlation structure is implicitly accounted for while the complicated link travel time distribution fitting and convolution process is avoided. Furthermore, a metaheuristic algorithm and a stochastic dominance-based approach are adapted to solve the proposed models. Both approaches address the issue that classical shortest path algorithms are not applicable because semi-standard deviation is non-additive. However, the stochastic dominance-based approach is preferred because it is more computationally efficient and can always find the true optimal paths. In addition to semi-standard deviation, on-time arrival probability and scheduling delay measures are also investigated. Although these three measures share similar mathematical structures, they exhibit different behaviors in response to large deviations from the pre-specified travel time benchmark. Theoretical connections between these measures and the first three stochastic dominance rules are also established, which enables on-time arrival probability and scheduling delay measures to be incorporated into the methodology framework as well.
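The semi-standard deviation measure described above has a direct sample estimator: only exceedances of the benchmark contribute to risk. A minimal sketch over archived travel time observations; combining it with the mean via a weight `lam` mirrors the single-objective model, where `lam` plays the role of the reliability ratio and its value here is an assumption:

```python
import math

def semi_std(travel_times, benchmark):
    """Sample semi-standard deviation: root mean square of the amounts by
    which observed travel times exceed the benchmark; observations at or
    below the benchmark contribute zero risk."""
    excess = [max(t - benchmark, 0.0) for t in travel_times]
    return math.sqrt(sum(e * e for e in excess) / len(travel_times))

def path_cost(travel_times, benchmark, lam=1.0):
    """Single-objective path measure: mean travel time plus lam times the
    semi-standard deviation."""
    mean = sum(travel_times) / len(travel_times)
    return mean + lam * semi_std(travel_times, benchmark)
```

Unlike the ordinary standard deviation, a path that is always faster than the benchmark scores zero risk here, which matches the behavioral interpretation argued for in the abstract.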