
    Optimization of a dynamic supply portfolio considering risks and discount’s constraints

    Purpose: Finding reliable suppliers in global supply chains has become critical to success, because reliable suppliers lead to a reliable supply and ensure that customer orders are met effectively. Yet there is little empirical evidence to support this view, so the purpose of this paper is to fill this gap by considering risk in order to find the optimum supply portfolio. Design/methodology/approach: This paper proposes a multi-objective model for the supplier selection portfolio problem that uses conditional value-at-risk (CVaR) criteria to control the risks of delayed, disrupted and defective supplies via scenario analysis. We also consider discount constraints, which are a common assumption in supplier selection problems. The proposed approach determines the optimal supply portfolio by calculating value-at-risk and minimizing conditional value-at-risk. In this study the Reservation Level driven Tchebycheff Procedure (RLTP), one of the reference point methods, is used to solve small instances of the model through coding in GAMS. As the model is NP-hard, a meta-heuristic approach, the Non-dominated Sorting Genetic Algorithm (NSGA), one of the most efficient methods for optimizing multi-objective models, is applied to solve large instances. Findings and Originality/value: To find a dynamic supply portfolio, we developed a Mixed Integer Linear Programming (MILP) model with two objectives: one minimizes cost and the other minimizes the risks of delayed, disrupted and defective supplies. CVaR is used as the risk-controlling method, which emphasizes low-probability, high-consequence events. A discount option, a common offer from suppliers, is also incorporated in the proposed model. Our findings show that the proposed model can help optimize a dynamic supplier selection portfolio while controlling the corresponding risks for large-scale real-world problems.
Practical implications: To demonstrate the capability of the model, various numerical examples are constructed and non-dominated solutions are generated. Sensitivity analysis is performed to determine the most important factors. The results show how a dynamic supply portfolio disperses the allocation of orders among suppliers, combined with the allocation of orders among planning periods, in order to hedge against the risks of delayed, disrupted and defective supplies. Originality/value: This paper provides a novel multi-objective model for the supplier selection portfolio problem that is capable of controlling delayed, disrupted and defective supplies via scenario analysis. Discounts, as an option offered by suppliers, are also embedded in the model. Because real problems in the field of supplier selection portfolios are large, a meta-heuristic method, NSGA-II, is presented for solving the multi-objective model. The chromosome representation in the proposed solution methodology is unique and is another contribution of this paper, shown to be well adapted to the nature of the supplier selection portfolio problem.
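The CVaR criterion in the abstract above can be made concrete with a minimal numerical sketch. This is not the paper's MILP; it only shows, under the standard Rockafellar-Uryasev definition, how VaR and CVaR are obtained from a discrete set of loss scenarios (the function name and the example data are illustrative assumptions):

```python
def var_cvar(losses, probs, alpha=0.95):
    """Return (VaR, CVaR) at confidence level alpha for a discrete
    set of scenario losses with probabilities probs."""
    scenarios = sorted(zip(losses, probs))  # ascending by loss
    # VaR: smallest loss at which cumulative probability reaches alpha
    cum = 0.0
    var = scenarios[-1][0]
    for loss, p in scenarios:
        cum += p
        if cum >= alpha:
            var = loss
            break
    # CVaR via the Rockafellar-Uryasev form:
    # VaR + E[(loss - VaR)+] / (1 - alpha), i.e. the expected loss
    # in the worst (1 - alpha) tail
    tail = sum(p * max(loss - var, 0.0) for loss, p in scenarios)
    return var, var + tail / (1.0 - alpha)
```

For instance, with losses [10, 20, 100] and probabilities [0.5, 0.45, 0.05] at alpha = 0.95, the VaR is 20 while the CVaR equals 100, the expected loss in the worst 5% tail.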

    A multi objective volleyball premier league algorithm for green scheduling identical parallel machines with splitting jobs

    Parallel machine scheduling is one of the most commonly studied problems of recent years. However, this classic optimization problem must balance two conflicting objectives, minimizing total tardiness and minimizing total waste, when scheduling is done in the context of the plastic injection industry, where jobs may be split and molds are an important constraint. This paper proposes a mathematical model for scheduling parallel machines with splitting jobs and resource constraints. Two minimization objectives, total tardiness and the amount of waste, are considered simultaneously. The resulting model is a bi-objective integer linear programming model that is shown to belong to the class of NP-hard optimization problems. In this paper, a novel Multi-Objective Volleyball Premier League (MOVPL) algorithm is presented for solving the aforementioned problem. The algorithm extends the Volleyball Premier League (VPL) algorithm that we recently introduced with the crowding distance concept used in NSGA-II. Furthermore, the results are compared with six multi-objective metaheuristic algorithms: MOPSO, NSGA-II, MOGWO, MOALO, MOEA/D, and SPEA2. Using five standard metrics and ten test problems, the performance of the Pareto-based algorithms is investigated. The results demonstrate that, in general, the proposed algorithm outperforms the other algorithms.
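The crowding distance concept that the abstract above borrows from NSGA-II can be sketched as follows. This is the textbook NSGA-II computation over one Pareto front of objective vectors, not MOVPL itself; the function name is an illustrative assumption:

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors (one Pareto
    front), as in NSGA-II: boundary points get infinity, interior
    points the sum of normalized gaps between their neighbours in
    each objective."""
    n = len(front)
    if n <= 2:
        return [float('inf')] * n
    dist = [0.0] * n
    num_objectives = len(front[0])
    for k in range(num_objectives):
        # Sort point indices by the k-th objective
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue  # degenerate objective: all values equal
        for j in range(1, n - 1):
            gap = front[order[j + 1]][k] - front[order[j - 1]][k]
            dist[order[j]] += gap / (hi - lo)
    return dist
```

Points with larger crowding distance lie in less dense regions of the front and are preferred when truncating the population, which is how the concept preserves diversity.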

    The relevance of outsourcing and leagile strategies in performance optimization of an integrated process planning and scheduling

    Over the past few years, growing global competition has forced manufacturing industries to upgrade their old production strategies with modern-day approaches. As a result, interest has developed in finding an appropriate policy that could enable them to compete with others and emerge as a market winner. With these facts in mind, the authors propose an integrated process planning and scheduling model inheriting the salient features of outsourcing and leagile principles to compete in the existing market scenario. The paper also proposes a model based on leagile principles, in which integrated planning management is practiced. In the present work a scheduling problem is considered with the aim of minimizing overall makespan. The paper shows the relevance of both strategies to performance enhancement of the industries, in terms of reduced makespan. The authors also propose a new hybrid Enhanced Swift Converging Simulated Annealing (ESCSA) algorithm to solve complex real-time scheduling problems. The proposed algorithm inherits the prominent features of the Genetic Algorithm (GA), Simulated Annealing (SA), and a Fuzzy Logic Controller (FLC). The ESCSA algorithm reduces the makespan significantly in less computational time and fewer iterations. The efficacy of the proposed algorithm is shown by comparing the results with GA, SA, Tabu Search, and hybrid Tabu-SA optimization methods.

    Integrating the Cost of Quality into Multi-Products Multi-Components Supply Chain Network Design

    More than ever before, the success of a company depends heavily on its supply chain and how efficient its network is. A supply chain needs to be configured to minimize cost while still maintaining a quality level good enough to satisfy the end user; to achieve this efficiently, designing for the network and the whole chain is important. Including the cost of quality in the process of designing the network can be rewarding and revealing. In this research, the concept of cost of quality as a performance measure was integrated into the supply chain network design process for a supply chain concerned with multiple products and multiple components. This research discusses how this supply chain can be modeled mathematically, presents solutions for the resulting model, and finally studies the effect of including quality as a parameter on the outcome of the design process. A nonlinear mixed-integer mathematical model was developed for the problem, and two solution approaches, based on a Genetic Algorithm and on Tabu Search, were developed and compared. The results and analysis show that the Genetic Algorithm-based solution outperforms the Tabu Search-based solution, especially on large problems. In addition, the analysis shows that including the cost of quality in the model affects the design process and changes the resulting routes.

    Software reliability through fault-avoidance and fault-tolerance

    The use of back-to-back, or comparison, testing for regression testing or porting is examined. The efficiency and cost of the strategy are compared with manual and table-driven single-version testing. Some of the key parameters that influence the efficiency and cost of the approach are the failure identification effort during single-version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.
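The back-to-back strategy described above can be sketched in a few lines: run two versions of a program on the same test data and flag every input where their outputs disagree. A minimal illustration, not the paper's tooling (the function name and the buggy example version are assumptions):

```python
def back_to_back_test(reference, candidate, inputs):
    """Back-to-back (comparison) testing: execute two versions of a
    program on identical inputs and report the inputs on which their
    outputs disagree. Disagreements localise the failure
    identification effort to specific test cases."""
    mismatches = []
    for x in inputs:
        if reference(x) != candidate(x):
            mismatches.append(x)
    return mismatches

# Hypothetical example: a ported version that forgot to negate
# negative inputs disagrees with the reference on them.
reference = abs
buggy_port = lambda x: x  # bug: identity instead of absolute value
failures = back_to_back_test(reference, buggy_port, [-2, -1, 0, 1])
```

Here `failures` contains exactly the negative inputs, illustrating how inter-version comparison pinpoints failing cases without a manually constructed oracle.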

    The Multiobjective Average Network Flow Problem: Formulations, Algorithms, Heuristics, and Complexity

    Integrating value focused thinking with the shortest path problem results in a unique formulation called the multiobjective average shortest path problem. We prove this is NP-complete for general graphs. For directed acyclic graphs, an efficient algorithm and an even faster heuristic are proposed. While the worst-case error of the heuristic is proven unbounded, its average performance on random graphs is within 3% of the optimal solution. Additionally, a special case of the more general biobjective average shortest path problem is given, allowing tradeoffs between decreases in arc set cardinality and increases in multiobjective value; the algorithm that solves the average shortest path problem provides all the information needed to solve this more difficult biobjective problem. These concepts are then extended to the minimum cost flow problem, creating a new formulation we name the multiobjective average minimum cost flow. This problem is proven NP-complete as well. For directed acyclic graphs, two efficient heuristics are developed, and although we prove the error of any successive average shortest path heuristic is in theory unbounded, both perform very well on random graphs. Furthermore, we define a general biobjective average minimum cost flow problem. The information from the heuristics can be used to estimate the efficient frontier in a special case of this problem trading off total flow and multiobjective value. Finally, several variants of these two problems are discussed. Proofs are conjectured showing the conditions under which the problems are solvable in polynomial time and when they remain NP-complete.
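The directed-acyclic-graph case discussed above rests on the classic O(V + E) topological-order shortest-path subroutine. The following is a sketch of that building block only, not the thesis's multiobjective algorithm; the edge-list format and function name are assumptions:

```python
from collections import defaultdict

def dag_shortest_path(edges, source):
    """Single-source shortest path lengths on a directed acyclic
    graph by relaxing edges in topological order (O(V + E)).
    edges: iterable of (u, v, weight) triples."""
    adj = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm produces a topological order
    queue = [n for n in nodes if indeg[n] == 0]
    topo = []
    while queue:
        u = queue.pop()
        topo.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Relax every edge once, in topological order
    dist = {n: float('inf') for n in nodes}
    dist[source] = 0.0
    for u in topo:
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Because each edge is relaxed exactly once, the DAG structure is what makes an efficient exact algorithm possible where the general-graph multiobjective variants are NP-complete.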

    Meta-heuristic algorithms in car engine design: a literature survey

    Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species in Darwinian natural selection theory, ant behaviors in biology, flock behaviors of some birds, and annealing in metallurgy. Due to their great potential in solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. Different optimization problems arise in different areas of car engine management, including calibration, control systems, fault diagnosis, and modeling. In this paper, we review the state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including the application of meta-heuristic algorithms in engine calibration, optimizing engine control systems, engine fault diagnosis, and optimizing different parts of engines and modeling. The meta-heuristic algorithms reviewed in this paper include evolutionary algorithms, evolution strategies, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithms, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune systems.

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Due to the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature size, power density grows geometrically with technology scaling. Additionally, power dissipation inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. Because of this, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation). As a result, during at-speed testing, the supply grid experiences unacceptable IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variations have become a major problem. Due to the variation in signalling delays caused by these variations, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, addressed previously in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015).
It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques have been proposed for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no techniques addressing the same for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to the BTSP is novel, and proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering under an arbitrary filling of don't-care bits is insufficient for producing an effective reduction in switching activity during testing of large circuits. Since don't cares dominate the test sets of larger circuits, don't-care filling plays a crucial role in reducing switching activity during testing.
Taking this into consideration, we propose an algorithm, XStat, which is capable of performing test vector ordering while preserving don't-care bits in the test vectors, after which the don't cares are filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly reduces peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses Dynamic Programming to calculate the lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching activity and is the best known in the literature for minimizing peak input-switching activity during testing. The proof of optimality of DP-fill in minimizing peak input-switching activity is also provided in this thesis.
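The role of don't-care filling described above can be illustrated with the simple "adjacent fill" heuristic, in which each don't-care bit copies the corresponding bit of the previous vector so that no extra transition is introduced on that input. This is a far simpler heuristic than XStat or DP-fill, shown only to make the idea concrete; the function names are assumptions:

```python
def adjacent_fill(vectors):
    """Fill don't-care bits ('X') in an ordered test sequence: each X
    copies the corresponding bit of the previous (filled) vector, so
    the X itself never contributes an input transition. X bits in the
    first vector default to '0'."""
    filled = []
    prev = None
    for vec in vectors:
        bits = []
        for i, b in enumerate(vec):
            if b == 'X':
                b = prev[i] if prev is not None else '0'
            bits.append(b)
        prev = ''.join(bits)
        filled.append(prev)
    return filled

def peak_switching(vectors):
    """Peak number of input bit flips between consecutive vectors,
    the quantity the ordering/filling heuristics try to minimize."""
    return max(
        sum(a != b for a, b in zip(u, v))
        for u, v in zip(vectors, vectors[1:])
    )
```

For the sequence ['1X0', 'XX1', '0X1'], adjacent fill yields ['100', '101', '001'] with a peak of one flip per step, whereas an adversarial filling of the same don't cares could flip several inputs at once.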

    An integrated model for green partner selection and supply chain construction

    Stricter governmental regulations and rising public awareness of environmental issues are pressurising firms to make their supply chains greener. Partner selection is a critical activity in constructing a green supply chain because the environmental performance of the whole supply chain is significantly affected by all its constituents. The paper presents a model for green partner selection and supply chain construction by combining analytic network process (ANP) and multi-objective programming (MOP) methodologies. The model offers a new way of solving the green partner selection and supply chain construction problem both effectively and efficiently, as it enables decision-makers to simultaneously minimize the negative environmental impact of the supply chain whilst maximizing its business performance. The paper also develops an additional decision-making tool in the form of the environmental difference, the business difference and the eco-efficiency ratio, which quantify the trade-offs between environmental and business performance. The applicability and practicability of the model are demonstrated in an illustration of its use in the Chinese electrical appliance and equipment manufacturing industry.