234 research outputs found

    Fuzzy-multi-mode Resource-constrained Discrete Time-cost-resource Optimization in Project Scheduling Using ENSCBO

    Construction companies are required to employ effective methods of project planning and scheduling in today's competitive environment. Time and cost are critical factors in project success, and they can vary based on the type and amount of resources used for activities, such as labor, tools, and materials. In addition, resource leveling strategies used to limit fluctuations in a project's resource consumption also affect project time and cost. The multi-mode resource-constrained discrete time-cost-resource optimization (MRC-DTCRO) is an optimization tool developed for scheduling a set of activities involving multiple execution modes, with the aim of minimizing time, cost, and resource moment. Moreover, uncertainty in cost should be accounted for in project planning because activities are exposed to risks that can cause delays and budget overruns. This paper presents a fuzzy multi-mode resource-constrained discrete time-cost-resource optimization (F-MRC-DTCRO) model for the time-cost-resource moment tradeoff in a fuzzy environment while satisfying all project constraints. In the proposed model, fuzzy numbers are used to characterize the uncertainty of the direct cost of activities. Using this model, different risk acceptance levels of the decision maker can be addressed in the optimization process. A newly developed multi-objective optimization algorithm called ENSCBO is used to search for non-dominated solutions to the fuzzy multi-objective model. Finally, the developed model is applied to solve a benchmark test problem. The results indicate that incorporating the fuzzy structure of cost uncertainty into previously developed MRC-DTCRO models facilitates the decision-making process and provides more realistic solutions.
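    The abstract above only states that activity direct costs are modeled as fuzzy numbers and that the decision maker's risk acceptance level enters the optimization. As a rough, hedged sketch of how that can work (the triangular membership function and the pessimism-weighted defuzzification below are illustrative assumptions, not the paper's exact formulation):

```python
# Sketch: triangular fuzzy direct cost and an alpha-cut at a chosen risk
# acceptance level. The triangular form and the defuzzification rule are
# assumptions for illustration only.

def alpha_cut(low, mode, high, alpha):
    """Interval of a triangular fuzzy number at membership level alpha in [0, 1]."""
    return low + alpha * (mode - low), high - alpha * (high - mode)

def crisp_cost(low, mode, high, alpha, pessimism=0.5):
    """Defuzzify the alpha-cut interval with a pessimism weight in [0, 1]."""
    lo, hi = alpha_cut(low, mode, high, alpha)
    return (1 - pessimism) * lo + pessimism * hi

# Activity whose direct cost is "about 1000" (900, 1000, 1200), risk level 0.7
print(crisp_cost(900, 1000, 1200, alpha=0.7))
```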

    Initialization of a Multi-objective Evolutionary Algorithms Knowledge Acquisition System for Renewable Energy Power Plants

    pp. 185-204. The design of Renewable Energy Power Plants (REPPs) is crucial not only for the investments' performance and attractiveness measures, but also for the maximization of resource (source) usage (e.g. sun, water, and wind) and the minimization of raw material (e.g. aluminum: Al, cadmium: Cd, iron: Fe, silicon: Si, and tellurium: Te) consumption. Hence, several appropriate and satisfactory Multi-objective Problems (MOPs) are mandatory during the REPPs' design phases. MOP-related tasks can only be managed by well-organized knowledge acquisition on all REPP design equations and models. The proposed MOPs need to be solved with one or more multiobjective algorithms, such as Multi-objective Evolutionary Algorithms (MOEAs). In this respect, the first aim of this research study is to start gathering knowledge on the REPPs' MOPs. The second aim is to gather detailed information about all MOEAs and the free software tools available for their development. The main contribution of this research is the initialization of a proposed multi-objective evolutionary algorithm knowledge acquisition system for renewable energy power plants (MOEAs-KAS-FREPPs) (research and development loopwise process: develop, train, validate, improve, test, improve, operate, and improve). As a simple representative example of this knowledge acquisition system research, two selective and elective proposed standard objectives (as test objectives) and eight selective and elective proposed standard constraints (as test constraints) are generated and applied as a standardized MOP for a virtual small hydropower plant design and investment. The maximization of energy generation (MWh) and the minimization of initial investment cost (million €) are achieved by the Multi-objective Genetic Algorithm (MOGA), the Niched Sharing Genetic Algorithm/Non-dominated Sorting Genetic Algorithm (NSGA-I), and the NSGA-II algorithms in Scilab 6.0.0, as three of the proposed standardized MOEAs, on two desktop computer configurations (Windows 10 Home 1709 64 bits, Intel i5-7200 CPU @ 2.7 GHz, 8.00 GB RAM with internet connection; and Windows 10 Pro, Intel(R) Core(TM) i5 CPU 650 @ 3.20 GHz, 6.00 GB RAM with internet connection). The algorithm run-times (computation times) of the current applications vary between 20.64 and 59.98 seconds.
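    For the virtual small hydropower example above, the two test objectives are the maximization of annual energy generation (MWh) and the minimization of initial investment cost (million €). A minimal sketch of such a bi-objective evaluation follows; the hydraulic power relation is standard physics, but the operating hours, efficiency, and cost coefficients are invented placeholders rather than values from the study:

```python
# Sketch of the two test objectives for a virtual small hydropower design.
# Only P = rho * g * Q * H * eta is standard; all numeric parameters are
# illustrative assumptions.

RHO, G = 1000.0, 9.81                      # water density (kg/m^3), gravity (m/s^2)

def annual_energy_mwh(flow_m3s, head_m, efficiency=0.85, hours=4380):
    """Hydraulic power times operating hours, converted from Wh to MWh."""
    power_w = RHO * G * flow_m3s * head_m * efficiency
    return power_w * hours / 1e6

def investment_cost_meur(flow_m3s, head_m, eur_per_kw=2500.0, efficiency=0.85):
    """Hypothetical cost model proportional to installed capacity (kW)."""
    capacity_kw = RHO * G * flow_m3s * head_m * efficiency / 1000.0
    return capacity_kw * eur_per_kw / 1e6

design = (2.0, 45.0)                       # flow (m^3/s), head (m)
print(annual_energy_mwh(*design), investment_cost_meur(*design))
```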

    Multiobjective genetic algorithm approaches to project scheduling under risk

    In this thesis, project scheduling under risk is chosen as the topic of research. Project scheduling under risk is defined as a biobjective decision problem and is formulated as a 0-1 integer mathematical programming model. In this biobjective formulation, one objective is the minimization of the expected makespan and the other is the minimization of the expected cost. A genetic algorithm (GA) is chosen as the solution approach to this biobjective formulation. After a careful review of the multiobjective GA literature, two strategies based on the vector evaluated GA are developed and a new GA is proposed. For these three GAs, the parameters are first investigated through statistical experimentation and their values are then fixed; the chosen parameters are used in the computational study of this thesis. Three improvement heuristics are also developed to further improve the GA solutions. Their aim is to decrease the expected cost of the project while keeping its expected duration fixed; they are applied at the end of the proposed GA to improve its results. Finally, the GAs and improvement heuristics are tested on three different sets of problems. The results are evaluated by pairwise comparisons of the algorithms and of the heuristics. An approximation of the true Pareto front is also generated using the commercial mathematical modelling program GAMS, and the GA results are found to be comparable to this approximation. The results of the improvement heuristics are also compared against each other, and the performance of the heuristics is reported in detail.
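    The two strategies mentioned above build on the vector evaluated GA (VEGA). A minimal, hedged sketch of VEGA's distinguishing step (the thesis' exact encoding and operators are not given here): the mating pool is filled in slices, each slice selected on a single objective, expected makespan or expected cost, both minimized.

```python
# Sketch of VEGA-style selection for a biobjective scheduling problem.
# 'makespan' and 'cost' are assumed callables returning the expected objective
# value of a candidate schedule; crossover and mutation are omitted.
import random

def vega_mating_pool(population, makespan, cost, pool_size):
    pool = []
    for objective in (makespan, cost):            # one slice per objective
        for _ in range(pool_size // 2):
            a, b = random.sample(population, 2)   # binary tournament on one objective
            pool.append(a if objective(a) <= objective(b) else b)
    random.shuffle(pool)                          # mix slices before recombination
    return pool
```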

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
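    The many variants surveyed above all start from the canonical PSO update, in which each particle's velocity is pulled toward its personal best and the swarm's global best. A minimal sketch (the inertia and acceleration values are common defaults, not recommendations from the survey):

```python
# Canonical PSO velocity/position update for one particle.
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """x, v, pbest, gbest are equal-length lists of floats."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        vi = (w * vi
              + c1 * random.random() * (pi - xi)    # pull toward personal best
              + c2 * random.random() * (gi - xi))   # pull toward global best
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v
```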

    Energy-aware scheduling in heterogeneous computing systems

    In the last decade, grid computing systems emerged as useful providers of the computing power required for solving complex problems. The classic formulation of the scheduling problem in heterogeneous computing systems is NP-hard, so approximation techniques are required for solving real-world scenarios of this problem. This thesis tackles the problem of scheduling tasks in a heterogeneous computing environment in reduced execution times, considering the schedule length and the total energy consumption as the optimization objectives. An efficient multithreading local search algorithm for solving the multi-objective scheduling problem in heterogeneous computing systems, named ME-MLS, is presented. The proposed method follows a fully multi-objective approach, applying a Pareto-based dominance search that is executed in parallel using several threads. The experimental analysis demonstrates that the new multithreading algorithm outperforms a set of fast and accurate two-phase deterministic heuristics based on the traditional MinMin. The new ME-MLS method is able to achieve significant improvements in both the makespan and energy consumption objectives in reduced execution times for a large set of testbed instances, while exhibiting very good scalability. ME-MLS was evaluated on instances comprising up to 2048 tasks and 64 machines. In order to scale the dimension of the problem instances even further and tackle large-sized problem instances, the Graphics Processing Unit (GPU) architecture is considered. This line of future work has been initially tackled with gPALS, a hybrid CPU/GPU local search algorithm for efficiently tackling a single-objective heterogeneous computing scheduling problem. gPALS shows very promising results, being able to tackle instances of up to 32768 tasks and 1024 machines in reasonable execution times.
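    As a hedged illustration of the Pareto-based dominance search described above (the neighbourhood moves and the multithreaded archive handling of ME-MLS are not detailed in the abstract, so only the generic acceptance rule is sketched):

```python
# Sketch: Pareto-dominance acceptance for a bi-objective (makespan, energy)
# local search. Both objectives are minimized; neighbourhood generation and
# thread synchronization are omitted.

def dominates(a, b):
    """a, b are (makespan, energy) tuples; smaller is better in both."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def try_insert(archive, candidate):
    """Keep candidate only if no archived solution dominates it; prune the rest."""
    if any(dominates(s, candidate) for s in archive):
        return False
    archive[:] = [s for s in archive if not dominates(candidate, s)]
    archive.append(candidate)
    return True
```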

    Application of nature-inspired optimization algorithms to improve the production efficiency of small and medium-sized bakeries

    Increasing production efficiency through schedule optimization is one of the most influential topics in operations research that contribute to the decision-making process. It is the concept of allocating tasks among available resources within the constraints of a manufacturing facility in order to minimize costs. It is carried out with a model that resembles real-world task distribution, with variables and relevant constraints, in order to complete a planned production. In addition to a model, an optimizer is required to assist in evaluating and improving the task allocation procedure in order to maximize overall production efficiency. The entire procedure is usually carried out on a computer, where these two distinct segments combine to form a solution framework for production planning and support decision-making in various manufacturing industries. Small and medium-sized bakeries lack access to cutting-edge tools, and most of their production schedules are based on personal experience. This makes a significant difference in production costs when compared to large bakeries, as evidenced by their market dominance. In this study, a hybrid no-wait flow shop model is proposed to produce a production schedule based on actual data, featuring the constraints of the production environment in small and medium-sized bakeries. Several single-objective and multi-objective nature-inspired optimization algorithms were implemented to find efficient production schedules. While makespan is the most widely used quality criterion of production efficiency because it dominates production costs, high oven idle time in bakeries also wastes energy. Combining these quality criteria allows for additional cost reduction through energy savings as well as shorter production times. Therefore, to obtain an efficient production plan, makespan and oven idle time were both included in the optimization objectives. To find the optimal production planning for an existing production line, particle swarm optimization, simulated annealing, and the Nawaz-Enscore-Ham algorithm were used, and the weighting factor method was used to combine the two objectives into a single objective. The classical optimization algorithms were found to be good enough at finding optimal schedules in a reasonable amount of time, reducing makespan by 29 % and oven idle time by 8 % for one of the analyzed production datasets. Nonetheless, the algorithms' convergence was found to be poor, with a low probability of obtaining the best or nearly the best result. In contrast, a modified particle swarm optimization (MPSO) proposed in this study demonstrated significantly improved convergence, with a higher probability of obtaining better results. To obtain trade-offs between the two objectives, state-of-the-art multi-objective optimization algorithms were implemented: the non-dominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm, generalized differential evolution, improved multi-objective particle swarm optimization (OMOPSO), and speed-constrained multi-objective particle swarm optimization (SMPSO). These optimization algorithms provided efficient production planning with up to a 12 % reduction in makespan and a 26 % reduction in oven idle time, based on data from different production days. The performance comparison revealed significant differences between these multi-objective optimization algorithms, with NSGA-II performing best and OMOPSO and SMPSO performing worst.
    Proofing is a key processing stage that contributes to the quality of the final product by developing flavor and a fluffy texture in bread. However, the duration of proofing is uncertain due to the complex interaction of multiple parameters: yeast condition, temperature in the proofing chamber, and the chemical composition of the flour. Because of this uncertainty, a production plan optimized for the shortest makespan can turn out to be significantly inefficient. The computational results show that the schedules with the shortest and nearly shortest makespan can suffer a significant (up to 18 %) increase in makespan when the proofing time deviates from its expected duration. In this thesis, a method for developing resilient production planning that takes uncertain proofing times into account is proposed, so that even if the deviation in proofing time is extreme, the fluctuation in makespan remains minimal. The experimental results with a production dataset revealed a proactive production plan that is only 5 minutes longer than the shortest makespan but fluctuates by only 21 minutes when the proofing time varies from -10 % to +10 % of its actual value. This study proposes a common framework for small and medium-sized bakeries to improve their production efficiency in three steps: collecting production data, simulating production planning with the hybrid no-wait flow shop model, and running the optimization algorithm. The study suggests using MPSO for the single-objective optimization problem and NSGA-II for the multi-objective optimization problem. Based on real bakery production data, the results revealed that existing plans were significantly inefficient and could be optimized in a reasonable computational time using a robust optimization algorithm. Implementing such a framework in small and medium-sized bakery manufacturing operations could help to achieve an efficient and resilient production system.
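    A hedged sketch of the weighting-factor scalarization mentioned above, which turns the makespan and oven idle time of a candidate bakery schedule into a single value to be minimized. The weights and example numbers are illustrative, not taken from the thesis, and in practice both terms should first be normalized to comparable scales:

```python
# Sketch: weighted-sum combination of makespan and oven idle time (both in
# minutes, both minimized). The schedule evaluation itself, i.e. the hybrid
# no-wait flow shop simulation, is assumed to supply the two values.

def weighted_objective(makespan_min, oven_idle_min, w_makespan=0.7):
    w_idle = 1.0 - w_makespan
    return w_makespan * makespan_min + w_idle * oven_idle_min

# Compare two hypothetical production plans by the scalarized objective
plans = {"plan_a": (480, 95), "plan_b": (460, 130)}
best = min(plans, key=lambda name: weighted_objective(*plans[name]))
print(best, weighted_objective(*plans[best]))
```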

    Optimisation du développement de nouveaux produits dans l'industrie pharmaceutique par algorithme génétique multicritère

    New Product Development (NPD) constitutes a challenging problem in the pharmaceutical industry, due to the characteristics of the development pipeline, namely the presence of uncertainty, the high level of the capital costs involved, the interdependency between projects, the limited availability of resources, the overwhelming number of decisions due to the length of the time horizon (about 10 years), and the combinatorial nature of a portfolio. Formally, the NPD problem can be stated as follows: select a set of R&D projects from a pool of candidate projects in order to satisfy several criteria (economic profitability, time to market) while coping with the uncertain nature of the projects. More precisely, the recurrent key issues are to determine the projects to develop once target molecules have been identified, their order, and the level of resources to assign. In this context, the proposed approach combines discrete event stochastic simulation (a Monte Carlo approach) with a multiobjective genetic algorithm (NSGA-II, Non-dominated Sorting Genetic Algorithm II) to optimize the highly combinatorial portfolio management problem. An object-oriented model previously developed for batch plant scheduling and design, which is particularly suitable for reuse of both structure and operating logic, is extended to embed the case of new product management. Two case studies illustrate and validate the approach. The simulation study highlighted three performance evaluation criteria for decision making: the net present value (NPV) of a sequence, its associated risk (defined as the number of positive occurrences of NPV among the samples), and the time to market. They were used in the multiobjective formulation of the optimization problem. In that context, genetic algorithms are particularly attractive for this kind of problem, due to their ability to lead directly to the Pareto front and to handle the combinatorial aspect. NSGA-II was adapted to the problem to take into account both the number of products in a sequence and the product launch order. A bicriteria analysis performed for a representative case study on different pairs of criteria, for both bi- and tricriteria optimization, shows that the optimization strategy is efficient and particularly elitist in detecting the sequences worth considering by the decision maker: only a few sequences are detected. Among them, portfolios with a large number of products cause queues and launch delays and are eliminated by the bicriteria optimization strategy; small portfolios, which reduce queuing and time to launch, are therefore preferred. Time proves to be an important criterion to optimize simultaneously with the NPV and risk criteria, which motivates the tricriteria optimization. Finally, the product launch order is a major decision variable, as in shop scheduling problems.
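    A hedged sketch of the simulation side of the coupling described above: each candidate launch sequence is evaluated by Monte Carlo sampling and scored by expected NPV, risk (share of samples with positive NPV), and mean completion time. The success probabilities, costs, durations, and revenues below are invented placeholders; the thesis uses a full discrete-event pipeline simulator rather than this simplified cash-flow loop:

```python
# Sketch: Monte Carlo scoring of a product launch sequence for NPV, risk, and
# time. Each project is (success_probability, dev_cost, dev_years, revenue),
# all values hypothetical and in consistent units (e.g. million EUR, years).
import random

def score_sequence(sequence, n_samples=1000, discount=0.1):
    npvs, times = [], []
    for _ in range(n_samples):
        cash, t = 0.0, 0.0
        for p_success, dev_cost, dev_years, revenue in sequence:
            t += dev_years * random.uniform(0.8, 1.2)     # uncertain development time
            cash -= dev_cost / (1 + discount) ** t        # discounted development cost
            if random.random() < p_success:               # project reaches the market
                cash += revenue / (1 + discount) ** t
        npvs.append(cash)
        times.append(t)
    risk = sum(v > 0 for v in npvs) / n_samples           # share of profitable samples
    return sum(npvs) / n_samples, risk, sum(times) / n_samples

portfolio = [(0.6, 80.0, 3.0, 400.0), (0.4, 120.0, 4.0, 900.0)]   # two hypothetical drugs
print(score_sequence(portfolio))
```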

    Multiobjective optimization of New Product Development in the pharmaceutical industry

    New Product Development (NPD) constitutes a challenging problem in the pharmaceutical industry, due to the characteristics of the development pipeline, namely the presence of uncertainty, the high level of the capital costs involved, the interdependency between projects, the limited availability of resources, the overwhelming number of decisions due to the length of the time horizon (about 10 years), and the combinatorial nature of a portfolio. Formally, the NPD problem can be stated as follows: select a set of R&D projects from a pool of candidate projects in order to satisfy several criteria (economic profitability, time to market) while coping with the uncertain nature of the projects. More precisely, the recurrent key issues are to determine the projects to develop once target molecules have been identified, their order, and the level of resources to assign. In this context, the proposed approach combines discrete event stochastic simulation (a Monte Carlo approach) with a multiobjective genetic algorithm (NSGA-II, Non-dominated Sorting Genetic Algorithm II) to optimize the highly combinatorial portfolio management problem. An object-oriented model previously developed for batch plant scheduling and design is extended to embed the case of new product management, and is particularly suitable for reuse of both structure and logic. Two case studies illustrate and validate the approach. From this simulation study, three performance evaluation criteria must be considered for decision making: the Net Present Value (NPV) of a sequence, its associated risk, defined as the number of positive occurrences of NPV among the samples, and the time to market. They have been used in the multiobjective optimization formulation of the problem. In that context, Genetic Algorithms (GAs) are particularly attractive for treating this kind of problem, due to their ability to lead directly to the Pareto front and to account for the combinatorial aspect. NSGA-II has been adapted to the treated case to take into account both the number of products in a sequence and the drug release order. From an analysis performed for a representative case study on the different pairs of criteria, for both bi- and tricriteria optimization, the optimization strategy turns out to be efficient and particularly elitist in detecting the sequences that can be considered by the decision makers: only a few sequences are detected. Among these sequences, large portfolios cause resource queues, delay the time to launch, and are eliminated by the bicriteria optimization strategy, while small portfolios, which reduce queuing and time to launch, appear as good candidates. Time is an important criterion to consider simultaneously with the NPV and risk criteria. The order in which drugs are released in the pipeline is of great importance, as in scheduling problems.
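    The core of the NSGA-II variant mentioned above is non-dominated sorting over the chosen criteria. A minimal sketch, applied here to (NPV, risk, time-to-market) triples where NPV and risk are maximized and time is minimized; the sequence encoding, crowding distance, and genetic operators are omitted:

```python
# Sketch: non-dominated sorting of candidate sequences scored as
# (npv, risk, time). Higher npv/risk and lower time are better.

def dominates(a, b):
    no_worse = a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] > b[1] or a[2] < b[2]
    return no_worse and strictly_better

def non_dominated_fronts(scores):
    """Split a list of score tuples into successive Pareto fronts (lists of indices)."""
    remaining = list(range(len(scores)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(scores[j], scores[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

print(non_dominated_fronts([(120, 0.8, 9.0), (150, 0.6, 10.0), (100, 0.5, 11.0)]))
```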