
    Multiobjective strategies for New Product Development in the pharmaceutical industry

    New Product Development (NPD) constitutes a challenging problem in the pharmaceutical industry, due to the characteristics of the development pipeline. Formally, the NPD problem can be stated as follows: select a set of R&D projects from a pool of candidate projects in order to satisfy several criteria (economic profitability, time to market) while coping with the uncertain nature of the projects. More precisely, the recurrent key issues are to determine which projects to develop once target molecules have been identified, in which order, and with what level of resources. In this context, the proposed approach combines discrete event stochastic simulation (a Monte Carlo approach) with multiobjective genetic algorithms (NSGA-II, Non-dominated Sorting Genetic Algorithm II) to optimize the highly combinatorial portfolio management problem. Genetic Algorithms (GAs) are particularly attractive for this kind of problem because they lead directly to the Pareto front and handle the combinatorial aspect. The work is illustrated with a case study involving nine interdependent new product candidates targeting three diseases. An analysis is performed on this test bench for the different pairs of criteria, in both bi- and tricriteria optimization: large portfolios cause resource queues, delay time to launch, and are eliminated by the bi- and tricriteria optimization strategy. The optimization strategy is therefore useful for identifying promising candidate sequences. Time is an important criterion to consider simultaneously with the NPV and risk criteria, and the order in which drugs are released into the pipeline is of great importance, as in scheduling problems.
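
    The combination described in this abstract can be pictured with a minimal sketch: Monte Carlo replications of a toy development pipeline estimate each portfolio's expected NPV and time to launch, and the non-dominated portfolios form the bi-criteria Pareto front. The candidate data, the capacity constraint, and the exhaustive enumeration below are illustrative assumptions only; the paper relies on NSGA-II precisely because realistic pipelines cannot be enumerated.

```python
# Illustrative sketch only: Monte Carlo evaluation of drug-development
# portfolios and extraction of the bi-criteria Pareto front (expected NPV
# vs. expected time to launch). All candidate data are made up.
import itertools
import random

N_CANDIDATES = 9     # nine candidates, as in the paper's case study
N_SAMPLES = 200      # Monte Carlo replications per portfolio
CAPACITY = 3         # development slots available at a time (assumed)

# Hypothetical per-candidate data: (success probability, NPV if launched, duration).
random.seed(1)
CANDIDATES = [(random.uniform(0.3, 0.8), random.uniform(50, 300), random.randint(2, 6))
              for _ in range(N_CANDIDATES)]

def simulate(portfolio):
    """One stochastic replication: returns (realised NPV, time of last launch)."""
    npv, finish, clock, pending = 0.0, 0.0, 0.0, list(portfolio)
    while pending:
        batch, pending = pending[:CAPACITY], pending[CAPACITY:]  # limited resources create queues
        clock += max(CANDIDATES[i][2] for i in batch)
        for i in batch:
            prob, value, _ = CANDIDATES[i]
            if random.random() < prob:                           # clinical success
                npv += value
                finish = clock
    return npv, finish

def evaluate(portfolio):
    """Monte Carlo estimates of the two criteria for a given set of projects."""
    runs = [simulate(portfolio) for _ in range(N_SAMPLES)]
    return (sum(r[0] for r in runs) / N_SAMPLES,                 # expected NPV (maximise)
            sum(r[1] for r in runs) / N_SAMPLES)                 # expected time to launch (minimise)

# With only nine candidates all 2^9 - 1 portfolios can be enumerated; the paper
# uses NSGA-II because realistic pipelines make such enumeration intractable.
results = []
for r in range(1, N_CANDIDATES + 1):
    for combo in itertools.combinations(range(N_CANDIDATES), r):
        results.append((combo, *evaluate(combo)))

# Keep only non-dominated portfolios (higher NPV is better, lower time is better).
pareto = [a for a in results
          if not any(b[1] >= a[1] and b[2] <= a[2] and (b[1] > a[1] or b[2] < a[2])
                     for b in results)]
for combo, npv, t in sorted(pareto, key=lambda x: x[2]):
    print(f"portfolio {combo}: E[NPV] = {npv:7.1f}, E[time] = {t:5.1f}")
```

    Even in this toy setting the front makes the trade-off visible: adding candidates raises expected NPV but pushes launches later, which is the effect the bi- and tricriteria optimization exploits to discard oversized portfolios.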

    Discrete event simulation and virtual reality use in industry: new opportunities and future trends

    This paper reviews the combined use of discrete event simulation (DES) and virtual reality (VR) within industry. While establishing the state of the art in this area, the paper makes the case for VR DES as the vehicle of choice for complex data analysis through interactive simulation models, highlighting both its advantages and its current limitations. The paper reviews active research topics such as real-time integration of VR and DES, communication protocols, system design considerations, model validation, and applications of VR and DES. In summarizing future research directions for this technology combination, the case is made for smart factory adoption of VR DES as a new platform for scenario testing and decision making. It is argued that, for VR DES to fully meet the visualization requirements of both the Industry 4.0 and Industrial Internet visions of digital manufacturing, further research is required in the areas of lower-latency image processing, DES delivery as a service, gesture recognition for VR DES interaction, and linkage of DES to real-time data streams and Big Data sets.
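
    One of the research directions listed above, linking DES to real-time data streams, can be pictured with a minimal sketch. It assumes the SimPy package as the DES engine and uses an in-process queue of mocked sensor readings in place of a real factory feed (e.g. MQTT or OPC UA telemetry); it illustrates the pattern only and is not an architecture taken from the paper.

```python
# Minimal sketch of linking a DES model to a live data stream. SimPy stands in
# for the DES engine and an in-process queue of mocked sensor readings stands
# in for the shop-floor feed; both are assumptions made for illustration.
import queue
import random
import simpy

sensor_feed = queue.Queue()        # a smart factory would feed this from MQTT / OPC UA

def mock_sensor():
    """Pretend telemetry: the latest observed machine cycle time, in minutes."""
    sensor_feed.put(random.uniform(4.0, 6.0))

def machine(env, name, cycle):
    """DES process whose service time tracks the most recent live reading."""
    while True:
        while not sensor_feed.empty():            # ingest any telemetry that has arrived
            cycle[0] = sensor_feed.get()
        yield env.timeout(cycle[0])               # process one part with the current cycle time
        print(f"{env.now:6.1f}  {name} finished a part (cycle = {cycle[0]:.2f})")

env = simpy.Environment()
cycle = [5.0]                                     # initial cycle-time estimate
env.process(machine(env, "M1", cycle))
for _ in range(5):                                # alternate: new reading arrives, model advances
    mock_sensor()
    env.run(until=env.now + 20)
```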

    Approximate IPA: Trading Unbiasedness for Simplicity

    When Perturbation Analysis (PA) yields unbiased sensitivity estimators for expected-value performance functions in discrete event dynamic systems, it can be used for performance optimization of those functions. However, when PA is known to be unbiased, the complexity of its estimators often does not scale well with the system's size. The purpose of this paper is to suggest an alternative approach to optimization that balances precision against computing effort by trading off complicated, unbiased PA estimators for simple, biased approximate estimators. Furthermore, we provide guidelines for developing such estimators, which are largely based on the Stochastic Flow Modeling framework. We suggest that if the relative error (or bias) is not too large, then optimization algorithms such as stochastic approximation converge to a (local) minimum just as in the case where no approximation is used. We apply this approach to an example of balancing loss with buffer cost in a finite-buffer queue, and prove a crucial upper bound on the relative error. This paper presents an initial study of the proposed approach; if the idea gains traction, it may lead to a significant expansion of the scope of PA in the optimization of discrete event systems.
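
    The trade-off proposed here, driving stochastic approximation with a cheap but biased estimator, can be illustrated with a toy version of the loss-versus-buffer-cost example. The queue parameters, cost weights, and the one-sided finite-difference surrogate below are assumptions made for illustration; the paper's estimators are instead derived within the Stochastic Flow Modeling framework.

```python
# Toy illustration of the idea above: drive a stochastic-approximation update
# with a cheap, biased gradient estimate instead of an exact IPA estimator.
# The M/M/1/K model, cost weights, and one-sided finite-difference surrogate
# are assumptions for illustration, not the paper's SFM-based estimators.
import random

def sample_cost(buffer_size, n_events=4000, lam=1.0, mu=1.1, c_loss=10.0, c_buf=0.5):
    """One replication: loss cost plus buffer-provisioning cost in an M/M/1/K queue."""
    k = max(1, int(round(buffer_size)))
    t, q, lost = 0.0, 0, 0
    next_arr = random.expovariate(lam)
    next_dep = float("inf")
    for _ in range(n_events):
        t = min(next_arr, next_dep)
        if next_arr <= next_dep:                   # arrival event
            if q < k:
                q += 1
                if q == 1:                         # server was idle: start a service
                    next_dep = t + random.expovariate(mu)
            else:
                lost += 1                          # buffer full: customer is lost
            next_arr = t + random.expovariate(lam)
        else:                                      # departure event
            q -= 1
            next_dep = t + random.expovariate(mu) if q > 0 else float("inf")
    return c_loss * lost / t + c_buf * k           # loss rate traded against buffer size

def biased_gradient(theta, delta=1.0):
    """One-sided finite difference on independent replications: simple but biased."""
    return (sample_cost(theta + delta) - sample_cost(theta)) / delta

theta = 2.0                                        # initial buffer size
for it in range(1, 201):                           # Robbins-Monro iteration with decaying steps
    theta = min(20.0, max(1.0, theta - (5.0 / it) * biased_gradient(theta)))
print(f"buffer size after optimisation: {theta:.2f}")
```

    The point of the sketch is only that the iteration still drifts toward the region of the cost minimum even though each gradient estimate is both noisy and biased, which is the behavior the paper analyzes via its bound on the relative error.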

    Anticipating and Coordinating Voltage Control for Interconnected Power Systems

    This paper deals with the application of an anticipating and coordinating feedback control scheme in order to mitigate the long-term voltage instability of multi-area power systems. Each local area is uniquely controlled by a control agent (CA) selecting control values based on model predictive control (MPC) and is possibly operated by an independent transmission system operator (TSO). Each MPC-based CA only knows a detailed local hybrid system model of its own area, employing reduced-order quasi steady-state (QSS) hybrid models of its neighboring areas and even simpler PV models for remote areas, to anticipate (and then optimize) the future behavior of its own area. Moreover, the neighboring CAs agree on communicating their planned future control input sequences in order to coordinate their own control actions. The feasibility of the proposed method for real-time applications is explained, and some practical implementation issues are also discussed. The performance of the method, using time-domain simulation of the Nordic32 test system, is compared with that of uncoordinated decentralized MPC (no information exchange among CAs), demonstrating the improved behavior achieved by combining anticipation and coordination. The robustness of the control scheme against modeling uncertainties is also illustrated.
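
    The coordination mechanism, in which each agent optimizes only its own area while treating the neighbor's last communicated input plan as known, can be sketched with two scalar toy areas. The dynamics, horizon, and input grid below are invented for illustration; the paper's CAs solve MPC problems over detailed hybrid and QSS power-system models.

```python
# Conceptual sketch of the coordination mechanism described above: two
# MPC-based control agents, each optimising a scalar, linear toy model of its
# own area while treating the neighbour's last communicated input plan as
# known. All dynamics and parameters are illustrative assumptions.
import itertools

A, B = 0.95, 0.08            # local voltage dynamics: v+ = A*v + B*u + C*u_neighbour
C = 0.02                     # coupling gain from the neighbouring area's control
HORIZON = 3
U_GRID = [-1.0, -0.5, 0.0, 0.5, 1.0]   # candidate control adjustments
V_REF = 1.0

def predict_cost(v, my_plan, neighbour_plan):
    """Finite-horizon cost of a candidate input sequence, neighbour plan held fixed."""
    cost = 0.0
    for u_me, u_nb in zip(my_plan, neighbour_plan):
        v = A * v + B * u_me + C * u_nb
        cost += (v - V_REF) ** 2 + 0.01 * u_me ** 2
    return cost

def mpc_plan(v, neighbour_plan):
    """Brute-force MPC over a small input grid (stands in for a QP/NLP solve)."""
    return min(itertools.product(U_GRID, repeat=HORIZON),
               key=lambda plan: predict_cost(v, plan, neighbour_plan))

v = [0.90, 0.93]                              # initial voltages of the two areas
plans = [(0.0,) * HORIZON, (0.0,) * HORIZON]  # last communicated plans
for step in range(10):
    # Each agent re-plans using the plan its neighbour communicated last step.
    plans = [mpc_plan(v[0], plans[1]), mpc_plan(v[1], plans[0])]
    # Apply only the first input of each plan (receding horizon), then repeat.
    v = [A * v[0] + B * plans[0][0] + C * plans[1][0],
         A * v[1] + B * plans[1][0] + C * plans[0][0]]
    print(f"t={step}: v1={v[0]:.3f}, v2={v[1]:.3f}")
```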

    The role of learning on industrial simulation design and analysis

    The capability of modeling real-world system operations has turned simulation into an indispensable problem-solving methodology for business system design and analysis. Today, simulation supports decisions ranging from sourcing to operations to finance, starting at the strategic level and proceeding towards the tactical and operational levels of decision-making. In such a dynamic setting, the practice of simulation goes beyond a static problem-solving exercise and requires integration with learning. This article discusses the role of learning in simulation design and analysis, motivated by the needs of industrial problems, and describes how selected tools of statistical learning can be utilized for this purpose.
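
    One common way statistical learning enters simulation design and analysis is metamodeling: fit a regression model to a modest number of expensive simulation runs, then explore what-if questions on the cheap surrogate. The sketch below shows that pattern under the assumption that scikit-learn is available; the single-server queue and the random-forest choice are stand-ins for illustration, not the article's specific tools.

```python
# Sketch of a statistical-learning metamodel for simulation analysis: a few
# "expensive" runs of a single-server queue are used to fit a regression
# surrogate, which then answers what-if queries cheaply. Assumes scikit-learn.
import random
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simulate_avg_wait(arrival_rate, service_rate, n_customers=5000):
    """'Expensive' black box: average waiting time in a single-server FIFO queue."""
    depart, total, t = 0.0, 0.0, 0.0
    for _ in range(n_customers):
        t += random.expovariate(arrival_rate)          # next arrival
        start = max(t, depart)                         # wait if the server is busy
        depart = start + random.expovariate(service_rate)
        total += start - t
    return total / n_customers

# Design of experiments: sample a few (arrival, service) settings and run the model.
rng = np.random.default_rng(0)
X = rng.uniform([0.5, 1.0], [0.95, 1.5], size=(40, 2))
y = np.array([simulate_avg_wait(a, s) for a, s in X])

# Statistical-learning step: a regression metamodel replaces further runs.
meta = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Cheap what-if analysis on the metamodel instead of the simulator.
queries = np.array([[0.7, 1.2], [0.9, 1.1], [0.6, 1.4]])
for (a, s), w in zip(queries, meta.predict(queries)):
    print(f"arrival={a:.2f}, service={s:.2f} -> predicted avg wait ~= {w:.2f}")
```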