    Median evidential c-means algorithm and its application to community detection

    Median clustering is of great value for partitioning relational data. In this paper, a new prototype-based clustering method, called Median Evidential C-Means (MECM), is proposed as an extension of median c-means and median fuzzy c-means within the theoretical framework of belief functions. The median variant relaxes the requirement that the objects be embedded in a metric space but constrains the prototypes to belong to the original data set. Owing to these properties, MECM can be applied to graph clustering problems. A community detection scheme for social networks based on MECM is investigated, and the resulting credal partitions of graphs, which are more refined than crisp and fuzzy ones, enable a better understanding of the graph structure. An initial prototype-selection scheme based on evidential semi-centrality is presented to avoid premature local convergence, and an evidential modularity function is defined to choose the optimal number of communities. Finally, experiments on synthetic and real data sets illustrate the performance of MECM and show how it differs from other methods.
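
    As a point of reference for the median variant described above, the following sketch implements only the crisp median c-means baseline that MECM extends: prototypes are restricted to objects of the data set, and only a precomputed dissimilarity matrix is required. The function name and the medoid-style update are illustrative assumptions; the belief-function machinery, credal partitions, and evidential semi-centrality initialization of the paper are not reproduced.

        import numpy as np

        def median_c_means(D, c, n_iter=50, rng=None):
            """Crisp median c-means on a dissimilarity matrix D (n x n):
            prototypes are constrained to be objects of the data set.
            Returns (prototype indices, cluster labels)."""
            rng = np.random.default_rng(rng)
            n = D.shape[0]
            prototypes = rng.choice(n, size=c, replace=False)
            labels = np.zeros(n, dtype=int)
            for _ in range(n_iter):
                labels = np.argmin(D[:, prototypes], axis=1)   # nearest prototype
                new_prototypes = prototypes.copy()
                for k in range(c):
                    members = np.flatnonzero(labels == k)
                    if members.size:
                        # medoid: object minimizing total dissimilarity to its cluster
                        within = D[np.ix_(members, members)].sum(axis=1)
                        new_prototypes[k] = members[np.argmin(within)]
                if np.array_equal(new_prototypes, prototypes):
                    break
                prototypes = new_prototypes
            return prototypes, labels

    For community detection, D would be a dissimilarity between graph vertices, and the evidential extension would replace the hard labels with a credal partition.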

    Ranking Alternatives on the Basis of the Intensity of Dominance and Fuzzy Logic within MAUT

    We introduce dominance measuring methods to derive a ranking of alternatives in multi-criteria decision-making problems with incomplete information, on the basis of Multi-Attribute Utility Theory (MAUT). We consider the situation where the alternative performances are represented by uniformly distributed intervals and there is imprecision concerning the decision-makers' preferences, expressed by means of classes of individual utility functions and imprecise weights represented by weight intervals or fuzzy weights, respectively. An additive multi-attribute utility model, considered a valid approach in most practical cases, is used to evaluate the alternatives under consideration. The approaches we propose are based on the dominance values between pairs of alternatives, which can be computed by linear programming; these are then transformed into dominance intensities, from which a dominance intensity measure is derived. The proposed methods are compared with other existing dominance measuring methods and other methodologies by means of Monte Carlo simulation techniques. Performance is analysed in terms of two measures of efficacy: the hit ratio, the proportion of all cases in which the method selects the same best alternative as the TRUE ranking, and the rank-order correlation, which represents how similar the overall rank structure of the alternatives is in the TRUE ranking and in the ranking derived from the method. The approaches are illustrated with an example consisting of the selection of intervention strategies to restore an aquatic ecosystem contaminated by radionuclides.
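
    To make the linear-programming step above concrete, the sketch below computes a pairwise dominance value as the worst-case additive utility difference over the admissible weights. It is a minimal illustration: the function name, the interval-weight representation, and the normalization sum(w) = 1 are assumptions, and the classes of utility functions and the fuzzy-weight case treated in the paper are not covered.

        import numpy as np
        from scipy.optimize import linprog

        def pairwise_dominance(U, w_lo, w_hi):
            """D[k, l] = min over feasible weights w of sum_i w_i * (U[k, i] - U[l, i]),
            subject to w_lo <= w <= w_hi and sum(w) = 1.
            U holds component utilities, one row per alternative."""
            n, m = U.shape
            D = np.full((n, n), np.nan)
            bounds = list(zip(w_lo, w_hi))
            for k in range(n):
                for l in range(n):
                    if k != l:
                        res = linprog(U[k] - U[l], A_eq=np.ones((1, m)), b_eq=[1.0],
                                      bounds=bounds)
                        D[k, l] = res.fun
            return D

    A strictly positive D[k, l] indicates that alternative k outperforms alternative l for every admissible weight vector; the dominance intensities and the final ranking described above are then derived from this matrix.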

    Dominance Measuring Method Performance under Incomplete Information about Weights

    In multi-attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of such methods and adapt them to deal with weight intervals, weights fitting independent normal probability distributions, or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology for dealing with incomplete information on weights, stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one.
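
    As a concrete anchor for the SMAA comparison mentioned above, the first-rank acceptability index can be estimated by sampling weight vectors uniformly from the simplex and counting how often each alternative attains the highest additive utility. The sketch below covers only this no-preference-information case; the interval, normal, and fuzzy weight settings analysed in the paper would change only how the weights are sampled, and the function name is an assumption.

        import numpy as np

        def first_rank_acceptability(U, n_samples=10_000, rng=None):
            """Fraction of weight vectors, sampled uniformly from the simplex,
            for which each alternative attains the best additive utility.
            U holds component utilities, one row per alternative."""
            rng = np.random.default_rng(rng)
            n, m = U.shape
            W = rng.dirichlet(np.ones(m), size=n_samples)   # uniform on the weight simplex
            winners = np.argmax(W @ U.T, axis=1)            # best alternative per sample
            return np.bincount(winners, minlength=n) / n_samples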

    Multiobjective strategies for New Product Development in the pharmaceutical industry

    New Product Development (NPD) constitutes a challenging problem in the pharmaceutical industry, due to the characteristics of the development pipeline. Formally, the NPD problem can be stated as follows: select a set of R&D projects from a pool of candidate projects in order to satisfy several criteria (economic profitability, time to market) while coping with the uncertain nature of the projects. More precisely, the recurrent key issues are to determine which projects to develop once target molecules have been identified, in what order, and with what level of resources. In this context, the proposed approach combines discrete event stochastic simulation (a Monte Carlo approach) with multiobjective genetic algorithms (NSGA-II, Non-dominated Sorting Genetic Algorithm II) to optimize the highly combinatorial portfolio management problem. Genetic Algorithms (GAs) are particularly attractive for this kind of problem, due to their ability to lead directly to the so-called Pareto front and to account for the combinatorial aspect. This work is illustrated with a case study involving nine interdependent new product candidates targeting three diseases. An analysis is performed on this test bench for the different pairs of criteria, for both the bi- and tricriteria optimizations: large portfolios cause resource queues and delay time to launch, and are eliminated by the bi- and tricriteria optimization strategies. The optimization strategy is thus useful for identifying promising candidate sequences. Time is an important criterion to consider simultaneously with the NPV and risk criteria. The order in which drugs are released into the pipeline is of great importance, as in scheduling problems.
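
    To illustrate the selection step, the sketch below pairs a toy Monte Carlo evaluation of a candidate portfolio with non-dominated filtering of the evaluated portfolios. The normal NPV model and the sequential-duration assumption are placeholders, and neither the paper's discrete event pipeline simulator nor NSGA-II itself is reproduced; an NSGA-II search would evolve the candidate portfolios rather than enumerate them, but the dominance test is the same.

        import numpy as np

        def evaluate_portfolio(projects, n_scenarios=1_000, rng=None):
            """Toy Monte Carlo evaluation of one candidate portfolio.
            `projects` is a list of (mean_npv, sd_npv, duration) tuples; returns
            (negative expected NPV, total duration), both expressed for minimization."""
            rng = np.random.default_rng(rng)
            npv = sum(rng.normal(mu, sd, n_scenarios) for mu, sd, _ in projects)
            duration = sum(d for _, _, d in projects)   # crude sequential-pipeline assumption
            return -float(np.mean(npv)), float(duration)

        def pareto_front(objectives):
            """Indices of non-dominated rows; every column is minimized."""
            objectives = np.asarray(objectives, dtype=float)
            keep = np.ones(len(objectives), dtype=bool)
            for i in range(len(objectives)):
                if keep[i]:
                    dominated = (np.all(objectives >= objectives[i], axis=1)
                                 & np.any(objectives > objectives[i], axis=1))
                    keep[dominated] = False
            return np.flatnonzero(keep)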

    Stochastic multi-period multi-product multi-objective Aggregate Production Planning model in multi-echelon supply chain

    In this paper, a multi-period, multi-product, multi-objective aggregate production planning (APP) model is proposed for an uncertain multi-echelon supply chain, considering financial risk, customer satisfaction, and human resource training. Three conflicting objective functions and several sets of real-world constraints are considered concurrently in the proposed APP model. Some parameters of the proposed model are assumed to be uncertain and are handled through a two-stage stochastic programming (TSSP) approach. The proposed TSSP is solved using three multi-objective solution procedures: the goal attainment technique, the modified ε-constraint method, and the STEM method. The whole procedure is applied to an automotive resin and oil supply chain as a real case study, in which the efficacy and applicability of the proposed approaches are illustrated in comparison with the existing experimental production planning method.
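
    As an illustration of the ε-constraint idea named above, the sketch below traces an approximate Pareto front for a generic bi-objective linear program by minimizing one objective while bounding the other. The deterministic linear form, the non-negativity bounds, and the function name are assumptions; the paper's two-stage stochastic APP model, its modified ε-constraint variant, the goal attainment technique, and the STEM method are not reproduced.

        import numpy as np
        from scipy.optimize import linprog

        def epsilon_constraint_front(c1, c2, A_ub, b_ub, n_points=10):
            """Approximate Pareto front for two linear objectives (both minimized)
            over {x >= 0 : A_ub x <= b_ub}; the feasible region is assumed bounded.
            Minimizes c1 while bounding c2 by a sweep of epsilon values."""
            c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
            lo = linprog(c2, A_ub=A_ub, b_ub=b_ub).fun       # best value of objective 2
            hi = -linprog(-c2, A_ub=A_ub, b_ub=b_ub).fun     # worst value of objective 2
            front = []
            for eps in np.linspace(lo, hi, n_points):
                res = linprog(c1, A_ub=np.vstack([A_ub, c2]),   # add c2 . x <= eps
                              b_ub=np.append(b_ub, eps))
                if res.success:
                    front.append((float(res.fun), float(c2 @ res.x)))
            return front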