
    CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features

    In this paper we propose a crossover operator for real-coded evolutionary algorithms that is based on the statistical theory of population distributions. The operator is built on the theoretical distribution of the gene values of the best individuals in the population. It takes into account the localization and dispersion features of the best individuals so that these features are inherited by the offspring, with the aim of optimizing the balance between exploration and exploitation in the search process. To test the efficiency and robustness of this crossover, we have used a set of test functions chosen with regard to different criteria, such as multimodality, separability, regularity and epistasis; with this set of functions we can draw conclusions as a function of the problem at hand. We analyze the results using ANOVA and multiple comparison statistical tests. As an example of how the crossover can be used to solve artificial intelligence problems, we have applied the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained exceed the performance of standard methods.
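    The abstract describes the operator only at a high level, so the following is a minimal Python sketch of the general idea under stated assumptions: estimate the localization (mean) and dispersion (standard deviation) of each gene over the best individuals, build a confidence interval from them, and draw offspring genes with respect to that interval. The interval width, the number of best individuals and the sampling rule are illustrative choices, not the exact CIXL2 definition.

```python
import numpy as np

def interval_crossover(population, fitness, parent, n_best=5, width=1.96, rng=None):
    """Sketch of a population-feature crossover: offspring genes are drawn by
    pulling the parent towards a point sampled inside a confidence interval
    built from the n_best individuals (minimization assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    best = population[np.argsort(fitness)[:n_best]]
    mean = best.mean(axis=0)
    half = width * best.std(axis=0, ddof=1) / np.sqrt(n_best)
    target = rng.uniform(mean - half, mean + half)    # point inside the interval
    alpha = rng.uniform(0.0, 1.0, size=parent.shape)  # per-gene mixing factor
    return parent + alpha * (target - parent)

# Toy usage on the sphere function
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 10))
fit = (pop ** 2).sum(axis=1)
child = interval_crossover(pop, fit, pop[0], rng=rng)
print(child)
```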

    A framework for the selection of the right nuclear power plant

    Civil nuclear reactors are used for the production of electrical energy. In the nuclear industry, vendors offer several reactor designs ranging in size from 35–45 MWe up to 1600–1700 MWe. Choosing the right design is a multidimensional problem, since a utility has to consider not only financial factors such as the levelised cost of electricity (LCOE) and the internal rate of return (IRR), but also so-called "external factors" like the required spinning reserve, the impact on local industry and the social acceptability. It is therefore necessary to balance the advantages and disadvantages of each design over the entire life cycle of the plant, usually 40–60 years. The scientific literature offers several techniques for solving this multidimensional problem. Unfortunately it does not seem possible to apply these methodologies as they are, since the problem is too complex and it is difficult to provide consistent and trustworthy expert judgments. This paper fills that gap by proposing a two-step framework for choosing the best nuclear reactor at the pre-feasibility study phase. The paper shows in detail how to use the methodology, comparing the choice of a small-medium reactor (SMR) with a large reactor (LR), characterised, according to the International Atomic Energy Agency (2006), by an electrical output respectively lower and higher than 700 MWe.
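    The abstract does not spell out the two steps of the framework, so the sketch below only illustrates the kind of multidimensional trade-off it describes: combining financial indicators (LCOE, IRR) with external factors into a single weighted score for an SMR and an LR alternative. All weights and normalized scores are hypothetical placeholders, not values or the method from the paper.

```python
# Hypothetical weighted-sum comparison of two reactor designs.
# Scores are normalized to [0, 1], with higher meaning better for the utility.
criteria_weights = {"LCOE": 0.35, "IRR": 0.25, "spinning_reserve": 0.15,
                    "local_industry": 0.15, "social_acceptability": 0.10}

scores = {
    "SMR": {"LCOE": 0.6, "IRR": 0.5, "spinning_reserve": 0.8,
            "local_industry": 0.7, "social_acceptability": 0.7},
    "LR":  {"LCOE": 0.8, "IRR": 0.7, "spinning_reserve": 0.4,
            "local_industry": 0.5, "social_acceptability": 0.5},
}

for design, s in scores.items():
    total = sum(criteria_weights[c] * s[c] for c in criteria_weights)
    print(f"{design}: weighted score = {total:.3f}")
```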

    Modelling fish habitat preference with a genetic algorithm-optimized Takagi-Sugeno model based on pairwise comparisons

    Species-environment relationships are used for evaluating the current status of target species and the potential impact of natural or anthropogenic changes on their habitat. Recent research has reported that the results are strongly affected by the quality of the data set used. The present study applied pairwise comparisons to modelling fish habitat preference with Takagi-Sugeno-type fuzzy habitat preference models (FHPMs) optimized by a genetic algorithm (GA). The model was compared with an FHPM optimized on the basis of mean squared error (MSE). Three independent data sets were used for training and testing of these models. The FHPMs based on pairwise comparisons produced variable habitat preference curves from 20 different initial conditions in the GA. This could be partially ascribed to the optimization process and the regulations assigned. This case study demonstrates the applicability and limitations of pairwise-comparison-based optimization in an FHPM. Future research should focus on a more flexible learning process to make good use of the advantages of pairwise comparisons.
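    As a rough illustration of the two fitness criteria the abstract contrasts, the sketch below scores a candidate habitat preference model either by mean squared error against observed preference or by agreement with pairwise comparisons of observations (which site of a pair is more preferred). It is a generic sketch under those assumptions, not the authors' Takagi-Sugeno model or GA setup.

```python
import numpy as np
from itertools import combinations

def mse_fitness(predicted, observed):
    """Lower is better: mean squared error between model output and observations."""
    return float(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2))

def pairwise_fitness(predicted, observed):
    """Higher is better: fraction of site pairs whose order of preference
    is reproduced by the model (a pairwise-comparison criterion)."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    pairs = list(combinations(range(len(observed)), 2))
    concordant = sum((predicted[i] - predicted[j]) * (observed[i] - observed[j]) > 0
                     for i, j in pairs)
    return concordant / len(pairs)

obs = [0.1, 0.4, 0.9, 0.3]    # observed preference at four sites (illustrative)
pred = [0.2, 0.5, 0.8, 0.1]   # candidate model output
print(mse_fitness(pred, obs), pairwise_fitness(pred, obs))
```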

    A review of methods for capacity identification in Choquet integral based multi-attribute utility theory: Applications of the Kappalab R package

    The application of multi-attribute utility theory whose aggregation process is based on the Choquet integral requires the prior identification of a capacity. The main approaches to capacity identification proposed in the literature are reviewed and their advantages and drawbacks are discussed. All the reviewed methods have been implemented within the Kappalab R package, and their application is illustrated on a detailed example.
    Keywords: Multi-criteria decision aiding; Multi-attribute utility theory; Choquet integral; Free software
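    For context, once a capacity has been identified, the aggregation itself is the discrete Choquet integral, which is straightforward to compute. The sketch below is a plain Python implementation of the standard formula with a hypothetical three-criterion capacity; it does not reproduce the Kappalab identification routines, which are R software.

```python
def choquet(x, capacity):
    """Discrete Choquet integral of x (dict: criterion -> value) with respect
    to a capacity (dict: frozenset of criteria -> weight, full set mapped to 1)."""
    order = sorted(x, key=x.get)              # criteria by increasing value
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        upper = frozenset(order[i:])          # criteria whose value >= x[c]
        total += (x[c] - prev) * capacity[upper]
        prev = x[c]
    return total

# Hypothetical capacity on three criteria {a, b, c}: monotone, mu(full set) = 1
mu = {frozenset("abc"): 1.0,
      frozenset("ab"): 0.7, frozenset("ac"): 0.8, frozenset("bc"): 0.6,
      frozenset("a"): 0.4, frozenset("b"): 0.3, frozenset("c"): 0.3,
      frozenset(): 0.0}

print(choquet({"a": 0.5, "b": 0.8, "c": 0.2}, mu))   # weighted aggregation in [0, 1]
```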

    A literature survey on genetic algorithm applications in mean-variance portfolio optimization

    The mean-variance portfolio optimization model introduced by Markowitz provides a fundamental answer to the problem of portfolio management. This model seeks an efficient frontier with the best trade-offs between the two conflicting objectives of maximizing return and minimizing risk. The problem of determining an efficient frontier is known to be NP-hard. Due to the complexity of the problem, genetic algorithms have been widely employed by a growing number of researchers to solve it. In this study, genetic algorithm implementations for mean-variance portfolio optimization reported in the recently published literature are reviewed. The main specifications of the problems studied and of the suggested genetic algorithms are summarized.
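    For reference, the mean-variance model that these genetic algorithm studies address can be written in its standard Markowitz form (shown here with long-only weights; the cardinality and other real-world constraints that usually motivate the use of genetic algorithms are omitted):

```latex
\begin{aligned}
\min_{w \in \mathbb{R}^n} \quad & w^{\top} \Sigma w \\
\text{s.t.} \quad & \mu^{\top} w \ge R, \\
& \mathbf{1}^{\top} w = 1, \\
& w \ge 0,
\end{aligned}
```

    where Σ is the covariance matrix of asset returns, μ the vector of expected returns and R a target return; varying R traces out the efficient frontier of return-risk trade-offs.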

    An Alternative Approach to the Evaluation of Goal Hierarchies among Farmers

    Results of a study of the goal orderings of Saskatchewan farmers who participate in the province's FARMLAB Program are presented. We use the method of fuzzy pairwise comparisons, which allows the respondent to indicate a degree of preference between two alternative goal statements, thereby providing more information than in the binary case. From the survey data, ratio-scale scores are constructed for eight goal statements, and these are regressed on a set of farm enterprise and household characteristics and a psychological locus-of-control (I-E) score. The empirical results indicate that the goodness-of-fit measures are better than those obtained by other researchers, perhaps because a psychological measure (the I-E score) is included as an explanatory variable for goal orderings.
    Keywords: Farm Management
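    As a minimal sketch of the general idea, the code below turns a matrix of graded (fuzzy) pairwise preferences into scores, assuming responses r[i, j] in [0, 1] with r[i, j] + r[j, i] = 1. The preference matrix and the row-sum scoring rule are illustrative assumptions, not necessarily the construction used in the paper.

```python
import numpy as np

# Hypothetical fuzzy pairwise preference matrix for four goal statements:
# r[i, j] is the degree to which goal i is preferred to goal j,
# with r[i, j] + r[j, i] = 1 and 0.5 on the diagonal (indifference).
r = np.array([
    [0.5, 0.7, 0.6, 0.8],
    [0.3, 0.5, 0.4, 0.6],
    [0.4, 0.6, 0.5, 0.7],
    [0.2, 0.4, 0.3, 0.5],
])

# One simple scoring rule: total preference of each goal over the others,
# rescaled so the scores sum to one (a ratio-like scale).
raw = r.sum(axis=1) - 0.5     # drop the self-comparison on the diagonal
scores = raw / raw.sum()
print(np.round(scores, 3))
```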

    Input variable selection in time-critical knowledge integration applications: A review, analysis, and recommendation paper

    This is the post-print version of the final paper published in Advanced Engineering Informatics. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright @ 2013 Elsevier B.V.

    The purpose of this research is twofold: first, to undertake a thorough appraisal of existing Input Variable Selection (IVS) methods within the context of time-critical and computation-resource-limited dimensionality reduction problems; second, to demonstrate improvements to, and the application of, a recently proposed time-critical sensitivity analysis method called EventTracker in an environmental science industrial use case, namely sub-surface drilling. Producing time-critical, accurate knowledge about the state of a system (effect) under computational and data acquisition (cause) constraints is a major challenge, especially if the knowledge required is critical to the system operation, where the safety of operators or the integrity of costly equipment is at stake. Understanding and interpreting a chain of interrelated events, predicted or unpredicted, that may or may not result in a specific state of the system is the core challenge of this research. The main objective is then to identify which set of input data signals has a significant impact on the set of system state information (i.e., the outputs). Through a cause-effect analysis technique, the proposed method supports the filtering of unsolicited data that can otherwise clog up the communication and computational capabilities of a standard supervisory control and data acquisition system. The paper analyzes the performance of input variable selection techniques from a series of perspectives. It then expands the categorization and assessment of sensitivity analysis methods in a structured framework that takes into account the relationship between inputs and outputs, the nature of their time series, and the computational effort required. The outcome of this analysis is that established methods have limited suitability for time-critical variable selection applications. By way of a geological drilling monitoring scenario, the suitability of the proposed EventTracker sensitivity analysis method for use in high-volume and time-critical input variable selection problems is demonstrated.
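    The EventTracker method itself is not specified in the abstract, so the sketch below only illustrates the general shape of a filter-style input variable selection step: rank candidate input signals by a simple dependence measure against the output and keep the strongest ones. The correlation-based ranking and the toy data are generic stand-ins, not the EventTracker sensitivity measure.

```python
import numpy as np

def rank_inputs(X, y, keep=3):
    """Generic filter-style input variable selection: rank the columns of X by
    the absolute Pearson correlation with the output y and return the indices
    of the `keep` strongest signals together with all scores."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    scores = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(X.shape[1])])
    order = np.argsort(scores)[::-1]
    return order[:keep], scores

# Toy usage: 200 samples of 6 candidate signals, output driven by signals 0 and 3
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=200)
selected, scores = rank_inputs(X, y)
print(selected, np.round(scores, 2))
```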

    Measuring Technical Efficiency of Dairy Farms with Imprecise Data: A Fuzzy Data Envelopment Analysis Approach

    This article integrates fuzzy set theory into the Data Envelopment Analysis (DEA) framework to compute technical efficiency scores when input and output data are imprecise. The underlying assumption in conventional DEA is that input and output data are measured with precision. However, production agriculture takes place in an uncertain environment and, in some situations, input and output data may be imprecise. We present an approach to measuring efficiency when the data are known only to lie within specified intervals and empirically illustrate this approach using a group of 34 dairy producers in Pennsylvania. Compared to conventional DEA scores, which are point estimates, the computed fuzzy efficiency scores allow the decision maker to trace the performance of a decision-making unit at different possibility levels.
    Keywords: fuzzy set theory; Data Envelopment Analysis; membership function; α-cut level; technical efficiency; Farm Management; Production Economics; Productivity Analysis; Research Methods/Statistical Methods; Risk and Uncertainty; D24; Q12; C02; C44; C61
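    The abstract describes interval (α-cut) data but not the exact model, so the following sketch only shows the standard building block: the input-oriented CCR multiplier linear program for one decision-making unit, evaluated at the optimistic end of the intervals (its own best-case data against the other units' worst-case data). The interval handling and the toy data are assumptions for illustration, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import linprog

def optimistic_ccr_efficiency(o, X_lo, X_hi, Y_lo, Y_hi):
    """Upper-bound CCR efficiency of DMU `o` under interval data:
    DMU o uses its best-case data (low inputs, high outputs) while the
    other DMUs use their worst-case data (high inputs, low outputs)."""
    n, m = X_lo.shape          # n DMUs, m inputs
    s = Y_lo.shape[1]          # s outputs
    c = np.concatenate([-Y_hi[o], np.zeros(m)])           # maximize u . y_o (upper)
    A_eq = np.concatenate([np.zeros(s), X_lo[o]])[None]   # v . x_o (lower) = 1
    b_eq = [1.0]
    A_ub, b_ub = [], []
    for j in range(n):
        yj = Y_hi[o] if j == o else Y_lo[j]
        xj = X_lo[o] if j == o else X_hi[j]
        A_ub.append(np.concatenate([yj, -xj]))            # u . y_j - v . x_j <= 0
        b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Toy interval data: 3 DMUs, 2 inputs, 1 output (illustrative values only)
X_lo = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])
X_hi = X_lo * 1.1
Y_lo = np.array([[1.0], [1.2], [0.9]])
Y_hi = Y_lo * 1.1
print([round(optimistic_ccr_efficiency(o, X_lo, X_hi, Y_lo, Y_hi), 3) for o in range(3)])
```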

    Dominance Measuring Method Performance under Incomplete Information about Weights

    In multi-attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare them with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of such methods and to adapt them to deal with weight intervals, weights fitting independent normal probability distributions, or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology for dealing with incomplete information on weights, stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one.
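    To make the Monte Carlo idea concrete, the sketch below estimates SMAA-style first-rank acceptability indices for additive value functions: weights are sampled uniformly from the simplex (restricted here to hypothetical interval constraints by rejection), and each alternative's index is the share of samples in which it achieves the highest weighted value. The value matrix and the weight intervals are illustrative assumptions, not data from the paper.

```python
import numpy as np

def acceptability_indices(values, w_lo, w_hi, n_samples=20000, seed=0):
    """Estimate the share of feasible weight vectors (uniform on the simplex,
    restricted to [w_lo, w_hi] by rejection) under which each alternative
    attains the highest weighted additive value."""
    rng = np.random.default_rng(seed)
    n_alt, n_crit = values.shape
    wins = np.zeros(n_alt)
    accepted = 0
    while accepted < n_samples:
        w = rng.dirichlet(np.ones(n_crit))      # uniform sample on the simplex
        if np.all(w >= w_lo) and np.all(w <= w_hi):
            wins[np.argmax(values @ w)] += 1
            accepted += 1
    return wins / n_samples

# Three alternatives scored on three criteria (illustrative values in [0, 1])
values = np.array([[0.9, 0.4, 0.5],
                   [0.6, 0.8, 0.6],
                   [0.5, 0.6, 0.9]])
w_lo = np.array([0.2, 0.1, 0.1])                # hypothetical weight intervals
w_hi = np.array([0.6, 0.6, 0.6])
print(np.round(acceptability_indices(values, w_lo, w_hi), 3))
```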