
    A literature review of genetic algorithm applications in mean-variance portfolio optimization

    The mean-variance portfolio optimization model, introduced by Markowitz, provides a fundamental answer to the problem of portfolio management. This model seeks an efficient frontier with the best trade-offs between the two conflicting objectives of maximizing return and minimizing risk. The problem of determining an efficient frontier is known to be NP-hard. Due to the complexity of the problem, genetic algorithms have been widely employed by a growing number of researchers. In this study, we review genetic algorithm implementations for mean-variance portfolio optimization in the recently published literature. The main specifications of the problems studied and of the suggested genetic algorithms are summarized.
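    The approach surveyed above can be illustrated with a minimal sketch of a genetic algorithm maximizing the classic mean-variance objective (expected return minus a risk-aversion multiple of portfolio variance). All numbers, parameter names, and GA settings below are hypothetical, not taken from any surveyed paper:

    ```python
    # Minimal GA sketch for the Markowitz mean-variance objective.
    # Assumed, illustrative inputs: 3 assets with invented statistics.
    import random

    MU = [0.08, 0.12, 0.10]                      # expected asset returns
    SIGMA = [[0.10, 0.02, 0.01],                 # covariance matrix
             [0.02, 0.15, 0.03],
             [0.01, 0.03, 0.12]]
    RISK_AVERSION = 3.0                          # trade-off parameter

    def normalize(w):
        """Clip negatives and rescale so weights sum to 1 (long-only)."""
        w = [max(x, 0.0) for x in w]
        total = sum(w) or 1.0
        return [x / total for x in w]

    def fitness(w):
        """Expected return minus risk_aversion * variance; higher is better."""
        ret = sum(m * x for m, x in zip(MU, w))
        var = sum(w[i] * SIGMA[i][j] * w[j]
                  for i in range(len(w)) for j in range(len(w)))
        return ret - RISK_AVERSION * var

    def evolve(pop_size=40, generations=200, seed=0):
        rng = random.Random(seed)
        n = len(MU)
        pop = [normalize([rng.random() for _ in range(n)])
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]        # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                alpha = rng.random()             # blend crossover
                child = [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
                child = [x + rng.gauss(0, 0.05) for x in child]  # mutation
                children.append(normalize(child))
            pop = parents + children             # elitism: parents survive
        return max(pop, key=fitness)

    best = evolve()
    ```

    The surveyed works typically add cardinality and bounding constraints and trace the whole efficient frontier by sweeping the risk-aversion parameter; this sketch optimizes a single point on that frontier.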

    Designing labeled graph classifiers by exploiting the Rényi entropy of the dissimilarity representation

    Representing patterns as labeled graphs is becoming increasingly common in the broad field of computational intelligence. Accordingly, a wide repertoire of pattern recognition tools, such as classifiers and knowledge discovery procedures, are nowadays available and tested for various datasets of labeled graphs. However, the design of effective learning procedures operating in the space of labeled graphs is still a challenging problem, especially from the computational complexity viewpoint. In this paper, we present a major improvement of a general-purpose classifier for graphs, which is conceived on an interplay between dissimilarity representation, clustering, information-theoretic techniques, and evolutionary optimization algorithms. The improvement focuses on a specific key subroutine devised to compress the input data. We prove different theorems which are fundamental to the setting of the parameters controlling such a compression operation. We demonstrate the effectiveness of the resulting classifier by benchmarking the developed variants on well-known datasets of labeled graphs, considering as distinct performance indicators the classification accuracy, computing time, and parsimony in terms of structural complexity of the synthesized classification models. The results show state-of-the-art standards in terms of test set accuracy and a considerable speed-up in computing time.
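    The central quantity, the Rényi entropy of a set of dissimilarity values, can be sketched as follows. This is a simplified histogram-based estimate, not the estimator from the paper; the order alpha, the bin count, and the sample data are illustrative assumptions:

    ```python
    # Histogram-based estimate (an assumption, for illustration) of the
    # order-alpha Renyi entropy over a set of pairwise dissimilarities.
    import math

    def renyi_entropy(values, alpha=2.0, bins=10):
        """H_alpha = log(sum_i p_i^alpha) / (1 - alpha) over a histogram."""
        lo, hi = min(values), max(values)
        width = (hi - lo) / bins or 1.0          # guard degenerate data
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        probs = [c / len(values) for c in counts if c]
        return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

    # A spread-out set of dissimilarities carries more entropy than a
    # concentrated one, which is what the compression subroutine exploits.
    flat = renyi_entropy([i / 100 for i in range(100)])
    peaked = renyi_entropy([0.0] * 99 + [1.0])
    ```

    Intuitively, low entropy in the dissimilarity values signals redundancy that the compression step can remove without hurting the classifier.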

    Attribute selection via multi-objective evolutionary computation applied to multi-skill contact center data classification

    Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers; it is particularly fundamental when the number of attributes is relatively high. Evolutionary computation has already proven itself to be a very effective choice to consistently reduce the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We propose the application of the multi-objective evolutionary algorithm ENORA to the task of feature selection for multi-class classification of data extracted from an integrated multi-channel multi-skill contact center, which includes technical, service, and central data for each session. Additionally, we propose a methodology that integrates feature selection for classification, model evaluation, and decision making to choose the most satisfactory model according to an "a posteriori" process in a multi-objective context. We validate our results by comparing the performance and the classification rate against the well-known multi-objective evolutionary algorithm NSGA-II. Finally, the best obtained solution is validated by a data expert's semantic interpretation of the classifier.
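    The multi-objective trade-off at the heart of this kind of feature selection, minimizing classification error while also minimizing the number of selected attributes, rests on Pareto dominance. The following sketch shows that core notion; the candidate (error, attribute-count) pairs are invented for illustration and are not results from the paper:

    ```python
    # Pareto dominance for two minimized objectives:
    # (classification error, number of selected attributes).

    def dominates(a, b):
        """a dominates b if no worse in every objective and strictly
        better in at least one (both objectives minimized)."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(candidates):
        """Keep only the non-dominated (error, n_attributes) pairs."""
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates if o != c)]

    # Hypothetical candidate subsets found during an evolutionary run.
    candidates = [(0.10, 12), (0.12, 5), (0.15, 5), (0.09, 20), (0.30, 2)]
    front = pareto_front(candidates)
    ```

    Algorithms such as ENORA and NSGA-II maintain a population approximating this front, and the "a posteriori" step described above is the decision maker picking one solution from it.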

    Induction of accurate and interpretable fuzzy rules from preliminary crisp representation

    This paper proposes a novel approach for building transparent knowledge-based systems by generating accurate and interpretable fuzzy rules. The learning mechanism reported here induces fuzzy rules using only predefined fuzzy labels that reflect prescribed notations and domain expertise, thereby ensuring transparency in the knowledge model adopted for problem solving. It works by mapping every coarsely learned crisp production rule in the knowledge base onto a set of potentially useful fuzzy rules, which serves as an initial step towards an intuitive technique for similarity-based rule generalisation. This is followed by a procedure that locally selects a compact subset of the emerging fuzzy rules, so that the resulting subset collectively generalises the underlying original crisp rule. The outcome of this local procedure forms the input to a global genetic search process, which seeks a trade-off between accuracy and complexity of the eventually induced fuzzy rule base while maintaining transparency. Systematic experimental results are provided to demonstrate that the induced fuzzy knowledge base is of high performance and interpretability.
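    A small sketch of the building blocks such a system relies on, predefined fuzzy labels and conjunctive rule firing, is given below. This is a generic illustration, not the paper's algorithm; the triangular label shapes, the variable name, and the breakpoints are all invented:

    ```python
    # Generic fuzzy-label machinery (illustrative, not the paper's method).

    def triangular(a, b, c):
        """Membership function for a triangular fuzzy label peaking at b."""
        def mu(x):
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
        return mu

    # Predefined labels for a hypothetical 'temperature' variable; in the
    # paper's setting, such labels come from domain expertise and are
    # fixed before learning, which is what keeps the rules interpretable.
    LOW = triangular(0, 10, 20)
    MEDIUM = triangular(10, 20, 30)
    HIGH = triangular(20, 30, 40)

    def rule_firing(inputs, antecedents):
        """Firing strength of a conjunctive rule: min of the membership
        degrees of each input in its corresponding label (min t-norm)."""
        return min(mu(x) for mu, x in zip(antecedents, inputs))
    ```

    Because only these predefined labels appear in the induced rules, every rule reads directly in domain terms ("IF temperature IS LOW AND ..."), which is the transparency property the abstract emphasizes.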