9 research outputs found

    Restart strategies for GRASP with path-relinking heuristics

    GRASP with path-relinking is a hybrid metaheuristic, or stochastic local search (Monte Carlo) method, for combinatorial optimization. A restart strategy in GRASP with path-relinking heuristics is a set of iterations {i1, i2, ...} at which the heuristic is restarted from scratch using a new seed for the random number generator. Restart strategies have been shown to speed up stochastic local search algorithms. In this paper, we propose a new restart strategy for GRASP with path-relinking heuristics. We illustrate the speedup obtained with our restart strategy on GRASP with path-relinking heuristics for the maximum cut problem, the maximum weighted satisfiability problem, and the private virtual circuit routing problem.
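    The restart mechanism described above can be sketched as a thin wrapper around the iterative heuristic. In this minimal Python sketch, `grasp_iteration` is a hypothetical stand-in for one GRASP with path-relinking iteration (the paper's actual heuristic is not reproduced here), and the set of restart iterations {i1, i2, ...} is passed in explicitly:

```python
import random

def grasp_iteration(rng):
    # Hypothetical stand-in for one GRASP + path-relinking iteration:
    # returns the objective value of the solution it found.
    return rng.random()

def grasp_with_restarts(restart_points, total_iters, seed=0):
    """Run a stochastic search, restarting from scratch with a new
    random seed at each iteration index in restart_points."""
    best = float("-inf")
    rng = random.Random(seed)
    for i in range(1, total_iters + 1):
        if i in restart_points:
            seed += 1                  # restart: fresh random number stream
            rng = random.Random(seed)
        best = max(best, grasp_iteration(rng))
    return best

# e.g. restart at fixed points; a practical strategy might use
# geometrically growing intervals instead
best = grasp_with_restarts({100, 200, 300}, total_iters=400)
```

The wrapper only needs the set of restart points, so any schedule (fixed-interval, geometric, adaptive) plugs in without touching the heuristic itself.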

    A Tabu Search Heuristic for a Generalized Quadratic Assignment Problem

    The generalized quadratic assignment problem (GQAP) is the task of assigning a set of facilities to a set of locations such that the sum of the assignment and transportation costs is minimized. The facilities may have different space requirements, the locations may have varying space capacities, and multiple facilities may be assigned to each location as long as its space capacity is not exceeded. In this paper, an application of the GQAP is presented for assigning a set of machines to a set of locations on the plant floor. Construction algorithms and a simple tabu search (TS) heuristic are developed for the GQAP. A set of test problems available in the literature was used to evaluate the performance of the TS heuristic with different construction algorithms. The results show that the simple TS heuristic is effective for solving the GQAP.
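    The GQAP objective and its capacity constraint can be written down compactly. The sketch below, with hypothetical parameter names, evaluates a candidate assignment: the linear term charges each facility's assignment cost, the quadratic term charges flow times distance between every pair of assigned locations, and feasibility requires that the facilities placed at each location fit within its space capacity:

```python
def gqap_cost(assign, assign_cost, flow, dist, space_req, capacity):
    """Objective value and feasibility of a GQAP solution.
    assign[i] = location of facility i (illustrative formulation)."""
    n = len(assign)
    # capacity check: total space of the facilities at each location
    used = {}
    for i, loc in enumerate(assign):
        used[loc] = used.get(loc, 0) + space_req[i]
    feasible = all(used[l] <= capacity[l] for l in used)
    # linear assignment cost plus quadratic flow * distance cost
    cost = sum(assign_cost[i][assign[i]] for i in range(n))
    cost += sum(flow[i][j] * dist[assign[i]][assign[j]]
                for i in range(n) for j in range(n) if i != j)
    return cost, feasible
```

A construction algorithm or tabu search move evaluator would call a function like this (typically incrementally) for every candidate assignment it considers.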

    Applying the big bang-big crunch metaheuristic to large-sized operational problems

    In this study, we investigate the capability of the big bang-big crunch (BBBC) metaheuristic for handling operational problems, including combinatorial optimization problems. The BBBC is inspired by the evolution theory of the universe in physics and astronomy and has two main phases: the big bang and the big crunch. The big bang phase creates a population of random initial solutions, while the big crunch phase shrinks these solutions into one elite solution represented by a mass center. This study examines the BBBC's effectiveness on assignment and scheduling problems. The algorithm was enhanced by incorporating an elite pool of diverse, high-quality solutions; a simple descent heuristic as a local search method; implicit recombination; Euclidean distance; a dynamic population size; and elitism strategies. Together, these strategies provide a balanced search over a diverse, good-quality population. The investigation is conducted by comparing the proposed BBBC with similar metaheuristics on three classes of combinatorial optimization problems, namely the quadratic assignment, bin packing, and job shop scheduling problems, on which the incorporated strategies have a marked impact on the BBBC's performance. Experiments showed that the BBBC maintains a good balance between diversity and quality, produces high-quality solutions, and outperforms comparable metaheuristics (e.g. swarm intelligence and evolutionary algorithms) reported in the literature.
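    The two phases can be sketched on a continuous toy objective (the study itself targets combinatorial problems, so this only illustrates the mechanics): the big crunch contracts the population to a fitness-weighted mass center, and the big bang scatters a new population around that center with noise that shrinks as iterations proceed:

```python
import random

def bbbc_minimize(f, dim, pop_size=30, iters=50, bound=5.0, seed=1):
    """Minimal Big Bang-Big Crunch sketch for continuous minimization.
    Parameter names and settings are illustrative assumptions."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(pop_size)]
    best = min(pop, key=f)
    for k in range(1, iters + 1):
        # big crunch: contract the population to a fitness-weighted mass center
        w = [1.0 / (1e-9 + f(x)) for x in pop]
        total = sum(w)
        center = [sum(wi * x[d] for wi, x in zip(w, pop)) / total
                  for d in range(dim)]
        cand = min(pop + [center], key=f)
        if f(cand) < f(best):
            best = cand
        # big bang: scatter new solutions around the center; noise shrinks with k
        pop = [[c + rng.gauss(0, bound / k) for c in center]
               for _ in range(pop_size)]
    return best

# sphere function: optimum at the origin
sol = bbbc_minimize(lambda x: sum(v * v for v in x), dim=2)
```

The enhancements described in the abstract (elite pool, descent local search, dynamic population size) would slot in around the crunch and bang steps of this loop.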

    Extraction of classification rules from databases using GRASP-based metaheuristic procedures

    Advisor: Prof. Dr. Maria Teresinha Arns Steiner. Doctoral thesis, Universidade Federal do Paraná, Setor de Tecnologia, Graduate Program in Numerical Methods in Engineering. Defended: Curitiba, 28/05/2014. Includes references.
    The process of knowledge management in the most diverse areas – industries, hospitals, schools, and banks, among others – requires constant attention to the multiplicity of decisions to be made about their activities. Decision making calls for scientific techniques that ensure maximum accuracy. This study uses mathematical tools for the purpose of extracting knowledge from databases. The aim is to propose a new metaheuristic based on GRASP (Greedy Randomized Adaptive Search Procedure) as a Data Mining (DM) tool within the process known as Knowledge Discovery in Databases (KDD), for the task of extracting classification rules from databases. The proposed methodology has three major blocks following the KDD process: data pre-processing, in which every predictor attribute is encoded as one or more binary coordinates; application of the metaheuristic itself to extract classification rules; and construction of the classifier, in which the extracted rules are ordered according to criteria based on "support" and "confidence". To validate this proposal, the methodology was implemented and applied to seven different databases with varying numbers of instances, attributes, and classes. The results show high predictive accuracy, reaching, for example, 98% accuracy on the zoo database, 97% on iris, and 94% on wine.
    To corroborate these results, the proposed metaheuristic was compared with the decision tree algorithms BFTree, RepTree, and J4.8. In six of the seven databases analyzed, the implemented proposal is superior, in terms of accuracy, to the decision tree algorithms used. It is therefore concluded that the proposed metaheuristic meets the prerequisites for the task of extracting knowledge from databases.
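    The classifier-construction step orders extracted rules by support and confidence. A minimal computation of those two measures for a single rule "IF antecedent THEN class", over records given as (attributes, class) pairs, can look like the following (the attribute encoding here is illustrative, not the thesis's binary-coordinate encoding):

```python
def rule_support_confidence(records, antecedent, consequent_class):
    """Support and confidence of a classification rule.
    antecedent: dict mapping attribute name -> required value."""
    n = len(records)
    # records whose attributes satisfy every condition in the antecedent
    covered = [r for r in records
               if all(r[0].get(a) == v for a, v in antecedent.items())]
    # covered records that also carry the rule's predicted class
    correct = [r for r in covered if r[1] == consequent_class]
    support = len(correct) / n
    confidence = len(correct) / len(covered) if covered else 0.0
    return support, confidence
```

Rules with higher support and confidence are placed earlier in the classifier, so a new instance is labeled by the first (strongest) rule it matches.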

    Incorporating Memory and Learning Mechanisms Into Meta-RaPS

    Due to the rapid increase in the dimensions and complexity of real-life problems, it has become more difficult to find optimal solutions using only exact mathematical methods. The need to find near-optimal solutions in an acceptable amount of time is a challenge when developing more sophisticated approaches. A proper answer to this challenge can be the implementation of metaheuristic approaches. However, a more powerful answer might be reached by incorporating intelligence into metaheuristics. Meta-RaPS (Metaheuristic for Randomized Priority Search) is a metaheuristic that creates high-quality solutions for discrete optimization problems. It is proposed that incorporating memory and learning mechanisms into Meta-RaPS, which is currently classified as a memoryless metaheuristic, can help the algorithm produce higher-quality results. The proposed Meta-RaPS versions were created by taking different perspectives on learning. The first approach taken is Estimation of Distribution Algorithms (EDA), a stochastic learning technique that creates a probability distribution for each decision variable to generate new solutions. The second Meta-RaPS version was developed by utilizing a machine learning algorithm, Q-Learning, which has been successfully applied to optimization problems whose output is a sequence of actions. In the third Meta-RaPS version, Path Relinking (PR) was implemented as a post-optimization method in which the new algorithm learns good attributes by memorizing the best solutions and follows them to reach better solutions. The fourth proposed version of Meta-RaPS presented another form of learning through its ability to adaptively tune parameters. The efficiency of these approaches motivated us to redesign Meta-RaPS by removing the improvement phase and adding a more sophisticated Path Relinking method. The new Meta-RaPS could solve even the largest problems in much less time while maintaining the quality of its solutions.
    To evaluate their performance, all introduced versions were tested on the 0-1 Multidimensional Knapsack Problem (MKP). A comparison of the proposed algorithms showed Meta-RaPS PR and Meta-RaPS Q-Learning to have the best and worst performance, respectively. Even so, all of them outperformed other approaches to the 0-1 MKP reported in the literature.
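    The core of Meta-RaPS is a randomized greedy construction controlled by a priority percentage p and a restriction percentage r. The sketch below applies that idea to the 0-1 multidimensional knapsack used as the test bed; the priority rule (value per unit of aggregate weight) and the parameter values are illustrative assumptions, not the dissertation's exact settings:

```python
import random

def metaraps_mkp(values, weights, capacities, p=0.3, r=0.1, seed=0):
    """One Meta-RaPS-style randomized construction for the 0-1 MKP.
    weights[k][i] = weight of item i in dimension k."""
    rng = random.Random(seed)
    m = len(capacities)
    remaining = list(capacities)
    chosen, items = [], list(range(len(values)))

    def fits(i):
        return all(weights[k][i] <= remaining[k] for k in range(m))

    def priority(i):  # value per unit of aggregate weight
        return values[i] / (1e-9 + sum(weights[k][i] for k in range(m)))

    while True:
        cand = [i for i in items if fits(i)]
        if not cand:
            break
        best = max(priority(i) for i in cand)
        if rng.random() < p:           # with probability p: pure greedy move
            pick = max(cand, key=priority)
        else:                          # otherwise: random pick from the
            rcl = [i for i in cand     # candidates within r% of the best
                   if priority(i) >= (1 - r) * best]
            pick = rng.choice(rcl)
        chosen.append(pick)
        items.remove(pick)
        for k in range(m):
            remaining[k] -= weights[k][pick]
    return chosen, sum(values[i] for i in chosen)

chosen, value = metaraps_mkp([10, 7, 3], [[5, 4, 2]], [6])
```

The memory and learning versions described above replace or bias this construction: an EDA would learn per-item selection probabilities, and PR would steer solutions toward memorized elite attributes.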

    Cross-layer modeling and optimization of next-generation internet networks

    Scaling traditional telecommunication networks so that they can cope with the volume of future traffic demands and the stringent European Commission (EC) regulations on emissions would entail unaffordable investments. For this very reason, the design of an innovative ultra-high-bandwidth, power-efficient network architecture is nowadays a central topic within the research community. So far, the independent evolution of network layers has resulted in isolated, and hence far-from-optimal, contributions, which have eventually led to the issues today's networks are facing, such as an inefficient energy strategy, limited network scalability and flexibility, reduced network manageability, and increased overall network and customer service costs. Consequently, there is currently broad consensus among network operators and the research community that cross-layer interaction and coordination are fundamental to the proper architectural design of next-generation Internet networks. This thesis actively contributes to this goal by addressing the modeling, optimization, and performance analysis of a set of potential technologies to be deployed in future cross-layer network architectures. By applying a transversal design approach (i.e., jointly considering several network layers), we aim to maximize the integration of the different network layers involved in each specific problem. To this end, Part I provides a comprehensive evaluation of optical transport networks (OTNs) based on layer 2 (L2) sub-wavelength switching (SWS) technologies, also taking into consideration the impact of physical layer impairments (PLIs) (L0 phenomena). Indeed, recent and relevant advances in optical technologies have dramatically increased the impact that PLIs have on optical signal quality, particularly in the context of SWS networks.
    Then, in Part II of the thesis, we present a set of case studies showing that applying operations research (OR) methodologies in the design/planning stage of future cross-layer Internet network architectures leads to the successful joint optimization of key network performance indicators (KPIs) such as cost (i.e., CAPEX/OPEX), resource usage, and energy consumption. OR can definitely play an important role by allowing network designers/architects to obtain good near-optimal solutions to real-sized problems within practical running times.
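    One simple way an OR model can jointly optimize several KPIs is to scalarize them into a single objective, e.g. a weighted sum over candidate designs. The sketch below is purely illustrative (the KPI names, values, and weights are invented, and the thesis's actual optimization models are not reproduced):

```python
def joint_kpi_score(design, weights):
    """Scalarize several KPIs (here: cost, resource usage, energy) into
    one objective via a weighted sum -- one basic way to trade off the
    KPIs mentioned in the abstract. Lower is better."""
    return sum(weights[k] * design[k] for k in weights)

# hypothetical candidate network designs and KPI weights
designs = {
    "A": {"capex": 100, "resources": 0.7, "energy": 50},
    "B": {"capex": 120, "resources": 0.5, "energy": 35},
}
w = {"capex": 1.0, "resources": 40.0, "energy": 2.0}
best = min(designs, key=lambda d: joint_kpi_score(designs[d], w))
```

Real network planning models would instead pose this as a (mixed-integer) program over routing and capacity variables, but the weighted-sum trade-off idea carries over.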

    Module Identification for Biological Networks

    Advances in high-throughput techniques have enabled researchers to produce large-scale data on molecular interactions. Systematic analysis of these large-scale interactome datasets based on their graph representations has the potential to yield a better understanding of the functional organization of the corresponding biological systems. One way to chart the underlying cellular functional organization is to identify functional modules in these biological networks. However, module identification in biological networks poses several challenges. First, unlike in social and computer networks, molecules work together with different interaction patterns, and groups of molecules working together may have different sizes. Second, node degrees in biological networks obey a power-law distribution, meaning there are many nodes with very low degrees and a few nodes with high degrees. Third, molecular interaction data contain large numbers of false positives and false negatives. In this dissertation, we propose computational algorithms to overcome these challenges. To identify functional modules based on interaction patterns, we develop efficient algorithms based on the concept of block modeling. We propose a subgradient Frank-Wolfe algorithm with a path generation method to identify functional modules and recognize the functional organization of biological networks. Additionally, inspired by random walks on networks, we propose a novel two-hop random walk strategy to detect fine-sized functional modules based on interaction patterns. To overcome the degree heterogeneity problem, we propose an algorithm to identify functional modules whose topological structure is both well separated from the rest of the network and densely connected.
    To minimize the impact of noisy interactions in biological networks, we propose methods to detect conserved functional modules across multiple biological networks by integrating topological and orthology information across the networks. We compare every algorithm we developed with state-of-the-art algorithms on several biological networks. The comparison results on known gold-standard biological function annotations show that our methods can enhance the accuracy of predicting protein complexes and protein functions.
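    The two-hop random walk idea can be illustrated on a small graph: starting from a node, take two uniform random-walk steps; nodes inside the same densely connected module tend to have high two-step reachability. A minimal sketch (a simplification, not the dissertation's full strategy):

```python
def two_hop_transition(adj):
    """Two-step random-walk probabilities P2[u][v] on an undirected
    graph given as adjacency lists {node: [neighbors]}."""
    def step(u):
        # one uniform random-walk step from u
        nbrs = adj[u]
        return {v: 1.0 / len(nbrs) for v in nbrs}

    p2 = {}
    for u in adj:
        acc = {}
        for v, pv in step(u).items():       # first hop
            for w, pw in step(v).items():   # second hop
                acc[w] = acc.get(w, 0.0) + pv * pw
        p2[u] = acc
    return p2
```

A module detector built on this would cluster nodes whose two-hop probability profiles are similar, which captures shared interaction patterns even between nodes that are not directly connected.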