8 research outputs found

    A Maximum Satisfiability Based Approach to Bi-Objective Boolean Optimization

    Many real-world problem settings give rise to NP-hard combinatorial optimization problems, which creates a need for non-trivial algorithmic approaches to finding optimal solutions. Many such approaches, ranging from probabilistic and meta-heuristic algorithms to declarative programming, have been presented for optimization problems with a single objective; less work has been done on approaches for optimization problems with multiple objectives. We present BiOptSat, an exact declarative approach for finding so-called Pareto-optimal solutions to bi-objective optimization problems. A bi-objective optimization problem arises, for example, when learning interpretable classifiers and both the size and the classification error of the classifier should be taken into account as objectives. Using propositional logic as a declarative programming language, we seek to extend the progress and success of maximum satisfiability (MaxSAT) solving to two objectives. BiOptSat can be viewed as an instantiation of the lexicographic method and makes use of a single SAT solver that is preserved throughout the entire search procedure. It allows for solving three tasks for bi-objective optimization: finding a single Pareto-optimal solution, finding one representative solution for each Pareto point, and enumerating all Pareto-optimal solutions. We provide an open-source implementation of five variants of BiOptSat, building on different algorithms proposed for MaxSAT. Additionally, we empirically evaluate these five variants, comparing their runtime performance to that of three key competing algorithmic approaches. The empirical comparison, in the contexts of learning interpretable decision rules and bi-objective set covering, shows the practical benefits of our approach. Furthermore, for the best-performing variant of BiOptSat, we study the effects of proposed refinements to determine their effectiveness.
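To illustrate the notion of Pareto optimality that BiOptSat targets, the following sketch computes the Pareto front of a set of bi-objective value pairs under minimization. The candidate objective vectors (e.g., classifier size and classification error) are hypothetical; this is a generic illustration, not BiOptSat's SAT-based search.

```python
# Sketch of Pareto optimality for bi-objective minimization.
# The candidate objective pairs below are made up for illustration.

def dominates(a, b):
    """a dominates b if a is no worse in both objectives and differs from b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    """Keep exactly the non-dominated (Pareto-optimal) objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (size, error) pairs of candidate solutions.
candidates = [(3, 9), (4, 7), (5, 7), (6, 4), (8, 3), (9, 5)]
print(sorted(pareto_front(candidates)))
# → [(3, 9), (4, 7), (6, 4), (8, 3)]
```

Here (5, 7) is dominated by (4, 7) and (9, 5) by (8, 3); enumerating one solution per remaining pair corresponds to the "one representative per Pareto point" task.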

    Reconsideration and extension of Cartesian genetic programming

    This dissertation analyzes fundamental concepts and dogmas of a graph-based genetic programming approach called Cartesian Genetic Programming (CGP) and introduces advanced genetic operators for CGP. The results of the experiments presented in this thesis lead to more knowledge about the algorithmic use of CGP and its underlying working mechanisms. CGP has mostly been used with one parametrization pattern, which has been prematurely generalized as the most efficient pattern for standard CGP and its variants. Several parametrization patterns are evaluated with more detailed and comprehensive experiments using meta-optimization. This thesis also presents a first runtime analysis of CGP: the time complexity of a simple (1+1)-CGP algorithm is analyzed on a simple mathematical problem and a simple Boolean function problem. In the subfield of genetic operators for CGP, new recombination and mutation techniques that work on the phenotypic level are presented, and their effectiveness is demonstrated on a widespread set of popular benchmark problems. The role of recombination in particular can be seen as a big open question in the field of CGP, since the lack of an effective recombination operator limits CGP to mutation-only use. Phenotypic exploration analysis is used to analyze the effects caused by the presented operators; this type of analysis also yields new insights into the search behavior of CGP in continuous and discrete fitness spaces. Overall, the outcome of this thesis leads to a reconsideration of how CGP is effectively used and extends its adaptation of Darwin's and Lamarck's theories of biological evolution.
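The mutation-only (1+1) scheme mentioned above can be sketched on a toy Boolean target. This is a deliberately minimal CGP-style setup (a short genome of function nodes with point mutation, evolved toward 2-input XOR); a real CGP configuration with levels-back constraints, larger genomes, and richer function sets is assumed to differ.

```python
import random

# Minimal (1+1)-CGP-style sketch on a toy Boolean target (2-input XOR).
FUNCS = [lambda a, b: a & b, lambda a, b: a | b,
         lambda a, b: a ^ b, lambda a, b: 1 - (a & b)]  # AND, OR, XOR, NAND
N_IN, N_NODES = 2, 6  # two program inputs, six function nodes

def random_gene(i):
    # Node i may connect to any program input or any earlier node.
    return (random.randrange(len(FUNCS)),
            random.randrange(N_IN + i), random.randrange(N_IN + i))

def evaluate(genome, inputs):
    vals = list(inputs)
    for f, a, b in genome:
        vals.append(FUNCS[f](vals[a], vals[b]))
    return vals[-1]  # the last node is the program output

def fitness(genome):
    cases = [(0, 0), (0, 1), (1, 0), (1, 1)]
    return sum(evaluate(genome, c) == (c[0] ^ c[1]) for c in cases)

def mutate(genome):
    child = list(genome)
    i = random.randrange(N_NODES)  # point mutation: redraw one node's gene
    child[i] = random_gene(i)
    return child

random.seed(1)
parent = [random_gene(i) for i in range(N_NODES)]
for _ in range(2000):              # (1+1) scheme: the child replaces the
    child = mutate(parent)         # parent if at least as fit, which also
    if fitness(child) >= fitness(parent):  # permits neutral drift
        parent = child
print("cases solved:", fitness(parent))
```

The acceptance of equally fit offspring (neutral drift) is a characteristic ingredient of standard CGP search; the run typically reaches 4/4 solved cases within a few thousand mutations, though this is not guaranteed for every seed.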

    A computational study of the Firefighter Problem on graphs

    Advisors: Cid Carvalho de Souza, Pedro Jussieu de Rezende. Dissertation (Master's), Universidade Estadual de Campinas, Instituto de Computação. Abstract: The firefighter problem (FFP) is a deterministic discrete-time model for the spread and containment of fire on a graph. The problem is described as follows. The input consists of an integer D representing the number of available firefighters, an undirected, unweighted graph G = (V, E), and a subset B of V, the fire outbreaks. An iterative process of fire propagation and containment over the vertices of G then starts at the vertices of B and ends when no further vertices can be burnt, that is, when the fire is contained. The goal when solving the FFP is to maximize the number of vertices that are not burned when the fire is contained, under the constraint that at most D vertices can be protected against the fire per iteration. Practical applications of the FFP, beyond obtaining strategies that minimize the damage caused by fire, can be found in areas such as disease control and network security.
The FFP is NP-hard, and heuristic methods for tackling it have been proposed earlier in the literature. In this dissertation, we first present modifications to the first ILP model proposed for the FFP, based on preprocessing and constraint-aggregation techniques. We then describe new greedy heuristics and introduce a novel matheuristic for the FFP, an approach based on the interoperation of metaheuristics and mathematical programming. A series of computational experiments was conducted on a public benchmark, both for parameter tuning and to compare our results with those obtained previously. Regarding the modifications to the ILP model, an average speedup of approximately 2 was obtained. While constraint aggregation can lead to infeasible solutions, we prove that the latter can be converted into feasible ones in linear time. The heuristics were executed following a methodology that constructs a solution by making greedy randomized choices to select which vertices should be defended, according to concepts introduced by the GRASP metaheuristic. Comparing these heuristics with those proposed by previous works, we observe that two of ours are among the five best in most cases. As for the matheuristic, a rigorous statistical analysis verified a statistically significant difference between our strategy and the remaining ones, with our matheuristic obtaining better results on the majority of the instances. Master's degree in Computer Science (Mestre em Ciência da Computação). Funding: 133728/2016-1, CNP
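The round-based process defined above (defend up to D vertices, then the fire spreads to every undefended neighbor of a burning vertex) can be sketched directly. The graph, outbreak set, and naive greedy defense rule below are made up for illustration; they are not the heuristics evaluated in the dissertation.

```python
from collections import defaultdict

# Toy sketch of the FFP process: per round, defend up to D threatened
# vertices (naive greedy rule: highest degree, ties by index), then the
# fire spreads to every undefended neighbor of a burning vertex.

def firefighter(edges, outbreaks, D):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    burning, defended = set(outbreaks), set()
    while True:
        threatened = {w for u in burning for w in adj[u]} - burning - defended
        # Greedy defense choice for this round.
        defended |= set(sorted(threatened,
                               key=lambda w: (-len(adj[w]), w))[:D])
        spread = threatened - defended
        if not spread:               # no vertex can burn: fire contained
            break
        burning |= spread
    saved = set(adj) - burning       # objective: maximize saved vertices
    return burning, saved

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6)]
burning, saved = firefighter(edges, outbreaks={0}, D=1)
print(sorted(burning), sorted(saved))
# → [0, 2] [1, 3, 4, 5, 6]
```

On this toy instance the single firefighter cuts the fire off at vertex 3, so only the outbreak and one neighbor burn; an exact ILP model or the GRASP-style heuristics would replace the naive defense rule.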

    Evolutionary design of deep neural networks

    Mención Internacional en el título de doctor (International Mention in the doctoral degree). For three decades, neuroevolution has applied evolutionary computation to the optimization of the topology of artificial neural networks, with most works focusing on very simple architectures. However, times have changed, and nowadays convolutional neural networks are the industry and academia standard for solving a variety of problems, many of which remained unsolved before this kind of network was discovered. Convolutional neural networks involve complex topologies, and manually designing these topologies for a problem at hand is expensive and inefficient. In this thesis, our aim is to use neuroevolution to evolve the architecture of convolutional neural networks. To do so, we have tried two different techniques: genetic algorithms and grammatical evolution. We have implemented a niching scheme for preserving genetic diversity, in order to ease the construction of ensembles of neural networks. These techniques have been validated on the MNIST database for handwritten digit recognition, achieving a test error rate of 0.28%, and on the OPPORTUNITY data set for human activity recognition, attaining an F1 score of 0.9275. Both results have proven very competitive with the state of the art. Also, in all cases, ensembles have proven to perform better than individual models. Later, the topologies learned for MNIST were tested on EMNIST, a database introduced in 2017 that includes more samples and a set of letters for character recognition. The results show that the topologies optimized for MNIST perform well on EMNIST, proving that architectures can be reused across domains with similar characteristics. In summary, neuroevolution is an effective approach for automatically designing topologies for convolutional neural networks; however, it remains a largely unexplored field due to hardware limitations. Current advances, however, should fuel the emergence of this field, and further research should start today. This Ph.D. dissertation has been partially supported by the Spanish Ministry of Education, Culture and Sports under FPU fellowship FPU13/03917. The research stay has been partially co-funded by the Spanish Ministry of Education, Culture and Sports under FPU short-stay grant EST15/00260. Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Committee: President: María Araceli Sanchís de Miguel; Secretary: Francisco Javier Segovia Pérez; Member: Simon Luca
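The niching idea used above to preserve genetic diversity for ensemble building can be illustrated with classic fitness sharing: individuals in crowded regions of the search space have their fitness discounted. The genomes, distance function, and raw fitness below are hypothetical toys, not the thesis's actual scheme.

```python
# Fitness-sharing sketch of niching: an individual's fitness is divided
# by its "niche count", i.e., how crowded its neighborhood is, which
# keeps diverse candidates alive (useful when building ensembles).
# Genomes, distance, and raw fitness here are hypothetical toys.

def shared_fitness(pop, raw, dist, sigma=2.0):
    out = []
    for g in pop:
        # Triangular sharing kernel within radius sigma (self counts as 1).
        niche = sum(max(0.0, 1.0 - dist(g, h) / sigma) for h in pop)
        out.append(raw(g) / niche)
    return out

pop = [1.0, 1.2, 1.4, 8.0]      # three clustered individuals, one isolated
raw = lambda g: 10.0            # equal raw fitness for all
dist = lambda a, b: abs(a - b)
print(shared_fitness(pop, raw, dist))
```

The isolated individual keeps its full fitness of 10 while the clustered ones are discounted (most strongly at the cluster's center), so selection pressure alone no longer collapses the population onto a single niche.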

    Fahrplanbasiertes Energiemanagement in Smart Grids (Schedule-Based Energy Management in Smart Grids)

    The growth of decentralized, volatile power generation in the course of the energy transition is already causing bottlenecks in power grids. Networking and coordinating producers and consumers by means of information technology in smart grids promises a solution to these problems. This thesis presents an energy-management approach that, based on the actors' power forecasts and flexibilities, approximates specific aggregated power profiles while taking grid constraints into account.

    Historia, evolución y perspectivas de futuro en la utilización de técnicas de simulación en la gestión portuaria: aplicaciones en el análisis de operaciones, estrategia y planificación portuaria (History, evolution and future prospects of the use of simulation techniques in port management: applications in the analysis of port operations, strategy and planning)

    Programa Oficial de Doutoramento en Análise Económica e Estratexia Empresarial. 5033V0. Abstract: Simulation, to the extent that we understand it nowadays, began in the middle of the 20th century: first with the appearance of the computer and the development of the Monte Carlo method, and later with the development of the first specific-purpose simulator, known as GPS and developed by Geoffrey Gordon at IBM, and the publication of the first full text devoted to this subject, "The Art of Simulation" (K.D. Tocher, 1963). These techniques have evolved in an extraordinary way and nowadays are fully implemented in many fields of activity. Port facilities have not escaped this trend, especially those dedicated to container traffic. Indeed, the intrinsic characteristics of this economic sector make it a suitable candidate for the implementation of simulation models with very different purposes and scopes. However, to the best of our knowledge, there is no scientific work that compiles and analyzes in detail both the history and the evolution of simulation in port environments, helping to classify these models and to determine how they can assist in the economic analysis of these facilities and in the formulation of suitable business strategies. This is the ultimate goal of this doctoral thesis.