3 research outputs found

    Experiments in Genetic Divergence for Emergent Systems

    Emergent software systems take a step towards tackling the ever-increasing complexity of modern software: systems self-assemble from a library of building blocks, then continually re-assemble themselves from alternative building blocks to learn which compositions of behaviour work best in each deployment environment. A key challenge in emergent systems is populating the library of building blocks, and in particular the set of alternative implementations of each building block, which forms the runtime search space for optimal behaviour. We present initial work that uses a fusion of genetic improvement and genetic synthesis to automatically populate a divergent set of implementations of the same functionality, allowing emergent systems to explore new behavioural alternatives without human input. Our early results indicate that this approach successfully yields useful divergent implementations of building blocks, each better suited than any existing alternative to particular operating conditions.
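    The core idea of evolving a divergent set of implementations and selecting the best one per deployment environment can be sketched in toy form. Everything below is an illustrative assumption, not the authors' system: a "building block" is reduced to a single tunable parameter, mutation perturbs that parameter where real genetic improvement would mutate code, and the fitness model is invented for the sketch.

```python
import random

random.seed(0)

# Toy "building block": a sort routine with one tunable parameter (the
# input size below which it switches strategy). Real genetic improvement
# mutates code; perturbing this parameter stands in for that here.
def make_variant(threshold):
    def sort_block(data):
        if len(data) < threshold:
            # Insertion sort for small inputs.
            out = []
            for x in data:
                i = 0
                while i < len(out) and out[i] < x:
                    i += 1
                out.insert(i, x)
            return out
        return sorted(data)  # built-in Timsort for large inputs
    sort_block.threshold = threshold
    return sort_block

def mutate(variant):
    # Genetic-improvement step: perturb an existing variant.
    return make_variant(max(1, variant.threshold + random.randint(-8, 8)))

def fitness(variant, environment):
    # Environment = typical input size; the cost model is purely
    # illustrative: variants whose threshold suits the environment win.
    return -abs(variant.threshold - environment)

# Evolve a divergent population against two environments at once,
# then pick the best surviving variant for each environment.
ENVIRONMENTS = (4, 48)
population = [make_variant(random.randint(1, 64)) for _ in range(10)]
for _ in range(50):
    population.append(mutate(random.choice(population)))
    population.sort(key=lambda v: max(fitness(v, e) for e in ENVIRONMENTS),
                    reverse=True)
    population = population[:10]

best_small = max(population, key=lambda v: fitness(v, 4))
best_large = max(population, key=lambda v: fitness(v, 48))
```

    Because selection rewards a variant for excelling in *any* environment, the population can retain mutually divergent implementations, mirroring the abstract's claim that different operating conditions select different building-block implementations.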

    Efficient execution of Java programs on GPU

    Master's dissertation in Informatics Engineering.
    With the overwhelming increase in demand for computational power from fields such as Big Data, deep machine learning and image processing, Graphics Processing Units (GPUs) have come to be seen as a valuable tool for computing the main workloads involved. Nonetheless, these devices have limited support for object-oriented languages and often require manual memory handling, which is an obstacle to bringing together the large community of object-oriented programmers and the high-performance computing field. In this master's thesis, different memory optimizations and their impacts were studied in a Java-on-GPU context using Aparapi. These include solutions for identifiable bottlenecks in commonly used kernels, exploiting the GPU's full capabilities by studying its hardware and the optimization techniques currently available. The results were set against commonly used C/OpenCL benchmarks and their respective optimizations, showing that high-level languages can be a solution to the demand for high-performance software.
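    One standard memory optimization for the kind of kernel bottleneck discussed here is tiled reduction: each workgroup loads a tile of global memory into fast local memory, reduces it in a tree, and writes back a single partial result. The sketch below is a pure-Python emulation of that access pattern (the thesis works with Java/Aparapi and OpenCL; this is a conceptual illustration, not GPU code):

```python
# Pure-Python emulation of the tiled-reduction pattern used to cut
# global-memory traffic in GPU kernels. Each "workgroup" copies a tile
# into a local buffer, reduces it in a tree, and emits one partial sum.
def tiled_reduce(data, workgroup_size=4):
    partials = []
    for start in range(0, len(data), workgroup_size):
        # Stand-in for the copy from global to local memory.
        local = list(data[start:start + workgroup_size])
        n = len(local)
        while n > 1:
            # Tree reduction: "work-items" combine pairs in lockstep.
            stride = (n + 1) // 2
            for i in range(n - stride):
                local[i] += local[i + stride]
            n = stride
        partials.append(local[0])  # one write back per workgroup
    # Second pass over the (much smaller) partials array.
    return sum(partials)
```

    On a real GPU the payoff is that the tree reduction touches only local memory, so global memory is read once per element and written once per workgroup rather than once per reduction step.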

    Neural Network Guided Transfer Learning for Genetic Programming

    Programming-by-example, and code synthesis in general, is a field with many sub-fields, involving many forms of machine learning and computational logic. With advantages and disadvantages to each, attempts to build effective hybrid solutions are a promising direction. Transfer Learning (TL) provides a good framework for this, as it allows one of the classic code synthesis techniques, Genetic Programming (GP), to be augmented by past successes, targeting a particular code synthesis system at the problem domain it is facing. TL allows one type of machine learning algorithm, in this thesis a neural network (NN), to support the core GP process and combine the strengths of both. This thesis explores the concept of hybrid code synthesis approaches, and then brings the identified strongest elements of each approach together into a single neural-network-driven Transfer Learning system for Genetic Programming. The TL system operates autonomously, with no human intervention required after the problem set (in example-only format) is presented to the system. The thesis first studies how to structure a training corpus for a neural network, across two experiments exploring how the constraints placed on a corpus can result in superior training. It then studies how GP processes can be guided, to ensure that a hypothetical NN guide would be useful if it could be created, and how it can best assist the GP. Finally, it combines the previous studies into the full end-to-end TL system and tests its performance across two separate problem domains.
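    The central mechanism, a learned guide biasing which GP candidates survive selection against an example-only specification, can be sketched as follows. All names here are illustrative: the "program" space is collapsed to linear expressions, and a fixed heuristic stands in for the trained neural network the thesis actually uses.

```python
import random

random.seed(1)

# Programming-by-example: the target behaviour is given only as
# input/output pairs (here, the hidden target is 2*x + 3).
EXAMPLES = [(x, 2 * x + 3) for x in range(-5, 6)]

# Tiny expression language: a program (a, b) denotes a*x + b.
def run(program, x):
    a, b = program
    return a * x + b

def error(program):
    # GP fitness: total deviation from the examples.
    return sum(abs(run(program, x) - y) for x, y in EXAMPLES)

def guide_score(program):
    # Stand-in for the neural guide: in the thesis a network trained on
    # past synthesis successes scores candidates; here a fixed heuristic
    # preferring small coefficients plays that role (illustrative only).
    a, b = program
    return -(abs(a) + abs(b))

def mutate(program):
    a, b = program
    return (a + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1]))

population = [(random.randint(-10, 10), random.randint(-10, 10))
              for _ in range(20)]
for _ in range(200):
    population += [mutate(p) for p in population]
    # Selection blends example fitness with the guide's score, the way
    # the transfer-learned network steers the core GP search.
    population.sort(key=lambda p: (error(p), -guide_score(p)))
    population = population[:20]

best = population[0]
```

    The guide only breaks ties and biases the search here; the examples remain the ground truth, which matches the thesis's framing of the NN as a support for the core GP process rather than a replacement for it.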