Improving Scalability of Evolutionary Robotics with Reformulation
Creating systems that can operate autonomously in complex environments is a challenge for contemporary engineering techniques. Automatic design methods offer a promising alternative, but so far they have not been able to produce agents that outperform manual designs. One such method is evolutionary robotics. It has been shown to be a robust and versatile tool for designing robots to perform simple tasks, but more challenging tasks at present remain out of reach of the method.
In this thesis I discuss and attack some problems underlying the scalability issues associated with the method. I present a new technique for evolving modular networks. I show that the performance of modularity-biased evolution depends heavily on the morphology of the robot’s body and present a new method for co-evolving morphology and modular control.
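The abstract does not spell out the algorithm, so the following is only a generic sketch, in Python, of what modularity-biased co-evolution of morphology and control can look like: a population of (morphology, controller) genomes evolved under a fitness that adds a modularity bonus to task performance. The modularity proxy, the placeholder task, and all names (LAMBDA, task_performance, and so on) are illustrative assumptions, not the thesis's method.

```python
# Generic sketch of modularity-biased co-evolution of morphology and control.
# All quantities here are illustrative placeholders.
import random

LAMBDA = 0.1                      # weight of the modularity bonus (assumed)
POP_SIZE, GENERATIONS = 50, 100

def random_genome():
    morphology = [random.uniform(0.1, 1.0) for _ in range(4)]   # e.g. limb lengths
    controller = [random.gauss(0.0, 1.0) for _ in range(16)]    # network weights
    return morphology, controller

def task_performance(genome):
    # Placeholder: in practice this would run a physics simulation of the robot.
    morphology, controller = genome
    return -sum((w - m) ** 2 for w, m in zip(controller, morphology * 4))

def modularity(controller):
    # Crude proxy: reward weak connections outside the block diagonal of a
    # 4x4 weight matrix, i.e. sparse links between putative modules.
    off_block = [abs(controller[i * 4 + j])
                 for i in range(4) for j in range(4) if i != j]
    return -sum(off_block) / len(off_block)

def fitness(genome):
    return task_performance(genome) + LAMBDA * modularity(genome[1])

def mutate(genome, sigma=0.05):
    morphology, controller = genome
    return ([max(0.05, m + random.gauss(0, sigma)) for m in morphology],
            [w + random.gauss(0, sigma) for w in controller])

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                 # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in parents]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 3))
```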
To be able to reason about the new technique I develop the reformulation framework: a general way to describe and reason about metaoptimization approaches. Within this framework I describe a new heuristic for developing metaoptimization approaches that is based on the technique for co-evolving morphology and modularity. I validate the framework by applying it to the practical task of zero-g autonomous assembly of structures with a fleet of small robots.
Although this work focuses on evolutionary robotics, the methods and approaches developed within it can be applied to optimization problems in any domain.
Machine learning in compilers
Tuning a compiler so that it produces optimised code is a difficult task because modern processors
are complicated; they have a large number of components operating in parallel and each
is sensitive to the behaviour of the others. Building analytical models on which optimisation
heuristics can be based has become harder as processor complexity has increased, and this trend is
bound to continue as the world moves towards further heterogeneous parallelism. Compiler
writers need to spend months to get a heuristic right for any particular architecture and these
days compilers often support a wide range of disparate devices. Whenever a new processor
comes out, even if derived from a previous one, the compiler’s heuristics will need to be retuned
for it. This is typically too much effort, and so, in practice, most compilers are out of date.
Machine learning has been shown to help; by running example programs, compiled in
different ways, and observing how those ways affect program run-time, automatic machine
learning tools can predict good settings with which to compile new, as yet unseen programs.
The field is nascent, but has demonstrated significant results already and promises a day when
compilers will be tuned for new hardware without the need for months of compiler experts’
time. Many hurdles still remain, however, and while experts no longer have to worry about
the details of heuristic parameters, they must spend their time on the details of the machine
learning process instead to get the full benefits of the approach.
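As a concrete, hedged illustration of the workflow just described (not any specific tool from this thesis): benchmarks are compiled under several settings and timed, a model is trained to map program features to the best observed setting, and that model then predicts a setting for an unseen program. The feature extraction and timing functions below are placeholders, and scikit-learn stands in for whatever learner is actually used.

```python
# Hedged sketch of machine learning in compilers: learn which optimisation
# setting to use for a program from examples of past compilations.
import random
from sklearn.ensemble import RandomForestClassifier

FLAG_SETTINGS = ["-O1", "-O2", "-O3"]          # candidate compilation settings

def program_features(program):
    # Placeholder for static features (loop count, branch density, ...);
    # here just synthesised numbers keyed on the program name.
    random.seed(hash(program) % (2 ** 32))
    return [random.random() for _ in range(8)]

def measured_runtime(program, flags):
    # Placeholder for "compile with `flags` and time the result".
    random.seed((hash(program) + hash(flags)) % (2 ** 32))
    return random.uniform(0.5, 2.0)

# 1. Gather training data: for each benchmark, which setting ran fastest?
benchmarks = [f"bench_{i}" for i in range(40)]
X = [program_features(p) for p in benchmarks]
y = [min(FLAG_SETTINGS, key=lambda f: measured_runtime(p, f)) for p in benchmarks]

# 2. Train a model mapping features to the best observed setting.
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# 3. Predict a good setting for a new, unseen program without a fresh search.
print(model.predict([program_features("new_program")])[0])
```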
This thesis aims to remove some of the aspects of machine-learning-based compilers for
which human experts are still required, paving the way for a completely automatic, retuning
compiler.
First, we tackle the most conspicuous area of human involvement: feature generation. In all
previous machine learning work for compilers, the features, which describe the important aspects
of each example to the machine learning tools, must be constructed by an expert. Should
that expert choose features poorly, they will miss crucial information without which the machine
learning algorithm can never excel. We show not only that we can automatically derive
good features, but also that these features outperform those of human experts. We demonstrate our
approach on loop unrolling, and find that we do better than previous work, obtaining XXX% of the
available performance compared with the XXX% achieved by the previous state of the art.
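For contrast with that contribution, here is a minimal sketch of the conventional setup it replaces, in which an expert hand-picks loop features and a learned model maps them to an unroll factor. The feature choice and the training data below are invented purely for illustration.

```python
# Conventional setup: expert-chosen loop features -> learned unroll factor.
from sklearn.tree import DecisionTreeClassifier

# Hand-crafted features per loop: (trip count, body size in instructions, has_branch)
loops = [
    (1000, 4, 0), (8, 40, 1), (100000, 2, 0), (16, 25, 1),
    (500, 6, 0), (4, 60, 1), (20000, 3, 0), (64, 12, 0),
]
best_unroll = [8, 1, 8, 1, 4, 1, 8, 2]   # best factors found by offline search (synthetic)

model = DecisionTreeClassifier(max_depth=3).fit(loops, best_unroll)
print(model.predict([(2000, 5, 0)])[0])  # predicted unroll factor for a new loop
```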
Next, we demonstrate a new method to efficiently capture the raw data needed for machine
learning tasks. The iterative compilation on which machine learning in compilers depends is
typically time-consuming, often requiring months of compute time. The underlying processes
are also noisy, so that most prior works fall into two categories: those which attempt to gather
clean data by executing a large number of times, and those which ignore the statistical validity
of their data to keep experiment times feasible. Our approach, on the other hand, guarantees
clean data while adapting to the experiment at hand, needing an order of magnitude less work
than prior techniques.
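A generic sketch of the idea of adapting measurement effort to the experiment at hand (not the thesis's exact statistical machinery): each program variant is re-run only until the confidence interval on its mean run-time is tight enough, so low-noise experiments stop early while noisy ones receive more samples. The timing function and thresholds are assumptions for illustration.

```python
# Adaptive repetition: sample until the 95% CI on the mean is tight enough.
import random
from statistics import mean, stdev

def run_once():
    # Placeholder for compiling and timing one program variant.
    return random.gauss(1.00, 0.05)

def measure(rel_precision=0.01, z=1.96, min_runs=3, max_runs=1000):
    samples = [run_once() for _ in range(min_runs)]
    while len(samples) < max_runs:
        m, s = mean(samples), stdev(samples)
        half_width = z * s / len(samples) ** 0.5       # normal-approx. 95% CI
        if half_width <= rel_precision * m:            # tight enough: stop early
            break
        samples.append(run_once())
    return mean(samples), len(samples)

estimate, runs = measure()
print(f"mean runtime {estimate:.3f}s after {runs} runs")
```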