Genetic Improvement of Software (Dagstuhl Seminar 18052)
We document the program and the immediate outcomes of Dagstuhl Seminar 18052 “Genetic
Improvement of Software”. The seminar brought together researchers in Genetic Improvement
(GI) and related areas of software engineering to investigate what is achievable with current
technology, the current impediments to progress, and how GI can affect the software development
process. Several talks covered the state of the art and work in progress. Seven emergent topics
were identified, ranging from the nature of the GI search space to benchmarking and
practical applications. The seminar has already resulted in multiple research paper publications.
Four papers by participants of the seminar will be presented at the GI workshop co-located with
the top conference in software engineering, ICSE. Several researchers started new collaborations,
the results of which we hope to see in the near future.
Genetic Improvement of Software: From Program Landscapes to the Automatic Improvement of a Live System
In today’s technology-driven society, software is becoming increasingly important in more
areas of our lives. The domain of software extends beyond the obvious realm of computers,
tablets, and mobile phones. Smart devices and the internet-of-things have inspired the integration
of digital and computational technology into objects that some of us would never have
guessed could be possible or even necessary: fridges and freezers connected to social media
sites, a toaster activated with a mobile phone, physical buttons for shopping, and smart
speakers that take verbal orders for meal deliveries. This is the world we live in, and it is an
exciting time for software engineers and computer scientists. The sheer volume of code
currently in use has long since grown beyond the point of any hope for proper manual
maintenance. The rate at which mobile application stores such as Google’s and Apple’s have
expanded is astounding.
The research presented here aims to shed light on an emerging field of research called
Genetic Improvement (GI) of software. It is a methodology that changes program code to improve
existing software. This thesis details a framework for GI that is then applied to explore the fitness
landscape of bug fixing in Python software, to reduce execution time in a C++ program, and
to be integrated into a live system.
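The core GI methodology described above can be illustrated with a minimal sketch: mutate program code, score each variant against a fitness function (here, a test suite), and keep variants that do not get worse. This toy example represents a program as a list of statements and uses a hypothetical statement pool as the mutation source; real GI frameworks such as the one in this thesis operate on actual source code.

```python
import random

def fitness(program, tests):
    """Count how many test cases the candidate program passes."""
    passed = 0
    for inputs, expected in tests:
        try:
            env = dict(inputs)
            exec("\n".join(program), {}, env)  # run the candidate
            if env.get("result") == expected:
                passed += 1
        except Exception:
            pass  # a crashing variant simply scores lower
    return passed

def mutate(program, statement_pool):
    """Replace one randomly chosen statement with one from a pool."""
    variant = list(program)
    i = random.randrange(len(variant))
    variant[i] = random.choice(statement_pool)
    return variant

def improve(program, tests, statement_pool, generations=200, seed=0):
    """Hill-climbing GI: accept a mutant whenever it does not get worse."""
    random.seed(seed)
    best, best_fit = program, fitness(program, tests)
    for _ in range(generations):
        cand = mutate(best, statement_pool)
        f = fitness(cand, tests)
        if f >= best_fit:
            best, best_fit = cand, f
    return best, best_fit

# Buggy toy program: should compute result = a + b but subtracts.
buggy = ["result = a - b"]
pool = ["result = a - b", "result = a + b", "result = a * b"]
tests = [({"a": 1, "b": 2}, 3), ({"a": 5, "b": 5}, 10)]
fixed, score = improve(buggy, tests, pool)
```

Accepting fitness-neutral moves (`f >= best_fit`) lets the search drift across the flat regions of the landscape that the thesis discusses, rather than stalling at the first plateau.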
We show that software is generally not fragile and that, although fitness landscapes for GI are
flat, they are not impossible to search. This conclusion applies equally to bug fixing in small
programs and to execution time improvements. The framework is shown to
be transportable between programming languages with minimal effort. Additionally, it can
easily be integrated into a system that runs a live web service.
The work within this thesis was funded by EPSRC grant EP/J017515/1 through the DAASE
project.
Programmer-transparent efficient parallelism with skeletons
Parallel and heterogeneous systems are ubiquitous. Unfortunately, both require significant complexity at the software level to the detriment of programmer productivity. To
produce correct and efficient code, programmers not only have to manage synchronisation and communication but also be aware of low-level hardware details. It is foreseeable that the problem will become worse because systems are increasingly parallel and
heterogeneous.
Building on earlier work, this thesis further investigates the contribution which
algorithmic skeletons can make towards solving this problem. Skeletons are high-level
abstractions for typical parallel computations. They hide low-level hardware details
from programmers and, in addition, encode information about the computations that
they implement, which runtime systems and library developers can use for automatic
optimisations. We present two novel case studies in this respect.
First, we provide scheduling flexibility on heterogeneous CPU + GPU systems in
a programmer-transparent way, similar to the freedom OS schedulers have on CPUs.
Thanks to the high-level nature of skeletons, we automatically switch between CPU and
GPU implementations of kernels and use semantic information encoded in skeletons to
find points in execution time at which switches can occur. In more detail, kernel iteration
spaces are processed in slices, and migration is considered on a slice-by-slice basis. We
show that slice-size choices that introduce negligible overheads can be learned by predictive models. We show that in a simple deployment scenario, mid-kernel migration
achieves speedups of up to 1.30x and of 1.08x on average. Our mechanism introduces
negligible overheads of 2.34% if a kernel does not actually migrate.
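The slice-by-slice mechanism above can be sketched in a few lines: the iteration space is cut into slices, and before each slice a scheduler decision determines which device runs it, so a kernel can migrate mid-execution. The 1-D iteration space, the stub device functions, and the scheduler callback below are hypothetical stand-ins for the skeleton-based kernels and runtime described in the thesis.

```python
def run_sliced(n, slice_size, kernel_cpu, kernel_gpu, want_gpu):
    """Process iterations [0, n) in slices; before each slice, ask the
    scheduler which device to use, so migration can occur mid-kernel."""
    trace = []
    start = 0
    while start < n:
        end = min(start + slice_size, n)
        if want_gpu(start):          # scheduler decision point
            kernel_gpu(start, end)
            trace.append(("gpu", start, end))
        else:
            kernel_cpu(start, end)
            trace.append(("cpu", start, end))
        start = end
    return trace

out = [0] * 10
square = lambda lo, hi: out.__setitem__(slice(lo, hi),
                                        [i * i for i in range(lo, hi)])
# Pretend the scheduler moves the kernel to the GPU after iteration 4.
trace = run_sliced(10, 4, square, square, lambda s: s >= 4)
```

The slice size is the tuning knob the thesis learns with predictive models: small slices give the scheduler more migration points, large slices amortise the per-slice decision overhead.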
Second, we propose skeletons to simplify the programming of parallel hard real-time systems. We combine information encoded in task farms with analysis of
real-time user code to automatically choose thread counts and an optimisation parameter
related to farm-internal communication. Both parameters are chosen so that real-time
deadlines are met with minimum resource usage. We show that our approach achieves
a 1.22x speedup over unoptimised code, selects the best parameter settings in 83% of
cases, and never chooses parameters that cause deadline misses.
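The parameter-selection idea above can be illustrated with a deliberately simplified cost model: pick the smallest farm size whose worst-case makespan still meets the deadline. The model below (independent tasks of fixed worst-case execution time, a fixed per-item communication cost) is a hypothetical illustration, not the analytical model used in the thesis.

```python
import math

def makespan(n_items, wcet, comm, workers):
    """Worst-case completion time: items are spread over the workers and
    each item pays its worst-case execution time plus communication."""
    items_per_worker = math.ceil(n_items / workers)
    return items_per_worker * (wcet + comm)

def min_workers_for_deadline(n_items, wcet, comm, deadline, max_workers=64):
    """Smallest thread count that meets the hard deadline, mirroring the
    'deadlines met with minimum resource usage' objective."""
    for w in range(1, max_workers + 1):
        if makespan(n_items, wcet, comm, w) <= deadline:
            return w
    return None  # deadline infeasible within the worker budget

w = min_workers_for_deadline(n_items=100, wcet=2.0, comm=0.1, deadline=60.0)
```

Searching upward from one worker guarantees the first feasible answer is also the cheapest, which matches the hard-real-time requirement that deadlines are never traded away for throughput.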
Dense Visual Simultaneous Localisation and Mapping in Collaborative and Outdoor Scenarios
Dense visual simultaneous localisation and mapping (SLAM) systems can produce 3D
reconstructions that are digital facsimiles of the physical space they describe. Systems that
can produce dense maps with this level of fidelity in real time provide foundational spatial
reasoning capabilities for many downstream tasks in autonomous robotics. Over the past
15 years, mapping small-scale, indoor environments, such as desks and buildings, with a
single slow-moving, hand-held sensor has been one of the central focuses of dense visual
SLAM research.
However, most dense visual SLAM systems exhibit a number of limitations which
mean they cannot be directly applied in collaborative or outdoors settings. The contribution
of this thesis is to address these limitations with the development of new systems and
algorithms for collaborative dense mapping, efficient dense alternation and outdoors
operation with fast camera motion and wide field of view (FOV) cameras. We use
ElasticFusion, a state-of-the-art dense SLAM system, as our starting point where each of
these contributions is implemented as a novel extension to the system.
We first present a collaborative dense SLAM system that allows a number of
cameras starting with unknown initial relative positions to maintain local maps with the
original ElasticFusion algorithm. Visual place recognition across local maps results in
constraints that allow maps to be aligned into a common global reference frame, facilitating
collaborative mapping and tracking of multiple cameras within a shared map.
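The alignment step described above can be sketched as graph traversal over place-recognition constraints: each constraint relates two local maps, and composing constraints outward from a root map places every map in one global frame. For readability the sketch uses 1-D translations as stand-ins for the full rigid-body (SE(3)) transforms a real system such as the one built on ElasticFusion would use.

```python
from collections import deque

def align_maps(n_maps, constraints, root=0):
    """BFS from the root map, composing offsets along constraint edges.
    Each constraint (a, b, t) says map b's origin sits at offset t in
    map a's frame; maps unreachable from the root stay unaligned."""
    adj = {i: [] for i in range(n_maps)}
    for a, b, t in constraints:
        adj[a].append((b, t))
        adj[b].append((a, -t))     # the inverse constraint
    global_pose = {root: 0.0}
    queue = deque([root])
    while queue:
        a = queue.popleft()
        for b, t in adj[a]:
            if b not in global_pose:
                global_pose[b] = global_pose[a] + t
                queue.append(b)
    return global_pose

# Two place-recognition events chain three local maps together.
poses = align_maps(3, [(0, 1, 5.0), (1, 2, 2.0)])
```

Until a place-recognition event links a camera's local map to the rest of the graph, that camera simply continues tracking in its own frame, which is what allows the cameras to start from unknown relative positions.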
Within dense alternation based SLAM systems, the standard approach is to fuse
every frame into the dense model without considering whether the information contained
within the frame is already captured by the dense map and therefore redundant. As the
number of cameras or the scale of the map increases, this approach becomes inefficient. In
our second contribution, we address this inefficiency by introducing a novel information
theoretic approach to keyframe selection that allows the system to avoid processing
redundant information. We implement the procedure within ElasticFusion, demonstrating
a marked reduction in the number of frames required by the system to estimate an accurate,
denoised surface reconstruction.
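The keyframe-selection idea above can be sketched with a simplified novelty criterion: fuse a frame only when it observes enough of the scene that the map has not yet captured. The set-based novelty ratio and fixed threshold below are simplified stand-ins for the information-theoretic measure computed over the dense model in the thesis.

```python
def select_keyframes(frames, threshold=0.3):
    """Fuse a frame only if its fraction of novel cells exceeds threshold."""
    fused_cells = set()
    keyframes = []
    for idx, cells in enumerate(frames):
        novel = len(cells - fused_cells) / max(len(cells), 1)
        if novel > threshold:       # frame carries enough new information
            fused_cells |= cells
            keyframes.append(idx)
        # otherwise the frame is redundant and is skipped, saving fusion work
    return keyframes

# Each frame is the set of surface cells it observes.
frames = [
    {1, 2, 3, 4},        # first view: everything is new -> fused
    {2, 3, 4, 5},        # mostly redundant (1 of 4 cells new) -> skipped
    {5, 6, 7, 8},        # largely novel -> fused
]
keep = select_keyframes(frames)
```

Skipping redundant frames is what keeps the cost roughly proportional to how much new surface is being discovered rather than to the raw frame rate, which matters as the number of cameras or the map scale grows.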
Before dense SLAM techniques can be applied in outdoor scenarios, we must
first address their reliance on active depth cameras and their unsuitability for fast
camera motion. In our third contribution we present an outdoor dense SLAM system. The system overcomes the need for an active sensor by employing neural network-based depth
inference to predict the geometry of the scene as it appears in each image. To address the
issue of camera tracking during fast motion we employ a hybrid architecture, combining
elements of both dense and sparse SLAM systems to perform camera tracking and to
achieve globally consistent dense mapping.
Automotive applications present a particularly important setting for dense visual
SLAM systems. Such applications are characterised by their use of wide FOV cameras and
are therefore not accurately modelled by the standard pinhole camera model. The fourth
contribution of this thesis is to extend the above hybrid sparse-dense monocular SLAM
system to cater for large FOV fisheye imagery. This is achieved by reformulating the
mapping pipeline in terms of the Kannala-Brandt fisheye camera model. To estimate depth,
we introduce a new version of the PackNet depth estimation neural network (Guizilini et
al., 2020) adapted for fisheye inputs.
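The Kannala-Brandt model that the reformulated pipeline relies on projects a 3-D point by distorting the angle of incidence with an odd polynomial, rather than scaling image-plane coordinates as the pinhole model does. The sketch below implements that projection; the intrinsics and distortion coefficients k1..k4 are illustrative values, not those of any particular calibrated camera.

```python
import math

def project_kb(X, Y, Z, fx, fy, cx, cy, k1, k2, k3, k4):
    """Project a 3-D point with the Kannala-Brandt fisheye model."""
    r = math.hypot(X, Y)
    theta = math.atan2(r, Z)          # angle from the optical axis
    t2 = theta * theta
    # d(theta) = theta + k1*theta^3 + k2*theta^5 + k3*theta^7 + k4*theta^9
    d = theta * (1 + t2 * (k1 + t2 * (k2 + t2 * (k3 + t2 * k4))))
    scale = d / r if r > 1e-12 else 1.0   # point on the optical axis
    return fx * scale * X + cx, fy * scale * Y + cy

# A point 45 degrees off-axis, with all distortion coefficients zero.
u, v = project_kb(1.0, 0.0, 1.0, fx=300, fy=300, cx=320, cy=240,
                  k1=0.0, k2=0.0, k3=0.0, k4=0.0)
```

Because the projection is driven by the angle theta rather than by X/Z, it remains well defined for points at and beyond 90 degrees from the optical axis, which is precisely what makes it suitable for the wide-FOV automotive imagery discussed above.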
To demonstrate the effectiveness of our contributions, we present experimental
results, computed by processing the synthetic ICL-NUIM dataset of Handa et al. (2014) as
well as the real-world TUM-RGBD dataset of Sturm et al. (2012). For outdoor SLAM we
show the results of our system processing the autonomous driving KITTI and KITTI-360
datasets of Geiger et al. (2012a) and Liao et al. (2021), respectively.