Genetic Programming for Smart Phone Personalisation
Personalisation in smart phones requires adaptability to dynamic context
based on user mobility, application usage and sensor inputs. Current
personalisation approaches, which rely on static logic that is developed a
priori, do not provide sufficient adaptability to dynamic and unexpected
context. This paper proposes genetic programming (GP), which can evolve program
logic in real time, as an online learning method to deal with the highly dynamic
context in smart phone personalisation. We introduce the concept of
collaborative smart phone personalisation through the GP Island Model, in order
to exploit shared context among co-located phone users and reduce convergence
time. We implement these concepts on real smartphones to demonstrate the
capability of personalisation through GP and to explore the benefits of the
Island Model. Our empirical evaluations on two example applications confirm
that the Island Model can reduce convergence time by up to two-thirds over
standalone GP personalisation.
Comment: 43 pages, 11 figures
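The abstract does not give the Island Model's details; as a minimal sketch of the migration idea it exploits, the loop below evolves several independent populations and periodically migrates each island's best individual to its ring neighbour. A toy one-max genetic algorithm on bitstrings stands in for full GP, purely for brevity; all names and parameters are illustrative, not the paper's.

```python
import random

random.seed(0)

BITS = 30  # toy objective: maximise the number of 1-bits in a 30-bit string

def fitness(ind):
    return sum(ind)

def mutate(ind, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in ind]

def evolve(pop, generations):
    # Elitist truncation selection: keep the top half, refill with mutants.
    for _ in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return pop

def island_model(n_islands=4, pop_size=20, epochs=10, gens_per_epoch=5):
    islands = [[[random.randint(0, 1) for _ in range(BITS)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve(pop, gens_per_epoch) for pop in islands]
        # Migration: each island receives its ring neighbour's best
        # individual, which replaces its own worst individual.
        bests = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            worst = min(range(len(pop)), key=lambda j: fitness(pop[j]))
            pop[worst] = bests[(i - 1) % n_islands]
    return max((max(pop, key=fitness) for pop in islands), key=fitness)

best = island_model()
print(fitness(best))
```

The migration step is the point of the sketch: good genetic material found on one island spreads to the others, which is the mechanism the paper credits for reduced convergence time.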
Evolutionary improvement of programs
Most applications of genetic programming (GP) involve the creation of an entirely new function, program or expression to solve a specific problem. In this paper, we propose a new approach that applies GP to improve existing software by optimizing its non-functional properties such as execution time, memory usage, or power consumption. In general, satisfying non-functional requirements is a difficult task and often achieved in part by optimizing compilers. However, modern compilers are in general not always able to produce semantically equivalent alternatives that optimize non-functional properties, even if such alternatives are known to exist: this is usually due to the limited local nature of such optimizations. In this paper, we discuss how best to combine and extend the existing evolutionary methods of GP, multiobjective optimization, and coevolution in order to improve existing software. Given as input the implementation of a function, we attempt to evolve a semantically equivalent version, in this case optimized to reduce execution time subject to a given probability distribution of inputs. We demonstrate, on eight example functions, that our framework is able to produce non-obvious optimizations that compilers are not yet able to generate. We employ a coevolved population of test cases to encourage the preservation of the function's semantics. We exploit the original program both through seeding of the population in order to focus the search, and as an oracle for testing purposes. As well as discussing the issues that arise when attempting to improve software, we employ a rigorous experimental method to provide interesting and practical insights that suggest how to address these issues.
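Two ideas from this abstract combine naturally: the original program serves as an oracle, and test cases coevolve to expose variants that break semantics. The sketch below illustrates that interplay with hypothetical toy functions (the paper's actual framework and benchmarks are not reproduced here):

```python
import random

random.seed(1)

def original(xs):
    """Reference implementation (the oracle): sum of squares, written naively."""
    total = 0
    for x in xs:
        total += x * x
    return total

def candidate(xs):
    """A hypothetical evolved variant we want to validate against the oracle."""
    return sum(x * x for x in xs)

def passes_tests(variant, oracle, test_cases):
    """A variant survives only if it agrees with the oracle on every test case."""
    return all(variant(tc) == oracle(tc) for tc in test_cases)

def harden_tests(test_cases, oracle, broken_variant):
    """Coevolution step: grow the test suite until it exposes a broken variant."""
    while passes_tests(broken_variant, oracle, test_cases):
        test_cases.append([random.randint(-10, 10) for _ in range(5)])
    return test_cases

tests = [[1, 2, 0]]                  # a weak suite the broken variant slips past
assert passes_tests(candidate, original, tests)

def broken(xs):  # drops the last element: semantically different
    return sum(x * x for x in xs[:-1])

tests = harden_tests(tests, original, broken)
print(len(tests), passes_tests(broken, original, tests))
```

In the paper's setting the variants come from GP mutation of the seeded population rather than being written by hand, but the oracle-and-hardened-tests loop plays the same role.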
GP-HD: Using Genetic Programming to Generate Dynamical Systems Models for Health Care
The huge wealth of data in the health domain can be exploited to create
models that predict development of health states over time. Temporal learning
algorithms are well suited to learn relationships between health states and
make predictions about their future developments. However, these algorithms:
(1) either focus on learning one generic model for all patients, providing
general insights but often with limited predictive performance, or (2) learn
individualized models from which it is hard to derive generic concepts. In this
paper, we present a middle ground, namely parameterized dynamical systems
models that are generated from data using a Genetic Programming (GP) framework.
A fitness function tailored to the health domain is employed. An evaluation
of the approach in the mental health domain shows that performance of the model
generated by the GP is on par with a dynamical systems model developed based on
domain knowledge, significantly outperforms a generic Long Short-Term Memory (LSTM) model, and in some cases also outperforms an individualized LSTM model.
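The parameterized dynamical systems models the abstract describes are difference equations whose structure GP evolves and whose fit to data defines fitness. As an illustration only (the model form, parameter, and data here are hypothetical stand-ins, and a grid search stands in for GP's evolution):

```python
def simulate(params, x0, steps):
    """One-state difference equation: x_{t+1} = x_t + a * x_t * (1 - x_t)."""
    a, = params
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + a * x * (1 - x))
    return xs

def fitness(params, observed):
    """Mean squared prediction error over the trajectory (lower is better)."""
    predicted = simulate(params, observed[0], len(observed) - 1)
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

# Synthetic "observed" health-state trajectory generated with a = 0.5.
observed = simulate([0.5], 0.1, 10)

# A small grid search stands in for GP's search over model structure/parameters.
best = min([a / 10 for a in range(1, 10)], key=lambda a: fitness([a], observed))
print(best, fitness([best], observed))
```

The point of the parameterized middle ground is visible even here: the equation's structure is shared across patients, while the parameter `a` can be fitted per patient.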
Learning to solve planning problems efficiently by means of genetic programming
Declarative problem solving, such as planning, poses interesting challenges for Genetic Programming (GP). There have been recent attempts to apply GP to planning that fit two approaches: (a) using GP to search in plan space or (b) to evolve a planner. In this article, we propose to evolve only the heuristics to make a particular planner more efficient. This approach is more feasible than (b) because it does not have to build a planner from scratch but can take advantage of already existing planning systems. It is also more efficient than (a) because once the heuristics have been evolved, they can be used to solve a whole class of different planning problems in a planning domain, instead of running GP for every new planning problem. Empirical results show that our approach (EVOCK) is able to evolve heuristics in two planning domains (the blocks world and the logistics domain) that improve PRODIGY4.0 performance. Additionally, we experiment with a new genetic operator - Instance-Based Crossover - that is able to use traces of the base planner as raw genetic material to be injected into the evolving population.
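EVOCK's heuristics and PRODIGY4.0 are not detailed in the abstract; the sketch below only illustrates the underlying idea that a heuristic, rather than the planner, is the object being tuned, and that its quality can be scored by search effort. The grid domain and the weight vectors are hypothetical stand-ins for evolved heuristics:

```python
import heapq

def greedy_search(start, goal, weights, size=10):
    """Greedy best-first search on a grid, guided by a weighted heuristic.
    Returns the number of nodes expanded - the efficiency measure an
    evolved heuristic should minimise."""
    w1, w2 = weights
    h = lambda p: w1 * abs(p[0] - goal[0]) + w2 * abs(p[1] - goal[1])
    frontier = [(h(start), start)]
    seen = {start}
    expanded = 0
    while frontier:
        _, (x, y) = heapq.heappop(frontier)
        expanded += 1
        if (x, y) == goal:
            return expanded
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in seen:
                seen.add((nx, ny))
                heapq.heappush(frontier, (h((nx, ny)), (nx, ny)))
    return expanded

uninformed = greedy_search((0, 0), (9, 9), (0.0, 0.0))  # no guidance
evolved = greedy_search((0, 0), (9, 9), (1.0, 1.0))     # Manhattan-like weights
print(uninformed, evolved)
```

A fitness function of this kind (nodes expanded, or planning time, over a set of training problems) is what lets evolved heuristics transfer to a whole class of problems in the domain rather than to a single instance.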
How Noisy Data Affects Geometric Semantic Genetic Programming
Noise is a consequence of acquiring and pre-processing data from the
environment, and shows fluctuations from different sources---e.g., from
sensors, signal processing technology or even human error. As a machine
learning technique, Genetic Programming (GP) is not immune to this problem,
which the field has frequently addressed. Recently, Geometric Semantic Genetic
Programming (GSGP), a semantic-aware branch of GP, has shown robustness and
high generalization capability. Researchers believe these characteristics may
be associated with a lower sensitivity to noisy data. However, there is no
systematic study on this matter. This paper presents an in-depth analysis of
GSGP performance in the presence of noise. Using 15 synthetic datasets where
noise can be controlled, we added different ratios of noise to the data and
compared the results obtained with those of a canonical GP. The results show
that, as we increase the percentage of noisy instances, the generalization
performance degradation is more pronounced in GSGP than in GP. However, in
general, GSGP is more robust to noise than GP in the presence of up to 10% of
noise, and presents no statistical difference for values higher than that in
the test bed.
Comment: 8 pages, in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2017), Berlin, Germany