Scaling Up Cartesian Genetic Programming through Preferential Selection of Larger Solutions
We demonstrate how the efficiency of the Cartesian Genetic Programming method
can be scaled up through the preferential selection of phenotypically larger
solutions, i.e. by preferring larger solutions among equally good ones. The
advantage of the preferential selection of larger
solutions is validated on the six-, seven-, and eight-bit parity problems, on a
dynamically varying problem involving the classification of binary patterns,
and on the Paige regression problem. In all cases, the preferential selection
of larger solutions provides an advantage in terms of the performance of the
evolved solutions and in terms of speed, i.e. the number of evaluations required to
evolve optimal or high-quality solutions. The advantage provided by the
preferential selection of larger solutions can be further extended by
self-adapting the mutation rate through the one-fifth success rule. Finally,
for problems like the Paige regression in which neutrality plays a minor role,
the advantage of the preferential selection of larger solutions can be extended
by also preferring larger solutions among quasi-neutral alternative candidate
solutions, i.e. solutions achieving slightly different performance.
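The two mechanisms the abstract names (tie-breaking toward larger phenotypes, and one-fifth-rule mutation-rate adaptation) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: candidates are modeled as hypothetical (fitness, phenotype size) pairs, and the helper names `select` and `one_fifth_update` are invented for the sketch.

```python
def select(parent, offspring, fitness, size):
    """(1 + lambda)-style survivor selection that breaks fitness ties
    in favour of the phenotypically larger candidate."""
    best = parent
    for child in offspring:
        if fitness(child) > fitness(best) or (
            fitness(child) == fitness(best) and size(child) > size(best)
        ):
            best = child
    return best


def one_fifth_update(rate, successes, trials, factor=1.5):
    """One-fifth success rule: raise the mutation rate when more than
    1/5 of offspring improved on the parent, lower it otherwise."""
    return rate * factor if successes / trials > 0.2 else rate / factor


# Toy usage: candidates are (fitness, active-node count) pairs.
parent = (3, 5)                # fitness 3, phenotype size 5
offspring = [(3, 9), (2, 12)]  # first child is equally fit but larger
winner = select(parent, offspring,
                fitness=lambda c: c[0], size=lambda c: c[1])
# winner is (3, 9): the equally good but larger child replaces the parent
```

In a CGP setting, `size` would typically count the active (expressed) nodes of the genome, so the rule biases search toward genotypes whose phenotype uses more of the available nodes without sacrificing fitness.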
Neural Architecture Search based on Cartesian Genetic Programming Coding Method
Neural architecture search (NAS) is a hot topic in the field of automated
machine learning and outperforms humans in designing neural architectures on
quite a few machine learning tasks. Motivated by how naturally Cartesian
genetic programming (CGP) represents neural networks, we propose an
evolutionary approach to NAS based on CGP, called CGPNAS, for the sentence
classification task. To evolve the architectures under the framework of CGP,
operations such as convolution are used as the function node types of CGP, and
the evolutionary operators are designed following an Evolution Strategy. The
experimental results show that the searched architectures achieve performance
comparable to that of human-designed
architectures. We also verify the domain-transfer ability of the evolved
architectures: in the transfer experiments, the accuracy deterioration is below
2-5%. Finally, an ablation study identifies the Attention function as the
single key function node; Attention combined with linear transformations alone
retains accuracy similar to that of the full evolved architectures, which is
worth investigating in the future.