538 research outputs found

    Why we fell out of love with algorithms inspired by nature

    Get PDF
    First paragraph: While computers are poor at creativity, they are adept at crunching through vast numbers of solutions to modern problems where there are numerous complex variables at play. Take the question of finding the best delivery plan for a distribution company – where best to begin? How many vehicles? Which stretches of road need to be avoided at which times? If you want to get close to a sensible answer, you need to ask a computer. Access this article on The Conversation website: https://theconversation.com/why-we-fell-out-of-love-with-algorithms-inspired-by-nature-4271

    Connecting automatic parameter tuning, genetic programming as a hyper-heuristic and genetic improvement programming

    Get PDF
    Automatically designing algorithms has long been a dream of computer scientists. Early attempts, which generated computer programs from scratch, failed to meet this goal. However, in recent years a number of technologies have emerged with the alternative goal of taking existing programs and attempting to improve them. These methods form a continuum, from the “limited” ability to change (for example, only the parameters) to the “complete” ability to change the whole program. They include automatic parameter tuning (APT), using GP as a hyper-heuristic (GPHH) to automatically design algorithms, and GI, which we briefly review. Part of research is building links between existing work, and the aim of this paper is to bring together these currently separate approaches.
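    To make the continuum concrete, here is a minimal Python sketch of its “limited” end (the names hybrid_sort and tune_threshold are illustrative assumptions, not from the paper): automatic parameter tuning leaves the program text untouched and searches only over a parameter value, whereas a GI system would additionally be free to mutate the program's statements themselves.

    ```python
    import random
    import time

    # Hypothetical target program: a sort routine with one tunable parameter,
    # the cut-off below which it switches to insertion sort.
    def hybrid_sort(data, threshold=16):
        if len(data) <= threshold:
            for i in range(1, len(data)):              # insertion sort
                j = i
                while j > 0 and data[j - 1] > data[j]:
                    data[j - 1], data[j] = data[j], data[j - 1]
                    j -= 1
            return data
        mid = len(data) // 2
        left = hybrid_sort(data[:mid], threshold)
        right = hybrid_sort(data[mid:], threshold)
        merged, i, j = [], 0, 0                         # merge step
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    # The "limited" end of the continuum: automatic parameter tuning searches
    # only over the parameter value, leaving the program text untouched.
    def tune_threshold(candidates, runs=20, size=2000):
        data = [random.random() for _ in range(size)]
        def cost(t):
            start = time.perf_counter()
            for _ in range(runs):
                hybrid_sort(list(data), threshold=t)
            return time.perf_counter() - start
        return min(candidates, key=cost)

    if __name__ == "__main__":
        print("best threshold:", tune_threshold([4, 8, 16, 32, 64, 128]))
    ```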

    Metaheuristic Design Pattern: Surrogate Fitness Functions

    Get PDF
    Certain problems have characteristics that present difficulties for metaheuristics: their objective function may be prohibitively expensive to evaluate, or it may give only a partial ordering over the solutions, lacking a suitable gradient to guide the search. In such cases, it may be more efficient to use a surrogate fitness function to replace or supplement the objective function. This paper provides a broad perspective on surrogate fitness functions, described in the form of a metaheuristic design pattern.
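    As a minimal sketch of the idea (in Python; the objective, the k-nearest-neighbour surrogate and the screening rule are illustrative assumptions, not the paper's pattern catalogue), a cheap regression over previously evaluated points can screen candidates before the expensive objective is paid for:

    ```python
    import math
    import random

    # Hypothetical expensive objective: imagine each call runs a long simulation.
    def expensive_objective(x):
        return sum(xi * xi for xi in x) + 0.1 * math.sin(10 * x[0])

    # Cheap surrogate: predict fitness as the distance-weighted mean of the
    # k nearest previously evaluated points (a simple k-NN regressor).
    def surrogate(x, archive, k=5):
        if not archive:
            return 0.0
        nearest = sorted(archive, key=lambda rec: math.dist(rec[0], x))[:k]
        weights = [1.0 / (math.dist(p, x) + 1e-9) for p, _ in nearest]
        return sum(w * f for w, (_, f) in zip(weights, nearest)) / sum(weights)

    def surrogate_assisted_search(dim=3, generations=200, true_eval_every=10):
        archive = []                                  # (point, true fitness) pairs
        best = [random.uniform(-1, 1) for _ in range(dim)]
        best_f = expensive_objective(best)            # minimisation
        archive.append((best, best_f))
        for g in range(generations):
            candidate = [xi + random.gauss(0, 0.1) for xi in best]
            # Screen with the cheap surrogate; periodically force a true evaluation.
            if surrogate(candidate, archive) > best_f and g % true_eval_every:
                continue                              # surrogate says worse: skip
            f = expensive_objective(candidate)        # pay for a true evaluation
            archive.append((candidate, f))
            if f < best_f:
                best, best_f = candidate, f
        return best, best_f

    if __name__ == "__main__":
        point, fitness = surrogate_assisted_search()
        print("best fitness found:", round(fitness, 4))
    ```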

    GP vs GI: if you can't beat them, join them

    Get PDF
    Genetic Programming (GP) has been criticized for targeting irrelevant problems [12], a criticism that also applies to the wider machine learning community [11], which has become detached from the source of the data it uses to drive the field forward. Recently, however, Genetic Improvement (GI) has provided a fresh perspective on automated programming. In contrast to GP, GI begins with existing software, and therefore immediately has the aim of tackling real software. As evolution is the main approach GI uses to manipulate programs, this connection with real software should persuade the GP community to confront the issues around what it originally set out to tackle, i.e. evolving real software.

    Evals is not enough: why we should report wall-clock time

    Get PDF
    Have you ever noticed that your car never achieves the fuel economy claimed by the manufacturer? Does this seem unfair? Unscientific? Would you like the same situation to occur in Genetic Improvement? Comparison will always be difficult [9]; however, guidelines have been discussed [3, 5, 4]. When comparing two GP [8] approaches, counting the number of evaluations of the fitness function is reasonably fair: you are comparing the GP systems, not how well they are implemented or how fast the implementation language is. However, the situation with GI [6, 1] is unique. With GI we typically compare systems applied to the same application written in the same language (i.e. a GI system targeted at Java may not even be applicable to C), so wall-clock time becomes more relevant. This paper therefore asks whether reporting the number of evaluations is enough, or whether wall-clock time is also important, particularly in the context of GI. It argues that reporting time is even more important when doing GI than in traditional GP.
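    A minimal sketch of the kind of reporting argued for here (Python; CountingFitness and the random-search loop are illustrative assumptions, not from the paper): wrap the fitness function to count evaluations and time the whole run, so both figures can be reported side by side.

    ```python
    import random
    import time

    # Wrapper that counts how many times the fitness function is called.
    class CountingFitness:
        def __init__(self, fn):
            self.fn = fn
            self.evaluations = 0

        def __call__(self, individual):
            self.evaluations += 1
            return self.fn(individual)

    # Placeholder search: any metaheuristic could sit here instead.
    def random_search(fitness, dim=20, budget=10_000):
        best_f = float("inf")
        for _ in range(budget):
            candidate = [random.uniform(-5, 5) for _ in range(dim)]
            f = fitness(candidate)
            if f < best_f:
                best_f = f
        return best_f

    if __name__ == "__main__":
        fitness = CountingFitness(lambda x: sum(v * v for v in x))
        start = time.perf_counter()
        best = random_search(fitness)
        elapsed = time.perf_counter() - start
        print(f"best fitness       : {best:.3f}")
        print(f"fitness evaluations: {fitness.evaluations}")
        print(f"wall-clock seconds : {elapsed:.2f}")
    ```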

    Relating Training Instances to Automatic Design of Algorithms for Bin Packing via Features (Detailed Experiments and Results)

    Get PDF
    Automatic Design of Algorithms (ADA) shifts the burden of algorithm choice and design from developer to machine. Constructing an appropriate solver from a set of problem instances becomes a machine learning problem, with instances as training data. An efficient solver is trained for unseen problem instances with similar characteristics to those in the training set. However, this paper reveals that, as with classification and regression, for ADA not all training sets are equally valuable. We apply a typical genetic programming ADA approach for bin packing problems to several new and existing public benchmark sets. Algorithms trained on some sets are general and apply well to most others, whereas some training sets result in highly specialised algorithms that do not generalise. We relate these findings to features (simple metrics) of instances. Using instance sets with narrowly-distributed features for training results in highly specialised algorithms, whereas those with well-spread features result in very general algorithms. We show that variance in certain features has a strong correlation with the generality of the trained policies. Our results provide further grounding for recent work using features to predict algorithm performance, and show the suitability of particular instance sets for training in ADA for bin packing. The data sets, including all computed features, the evolved policies and their performances, and the visualisations for all feature sets, are available from http://hdl.handle.net/11667/108. Work funded by UK EPSRC [grants EP/N002849/1, EP/J017515/1]. Results obtained using the EPSRC-funded ARCHIE-WeSt HPC [EPSRC grant EP/K000586/1].
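    To illustrate what instance features and their spread might look like in practice, here is a minimal Python sketch (the feature names and the spread measure are illustrative assumptions, not the paper's exact metrics):

    ```python
    import statistics

    # Hypothetical representation: a bin packing instance is a bin capacity
    # plus a list of item sizes.
    def instance_features(capacity, items):
        ratios = [size / capacity for size in items]
        return {
            "num_items": len(items),
            "mean_item_ratio": statistics.mean(ratios),
            "item_ratio_stdev": statistics.pstdev(ratios),
            "large_item_fraction": sum(r > 0.5 for r in ratios) / len(ratios),
        }

    # Spread of one feature across a training set: a wide spread is the kind
    # of property linked above to more general evolved policies.
    def feature_spread(instances, feature):
        values = [instance_features(c, items)[feature] for c, items in instances]
        return max(values) - min(values)

    if __name__ == "__main__":
        narrow_set = [(100, [20, 22, 21, 23, 20, 22]) for _ in range(5)]
        varied_set = [(100, [5, 60, 33, 80, 12, 47]),
                      (150, [10, 10, 140, 75, 20, 90]),
                      (100, [95, 90, 85, 5, 10, 15])]
        for name, s in [("narrow", narrow_set), ("varied", varied_set)]:
            print(name, "spread of mean_item_ratio:",
                  round(feature_spread(s, "mean_item_ratio"), 3))
    ```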

    Computers will soon be able to fix themselves – are IT departments for the chop?

    Get PDF
    First paragraph: Robots and AI are replacing workers at an alarming rate, from simple manual tasks to making complex legal decisions and medical diagnoses. But the AI itself, and indeed most software, is still largely programmed by humans. Yet there are signs that this might be changing. Several programming tools are emerging which help to automate software testing, one of which we have been developing ourselves. The prospects look exciting, but they raise questions about how far this will encroach on the profession. Could we be looking at a world of Terminator-like software writers who consign their human counterparts to the dole queue? We computer programmers devote an unholy amount of time to testing software and fixing bugs. It’s costly, time-consuming and fiddly – yet it’s vital if you want to bring high-quality software to market.
    • …