5 research outputs found

    The Seamless Peer and Cloud Evolution Framework

    Evolutionary algorithms are increasingly being applied to problems that are too computationally expensive to run on a single personal computer, due to costly fitness function evaluations and/or large numbers of fitness evaluations. Here we introduce the Seamless Peer And Cloud Evolution (SPACE) framework, which leverages bleeding-edge web technologies to make the computational resources necessary for running large-scale evolutionary experiments available to amateur and professional researchers alike, in a scalable and cost-effective manner, directly from their web browsers. The SPACE framework accomplishes this by distributing fitness evaluations across a heterogeneous pool of cloud compute nodes and peer computers. As a proof of concept, the framework has been attached to the RoboGen open-source platform for the co-evolution of robot bodies and brains; importantly, it has been built in a modular fashion so that it can easily be coupled with other evolutionary computation systems.
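    The core idea of distributing fitness evaluations can be pictured as a master handing each genome to whichever compute node is free. Below is a minimal single-machine sketch, not the actual SPACE implementation: the `fitness` function and the local thread pool are hypothetical stand-ins for expensive simulations running on remote peer and cloud nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(genome):
    # Hypothetical stand-in for an expensive evaluation (e.g. a robot
    # simulation); here just the sum of squared genes.
    return sum(g * g for g in genome)

def evaluate_population(population, workers=4):
    # The master farms each evaluation out to a free worker, mirroring how
    # a framework like SPACE spreads evaluations over a heterogeneous pool
    # of browsers and cloud nodes.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

print(evaluate_population([[1, 2], [3, 4], [0, 0]]))  # [5, 25, 0]
```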

    Genet: A Quickly Scalable Fat-Tree Overlay for Personal Volunteer Computing using WebRTC

    WebRTC enables browsers to exchange data directly, but the number of possible concurrent connections to a single source is limited. We overcome the limitation by organizing participants in a fat-tree overlay: when the maximum number of connections of a tree node is reached, the new participants connect to the node's children. Our design quickly scales when a large number of participants join in a short amount of time, by relying on a novel scheme that only requires local information to route connection messages: the destination is derived from the hash value of the combined identifiers of the message's source and of the node that is holding the message. The scheme provides deterministic routing of a sequence of connection messages from a single source and probabilistic balancing of newer connections among the leaves. We show that this design puts at least 83% of nodes at the same depth as a deterministic algorithm, can connect a thousand browser windows in 21-55 seconds in a local network, and can be deployed for volunteer computing to tap into 320 cores in less than 30 seconds on a local network to increase the total throughput on the Collatz application by two orders of magnitude compared to a single core.
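    The local routing rule described above can be sketched in a few lines (identifier format and hash choice are assumptions for illustration, not Genet's actual code): each node picks the next hop by hashing the message source's identifier together with its own, so a sequence of connection messages from one source follows a deterministic path, while different sources spread roughly uniformly across a node's children.

```python
import hashlib

def next_hop(holder_id, source_id, children):
    # Derive the destination from the hash of the combined identifiers of
    # the message's source and of the node currently holding the message.
    # Only local information (this node's list of children) is needed.
    digest = hashlib.sha256((source_id + "/" + holder_id).encode()).digest()
    return children[int.from_bytes(digest[:4], "big") % len(children)]

children = ["child-0", "child-1", "child-2"]
hop = next_hop("node-7", "peer-42", children)
# Deterministic: the same (source, holder) pair always routes the same way.
assert hop == next_hop("node-7", "peer-42", children)
```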

    Pando: Personal Volunteer Computing in Browsers

    The large penetration and continued growth in ownership of personal electronic devices represents a freely available and largely untapped source of computing power. To leverage them, we present Pando, a new volunteer computing tool based on a declarative concurrent programming model and implemented using JavaScript, WebRTC, and WebSockets. This tool enables a dynamically varying number of failure-prone personal devices contributed by volunteers to parallelize the application of a function on a stream of values, by using the devices' browsers. We show that Pando can provide throughput improvements compared to a single personal device, on a variety of compute-bound applications including animation rendering and image processing. We also show the flexibility of our approach by deploying Pando on personal devices connected over a local network, on Grid5000, a France-wide computing grid in a virtual private network, and on seven PlanetLab nodes distributed in a wide area network over Europe.
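    The programming model can be pictured as a fault-tolerant parallel map: values from a stream are handed to volunteer devices, and a value lost to a failing device is simply re-queued. A minimal single-process sketch follows, not Pando's JavaScript/WebRTC implementation, and it assumes failures are transient (a value whose function always raises would be retried forever).

```python
import queue
import threading

def parallel_map(f, values, workers=3):
    # Apply f to every value using a pool of workers; if a worker fails on
    # a value, the value is handed back to the queue (transient failures).
    tasks = queue.Queue()
    for item in enumerate(values):
        tasks.put(item)
    results = {}

    def worker():
        while True:
            try:
                i, v = tasks.get_nowait()
            except queue.Empty:
                return  # stream drained: this worker retires
            try:
                results[i] = f(v)
            except Exception:
                tasks.put((i, v))  # device "failed": re-queue the value

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Results arrive out of order; reassemble them by index.
    return [results[i] for i in range(len(values))]

print(parallel_map(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```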

    Value-Based Manufacturing Optimisation in Serverless Clouds for Industry 4.0

    There is increasing impetus towards Industry 4.0, a recently proposed roadmap for process automation across a broad spectrum of manufacturing industries. The proposed approach uses Evolutionary Computation to optimise real-world metrics. Features of the proposed approach are that it is generic (i.e. applicable across multiple problem domains) and decentralised (i.e. hosted remotely from the physical system upon which it operates). In particular, by virtue of being serverless, the project goal is that computation can be performed `just in time' in a scalable fashion. We describe a case study for value-based optimisation, applicable to a wide range of manufacturing processes. In particular, value is expressed in terms of Overall Equipment Effectiveness (OEE), grounded in monetary units. We propose a novel online stopping condition that takes into account the predicted utility of further computational effort. We apply this method to scheduling problems in the (max,+) algebra, and compare against a baseline stopping criterion with no prediction mechanism. Near-optimal profit is obtained by the proposed approach across multiple problem instances.
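    For readers unfamiliar with the (max,+) algebra: addition is replaced by max and multiplication by +, so a matrix "product" propagates completion-time constraints through a schedule. A small illustrative sketch follows; the matrix values are made-up numbers, not the paper's case study.

```python
def maxplus_matmul(A, B):
    # (max,+) matrix product: entry (i, j) = max over k of A[i][k] + B[k][j].
    # In scheduling, entries are typically processing or completion times,
    # so the product propagates earliest-completion constraints.
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# One step of a timed event system, x(k+1) = A (x) x(k):
A = [[2, 5], [3, 3]]
x = [[0], [0]]
print(maxplus_matmul(A, x))  # [[5], [3]]
```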

    Parallel genetic algorithms in the cloud

    2015 - 2016
    Genetic Algorithms (GAs) are a metaheuristic search technique belonging to the class of Evolutionary Algorithms (EAs). They have been proven effective in addressing several problems in many fields, but they also suffer from scalability issues that may prevent them from finding valid applications to real-world problems. Thus, the aim of providing highly scalable GA-based solutions, together with the reduced costs of parallel architectures, motivates the research on Parallel Genetic Algorithms (PGAs). Cloud computing may be a valid option for parallelisation, since there is no need to own the physical hardware, which can be purchased from cloud providers for the desired time, quantity and quality. There are different cloud technologies and approaches employable for this purpose, but they all introduce communication overhead. Thus, one might wonder if, and possibly when, specific approaches, environments and models show better performance than sequential versions in terms of execution time and resource usage. This thesis investigates if and when GAs can scale in the cloud using specific approaches. Firstly, Hadoop MapReduce is exploited to design and develop an open-source framework, i.e., elephant56, that reduces the effort in developing and speeds up GAs using three parallel models. The performance of the framework is then evaluated through an empirical study. Secondly, software containers and message queues are employed to develop, deploy and execute PGAs in the cloud, and the devised system is evaluated with an empirical study on a commercial cloud provider. Finally, cloud technologies are also explored for the parallelisation of other EAs, designing and developing cCube, a collaborative microservices architecture for machine learning problems. [edited by author]
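    The global (master-slave) parallel model that the thesis builds on Hadoop MapReduce can be sketched in two phases; the function names below are illustrative, not elephant56's API. Mappers evaluate individuals independently (the embarrassingly parallel step), and a reducer collects the results and selects survivors.

```python
def map_phase(population, fitness):
    # Map step: each individual's fitness is computed independently; this
    # is the part MapReduce distributes across nodes.
    return [(individual, fitness(individual)) for individual in population]

def reduce_phase(evaluated, k):
    # Reduce step: collect (individual, fitness) pairs and keep the k best
    # (here lower fitness is better) as parents for the next generation.
    return [ind for ind, fit in sorted(evaluated, key=lambda p: p[1])[:k]]

population = [[3, 1], [0, 2], [5, 5]]
pairs = map_phase(population, fitness=sum)
print(reduce_phase(pairs, k=2))  # [[0, 2], [3, 1]]
```

In the real framework the map phase runs on separate Hadoop workers; this sketch only shows the data flow between the two phases.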