Addressing Collective Computations Efficiency: Towards a Platform-level Reinforcement Learning Approach

Abstract

Aggregate Computing is a macro-level approach for programming collective intelligence and self-organisation in distributed systems. In this paradigm, system behaviour unfolds as a combination of a system-wide program, functionally manipulating distributed data structures called computational fields, and a distributed protocol where devices operate in asynchronous rounds comprising sense-compute-interact steps. Interestingly, there is considerable flexibility in how aggregate systems can actually execute while preserving the desired functionality. The ideal place for making choices about execution is the aggregate computing platform (or middleware), which can be engineered to promote efficiency and other non-functional goals. In this work, we explore the possibility of applying Reinforcement Learning at the platform level to optimise aspects of a collective computation while achieving coherent functional goals. This idea is substantiated through synthetic experiments on data propagation and collection, where we show how Q-Learning can reduce the power consumption of aggregate computations.
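
As a rough illustration of what such platform-level Q-Learning might look like, the sketch below shows a tabular Q-Learning agent that a scheduler could run per device to pick the interval between execution rounds, trading the power cost of frequent rounds against the staleness of the computational field. All names and the state/action/reward encoding (RoundScheduler, ROUND_INTERVALS, value_changed, the cost weights) are hypothetical illustrations, not the formulation used in the paper.

    # Illustrative sketch only: tabular Q-Learning for choosing a device's next round interval.
    import random
    from collections import defaultdict

    ROUND_INTERVALS = [0.1, 0.5, 1.0, 2.0]  # candidate sleep times between rounds (seconds)

    class RoundScheduler:
        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
            # Q-table: state -> one value per candidate interval
            self.q = defaultdict(lambda: [0.0] * len(ROUND_INTERVALS))
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def choose(self, state):
            # epsilon-greedy selection of the index of the next round interval
            if random.random() < self.epsilon:
                return random.randrange(len(ROUND_INTERVALS))
            qs = self.q[state]
            return qs.index(max(qs))

        def update(self, state, action, reward, next_state):
            # standard one-step Q-Learning update
            td_target = reward + self.gamma * max(self.q[next_state])
            self.q[state][action] += self.alpha * (td_target - self.q[state][action])

    def reward(value_changed, interval):
        # Penalise energy spent on frequent rounds, and also penalise sleeping through
        # rounds in which the local field value would have changed (staleness).
        power_cost = 1.0 / interval
        staleness_cost = 5.0 if value_changed else 0.0
        return -(power_cost + staleness_cost)

    # Usage sketch (per device, per round):
    #   a = scheduler.choose(s); sleep ROUND_INTERVALS[a]; run the aggregate round;
    #   scheduler.update(s, a, reward(changed, ROUND_INTERVALS[a]), s_next)

Under these assumptions, the agent learns to lengthen the round interval when the field is stable (saving power) and to shorten it when the computation is still propagating changes.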
