Kinetically Stable Task Assignment for Networks of Microservers
Abstract — This paper studies task assignment in a network of resource-constrained computing platforms (called microservers). A task is an abstraction of a computational agent or data that is hosted by the microservers. For example, in an object-tracking scenario, a task represents a mobile tracking agent, such as a vehicle location update computation, that runs on microservers which can receive sensor data pertaining to the object of interest. Due to object motion, the microservers that can observe a particular object change over time, and there is overhead involved in migrating tasks among microservers. Furthermore, communication, processing, and memory constraints allow a microserver to serve only a limited number of objects at the same time. Our overall goal is to assign tasks to microservers so as to minimize the number of migrations, and thus be kinetically stable, while guaranteeing that as many tasks as possible are monitored at all times. When the task trajectories are known in advance, we show that this problem is NP-complete (even over just two time steps), has an integrality gap of at least 2, and can be solved optimally in polynomial time if we allow tasks to be assigned fractionally. When only probabilistic information about future movement of the tasks is known, we propose two algorithms: a multicommodity-flow-based algorithm and a maximum-matching algorithm. We use simulations to compare the performance of these algorithms against the optimum task allocation strategy.
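The maximum-matching idea in the abstract can be illustrated with a short sketch: assign each task to a microserver that can observe it, respect per-server capacities via augmenting paths, and try a task's current host first so that existing assignments survive when capacity allows. This is an illustrative reconstruction, not the paper's actual algorithm; all names (`assign_tasks`, `visible`, `capacity`, `current`) are assumptions.

```python
def assign_tasks(visible, capacity, current=None):
    """Assign each task to one microserver that can observe it, without
    exceeding server capacities. A task's current host is tried first,
    so an existing assignment is kept (no migration) whenever possible."""
    current = current or {}
    hosted = {s: [] for s in capacity}   # server -> tasks placed on it
    assignment = {}                      # task -> server

    def place(task, seen):
        # Prefer the current host (kinetic stability), then the rest.
        order = sorted(visible[task], key=lambda s: s != current.get(task))
        for s in order:
            if s in seen:
                continue
            seen.add(s)
            if len(hosted[s]) < capacity[s]:
                hosted[s].append(task)
                assignment[task] = s
                return True
            # Server full: try to re-route one of its tasks elsewhere
            # (an augmenting path, as in bipartite matching).
            for other in list(hosted[s]):
                hosted[s].remove(other)
                if place(other, seen):
                    hosted[s].append(task)
                    assignment[task] = s
                    return True
                hosted[s].append(other)
        return False

    for task in visible:
        place(task, set())
    return assignment
```

For example, if tasks `t1` and `t2` both become visible to a full server `A` while `t2` can also reach `B`, the augmenting step moves `t2` to `B` so both tasks stay monitored, at the cost of one migration.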
Bandwidth-aware distributed ad-hoc grids in deployed wireless sensor networks
Nowadays, cost-effective sensor networks can be deployed as a result of a plethora of recent engineering
advances in wireless technology, storage miniaturisation, consolidated microprocessor design, and
sensing technologies.
Whilst sensor systems are becoming relatively cheap to deploy, two issues arise in their typical
realisations: (i) the types of low-cost sensors often employed are capable of limited resolution and tend
to produce noisy data; (ii) network bandwidths are relatively low and the energetic costs of using the
radio to communicate are relatively high. To reduce the transmission of unnecessary data, there is a
strong argument for performing local computation. However, this can require greater computational
capacity than is available on a single low-power processor. Traditionally, such a problem has been
addressed by using load balancing: fragmenting processes into tasks and distributing them amongst the
least loaded nodes. However, the act of distributing tasks, and any subsequent communication between
them, imposes a geographically defined load on the network. Because of the shared broadcast nature of
the radio channels and MAC layers in common use, any communication within an area will be slowed by
additional traffic, delaying the computation and reporting that relied on the availability of the network.
In this dissertation, we explore the tradeoff between the distribution of computation, needed to enhance
the computational abilities of networks of resource-constrained nodes, and the creation of network
traffic that results from that distribution. We devise an application-independent distribution paradigm and
a set of load distribution algorithms to allow computationally intensive applications to be collaboratively
computed on resource-constrained devices. Then, we empirically investigate the effects of network
traffic information on distribution performance. We thus devise bandwidth-aware task-offload mechanisms
that combine nodes' computational capabilities with local network conditions, and we investigate
the impact of making informed offload decisions on system performance.
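A bandwidth-aware offload decision of the kind described can be sketched as follows: estimate, for each candidate neighbour, the time to ship the task's input data over the currently observed link bandwidth plus the remote compute time, and offload only when that beats running locally. The scoring model and all names (`TaskCost`, `choose_offload_target`, the `neighbours` fields) are illustrative assumptions, not the dissertation's actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class TaskCost:
    cycles: float        # estimated CPU cycles the task needs
    data_bytes: float    # input data shipped to a remote node if offloaded

def choose_offload_target(task, neighbours, local_cps):
    """Return (node, estimated_seconds): run locally, or offload to the
    neighbour minimising transfer time + remote compute time. A congested
    link inflates the transfer term, making local execution win."""
    best_node, best_time = 'local', task.cycles / local_cps
    for name, info in neighbours.items():
        transfer = task.data_bytes * 8 / info['bandwidth_bps']
        compute = task.cycles / info['free_cps']
        if transfer + compute < best_time:
            best_node, best_time = name, transfer + compute
    return best_node, best_time
```

The design point this captures is the one the abstract argues for: an offload decision based on compute capacity alone would pick the fastest free neighbour, whereas folding in observed bandwidth can reverse that choice when the shared radio channel is busy.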
The highly deployment-specific nature of radio communication means that simulations that are
capable of producing validated, high-quality, results are extremely hard to construct. Consequently, to
produce meaningful results, our experiments have used empirical analysis based on a network of motes
located at UCL, running a variety of I/O-bound, CPU-bound and mixed tasks. Using this setup, we have
established that even relatively simple load sharing algorithms can improve performance over a range of
different artificially generated scenarios, with more or less timely contextual information. In addition,
we have taken a realistic application, based on location estimation, and implemented it across the same
network, with results that support the conclusions drawn from the artificially generated traffic.