Accurate and energy-efficient classification with spiking random neural network: corrected and expanded version
Artificial Neural Network (ANN) based techniques have dominated state-of-the-art results in most problems related to computer vision, audio recognition, and natural language processing in the past few years, resulting in strong industrial adoption by all leading technology companies worldwide. One of the major obstacles that have historically delayed large-scale adoption of ANNs is the huge computational and power cost associated with training and testing (deploying) them. Meanwhile, Neuromorphic Computing platforms have recently achieved remarkable performance running more bio-realistic Spiking Neural Networks at high throughput and very low power consumption, making them a natural alternative to ANNs. Here, we propose using the Random Neural Network (RNN), a spiking neural network with appealing theoretical and practical properties, as a general-purpose classifier that can match the classification power of ANNs on a number of tasks while enjoying all the features of a spiking neural network. This is demonstrated on a number of real-world classification datasets.
RL-QN: A Reinforcement Learning Framework for Optimal Control of Queueing Systems
With the rapid advance of information technology, network systems have become
increasingly complex and hence the underlying system dynamics are often unknown
or difficult to characterize. Finding a good network control policy is of
significant importance to achieve desirable network performance (e.g., high
throughput or low delay). In this work, we consider using model-based
reinforcement learning (RL) to learn the optimal control policy for queueing
networks so that the average job delay (or equivalently the average queue
backlog) is minimized. Traditional approaches in RL, however, cannot handle the
unbounded state spaces of the network control problem. To overcome this
difficulty, we propose a new algorithm, called Reinforcement Learning for
Queueing Networks (RL-QN), which applies model-based RL methods over a finite
subset of the state space, while applying a known stabilizing policy for the
rest of the states. We establish that the average queue backlog under RL-QN
with an appropriately constructed subset can be arbitrarily close to the
optimal result. We evaluate RL-QN in dynamic server allocation, routing and
switching problems. Simulation results show that RL-QN minimizes the average
queue backlog effectively.
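The core idea, learning only over a finite subset of the unbounded state space and falling back to a known stabilizing policy elsewhere, can be sketched for a single queue. Everything here (the threshold, the action names, the tabular Q-values) is illustrative, not from the paper:

```python
# Hypothetical single-queue sketch of the RL-QN idea: a learned policy is
# used on states inside a finite subset (backlog <= THRESHOLD), and a known
# stabilizing policy is used on all remaining states.

THRESHOLD = 20  # states with backlog <= THRESHOLD form the finite subset

def stabilizing_policy(backlog):
    """Known stabilizing policy applied outside the learned region."""
    return "serve_fast"

def learned_policy(backlog, q_table):
    """Pick the action with the lowest learned cost in this state."""
    return min(q_table[backlog], key=q_table[backlog].get)

def rl_qn_action(backlog, q_table):
    """Dispatch to the learned policy inside the subset, else stabilize."""
    if backlog <= THRESHOLD:
        return learned_policy(backlog, q_table)
    return stabilizing_policy(backlog)
```

The paper's result is that, with an appropriately constructed subset, the average backlog under such a hybrid policy can be made arbitrarily close to optimal.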
Performance improvement for mobile ad hoc cognitive packets network
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. The thesis focuses on quality of service (QoS) improvement using a per-packet power control
algorithm in Ad Hoc Cognitive Packet Networks (AHCPN). A power control mechanism is created as a
network-assisted function of ad hoc cognitive packet-based routing, aiming to reduce energy
consumption in nodes while satisfying QoS requirements. The suggested models facilitate transmission power
adjustments while also taking into account their effects on network performance.
The thesis concentrates on three main contributions. Firstly, a power control algorithm, namely the
adaptive Distributed Power management algorithm (DISPOW), was adopted. The performance of DISPOW
was compared with existing mechanisms, and the results showed improvements of 27, 13, 9, and 40 percent
in terms of delay, throughput, packet loss, and energy consumption respectively.
Secondly, the DISPOW algorithm was enhanced into a Link Expiration Time Aware Distributed
Power management algorithm (LETPOW). This approach periodically checks connectivity, transmission
power, interference level, routing overhead, and node mobility in the AHCPN. The results show
that the LETPOW algorithm improves system performance, with further improvement
over DISPOW of 30, 25, 30, and 42 percent in terms of delay, packet loss ratio, path length, and energy
consumption respectively.
Finally, a Hybrid Power Control Algorithm (HLPCA) is presented, which combines the Link Expiration
Time Aware Distributed Power management algorithm (LETPOW) and the Load Power Control
Algorithm (LOADPOW) to deal with cross-layer power control applied to transmitting information
across the various intermediate layers. LOADPOW emphasises the concepts of transmission power,
Received Signal Strength Indication (RSSI), and the suitable distance between the receiver and the
sender. The proposed algorithm outperforms DISPOW and LETPOW by 31, 15, 35, 34, and 44 percent in
terms of delay, throughput, packet loss, path length, and energy consumption respectively. From
this work, it can be concluded that an optimized power control algorithm applied to an ad hoc cognitive
packet network results in significant improvement in terms of energy consumption and QoS.
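An RSSI-driven per-packet power adjustment of the kind LOADPOW builds on can be sketched very simply. The function name, target RSSI, and margin below are hypothetical stand-ins, not values from the thesis:

```python
# Illustrative per-packet transmit-power adjustment based on the RSSI
# reported by the receiver. All constants and names are hypothetical.

def required_tx_power_dbm(rssi_dbm, current_tx_dbm,
                          target_rssi_dbm=-70.0, margin_db=3.0):
    """Shift transmit power so the receiver sees roughly the target RSSI
    plus a small fading margin (all quantities in dB/dBm)."""
    return current_tx_dbm + (target_rssi_dbm + margin_db - rssi_dbm)
```

If the receiver reports a weaker signal than the target, the sender raises its power by the shortfall; if the link is stronger than needed, power (and hence energy consumption and interference) is reduced.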
Traffic and task allocation in networks and the cloud
Communication services such as telephony, broadband and TV are increasingly migrating into Internet Protocol (IP) based networks because of the consolidation of telephone and data networks. Meanwhile, the increasingly wide application of Cloud Computing enables the accommodation of tens of thousands of applications from the general public or enterprise users which make use of Cloud services on-demand through IP networks such as the Internet. Real-Time services over IP (RTIP) have also become increasingly significant due to the convergence of network services, and the real-time needs of the Internet of Things (IoT) will strengthen this trend. Such Real-Time applications have strict Quality of Service (QoS) constraints, posing a major challenge for IP networks. The Cognitive Packet Network (CPN) has been designed as a QoS-driven protocol that addresses user-oriented QoS demands by adaptively routing packets based on online sensing and measurement. Thus in this thesis we first describe our design for a novel ``Real-Time (RT) traffic over CPN'' protocol which uses QoS goals that match the needs of voice packet delivery in the presence of other background traffic under varied traffic conditions; we present its experimental evaluation via measurements of key QoS metrics such as packet delay, delay variation (jitter) and packet loss ratio. Pursuing our investigation of packet routing in the Internet, we then propose a novel Big Data and Machine Learning approach for real-time Internet-scale Route Optimisation based on Quality of Service using an overlay network, and evaluate its performance. Based on the collection of data sampled every 2 minutes over a large number of source-destination pairs, we observe that intercontinental Internet Protocol (IP) paths are far from optimal with respect to metrics such as end-to-end round-trip delay.
On the other hand, our machine learning based overlay network routing scheme exploits large-scale data collected from communicating node pairs to select overlay paths, while it uses IP between neighbouring overlay nodes. We report measurements from a week-long experiment with several million data points, showing substantially better end-to-end QoS than is observed with pure IP routing. Pursuing the machine learning approach, we then address the challenging problem of dispatching incoming tasks to servers in Cloud systems so as to offer the best QoS and reliable job execution; an experimental system (the Task Allocation Platform) that we have developed is presented and used to compare several task allocation schemes, including a model driven algorithm, a reinforcement learning based scheme, and a ``sensible'' allocation algorithm that assigns tasks to sub-systems that are observed to provide lower response time. These schemes are compared via measurements both among themselves and against a standard round-robin scheduler, with two architectures (with homogeneous and heterogeneous hosts having different processing capacities), and the conditions under which the different schemes offer better QoS are discussed. Since Cloud systems include both locally based servers at user premises and remote servers and multiple Clouds that can be reached over the Internet, we also describe a smart distributed system that combines local and remote Cloud facilities, allocating tasks dynamically to the service that offers the best overall QoS, and it includes a routing overlay which minimizes network delay for data transfer between Clouds. Internet-scale experiments that we report exhibit the effectiveness of our approach in adaptively distributing workload across multiple Clouds.
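The ``sensible'' allocation idea, sending each task to the sub-system currently observed to offer the lowest response time, can be sketched in a few lines. The server names and smoothing factor are illustrative, not from the thesis:

```python
# Sketch of a "sensible" allocator: keep a smoothed response-time estimate
# per server and dispatch each new task to the server with the lowest one.

def update_estimate(estimates, server, measured_ms, alpha=0.2):
    """Exponentially smoothed response-time estimate for one server."""
    prev = estimates.get(server, measured_ms)
    estimates[server] = (1 - alpha) * prev + alpha * measured_ms
    return estimates

def sensible_allocate(estimates):
    """Pick the server with the smallest estimated response time."""
    return min(estimates, key=estimates.get)
```

Unlike round-robin, such a scheme adapts automatically when hosts have different processing capacities, which is why the heterogeneous-host comparisons in the thesis are of interest.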
Big Data for Autonomic Intercontinental Overlays
This paper uses Big Data and Machine Learning for the real-time management of Internet-scale Quality-of-Service Route Optimisation with an overlay network. Based on the collection of data sampled every 2 minutes over a large number of source-destination pairs, we show that intercontinental Internet Protocol (IP) paths are far from optimal with respect to Quality of Service (QoS) metrics such as end-to-end round-trip delay. We therefore develop a machine learning based scheme that exploits large-scale data collected from communicating node pairs in a multi-hop overlay network that uses IP between the overlay nodes, and selects paths that provide substantially better QoS than IP. Inspired by the Cognitive Packet Network protocol, it uses Random Neural Networks with Reinforcement Learning, based on the massive data that is collected, to select intermediate overlay hops. The routing scheme is illustrated on a 20-node intercontinental overlay network that collects some 2 × 10^6 measurements per week and makes scalable distributed routing decisions. Experimental results show that this approach improves QoS significantly and efficiently.
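The reinforcement step, rewarding overlay paths in proportion to measured QoS and routing over the best-scoring one, can be sketched minimally. The exponential update below is a simplified stand-in for the Random Neural Network used in the paper; the path names and learning rate are illustrative:

```python
# Minimal sketch of reinforcement-driven overlay path selection, assuming
# reward = 1 / measured round-trip time, so lower-delay paths score higher.

def reinforce(scores, path, rtt_ms, alpha=0.3):
    """Blend the new reward into the path's running score."""
    reward = 1.0 / rtt_ms
    scores[path] = (1 - alpha) * scores.get(path, reward) + alpha * reward
    return scores

def best_path(scores):
    """Route over the path with the highest learned score."""
    return max(scores, key=scores.get)
```

As fresh round-trip measurements arrive every few minutes, the scores track current network conditions and the selected intermediate overlay hops adapt accordingly.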