32 research outputs found

    Improving energy consumption of commercial building with IoT and machine learning

    Get PDF

    Cloud Enabled Emergency Navigation Using Faster-than-real-time Simulation

    Full text link
    State-of-the-art emergency navigation approaches are designed to evacuate civilians during a disaster based on real-time decisions using a pre-defined algorithm and live sensory data. Hence, casualties caused by poor decisions and guidance only become apparent at the end of the evacuation process and cannot then be remedied. Previous research shows that the performance of routing algorithms for evacuation purposes is sensitive to the initial distribution of evacuees, the occupancy levels, and the type of disaster as well as its location. Thus an algorithm that performs well in one scenario may achieve poor results in another. This problem is especially serious in heuristic-based routing algorithms for evacuees, where results are affected by the choice of certain parameters. Therefore, this paper proposes a simulation-based evacuee routing algorithm that optimises evacuation by making use of the high computational power of cloud servers. Rather than guiding evacuees with a predetermined routing algorithm, a robust Cognitive Packet Network based algorithm is first evaluated via a cloud-based simulator in a faster-than-real-time manner, and any "simulated casualties" are then re-routed using a variant of Dijkstra's algorithm to obtain new safe paths to exits. This approach can be iterated as long as corrective action is still possible. Comment: Submitted to PerNEM'15 for review
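A minimal sketch of the re-routing step described in this abstract: simulated casualties are assigned new paths by a Dijkstra variant whose edge costs penalise hazardous areas. The graph layout, hazard values, and cost function below are illustrative assumptions, not the paper's actual model.

```python
import heapq

def dijkstra_safe_path(graph, hazard, source, exits):
    """Shortest path from `source` to any exit, where edge cost is
    physical length inflated by the hazard level of the target node."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in exits:
            # Reconstruct the path back to the source.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, length in graph[node]:
            cost = d + length * (1.0 + hazard.get(nbr, 0.0))
            if cost < dist.get(nbr, float("inf")):
                dist[nbr] = cost
                prev[nbr] = node
                heapq.heappush(heap, (cost, nbr))
    return None  # no safe path found

# Toy corridor graph: node -> [(neighbour, length), ...]
graph = {
    "A": [("B", 1.0)], "B": [("A", 1.0), ("C", 1.0), ("D", 2.0)],
    "C": [("B", 1.0), ("EXIT", 1.0)], "D": [("B", 2.0), ("EXIT", 1.0)],
    "EXIT": [],
}
hazard = {"C": 5.0}  # simulated fire near C makes that route expensive
print(dijkstra_safe_path(graph, hazard, "A", {"EXIT"}))  # ['A', 'B', 'D', 'EXIT']
```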

    Adaptive Dispatching of Tasks in the Cloud

    Full text link
    The increasingly wide application of Cloud Computing enables the consolidation of tens of thousands of applications in shared infrastructures. Thus, meeting the quality-of-service requirements of so many diverse applications in such shared resource environments has become a real challenge, especially since the characteristics and workload of applications differ widely and may change over time. This paper presents an experimental system that can exploit a variety of online quality-of-service aware adaptive task allocation schemes, and three such schemes are designed and compared: first, a measurement-driven algorithm that uses reinforcement learning; second, a "sensible" allocation algorithm that assigns jobs to sub-systems observed to provide a lower response time; and third, an algorithm that splits the job arrival stream into sub-streams at rates computed from the hosts' processing capabilities. All of these schemes are compared via measurements among themselves and with a simple round-robin scheduler, on two experimental test-beds with homogeneous and heterogeneous hosts having different processing capacities. Comment: 10 pages, 9 figures
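A minimal sketch of the "sensible" scheme as described: each incoming job goes to the host with the lowest recently observed response time, tracked here with an exponential moving average. The host names, smoothing factor, and simulated service times are illustrative assumptions.

```python
import random

class SensibleDispatcher:
    def __init__(self, hosts, alpha=0.2):
        self.est = {h: 0.0 for h in hosts}  # smoothed response-time estimate
        self.alpha = alpha

    def pick_host(self):
        # Choose the host currently believed to respond fastest.
        return min(self.est, key=self.est.get)

    def record(self, host, observed_response_time):
        # Blend the new measurement into the running estimate.
        a = self.alpha
        self.est[host] = (1 - a) * self.est[host] + a * observed_response_time

dispatcher = SensibleDispatcher(["host-1", "host-2", "host-3"])
for job in range(10):
    h = dispatcher.pick_host()
    rt = random.expovariate(2.0 if h == "host-1" else 1.0)  # host-1 serves faster
    dispatcher.record(h, rt)
    print(f"job {job} -> {h} ({rt:.2f}s)")
```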

    Intelligent intrusion detection in low power IoTs

    Get PDF
    Security and privacy of data are among the prime concerns in today's Internet of Things (IoT). Conventional security techniques such as signature-based malware detection and regular updates of a signature database are not feasible solutions, as they cannot effectively secure systems with such limited resources. Programming languages permitting direct memory access through pointers often result in applications having memory-related errors, which may lead to unpredictable failures and security vulnerabilities. Furthermore, energy-efficient IoT devices running on batteries cannot afford to implement cryptographic algorithms, as such techniques have a significant impact on system power consumption. Therefore, in order to operate the IoT in a secure manner, the system must be able to detect and prevent any kind of intrusion before the network (i.e., sensor nodes and base station) is destabilised by attackers. In this article, we present an intrusion detection and prevention mechanism that implements an intelligent security architecture using random neural networks (RNNs). The application's source code is also instrumented at compile time in order to detect out-of-bound memory accesses. This is based on creating tags coupled with each memory allocation and then placing additional tag-checking instructions for each access made to the memory. To validate the feasibility of the proposed security solution, it is implemented for an existing IoT system and its functionality is practically demonstrated by successfully detecting the presence of any suspicious sensor node within the system's operating range and anomalous activity in the base station with an accuracy of 97.23%. Overall, the proposed security solution incurs minimal performance overhead.
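A conceptual sketch (in Python, for illustration only) of the tagging idea described above: each allocation receives a tag recording its bounds, and every access is preceded by a tag check. The real system instruments the application's source code at compile time; the class and method names here are hypothetical.

```python
class TaggedMemory:
    def __init__(self):
        self.store = []
        self.tags = {}   # allocation tag -> (base, size)
        self.next_id = 0

    def alloc(self, size):
        # Create a tag coupled with this allocation, recording its bounds.
        base = len(self.store)
        self.store.extend([0] * size)
        tag = self.next_id
        self.tags[tag] = (base, size)
        self.next_id += 1
        return tag

    def access(self, tag, offset):
        # Inserted tag-checking instruction: reject out-of-bound accesses.
        base, size = self.tags[tag]
        if not 0 <= offset < size:
            raise MemoryError(f"out-of-bound access: tag={tag} offset={offset}")
        return self.store[base + offset]

mem = TaggedMemory()
buf = mem.alloc(4)
mem.access(buf, 3)   # within bounds, allowed
mem.access(buf, 4)   # raises MemoryError: caught before any corruption
```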

    Deep learning in multi-layer architectures of dense nuclei

    Get PDF
    We assume that, within the dense clusters of neurons found in nuclei, cells may interconnect via soma-to-soma interactions, in addition to conventional synaptic connections. We illustrate this idea with a multi-layer architecture (MLA) composed of multiple clusters of recurrent sub-networks of spiking Random Neural Networks (RNN) with dense soma-to-soma interactions, and use this RNN-MLA architecture for deep learning. The inputs to the clusters are first normalised by adjusting the external arrival rates of spikes to each cluster. We then apply this architecture to learning from multi-channel datasets. Numerical results based on both image and sensor data show the value of this novel architecture for deep learning.
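A minimal sketch of the input-normalisation step mentioned in the abstract: the external spike arrival rates feeding each cluster are rescaled so that every cluster receives inputs on a comparable scale. The specific rule used here (divide by the cluster's maximum rate) is an illustrative assumption.

```python
import numpy as np

def normalise_cluster_inputs(rates):
    """rates: (clusters, inputs) array of external spike arrival rates."""
    max_per_cluster = rates.max(axis=1, keepdims=True)
    max_per_cluster[max_per_cluster == 0] = 1.0  # avoid division by zero
    return rates / max_per_cluster

rates = np.array([[2.0, 4.0, 8.0],
                  [0.1, 0.3, 0.2]])
print(normalise_cluster_inputs(rates))  # each cluster's inputs now peak at 1.0
```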

    Nonnegative autoencoder with simplified random neural network

    Get PDF
    This paper proposes new nonnegative (shallow and multi-layer) autoencoders by combining the spiking Random Neural Network (RNN) model, the network architectures typically used in deep learning, and a training technique inspired by nonnegative matrix factorization (NMF). The shallow autoencoder is a simplified RNN model, which is then stacked into a multi-layer architecture. The learning algorithm is based on the weight update rules of NMF, subject to the nonnegative probability constraints of the RNN. The autoencoders equipped with this learning algorithm are tested on typical image datasets, including MNIST, the Yale face dataset and CIFAR-10, as well as on 16 real-world datasets from different areas. The results obtained through these tests yield the desired high learning and recognition accuracy. In addition, numerical simulations of the stochastic spiking behaviour of this RNN autoencoder show that it can be implemented in a highly distributed manner.
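A minimal sketch of the NMF-inspired training idea: weights remain nonnegative because they are updated multiplicatively rather than by signed gradient steps. The updates shown are the classical Lee-Seung rules for Frobenius-norm NMF, used here as an illustrative stand-in for the paper's RNN-constrained variant.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((64, 100))        # 64-dim nonnegative data, 100 samples
W = rng.random((64, 16)) + 0.1   # decoder / dictionary, initialised positive
H = rng.random((16, 100)) + 0.1  # hidden codes, initialised positive

for step in range(200):
    # Multiplicative updates keep every entry of W and H nonnegative.
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

print("relative reconstruction error:",
      np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```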

    Single-Cell Based Random Neural Network for Deep Learning

    Get PDF
    Recent work demonstrated the value of multiple clusters of spiking Random Neural Networks (RNN) with dense soma-to-soma interactions in deep learning. In this paper we return to the original, simpler structure and investigate the power of single RNN cells for deep learning. First, we consider three approaches: single cells, twin cells, and multi-cell clusters. This first part shows that RNNs with only positive parameters can conduct convolution operations similar to those of convolutional neural networks. We then develop a multi-layer architecture of single-cell RNNs (MLSRNN), and show that this architecture achieves comparable or better classification at lower computation cost than conventional deep-learning methods.
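A minimal sketch of the claim that cells with only positive parameters can realise convolution: a standard 2-D convolution restricted to a nonnegative kernel (here an averaging blur filter). The kernel values and image are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def conv2d_nonneg(image, kernel):
    # Enforce the constraint motivated above: only nonnegative weights.
    assert (kernel >= 0).all(), "RNN-style cells admit only nonnegative weights"
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.full((3, 3), 1.0 / 9.0)  # nonnegative averaging kernel
print(conv2d_nonneg(image, kernel))
```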