477 research outputs found

    Traffic Prediction Based on Random Connectivity in Deep Learning with Long Short-Term Memory

    Traffic prediction plays an important role in evaluating the performance of telecommunication networks and has attracted intense research interest. A significant number of algorithms and models have been put forward to analyse traffic data and make predictions. In the recent big-data era, deep learning has been exploited to mine the profound information hidden in the data. In particular, Long Short-Term Memory (LSTM), a kind of Recurrent Neural Network (RNN), has attracted much attention due to its capability of processing the long-range dependencies embedded in sequential traffic data. However, LSTM has a considerable computational cost, which cannot be tolerated in tasks with stringent latency requirements. In this paper, we propose a deep learning model based on LSTM, called Random Connectivity LSTM (RCLSTM). Compared to the conventional LSTM, RCLSTM departs notably in how the neural network is formed: neurons are connected in a stochastic manner rather than fully connected. The RCLSTM therefore has a certain intrinsic sparsity, with many neural connections absent, which reduces both the number of parameters to be trained and the computational cost. We apply the RCLSTM to traffic prediction and validate that the RCLSTM with only 35% neural connectivity still shows satisfactory performance. As training samples are gradually added, the performance of the RCLSTM becomes increasingly close to that of the baseline LSTM. Moreover, for input traffic sequences of sufficient length, the RCLSTM exhibits prediction accuracy even superior to the baseline LSTM. (Comment: 6 pages, 9 figures)
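    The core idea described above -- sparsifying an LSTM's weight matrices with a fixed random binary mask -- can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the gate layout, dimensions, and variable names are assumptions, with the 35% connectivity figure taken from the abstract.

```python
import numpy as np

def make_sparse_mask(shape, connectivity, rng):
    """Fixed binary mask keeping roughly `connectivity` of the weights."""
    return (rng.random(shape) < connectivity).astype(float)

def rclstm_step(x, h, c, W, U, b, mask_W, mask_U):
    """One LSTM step with randomly sparsified input/recurrent weights.
    W and U stack the input/forget/cell/output gates along the first axis."""
    z = (W * mask_W) @ x + (U * mask_U) @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    g = np.tanh(g)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 8, 16
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)
# keep only ~35% of the connections, the setting reported in the abstract
mask_W = make_sparse_mask(W.shape, 0.35, rng)
mask_U = make_sparse_mask(U.shape, 0.35, rng)

h = c = np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):   # a short traffic window
    h, c = rclstm_step(x, h, c, W, U, b, mask_W, mask_U)
print(h.shape)  # (16,)
```

Because the masks are fixed at initialization, the absent connections contribute neither parameters nor multiplications during training, which is where the claimed cost reduction comes from.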

    An Intrusion Detection Using Machine Learning Algorithm Multi-Layer Perceptron (MLP): A Classification Enhancement in Wireless Sensor Network (WSN)

    Over the past several decades, there has been a meteoric rise in the development and use of cutting-edge technology. The Wireless Sensor Network (WSN) is a groundbreaking innovation that relies on a vast network of individual sensor nodes, which collect data and upload it to the cloud. When such resource-constrained networks are deployed in harsh, unregulated environments, security risks arise. Since the rate at which new information is generated is increasing exponentially, communication has become the most challenging and complex aspect of WSNs, leaving them insecure. With so much riding on WSN applications, accuracy in responses is paramount, and technology that can swiftly and continually analyse internet data streams is essential for spotting breaches and attacks. Without classification, it is hard to reduce processing time while simultaneously maintaining a high level of detection accuracy. This paper proposes using a Multi-Layer Perceptron (MLP) to enhance the classification accuracy of such a system. The proposed method utilises a feed-forward ANN model trained with backpropagation to generate a mapping for the training and testing datasets. Experiments are performed to determine how well the proposed MLP works, and the results are compared to those obtained with the Hoeffding adaptive tree method and the Restricted Boltzmann Machine-based Clustered Intrusion Detection System (RBMC-IDS). The proposed MLP achieves 98% accuracy, which is higher than the 96.33% achieved by the RBMC-IDS and the 97% achieved by the Hoeffding adaptive tree.
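    The general technique named in the abstract -- a feed-forward MLP trained with backpropagation for binary intrusion detection -- can be illustrated with a small self-contained NumPy sketch. The synthetic features, network size, and hyperparameters below are illustrative assumptions, not the authors' pipeline or dataset.

```python
import numpy as np

def train_mlp(X, y, hidden=16, lr=0.1, epochs=300, seed=0):
    """Tiny feed-forward MLP (one hidden layer, sigmoid output) trained
    with plain backpropagation on a cross-entropy loss."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))
        # backward pass: gradient of mean cross-entropy w.r.t. each layer
        dz2 = (p - y) / len(X)
        dW2, db2 = h.T @ dz2, dz2.sum(0)
        dh = dz2 @ W2.T * (1 - h ** 2)
        dW1, db1 = X.T @ dh, dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return lambda Z: (1 / (1 + np.exp(-(np.tanh(Z @ W1 + b1) @ W2 + b2)))
                      > 0.5).astype(int).ravel()

rng = np.random.default_rng(42)
# synthetic stand-in for WSN traffic features (packet rate, energy, hops, ...)
X = np.vstack([rng.normal(0, 1, (200, 6)), rng.normal(2, 1, (200, 6))])
y = np.array([0] * 200 + [1] * 200)          # 0 = normal, 1 = intrusion
predict = train_mlp(X, y)
acc = (predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On well-separated synthetic classes like these, the network converges to near-perfect accuracy; the paper's 98% figure refers to its own WSN dataset, not this toy example.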

    Deployment of an agent-based SANET architecture for healthcare services

    This paper describes the adaptation of a computational technique utilizing Extended Kohonen Maps (EKMs) and Rao-Blackwell-Kolmogorov (R-B) filtering mechanisms for the administration of Sensor-Actuator Networks (SANETs). Inspired by the BDI (Belief-Desire-Intention) agent model of Rao and Georgeff, EKMs perform quantitative analysis of an algorithmic artificial neural network process, using an indirect-mapping EKM to self-organize, while the Rao-Blackwell filtering mechanism reduces the external noise and interference introduced into the problem set by the self-organization process. Initial results demonstrate that a combinatorial approach to optimization with EKMs and Rao-Blackwell filtering improves event-trajectory approximation compared to standalone cooperative EKM processes, allowing responsive event detection and optimization in patient healthcare.
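    The self-organizing step that EKMs build on can be illustrated with a minimal Kohonen map (SOM): nodes on a grid are pulled toward observed events under a shrinking neighbourhood kernel. This sketch shows only the generic SOM update, not the paper's indirect-mapping EKM or its Rao-Blackwell filtering stage; the grid size and learning schedule are arbitrary assumptions.

```python
import numpy as np

def som_train(data, grid_w=5, grid_h=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen map: grid nodes self-organize toward the data."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]    # grid coords for the neighbourhood
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)          # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)  # best matching unit
            dist2 = (gy - by) ** 2 + (gx - bx) ** 2
            h = np.exp(-dist2 / (2 * sigma ** 2))             # neighbourhood kernel
            weights += lr * h[..., None] * (x - weights)
    return weights

# two clusters of sensor events; the map's nodes should spread over both
rng = np.random.default_rng(1)
events = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                    rng.normal(0.8, 0.05, (50, 2))])
w = som_train(events)
print(w.shape)  # (5, 5, 2)
```

After training, some map nodes sit near each event cluster, which is the property the cooperative EKM processes exploit for event-trajectory approximation.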

    Compiler and Architecture Design for Coarse-Grained Programmable Accelerators

    Abstract: The holy grail of computer hardware across all market segments has been to sustain performance improvement at the same pace as silicon technology scales. As technology scales and the size of transistors shrinks, the power consumption and energy usage per transistor decrease. On the other hand, transistor density increases significantly with technology scaling. Due to technology factors, the reduction in power consumption per transistor is not sufficient to offset the increase in power consumption per unit area. Therefore, to improve performance, energy efficiency must be addressed at all design levels, from the circuit level to the application and algorithm levels. At the architectural level, one promising approach is to populate the system with hardware accelerators, each optimized for a specific task. One drawback of hardware accelerators is that they are not programmable, so their utilization can be low, as each performs only one specific function. Using software-programmable accelerators is an alternative approach to achieving both high energy efficiency and programmability; due to their intrinsic characteristics, they can exploit both instruction-level parallelism and data-level parallelism. A Coarse-Grained Reconfigurable Architecture (CGRA) is a software-programmable accelerator consisting of a number of word-level functional units. Motivated by the promising characteristics of software-programmable accelerators, the potential of CGRAs in future computing platforms is studied and an end-to-end CGRA research framework is developed. This framework covers three aspects: CGRA architectural design, integration in a computing system, and the CGRA compiler. First, the design and implementation of a CGRA and its instruction set is presented. This design is then modeled in a cycle-accurate system simulator. The simulation platform enables the investigation of several problems that arise when a CGRA is deployed as an accelerator in a computing system. Next, the problem of mapping a compute-intensive region of a program to a CGRA is formulated. From this formulation, several efficient algorithms are developed that utilize the CGRA's scarce resources effectively to minimize the running time of input applications. Finally, these mapping algorithms are integrated into a compiler framework to construct a compiler for CGRAs. Doctoral Dissertation, Computer Science, 201
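    The mapping problem described -- assigning the operations of a compute-intensive region to a CGRA's scarce functional units -- can be illustrated with a greedy list-scheduling sketch. This is a textbook baseline, not the dissertation's algorithms; the one-cycle latency and the unit count are assumptions.

```python
from collections import defaultdict

def list_schedule(ops, deps, n_units):
    """Greedy list scheduling: place each op in the earliest cycle where all
    its predecessors have finished and a functional unit is still free.
    `deps[op]` lists the ops that must complete first; latency is 1 cycle."""
    finish = {}                        # op -> cycle in which it completes
    used = defaultdict(int)            # cycle -> functional units occupied
    for op in ops:                     # ops assumed in topological order
        ready = max((finish[d] for d in deps.get(op, [])), default=0)
        cycle = ready
        while used[cycle] >= n_units:  # resource conflict: slide to next cycle
            cycle += 1
        used[cycle] += 1
        finish[op] = cycle + 1
    return finish

# a small expression DAG: ((a*b) + (c*d)) - e, mapped onto 2 functional units
deps = {"add": ["mul1", "mul2"], "sub": ["add", "load_e"]}
ops = ["mul1", "mul2", "load_e", "add", "sub"]
print(list_schedule(ops, deps, n_units=2))
```

With two units, both multiplies issue in the first cycle, the load is pushed back by the resource conflict, and the whole region completes in three cycles; the dissertation's algorithms target exactly this kind of resource/ dependency trade-off at much larger scale.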

    Cooperative agent-based SANET architecture for personalised healthcare monitoring

    This paper presents the application of a software agent-based computational technique that implements Extended Kohonen Maps (EKMs) for the management of Sensor-Actuator Networks (SANETs) in health-care facilities. The agent-based model incorporates the BDI (Belief-Desire-Intention) agent paradigm of Georgeff et al. EKMs perform quantitative analysis of an algorithmic artificial neural network process, using an indirect-mapping EKM to self-organize. Current results show that a combinatorial approach to optimization with EKMs improves event-trajectory estimation compared to standalone cooperative EKM processes, allowing responsive event detection in patient-monitoring scenarios. This will allow healthcare professionals to focus less on administrative tasks and more on patient needs, particularly for people in need of dedicated care and round-the-clock monitoring. ©2010 IEEE

    Self-Organized Hybrid Wireless Sensor Network for Finding Randomly Moving Target in Unknown Environment

    Searching for an unknown target in an unknown environment is a complex problem in Wireless Sensor Networks (WSNs); it has no linear solution when the target's location and the search space are unknown. Over the past few years, many researchers have devised novel techniques for finding a target using either Static Sensor Nodes (SSNs) or Mobile Sensor Nodes (MSNs), but there is a lack of research on solutions using a hybrid WSN that combines the two. Because of the problem's complexity and non-linear nature, bio-inspired techniques are well suited to solving it. This paper proposes a solution for searching for a randomly moving target in an unknown area using, first, only mobile sensor nodes and, second, a combination of static and mobile sensor nodes. In the proposed technique, the coverage area is determined and compared. To perform the work, novel algorithms are implemented: an MSN Movement Prediction Algorithm (MMPA), a Leader Selection Algorithm (LSA), a Leader's Movement Prediction Algorithm (LMPA), and a follower algorithm. Simulation results validate the effectiveness of the proposed work and show that the proposed hybrid WSN approach, with fewer sensor nodes (a combination of static and mobile sensor nodes), finds the target faster than the MSN-only approach.
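    The movement-prediction idea behind algorithms like the MMPA can be sketched minimally: linearly extrapolate the target's next position from its last two observations and steer a mobile node toward that predicted point rather than the last known one. The prediction rule, speeds, and field layout here are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def predict_next(track):
    """Linear extrapolation of the target's next position from its last
    two observed positions -- the core of a movement-prediction step."""
    track = np.asarray(track, dtype=float)
    if len(track) < 2:
        return track[-1]
    return track[-1] + (track[-1] - track[-2])

def step_toward(node, goal, speed):
    """Move a mobile node at most `speed` toward the goal position."""
    d = goal - node
    dist = np.linalg.norm(d)
    return goal.copy() if dist <= speed else node + speed * d / dist

# a target drifting across a 2-D field; one mobile node chases the
# *predicted* position, so it intercepts instead of trailing behind
target = np.array([0.0, 0.0]); vel = np.array([0.6, 0.3])
node = np.array([10.0, 10.0])
track = [target.copy()]
for _ in range(40):
    target = target + vel
    track.append(target.copy())
    node = step_toward(node, predict_next(track[-2:]), speed=1.5)
print(np.linalg.norm(node - target))
```

Since the node is faster than the target, the gap shrinks every step and the node ends the run within one step-length of the target; the paper's leader/follower algorithms coordinate many such nodes at once.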

    Novel Internet of Vehicles Approaches for Smart Cities

    Smart cities are a domain in which many electronic devices and sensors transmit data via the Internet of Vehicles concept. The purpose of deploying many sensors in cities is to provide an intelligent environment and a good quality of life. However, various challenges remain in smart cities, such as vehicular traffic congestion, air pollution, and wireless channel communication issues. To address these challenges, this thesis develops approaches for vehicular routing, wireless channel congestion alleviation, and traffic estimation. A new traffic congestion avoidance approach has been developed based on simulated annealing and a TOPSIS cost function. This approach utilizes data such as the average travel speed from the Internet of Vehicles. Simulation results show that the developed approach improves traffic performance for the Sheffield scenario in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption, and CO2 emissions compared to other algorithms. In contrast, transmitting a large amount of data among the sensors leads to wireless channel congestion, which affects the accuracy of the transmitted information due to packet loss and delay. This thesis proposes two approaches based on non-cooperative game theory to alleviate the channel congestion problem: the congestion control problem is formulated as a non-cooperative game, and a proof of the existence of a unique Nash equilibrium is given. The performance of the proposed approaches is evaluated on highway and urban testing scenarios. This thesis also addresses the problem of missing data when sensors are unavailable or when the Internet of Vehicles connection fails to provide measurements in smart cities. Two approaches, based on l1-norm minimization and a relevance vector machine type optimization, are proposed. The performance of the developed approaches has been tested on simulated and real data scenarios.
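    The routing component -- simulated annealing over candidate routes under a multi-criteria cost -- can be sketched as follows. The weighted-sum cost is only a stand-in for the thesis's TOPSIS cost function, and the road-network data, weights, and cooling schedule are illustrative assumptions.

```python
import math
import random

def route_cost(route, travel_time, fuel, co2, w=(0.5, 0.3, 0.2)):
    """Weighted cost over a route's edges; a simple stand-in for a
    TOPSIS-style multi-criteria score (weights are illustrative)."""
    return sum(w[0] * travel_time[e] + w[1] * fuel[e] + w[2] * co2[e]
               for e in route)

def anneal(candidates, cost_fn, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Simulated annealing over a finite set of candidate routes: accept a
    worse neighbour with probability exp(-delta / T) to escape local optima."""
    rng = random.Random(seed)
    cur = rng.choice(candidates); cur_c = cost_fn(cur)
    best, best_c = cur, cur_c
    t = t0
    for _ in range(steps):
        nxt = rng.choice(candidates); nxt_c = cost_fn(nxt)
        if nxt_c < cur_c or rng.random() < math.exp((cur_c - nxt_c) / t):
            cur, cur_c = nxt, nxt_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        t *= cooling          # geometric cooling schedule
    return best, best_c

# three hypothetical routes through a road network, keyed by edge id
travel_time = {"a": 4, "b": 7, "c": 2, "d": 5, "e": 3}
fuel = {"a": 2, "b": 3, "c": 1, "d": 2, "e": 1}
co2  = {"a": 1, "b": 4, "c": 1, "d": 2, "e": 1}
routes = [("a", "b"), ("c", "d"), ("c", "e")]
best, c = anneal(routes, lambda r: route_cost(r, travel_time, fuel, co2))
print(best, c)
```

On this toy instance the annealer settles on the cheapest route; in the thesis the candidate set and cost terms come from live Internet of Vehicles data such as average travel speed.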