
    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques, eliminating the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures

    Analyzing the energy efficient path in Wireless Sensor Network using Machine Learning

    As sensor nodes are energy constrained, an important factor in the successful implementation of a Wireless Sensor Network (WSN) is designing energy-efficient routing protocols and improving network lifetime. Network lifetime has been defined in many ways, such as the time when the network loses its connectivity or the time when the first node gets disconnected. Whatever the definition, the main focus of many researchers is to design algorithms that enable the network to operate continuously for a longer duration. Thus, improving energy efficiency and increasing network lifetime are the two key issues in WSN routing. Because of their adaptive nature and learning capacity, reinforcement learning (RL) algorithms, a subclass of machine learning techniques, are well suited to complex distributed problems such as routing in WSNs. RL can be used to choose the best forwarding node for transmitting data in multipath routing protocols. This paper surveys the implementation of RL techniques to solve routing problems in WSNs. An algorithm is also proposed as a modified version of the original Directed Diffusion (DD) protocol; it uses Q-learning, a special class of RL. In addition, the significance of balancing the exploration and exploitation rates during path finding in Q-learning is demonstrated through an experiment implemented in Python. The results show that if the exploration-exploitation rate is properly balanced, it always yields an optimal reward value, and thus the path found from source to destination is efficient.
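    The epsilon-greedy Q-learning path finding this abstract describes can be sketched as follows. The topology, costs, and hyperparameters below are illustrative assumptions, not taken from the paper; the sketch shows only the general technique of learning a next-hop policy by trading off exploration against exploitation.

```python
import random

# Hypothetical toy topology: nodes 0..4, source 0, sink 4 (edges only go
# "forward", so rollouts always terminate). Edge costs stand in for per-hop
# energy; all names and values here are illustrative.
EDGES = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
COST = {(0, 1): 1.0, (0, 2): 4.0, (1, 2): 1.0, (1, 3): 1.0, (2, 3): 1.0, (3, 4): 1.0}

def q_route(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy for next-hop selection."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in EDGES for a in EDGES[s]}
    for _ in range(episodes):
        node = 0
        while node != 4:
            nbrs = EDGES[node]
            if rng.random() < epsilon:                       # explore
                nxt = rng.choice(nbrs)
            else:                                            # exploit
                nxt = max(nbrs, key=lambda a: q[(node, a)])
            # Negative cost per hop, bonus for reaching the sink
            reward = -COST[(node, nxt)] + (10.0 if nxt == 4 else 0.0)
            future = max((q[(nxt, a)] for a in EDGES[nxt]), default=0.0)
            q[(node, nxt)] += alpha * (reward + gamma * future - q[(node, nxt)])
            node = nxt
    # Greedy rollout of the learned policy
    path, node = [0], 0
    while node != 4:
        node = max(EDGES[node], key=lambda a: q[(node, a)])
        path.append(node)
    return path

print(q_route())  # -> [0, 1, 3, 4], the minimum-cost route
```

    With epsilon too low the agent can lock onto the first route it finds; with it too high the Q-values keep being dragged toward costly detours, which is the balancing effect the experiment in the paper demonstrates.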

    A Study on Energy-Efficient Wireless Sensor Network based on Machine Learning Techniques

    Wireless sensor networks (WSNs) are advantageous when there is no existing infrastructure (such as in military applications, emergency relief efforts, etc.) and a network must be developed at low cost. A predetermined routing protocol or intrusion detection system is not available to WSNs because they are dynamic by nature, so the network's nodes must handle these functions themselves. Because nodes in the majority of WSN applications are mobile and rely on battery capacity and restricted resources, energy consumption is an important research area for carrying out a variety of activities in WSNs. Self-learning algorithms that function without scripting or human involvement can be effectively applied to this problem, depending on the application's needs. This study investigates different ML-based WSN systems and explores ML techniques for energy efficiency, along with some open issues.

    3R: a reliable multi-agent reinforcement learning based routing protocol for wireless medical sensor networks.

    The Wireless Medical Sensor Network (WMSN) is rapidly gaining attention thanks to recent advances in semiconductors and wireless communication. However, given the sensitivity of medical applications and the stringent resource constraints, there is a need to develop a routing protocol that fulfills WMSN requirements in terms of delivery reliability, attack resiliency, computational overhead, and energy efficiency. This paper proposes 3R, a reliable multi-agent reinforcement learning routing protocol for WMSNs. 3R uses a novel resource-conservative Reinforcement Learning (RL) model to reduce computational overhead, along with two updating methods to speed up the algorithm's convergence. The reward function is redefined as a punishment, combined with the proposed trust management system to defend against well-known dropping attacks. Furthermore, an energy model is integrated into the reward function to enhance network lifetime and balance energy consumption across the network. The proposed energy model uses only local information, avoiding the resource burdens and security concerns of exchanging energy information. Experimental results demonstrate the light weight, attack resiliency, and energy efficiency of 3R, making it a strong routing candidate for WMSNs.
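    The general idea of a punishment-style cost that folds in trust and residual energy can be sketched as below. This is not 3R's actual equation: the weights, score ranges, and function name are hypothetical, chosen only to show how low trust or low battery inflates a next hop's punishment.

```python
# Illustrative punishment function: lower is better. A node suspected of
# dropping packets (low trust) or nearly out of battery (low residual energy)
# is penalized, steering routing away from it. Weights are made up.
def punishment(hop_cost, trust, residual_energy, w_trust=0.5, w_energy=0.3):
    """trust and residual_energy are normalized to [0, 1]."""
    return hop_cost + w_trust * (1.0 - trust) + w_energy * (1.0 - residual_energy)

# A trusted, well-charged neighbor is preferred over a suspect one:
good = punishment(1.0, trust=0.9, residual_energy=0.8)   # 1.11
bad = punishment(1.0, trust=0.2, residual_energy=0.8)    # 1.46
```

    Because both inputs are local to the neighbor being evaluated, a scheme of this shape needs no network-wide energy exchange, consistent with the paper's stated design goal.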

    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous, resource-limited devices that cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
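    Value iteration is one of the standard MDP solution methods such surveys compare. The tiny sleep/wake sensor MDP below is a made-up example (the states, transition probabilities, and rewards are illustrative, not from the survey), showing how a node derives an optimal policy from a model of its environment.

```python
# Hypothetical two-state sensor MDP. Waking from sleep carries a large energy
# penalty; staying active earns sensing reward but may drop back to sleep.
STATES = ["sleep", "active"]
ACTIONS = ["wake", "stay"]
# P[(s, a)] -> list of (next_state, probability); R[(s, a)] -> immediate reward
P = {
    ("sleep", "wake"): [("active", 1.0)],
    ("sleep", "stay"): [("sleep", 1.0)],
    ("active", "wake"): [("active", 1.0)],
    ("active", "stay"): [("sleep", 0.3), ("active", 0.7)],
}
R = {("sleep", "wake"): -10.0, ("sleep", "stay"): 0.0,
     ("active", "wake"): 1.0, ("active", "stay"): 0.8}

GAMMA = 0.9

def value_iteration(tol=1e-6):
    """Iterate the Bellman optimality backup until the value change is < tol."""
    v = {s: 0.0 for s in STATES}
    while True:
        nv = {s: max(R[(s, a)] + GAMMA * sum(p * v[s2] for s2, p in P[(s, a)])
                     for a in ACTIONS)
              for s in STATES}
        if max(abs(nv[s] - v[s]) for s in STATES) < tol:
            return nv
        v = nv

v = value_iteration()
# Greedy policy extraction from the converged values
policy = {s: max(ACTIONS,
                 key=lambda a: R[(s, a)] + GAMMA * sum(p * v[s2] for s2, p in P[(s, a)]))
          for s in STATES}
print(policy)  # -> {'sleep': 'stay', 'active': 'wake'}
```

    With these toy numbers the wake-up penalty outweighs the discounted sensing reward, so a sleeping node stays asleep while an active node keeps transmitting; changing the rewards shifts the policy, which is exactly the design lever the MDP framework gives a protocol designer.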

    Latency and Lifetime Enhancements in Industrial Wireless Sensor Networks : A Q-Learning Approach for Graph Routing

    Industrial wireless sensor networks usually follow a centralized management approach, where a device known as the network manager is responsible for the overall configuration, definition of routes, and allocation of communication resources. Graph routing is used to increase communication reliability through path redundancy. Some state-of-the-art graph-routing algorithms use weighted cost equations to define preferences for how routes are constructed. The characteristics and requirements of these networks make it difficult to find a proper set of weight values that enhances network performance. Reinforcement learning can be used to adjust these weights according to the current operating conditions of the network. In this article, we present the Q-learning reliable routing with a weighting agent approach, in which an agent adjusts the weights of a state-of-the-art graph-routing algorithm. The states of the agent represent sets of weights, and its actions change the weights during network operation. Rewards are given to the agent when the average network latency decreases or the expected network lifetime increases. Simulations were conducted on a WirelessHART simulator considering industrial monitoring applications with random topologies. In most cases, the results show a reduction in average network latency, while the expected network lifetime and communication reliability are at least as good as those obtained by the state-of-the-art graph-routing algorithms.
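    The weighting-agent idea (states = discrete weight sets, actions = nudging a weight, reward tied to the resulting latency) can be sketched as below. The weight sets, the toy latency model, and all hyperparameters are assumptions for illustration; they do not reproduce the paper's cost equation or its WirelessHART simulation.

```python
import random

# Illustrative discrete weight sets for a two-term routing cost:
# (w_latency, w_lifetime). The agent's state is an index into this list.
WEIGHT_SETS = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]

def simulated_latency(weights):
    """Stand-in for measuring the network: weighting latency more lowers it."""
    w_lat, _ = weights
    return 10.0 - 6.0 * w_lat

def train(steps=2000, alpha=0.3, gamma=0.9, epsilon=0.1, seed=1):
    """Tabular Q-learning: action 0 lowers the weight-set index, 1 raises it.
    The agent is rewarded when the observed latency drops, penalized otherwise."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in WEIGHT_SETS]
    s = 1
    last = simulated_latency(WEIGHT_SETS[s])
    for _ in range(steps):
        if rng.random() < epsilon:                           # explore
            a = rng.randrange(2)
        else:                                                # exploit
            a = max((0, 1), key=lambda x: q[s][x])
        s2 = max(0, min(len(WEIGHT_SETS) - 1, s + (1 if a else -1)))
        lat = simulated_latency(WEIGHT_SETS[s2])
        reward = 1.0 if lat < last else -1.0
        q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
        s, last = s2, lat
    return q

q = train()
```

    Under this toy model the learned Q-values favor raising the latency weight from either low-weight state, mirroring how the paper's agent steers the cost-equation weights toward configurations that cut average latency.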

    Sustainability Model for the Internet of Health Things (IoHT) Using Reinforcement Learning with Mobile Edge Secured Services

    In wireless multimedia networks, the Internet of Things (IoT) and visual sensors are used to interpret and exchange vast amounts of data in the form of images. The digital images are subsequently delivered to cloud systems via a sink node, where smart communication systems interact with them using physical devices. Visual sensors are becoming a more significant part of digital systems and can help us live in a more intelligent world. However, for IoT-based data analytics, optimizing communication overhead by balancing the usage of energy and bandwidth resources is a new research challenge. Furthermore, protecting the IoT network's data from anonymous attackers is critical. Therefore, using machine learning, this study proposes a mobile edge computing model with a secured cloud (MEC-Seccloud) for a sustainable Internet of Health Things (IoHT), providing real-time quality of service (QoS) for big data analytics while maintaining the integrity of green technologies. We investigate a reinforcement learning optimization technique to enable sensor interaction by examining metaheuristic methods and optimally transferring health-related information through the interaction of mobile edges. Furthermore, two-phase encryption is used to guarantee data concealment and to provide secured wireless connectivity with cloud networks. The proposed model has shown considerable performance on various network metrics compared with earlier studies.
    This work has been partially funded by "La Fundacion para el Fomento de la Investigacion Sanitaria y Biomedica de la Comunitat Valenciana (Fisabio)" through the project PULSIDATA (A43). This research is supported by the Artificial Intelligence & Data Analytics Lab (AIDA), CCIS Prince Sultan University, Riyadh, Saudi Arabia. The authors are thankful for technical support.
    Rehman, A.; Saba, T.; Haseeb, K.; Alam, T.; Lloret, J. (2022). Sustainability Model for the Internet of Health Things (IoHT) Using Reinforcement Learning with Mobile Edge Secured Services. Sustainability. 14(19):1-14. https://doi.org/10.3390/su141912185