10,764 research outputs found

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in understanding the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures
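
    As a concrete illustration of the reinforcement-learning paradigm reviewed in this survey, the sketch below applies stateless (single-state) Q-learning to a toy dynamic channel-selection task. It is a minimal sketch only: the channel model, idle probabilities, reward definition and learning parameters are assumptions for illustration and are not taken from the article.

        # Toy reinforcement-learning example: epsilon-greedy Q-learning for
        # picking the wireless channel most likely to be idle (assumed model).
        import random

        NUM_CHANNELS = 4
        IDLE_PROB = [0.2, 0.5, 0.7, 0.9]   # assumed per-channel idle probabilities
        ALPHA, EPSILON = 0.1, 0.1          # assumed learning and exploration rates

        q = [0.0] * NUM_CHANNELS           # one Q-value per channel (single-state MDP)

        def sense(channel):
            """Reward 1 if the chosen channel happens to be idle, else 0 (assumed)."""
            return 1.0 if random.random() < IDLE_PROB[channel] else 0.0

        for t in range(5000):
            if random.random() < EPSILON:                       # explore
                a = random.randrange(NUM_CHANNELS)
            else:                                               # exploit best estimate
                a = max(range(NUM_CHANNELS), key=lambda c: q[c])
            q[a] += ALPHA * (sense(a) - q[a])                   # Q-learning update

        print("learned channel values:", [round(v, 2) for v in q])

    Run long enough, the learned values approach the assumed idle probabilities, so the greedy policy settles on the channel that is most often free.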

    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuit and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) and communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
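
    The first thrust above, balancing signal-dependent processing against communication, can be illustrated with a toy energy budget. The sketch below picks the compression ratio that minimizes an assumed processing-plus-radio energy cost subject to an assumed distortion bound; the cost and distortion models and all constants are placeholders, not the paper's actual optimization framework.

        # Toy energy-centric trade-off: stronger compression costs more processing
        # energy but fewer transmitted bits; pick the cheapest feasible ratio.

        def processing_energy(ratio, n_bits, e_proc_per_bit=5e-9):
            # Assumed: processing cost grows with how aggressively we compress.
            return e_proc_per_bit * n_bits * ratio

        def radio_energy(ratio, n_bits, e_tx_per_bit=50e-9):
            # Assumed: fewer bits leave the node when compression is stronger.
            return e_tx_per_bit * n_bits * (1.0 - ratio)

        def distortion(ratio):
            # Assumed monotone model: more compression, more signal distortion.
            return ratio ** 2

        def best_ratio(n_bits, max_distortion, steps=100):
            candidates = [i / steps for i in range(steps + 1)]
            feasible = [r for r in candidates if distortion(r) <= max_distortion]
            return min(feasible,
                       key=lambda r: processing_energy(r, n_bits) + radio_energy(r, n_bits))

        print("chosen compression ratio:", best_ratio(n_bits=8000, max_distortion=0.25))

    With these placeholder constants the radio dominates, so the node compresses as much as the distortion bound allows; a stricter or looser application requirement shifts the chosen operating point, which is the lifetime-versus-distortion tuning described above.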

    Analyzing Energy-efficiency and Route-selection of Multi-level Hierarchal Routing Protocols in WSNs

    The advent and development of Wireless Sensor Networks (WSNs) in recent years has seen the growth of extremely small and low-cost sensors that possess sensing, signal processing and wireless communication capabilities. These sensors can be deployed at a much lower cost and are capable of detecting conditions such as temperature, sound and security, among others. A good protocol design should scale well in both energy-heterogeneous and energy-homogeneous environments, meet the demands of different application scenarios and guarantee reliability. On this basis, we have compared six protocols designed for different scenarios, each presenting its own scheme for energy minimization, clustering and route selection in order to achieve more effective communication. This research is motivated by the need to gain insight into which of the protocols under consideration suits which application best, and it can serve as a guideline for the design of a more robust and efficient protocol. MATLAB simulations are performed to analyze and compare the performance of LEACH, multi-level hierarchical LEACH and multihop LEACH. Comment: NGWMN, held with the 7th IEEE International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA 2012), Victoria, Canada, 2012
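
    For reference, all of the compared variants build on LEACH's randomized, rotating cluster-head election. The sketch below implements the standard LEACH threshold rule T(n) = P / (1 - P (r mod 1/P)); the node count and the desired cluster-head fraction P are illustrative values, not the settings used in the paper's MATLAB simulations.

        # LEACH cluster-head election: each eligible node elects itself with
        # probability T(n), then sits out until every node has served once per epoch.
        import random

        P = 0.05                     # desired fraction of cluster heads per round (assumed)
        NUM_NODES = 100              # assumed network size
        EPOCH = round(1 / P)         # after 1/P rounds all nodes become eligible again

        def threshold(r):
            """T(n) = P / (1 - P * (r mod 1/P)) for nodes that have not yet served."""
            return P / (1.0 - P * (r % EPOCH))

        eligible = set(range(NUM_NODES))
        for r in range(2 * EPOCH):                 # simulate two epochs
            heads = {n for n in eligible if random.random() < threshold(r)}
            eligible -= heads                      # elected heads wait for the epoch reset
            if r % EPOCH == EPOCH - 1:
                eligible = set(range(NUM_NODES))
            print(f"round {r}: {len(heads)} cluster heads elected")

    The rising threshold keeps the expected number of heads near P * NUM_NODES per round while rotating the role across all nodes once per epoch, which is the energy-spreading behavior that the multi-level and multihop variants then refine.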

    Gravitational Clustering: A Simple, Robust and Adaptive Approach for Distributed Networks

    Distributed signal processing for wireless sensor networks enables different devices to cooperate in solving different signal processing tasks. A crucial first step is to answer the question: who observes what? Recently, several distributed algorithms have been proposed which frame the signal/object labelling problem in terms of cluster analysis after extracting source-specific features; however, the number of clusters is assumed to be known. We propose a new method called Gravitational Clustering (GC) to adaptively estimate the time-varying number of clusters based on a set of feature vectors. The key idea is to exploit the physical principle of gravitational force between mass units: streaming-in feature vectors are treated as mass units with fixed positions in the feature space, around which mobile mass units are injected at each time instant. The cluster enumeration exploits the fact that the highest attraction on the mobile mass units is exerted by regions with a high density of feature vectors, i.e., gravitational clusters. By sharing estimates among neighboring nodes via a diffusion-adaptation scheme, cooperative and distributed cluster enumeration is achieved. Numerical experiments concerning robustness against outliers, convergence and computational complexity are conducted. The application in a distributed cooperative multi-view camera network illustrates the applicability to real-world problems. Comment: 12 pages, 9 figures
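
    The following is a simplified, centralized sketch of the gravitational idea summarized above: fixed "mass units" (the feature vectors) attract injected mobile mass units, and the regions where the mobile units settle are counted as clusters. It is not the authors' GC algorithm (the diffusion-based distributed part in particular is omitted), and the softening term, step size and merge radius are assumptions chosen for illustration.

        # Gravity-inspired cluster enumeration on a static batch of feature vectors.
        import numpy as np

        def estimate_clusters(features, n_mobile=30, n_steps=300, step=0.05,
                              eps=0.1, merge_radius=1.0, seed=0):
            """Count dense regions by letting mobile mass units fall toward them."""
            rng = np.random.default_rng(seed)
            lo, hi = features.min(axis=0), features.max(axis=0)
            mobile = rng.uniform(lo, hi, size=(n_mobile, features.shape[1]))
            for _ in range(n_steps):
                # Softened inverse-square attraction of every fixed feature vector
                # on every mobile mass unit; move each unit along the net force.
                diff = features[None, :, :] - mobile[:, None, :]
                dist2 = (diff ** 2).sum(axis=2) + eps
                force = (diff / dist2[:, :, None]).sum(axis=1)
                mobile += step * force / np.linalg.norm(force, axis=1, keepdims=True)
            # Mobile units that settled close to each other count as one cluster.
            centers = []
            for m in mobile:
                if not any(np.linalg.norm(m - c) < merge_radius for c in centers):
                    centers.append(m)
            return len(centers)

        # Two well-separated Gaussian blobs; the estimate should come out near 2.
        rng = np.random.default_rng(1)
        data = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
                          rng.normal(5.0, 0.3, (100, 2))])
        print("estimated number of clusters:", estimate_clusters(data))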

    A survey on subjecting electronic product code and non-ID objects to IP identification

    Over the last decade, both research on the Internet of Things (IoT) and real-world IoT applications have grown exponentially. The IoT provides us with smarter cities, intelligent homes, and generally more comfortable lives. However, the introduction of these devices has led to several new challenges that must be addressed. One of the critical challenges in interacting with IoT devices is addressing the billions of devices (things) around the world, including computers, tablets, smartphones, wearable devices, sensors, embedded computers, and so on. This article provides a survey on subjecting Electronic Product Code and non-ID objects to IP identification for IoT devices, including their respective advantages and disadvantages. Different metrics are proposed here and used to evaluate these methods. In particular, the main methods are evaluated in terms of (i) computational overhead, (ii) scalability, (iii) adaptability, (iv) implementation cost, and (v) applicability to objects that already have IDs; the results are presented in tabular format. Finally, the article argues that this field of research will remain active, and that any new technique should perform favorably with respect to the five evaluation metrics mentioned. Comment: 112 references, 8 figures, 6 tables, Journal of Engineering Reports, Wiley, 2020 (Open Access)
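
    One family of approaches in this design space derives an object's IP identity directly from its Electronic Product Code. As a generic illustration (not a specific scheme evaluated in the survey), the sketch below hashes a 96-bit EPC into the interface-identifier half of an IPv6 address under an assumed gateway prefix; the prefix, the hash choice and the example EPC value are all placeholders.

        # Map a 96-bit EPC (hex string) onto an IPv6 address under a /64 prefix
        # by hashing the EPC into the 64-bit interface identifier.
        import hashlib
        import ipaddress

        def epc_to_ipv6(epc_hex, prefix="2001:db8::/64"):
            net = ipaddress.IPv6Network(prefix)
            iid = int.from_bytes(hashlib.sha256(bytes.fromhex(epc_hex)).digest()[:8], "big")
            return ipaddress.IPv6Address(int(net.network_address) | iid)

        # Illustrative SGTIN-96 EPC value (24 hex digits = 96 bits).
        print(epc_to_ipv6("30700048440663802E185523"))

    A mapping of this kind is cheap and scalable, but hash collisions and the one-way nature of the mapping are the sort of trade-offs that evaluation criteria such as overhead, scalability and applicability to already ID-based objects are meant to capture.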