
    Intelligent TDMA heuristic scheduling by taking into account physical layer interference for an industrial IoT environment

    In an Internet of Things environment where multiple mobile devices are brought together, it is not always possible to serve all of these devices simultaneously. We developed an intelligent Time Division Multiple Access (TDMA) scheduler that plans the individual packets of the different streams, taking physical-layer interference into account, so that every device can be served. The scheduler is applied in a realistic industrial environment and evaluated on maximum link latency, channel occupancy, and jitter. Two strategies are compared: one where packets are allocated sequentially, and one where they are allocated periodically. Our results show that the periodic allocation strategy performs best for maximum link latency (for packet sizes below 1200 bytes) and for jitter, while channel occupancy is similar for both strategies. Furthermore, performance can be improved by using a higher number of channels. Compared to classic Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), channel occupancy and jitter are reduced by up to 69.9% and 99.9%, respectively. Considering maximum link latency, the proposed TDMA strategies perform significantly better than worst-case CSMA/CA (by up to 99.8%); under a best-case CSMA/CA scenario, however, CSMA/CA performs better. Finally, we clearly show that there are cases where it is not possible to schedule all streams with CSMA/CA, whereas this becomes feasible with the proposed TDMA strategies.
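
    To make the two allocation strategies concrete, here is a minimal sketch assuming a simplified single-channel frame of fixed-length slots; the stream definitions, frame length, and linear-probing tie-break are hypothetical and not the paper's implementation:

```python
# Sketch: sequential vs. periodic packet allocation in a TDMA frame.
# Streams, frame length, and tie-breaking are invented for illustration.

def schedule_sequential(streams, frame_len):
    """Place each stream's packets back-to-back in the first free slots."""
    assert sum(n for _, n in streams) <= frame_len, "frame too short"
    frame = [None] * frame_len
    slot = 0
    for name, n_packets in streams:
        for _ in range(n_packets):
            frame[slot] = name
            slot += 1
    return frame

def schedule_periodic(streams, frame_len):
    """Spread each stream's packets evenly over the frame (lower jitter)."""
    assert sum(n for _, n in streams) <= frame_len, "frame too short"
    frame = [None] * frame_len
    for name, n_packets in streams:
        spacing = frame_len // n_packets
        for i in range(n_packets):
            slot = i * spacing
            while frame[slot % frame_len] is not None:
                slot += 1                  # probe past occupied slots
            frame[slot % frame_len] = name
    return frame

streams = [("A", 4), ("B", 2)]             # (stream id, packets per frame)
print(schedule_sequential(streams, 12))
print(schedule_periodic(streams, 12))
```

    The periodic variant keeps each stream's packets roughly equidistant within the frame, which is consistent with the reported jitter advantage, while the sequential variant minimizes bookkeeping.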

    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuits and energy-efficient physical-layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing, and resource-management solutions, which can become a primary factor of performance degradation in terms of throughput, responsiveness, and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) against communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
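
    As a toy illustration of the first thrust, one can compare the processor energy spent compressing a reading against the radio energy saved by transmitting fewer bits; the constants and the compression model below are invented placeholders, not the paper's model:

```python
# Sketch of thrust 1 (not the paper's model): choose how many compression
# passes to run so that CPU energy plus radio energy is minimized.
# All constants are hypothetical placeholders.

E_TX_PER_BIT = 50e-9    # radio energy per transmitted bit, J (assumed)
E_CPU_PER_BIT = 5e-9    # CPU energy per input bit per pass, J (assumed)

def total_energy(n_bits, passes, ratio_per_pass=0.7):
    """Total energy to compress n_bits `passes` times, then transmit."""
    e_proc = passes * n_bits * E_CPU_PER_BIT               # each pass reads n_bits (simplification)
    e_tx = n_bits * (ratio_per_pass ** passes) * E_TX_PER_BIT
    return e_proc + e_tx

best = min(range(6), key=lambda k: total_energy(1_000_000, k))
print("best number of compression passes:", best)
```

    Under these assumed constants, a few compression passes beat both raw transmission and aggressive compression, which is exactly the kind of operating-point tuning the framework argues should be context-aware.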

    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
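
    As a flavor of how the MDP framework applies to a sensor node, the sketch below runs value iteration on an invented battery-management MDP (transmit for reward versus sleep to harvest energy); the states, transitions, and rewards are hypothetical, not taken from the survey's case studies:

```python
# Toy MDP: a node with battery levels 0..4 chooses "transmit" (reward 1,
# drains one level) or "sleep" (reward 0, harvests one level with
# probability 0.5). Solved by value iteration.

GAMMA = 0.95
STATES = range(5)

def step(s, a):
    """Return a list of (probability, next_state, reward) outcomes."""
    if a == "transmit":
        if s == 0:
            return [(1.0, 0, 0.0)]        # empty battery: transmission fails
        return [(1.0, s - 1, 1.0)]        # deliver a reading, drain one level
    return [(0.5, min(s + 1, 4), 0.0),    # sleep: maybe harvest one level
            (0.5, s, 0.0)]

V = [0.0] * len(STATES)
for _ in range(200):                      # synchronous value iteration
    V = [max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in step(s, a))
             for a in ("transmit", "sleep"))
         for s in STATES]

policy = [max(("transmit", "sleep"),
              key=lambda a, s=s: sum(p * (r + GAMMA * V[s2])
                                     for p, s2, r in step(s, a)))
          for s in STATES]
print("policy by battery level:", policy)
```

    The resulting policy sleeps when the battery is low and transmits otherwise, illustrating how MDPs turn an energy/utility trade-off into an explicit, solvable decision problem.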

    Real-Time Sensor Networks and Systems for the Industrial IoT

    The Industrial Internet of Things (IIoT) has emerged as the core construct behind the various cyber-physical systems constituting a principal dimension of the Fourth Industrial Revolution. While it was initially born as the concept behind specific industrial applications of generic IoT technologies, aimed at optimizing operational efficiency in automation and control, it quickly enabled the total convergence of Operational Technology (OT) and Information Technology (IT). The IIoT has now surpassed the traditional borders of automation and control functions in the process and manufacturing industry, shifting towards a wider domain of functions and industries, embraced under the dominant global initiatives and architectural frameworks of Industry 4.0 (or Industrie 4.0) in Germany, the Industrial Internet in the US, Society 5.0 in Japan, and Made-in-China 2025 in China. As real-time embedded systems achieve ubiquity in everyday life and in industrial environments, and many processes already depend on real-time cyber-physical systems and embedded sensors, the integration of IoT with cognitive computing and real-time data exchange is essential for real-time analytics and for the realization of digital twins in smart environments and services under the various frameworks' provisions. In this context, real-time sensor networks and systems for the Industrial IoT encompass multiple technologies and raise significant design, optimization, integration, and exploitation challenges. The ten articles in this Special Issue describe advances in real-time sensor networks and systems that are significant enablers of the Industrial IoT paradigm. In the relevant landscape, the domain of wireless networking technologies is, as expected, centrally positioned.

    Designing Intelligent Energy Efficient Scheduling Algorithm To Support Massive IoT Communication In LoRa Networks

    We are about to enter a new world with a sixth-sense ability: "network as a sensor" in 6G. The driving force behind these digital sensing abilities is the IoT. Due to their capacity to work at high frequencies, 6G devices have a voracious energy demand. Hence, there is a growing need for green solutions that support the underlying 6G network by making it more energy efficient. Low cost, low energy consumption, and long-range communication capability make LoRa the most widely adopted and promising network technology for IoT devices. Since LoRaWAN uses ALOHA for multiple access to channels, collision management is an important task. Moreover, in massive IoT, collisions become a concern due to the increased number of devices and their ad hoc transmissions. Furthermore, in long-range deployments such as forests, agriculture, and remote locations, IoT devices need to be battery powered and cannot be attached to an energy grid. LoRaWAN originally uses a star topology in which IoT devices communicate with a single gateway, so massive IoT causes increased traffic at that gateway. To address the massive-IoT issues of collisions and gateway load, we designed a reinforcement-learning-based scheduling algorithm, a Deep Deterministic Policy Gradient (DDPG) algorithm with channel activity detection (CAD), to optimize the energy efficiency of LoRaWAN in a cross-layer architecture for massive IoT with a star topology. We also design a CAD-based simulator for evaluating any algorithm that uses channel sensing. We compare energy efficiency, packet delivery ratio, latency, and signal strength with existing state-of-the-art algorithms and show that our proposed solution is efficient for massive-IoT LoRaWAN with a star topology.
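
    The airtime and energy pressure that motivates such scheduling can be quantified with the standard LoRa time-on-air formula from the Semtech SX1276 datasheet; the sketch below computes it, but does not reproduce the authors' DDPG scheduler or CAD logic:

```python
# LoRa time-on-air per the Semtech SX1276 datasheet formula.
# Useful for duty-cycle and energy budgeting of a single uplink.
import math

def lora_airtime(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                 preamble_syms=8, crc=True, implicit_header=False,
                 low_dr_optimize=None):
    """Time on air (seconds) for one LoRa packet; cr=1 means coding rate 4/5."""
    t_sym = (2 ** sf) / bw_hz
    if low_dr_optimize is None:              # mandated for SF11/12 at 125 kHz
        low_dr_optimize = sf >= 11 and bw_hz == 125_000
    de = 1 if low_dr_optimize else 0
    ih = 1 if implicit_header else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym

# e.g. a 20-byte sensor reading across the spreading factors:
for sf in range(7, 13):
    print(f"SF{sf}: {lora_airtime(20, sf=sf) * 1000:.1f} ms")
```

    The roughly exponential growth of airtime with spreading factor is what makes collision avoidance and energy-aware scheduling so consequential at massive-IoT scale.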

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.
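
    As a minimal taste of the reinforcement-learning branch such surveys cover, the following stateless Q-learning sketch (effectively a multi-armed bandit) learns which of several channels succeeds most often; the environment's success probabilities are invented for demonstration and nothing here is taken from the article itself:

```python
# Stateless Q-learning (a bandit special case) for channel selection.
# Per-channel success probabilities are invented for demonstration.
import random

SUCCESS_PROB = [0.2, 0.9, 0.5]       # hidden quality of three channels (assumed)
ALPHA, EPSILON, STEPS = 0.1, 0.1, 5000

q = [0.0] * len(SUCCESS_PROB)        # one Q-value per action (single state)
for _ in range(STEPS):
    if random.random() < EPSILON:    # explore
        a = random.randrange(len(q))
    else:                            # exploit the current best estimate
        a = max(range(len(q)), key=q.__getitem__)
    reward = 1.0 if random.random() < SUCCESS_PROB[a] else 0.0
    q[a] += ALPHA * (reward - q[a])  # TD update with no next state

print("estimated success rates:", [round(v, 2) for v in q])
```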

    Scheduling UWB Ranging and Backbone Communications in a Pure Wireless Indoor Positioning System

    In this paper, we present and evaluate an ultra-wideband (UWB) indoor positioning architecture that allows simultaneous localization of mobile tags. The architecture relies on a network of low-power fixed anchors that forward ranging measurements to a localization engine responsible for performing trilateration. Communications within this network are orchestrated by UWB-TSCH, an adaptation of the time-slotted channel-hopping (TSCH) mode of IEEE 802.15.4 to the UWB wireless technology. As a result of global synchronization, the architecture allows deterministic channel access and low power consumption. Moreover, it makes it possible to communicate concurrently over multiple frequency channels or using orthogonal preamble codes. To schedule communications in such a network, we designed a dedicated centralized scheduler inspired by the traffic-aware scheduling algorithm (TASA). By organizing the anchors into multiple cells, the scheduler is able to perform simultaneous localizations and transmissions as long as the corresponding anchors are sufficiently far apart not to interfere with each other. In our indoor positioning system (IPS), this is combined with dynamic registration of mobile tags to anchors, which eases mobility, since no rescheduling is required. This approach makes our UWB IPS more scalable and reduces deployment costs, since it does not require separate networks to perform ranging measurements and to forward them to the localization engine. We further improved our scheduling algorithm with support for multiple sinks and in-network data aggregation. We show, through simulations over large networks containing hundreds of cells, that high positioning rates can be achieved. Notably, we were able to fully schedule a 400-cell/400-tag network in less than 11 s in the worst case, and to create compact schedules that were up to 11 times shorter with the use of aggregation, while also bounding queue sizes on anchors to support realistic use situations.
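
    The spatial-reuse idea behind such cell-based scheduling can be sketched as greedy packing of cells into timeslots, where only cells farther apart than an interference range may share a slot; the geometry and threshold below are hypothetical, and this is a simplification rather than the authors' TASA-based algorithm:

```python
# Sketch: pack localization cells into timeslots with spatial reuse.
# Cells whose centroids are at least `min_separation` apart may share a slot.
import math

def schedule_cells(cells, min_separation):
    """cells: list of (x, y) cell centroids. Returns a list of slots,
    each slot being the list of cells active concurrently in it."""
    slots = []
    for c in cells:
        for slot in slots:
            if all(math.dist(c, other) >= min_separation for other in slot):
                slot.append(c)           # reuse this slot concurrently
                break
        else:
            slots.append([c])            # no compatible slot: open a new one

    return slots

# A 5x5 grid of cells, 10 m apart, with an assumed 25 m interference range.
grid = [(x * 10.0, y * 10.0) for x in range(5) for y in range(5)]
slots = schedule_cells(grid, min_separation=25.0)
print(f"{len(grid)} cells scheduled into {len(slots)} timeslots")
```

    Greedy packing of this kind shortens the schedule roughly in proportion to how many non-interfering cells can transmit at once, which is the effect the reported 400-cell results exploit.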