
    Energy and relevance-aware adaptive monitoring method for wireless sensor nodes with hard energy constraints

    © 2024 Elsevier. This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

    Traditional dynamic energy management methods optimize the energy usage of wireless sensor nodes by adjusting their behavior to the operating conditions. However, this comes at the cost of losing predictability in the operation of the sensor nodes. This loss of predictability is particularly problematic for the battery life, as it determines when the nodes need to be serviced. In this paper, we propose an energy and relevance-aware monitoring method, which leverages the principles of self-awareness to address this challenge. On the one hand, the relevance-aware behavior optimizes how the monitoring effort is allocated to maximize the monitoring accuracy; on the other hand, the power-aware behavior adjusts the overall energy consumption of the node to achieve the target battery life. The proposed method balances both behaviors so as to achieve the target battery life while exploiting variations in the collected data to maximize the monitoring accuracy. Furthermore, the method coordinates two different adaptive schemes, a dynamic sampling period scheme and a dual prediction scheme, to adjust the behavior of the sensor node. The evaluation results show that the proposed method consistently meets its battery lifetime goal, even when the operating conditions are artificially changed, and improves the mean square error of the collected signal by up to 20% with respect to the same method with the relevance-aware behavior disabled, and by up to 16% with respect to the same algorithm with only the adaptive sampling period or the dual prediction scheme enabled. This shows the ability of the proposed method to make appropriate decisions that balance the competing interests of its two behaviors and to coordinate the two adaptive schemes to improve their performance.

    This study was supported by the Agència de Gestió d’Ajuts Universitaris i de Recerca (AGAUR 2019 DI 075 to David Arnaiz). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
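    The abstract mentions a dual prediction scheme as one of the two coordinated adaptive schemes. As a rough illustration of the general dual-prediction idea (not the authors' specific algorithm), the sketch below assumes a shared last-value predictor and a tolerance `eps`; both the predictor and the threshold policy are illustrative assumptions.

```python
# Illustrative sketch of a generic dual-prediction scheme (not the paper's exact
# algorithm): sensor and sink run the same predictor; the sensor transmits a
# sample only when the prediction error exceeds a tolerance `eps`, otherwise
# the sink substitutes its own prediction. All names/parameters are assumptions.

def last_value_predictor(history, default=0.0):
    """Predict the next sample as the last transmitted value."""
    return history[-1] if history else default

def dual_prediction(samples, eps=0.5):
    transmitted = []          # values actually sent over the radio
    reconstructed = []        # signal as reconstructed at the sink
    for x in samples:
        pred = last_value_predictor(transmitted)
        if abs(x - pred) > eps:
            transmitted.append(x)         # prediction too far off: send real sample
            reconstructed.append(x)
        else:
            reconstructed.append(pred)    # sink reuses its prediction, no transmission
    return transmitted, reconstructed

if __name__ == "__main__":
    signal = [20.0, 20.1, 20.2, 22.5, 22.6, 22.4, 25.0]
    sent, recon = dual_prediction(signal, eps=0.5)
    print(f"transmissions: {len(sent)}/{len(signal)}")
    print("reconstructed:", recon)
```

    Raising `eps` trades reconstruction error for fewer transmissions, which is the same accuracy-versus-energy balance the relevance-aware behavior described above has to manage.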

    K-Predictions Based Data Reduction Approach in WSN for Smart Agriculture

    Nowadays, climate change is one of the numerous factors affecting the agricultural sector. Optimising the usage of natural resources is one of the challenges this sector faces. For this reason, it can be necessary to locally monitor weather data and soil conditions to make faster and better decisions that are locally adapted to the crop. Wireless sensor networks (WSNs) can serve as a monitoring system for these types of parameters. However, in WSNs, sensor nodes suffer from limited energy resources. Sending a large amount of data from the nodes to the sink results in high energy consumption at the sensor node and significant use of network bandwidth, which reduces the lifetime of the overall network and increases costly interference. Data reduction is one solution to this kind of challenge. In this paper, data correlation is investigated and combined with a data prediction technique in order to avoid sending data that can be retrieved mathematically, with the objective of reducing the energy consumed by sensor nodes and the bandwidth occupation. This data reduction technique relies on observing the variation of every monitored parameter as well as the degree of correlation between different parameters. The approach is validated through MATLAB simulations using real meteorological datasets from the Weather Underground sensor network. The results show the validity of our approach, which reduces the amount of data by up to 88% while maintaining the accuracy of the information, with a standard deviation of 2 degrees for the temperature and 7% for the humidity.
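    A minimal sketch of the general correlation-based reduction idea described above, assuming a simple linear temperature-humidity relationship fitted on an initial window. The model, the tolerance `tol`, and the synthetic data are illustrative assumptions, not the paper's K-predictions algorithm.

```python
# Rough sketch of correlation-based data reduction between two monitored
# parameters (e.g., temperature and humidity). Thresholds, the linear model,
# and the synthetic data are illustrative assumptions.
import numpy as np

def fit_correlation_model(temp, hum):
    """Fit hum ~ a * temp + b on an initial training window."""
    a, b = np.polyfit(temp, hum, deg=1)
    return a, b

def reduce_humidity_stream(temp, hum, model, tol=2.0):
    """Transmit humidity only when the temperature-based prediction is off by > tol."""
    a, b = model
    sent, recon = 0, []
    for t, h in zip(temp, hum):
        pred = a * t + b
        if abs(h - pred) > tol:
            sent += 1
            recon.append(h)      # transmit the real reading
        else:
            recon.append(pred)   # sink derives humidity from temperature
    return sent, np.array(recon)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    temp = 20 + 5 * rng.random(200)
    hum = 80 - 1.5 * temp + rng.normal(0, 1, 200)   # correlated with temperature
    model = fit_correlation_model(temp[:50], hum[:50])
    sent, recon = reduce_humidity_stream(temp[50:], hum[50:], model, tol=2.0)
    print(f"humidity samples sent: {sent}/{len(temp[50:])}")
    print(f"reconstruction std-dev: {np.std(hum[50:] - recon):.2f}")
```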

    Performance analysis of a two-level polling control system based on LSTM and attention mechanism for wireless sensor networks

    A continuous-time exhaustive-limited (K = 2) two-level polling control system is proposed to address the growing network scale, service volume, and need for network performance prediction in the Internet of Things (IoT), and a Long Short-Term Memory (LSTM) network with an attention mechanism is used for its predictive analysis. First, the central site uses the exhaustive service policy and the common sites use the limited (K = 2) service policy to establish a continuous-time exhaustive-limited (K = 2) two-level polling control system. Second, exact expressions for the average queue length, average delay, and cycle period are derived using probability generating functions and Markov chains, and are checked against MATLAB simulation experiments. Finally, an LSTM neural network with an attention mechanism is constructed for prediction. The experimental results show that the theoretical and simulated values essentially match, verifying the rationality of the theoretical analysis. The system not only differentiates priorities, ensuring that the central site receives quality service while remaining fair to the common sites, but also improves performance by 7.3% and 12.2%, respectively, compared with the one-level exhaustive service and the one-level limited (K = 2) service. Compared with the two-level gated-exhaustive service model, the queue length and delay of the central site are smaller, indicating a higher priority for the central site in this model. Compared with the exhaustive-limited (K = 1) two-level model, it increases the number of information packets sent per visit and has better latency performance, providing a stable and reliable guarantee for wireless network services with strict latency requirements. Building on this, a fast evaluation method based on neural network prediction is proposed, which can accurately predict system performance as the system size increases and simplifies the calculations.
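    To make the polling model above concrete, the following is a deliberately simplified slot-based simulation of a two-level polling system with exhaustive service at the central site and limited-K (K = 2) service at the common sites. The arrival probabilities, one-slot service time, and one-slot switch-over are assumptions for illustration and do not reproduce the paper's exact analytical model.

```python
# Simplified slot-based simulation of a two-level polling system:
# one central queue served exhaustively, N common queues served limited-K (K=2).
# All rates and timing assumptions are illustrative only.
import random
from collections import deque

def simulate(n_common=4, lam_central=0.2, lam_common=0.05, K=2, slots=100_000):
    random.seed(1)
    central, common = deque(), [deque() for _ in range(n_common)]
    qlen_central = qlen_common = 0

    def arrivals():
        if random.random() < lam_central:
            central.append(1)
        for q in common:
            if random.random() < lam_common:
                q.append(1)

    t, site = 0, 0
    while t < slots:
        # Exhaustive service at the central site: serve until it empties.
        while central and t < slots:
            central.popleft(); arrivals(); t += 1
            qlen_central += len(central); qlen_common += sum(map(len, common))
        # Limited-K service at the next common site: at most K packets per visit.
        q = common[site]
        for _ in range(min(K, len(q))):
            if t >= slots:
                break
            q.popleft(); arrivals(); t += 1
            qlen_central += len(central); qlen_common += sum(map(len, common))
        # One-slot switch-over to the next common site.
        arrivals(); t += 1
        qlen_central += len(central); qlen_common += sum(map(len, common))
        site = (site + 1) % n_common

    print(f"mean central queue length: {qlen_central / slots:.3f}")
    print(f"mean common queue length : {qlen_common / (slots * n_common):.3f}")

if __name__ == "__main__":
    simulate()
```

    Even this toy version reproduces the qualitative behavior claimed above: the exhaustively served central queue stays shorter than the limited-service common queues.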

    A Survey on UAV-Aided Maritime Communications: Deployment Considerations, Applications, and Future Challenges

    Maritime activities represent a major domain of economic growth, with several emerging maritime Internet of Things use cases such as smart ports, autonomous navigation, and ocean monitoring systems. The major enabler for this ecosystem is the provision of broadband, low-delay, and reliable wireless coverage to the ever-increasing number of vessels, buoys, platforms, sensors, and actuators. Towards this end, the integration of unmanned aerial vehicles (UAVs) in maritime communications introduces an aerial dimension to wireless connectivity, going above and beyond current deployments, which mainly rely on shore-based base stations with limited coverage and satellite links with high latency. Considering the potential of UAV-aided wireless communications, this survey presents the state of the art in UAV-aided maritime communications, which, in general, is based on both conventional optimization and machine-learning-aided approaches. More specifically, relevant UAV-based network architectures are discussed together with the role of their building blocks. Then, physical-layer, resource management, and cloud/edge computing and caching UAV-aided solutions in maritime environments are discussed and grouped based on their performance targets. Moreover, as UAVs are characterized by flexible deployment and high re-positioning capabilities, studies on UAV trajectory optimization for maritime applications are thoroughly discussed. In addition, aiming at shedding light on the current status of real-world deployments, experimental studies on UAV-aided maritime communications are presented and implementation details are given. Finally, several important open issues in the area of UAV-aided maritime communications are given, related to the integration of sixth-generation (6G) advancements.

    Machine Learning Meets Communication Networks: Current Trends and Future Challenges

    The growing network density and unprecedented increase in network traffic, caused by the massively expanding number of connected devices and online services, require intelligent network operations. Machine Learning (ML) has been applied in this regard in different types of networks and networking technologies to meet the requirements of future communicating devices and services. In this article, we provide a detailed account of current research on the application of ML in communication networks and shed light on future research challenges. Research on the application of ML in communication networks is described across: i) the three layers, i.e., the physical, access, and network layers; and ii) novel computing and networking concepts such as Multi-access Edge Computing (MEC), Software Defined Networking (SDN), and Network Functions Virtualization (NFV), together with a brief overview of ML-based network security. Important future research challenges are identified and presented to help spur further research in key areas in this direction.

    A Review of Indoor Millimeter Wave Device-based Localization and Device-free Sensing Technologies and Applications

    The commercial availability of low-cost millimeter wave (mmWave) communication and radar devices is starting to improve the penetration of such technologies in consumer markets, paving the way for large-scale and dense deployments in fifth-generation (5G)-and-beyond as well as 6G networks. At the same time, pervasive mmWave access will enable device localization and device-free sensing with unprecedented accuracy, especially with respect to sub-6 GHz commercial-grade devices. This paper surveys the state of the art in device-based localization and device-free sensing using mmWave communication and radar devices, with a focus on indoor deployments. We first overview key concepts of mmWave signal propagation and system design. Then, we provide a detailed account of approaches and algorithms for localization and sensing enabled by mmWaves. We consider several dimensions in our analysis, including the main objectives, techniques, and performance of each work, whether each work reached some degree of implementation, and which hardware platforms were used for this purpose. We conclude by arguing that better algorithms for consumer-grade devices, data fusion methods for dense deployments, and an educated application of machine learning methods are promising, relevant, and timely research directions.

    Classifier-Based Data Transmission Reduction in Wearable Sensor Network for Human Activity Monitoring

    The recent development of wireless wearable sensor networks offers a spectrum of new applications in fields such as healthcare, medicine, activity monitoring, sport, safety, and human-machine interfacing. Successful use of this technology depends on the lifetime of the battery-powered sensor nodes. This paper presents a new method for extending the lifetime of wearable sensor networks by avoiding unnecessary data transmissions. The introduced method is based on embedded classifiers that allow sensor nodes to decide whether current sensor readings have to be transmitted to the cluster head. In order to train the classifiers, a procedure was developed that takes into account the impact of data selection on the accuracy of the recognition system. The approach was implemented in a prototype wearable sensor network for human activity monitoring. Real-world experiments were conducted to evaluate the new method in terms of network lifetime, energy consumption, and accuracy of human activity recognition. The results of the experimental evaluation confirm that the proposed method significantly prolongs the network lifetime while preserving high accuracy of activity recognition. The experiments also reveal advantages of the method in comparison with state-of-the-art algorithms for data transmission reduction.
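    As a hedged sketch of the general idea of classifier-based transmission reduction (not the authors' training procedure), the example below fits a small decision tree to synthetic accelerometer-like features and transmits only when the locally predicted activity label changes; the feature set, classifier choice, and transmit-on-change policy are assumptions.

```python
# Sketch: an on-node classifier predicts the activity label from local features
# and the node transmits only when the predicted label changes. Features,
# classifier, and policy are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Toy features for two activities (e.g., sitting vs. walking).
X_train = np.vstack([rng.normal(0.0, 0.2, (100, 3)), rng.normal(1.0, 0.2, (100, 3))])
y_train = np.array([0] * 100 + [1] * 100)
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Simulated stream: sitting, then walking, then sitting again.
stream = np.vstack([rng.normal(0.0, 0.2, (50, 3)),
                    rng.normal(1.0, 0.2, (50, 3)),
                    rng.normal(0.0, 0.2, (50, 3))])

last_reported, transmissions = None, 0
for window in stream:
    label = int(clf.predict(window.reshape(1, -1))[0])
    if label != last_reported:          # transmit only when the activity changes
        transmissions += 1
        last_reported = label

print(f"transmissions: {transmissions} out of {len(stream)} windows")
```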

    Advanced Signal Processing in Wearable Sensors for Health Monitoring

    Smart wearable devices on a miniature scale are becoming increasingly widely available, typically in the form of smart watches and other connected devices. Consequently, devices to assist in measurements such as electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG), blood pressure (BP), photoplethysmography (PPG), heart rhythm, respiration rate, apnoea, and motion detection are becoming more available and play a significant role in healthcare monitoring. The industry is placing great emphasis on making these devices and technologies available on smart devices such as phones and watches. Such measurements are clinically and scientifically useful for real-time monitoring, long-term care, and diagnostic and therapeutic techniques. However, a persistent issue is that the recorded data are usually noisy, contain many artefacts, and are affected by external factors such as movement and physical conditions. In order to obtain accurate and meaningful indicators, the signal has to be processed and conditioned so that the measurements are accurate and free from noise and disturbances. In this context, many researchers have utilized recent technological advances in wearable sensors and signal processing to develop smart and accurate wearable devices for clinical applications. The processing and analysis of physiological signals is a key issue for these smart wearable devices. Consequently, ongoing work in this field includes research on filtering, quality checking, signal transformation and decomposition, feature extraction and, most recently, machine-learning-based methods.
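    As a small illustration of one signal-conditioning step of the kind mentioned above, the sketch below band-pass filters a synthetic, noisy PPG-like waveform with a Butterworth filter before any feature extraction; the sampling rate and cut-off frequencies are assumed values chosen only for the example.

```python
# Band-pass filtering of a synthetic PPG-like signal prior to feature extraction.
# Sampling rate and cut-off frequencies are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
heart_rate_hz = 1.2             # roughly 72 beats per minute
clean = np.sin(2 * np.pi * heart_rate_hz * t)
noisy = clean + 0.5 * np.random.randn(t.size) + 0.3 * np.sin(2 * np.pi * 0.05 * t)

# 4th-order Butterworth band-pass around typical heart-rate frequencies.
b, a = butter(4, [0.5, 5.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, noisy)

print(f"noise std before filtering: {np.std(noisy - clean):.3f}")
print(f"noise std after filtering : {np.std(filtered - clean):.3f}")
```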

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices, which yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, the unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.
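    One widely used concrete form of reservoir computing is the echo state network. The minimal sketch below, with assumed reservoir size, leak rate, and spectral radius, fixes a random recurrent reservoir and trains only a ridge-regression readout for one-step-ahead prediction of a sine wave; it illustrates the general principle described above rather than any specific model from the survey.

```python
# Minimal echo state network: fixed random reservoir, trained linear readout.
# All sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_res, leak, rho = 200, 0.3, 0.9

# Fixed random input and reservoir weights; rescale to the target spectral radius.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape [T]."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        pre = W_in[:, 0] * u_t + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)   # leaky-integrator update
        states.append(x.copy())
    return np.array(states)

# Task: one-step-ahead prediction of a sine wave.
t = np.arange(0, 60, 0.1)
u, target = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Ridge-regression readout trained on the first 400 steps only.
lam = 1e-6
W_out = np.linalg.solve(X[:400].T @ X[:400] + lam * np.eye(n_res), X[:400].T @ target[:400])

pred = X[400:] @ W_out
print(f"test MSE: {np.mean((pred - target[400:]) ** 2):.5f}")
```

    The reservoir weights are never trained; only the readout is fitted, which is what keeps training cheap and makes physical reservoir implementations attractive.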