16 research outputs found

    A Federated Filtering Framework for Internet of Medical Things

    In the dominant paradigm, the wearable IoT devices used in the healthcare sector, also known as the Internet of Medical Things (IoMT), are resource constrained in power and computational capability. IoMT devices continuously push their readings to remote cloud servers for real-time data analytics, which drains the device battery faster. Other demerits of continuously centralizing data include exposed privacy and high latency. This paper presents a novel Federated Filtering Framework for IoMT devices, based on predicting the data at a central fog server using shared models provided by the local IoMT devices. The fog server performs model averaging to predict the aggregated data matrix and also computes filter parameters for the local IoMT devices. The two significant theoretical contributions of this paper are the global tolerable perturbation error (Tol_F) and the local filtering parameter (δ); the former controls the decision-making accuracy under eigenvalue perturbation, and the latter balances the tradeoff between the communication overhead and the perturbation error of the aggregated (predicted) data matrix at the fog server. Experimental evaluation based on real healthcare data demonstrates that the proposed scheme saves up to 95% of the communication cost while maintaining reasonable data privacy and low latency. Comment: 6 pages, 6 figures, accepted for oral presentation at IEEE ICC 2019. Keywords: Internet of Things, Federated Learning, Perturbation theory.
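
    A minimal sketch of the decision logic this abstract describes, assuming hypothetical names and a simple linear model (this is not the authors' implementation): the fog server averages the models shared by the devices, and each device transmits a raw reading only when it deviates from the fog-side prediction by more than the local filtering parameter δ.

        import numpy as np

        # Hypothetical sketch of the filtering decision described above; names
        # and model structure are illustrative assumptions, not the paper's code.

        def average_models(local_models):
            """Fog server: average the model weights shared by the IoMT devices."""
            return np.mean(np.stack(local_models), axis=0)

        def should_transmit(reading, predicted, delta):
            """IoMT device: send the raw reading only if it deviates from the
            fog-side prediction by more than the local filtering parameter delta."""
            return abs(reading - predicted) > delta

        # Example: three devices share simple linear-model weights with the fog server.
        local_models = [np.array([0.9, 1.1]), np.array([1.0, 0.8]), np.array([1.1, 1.0])]
        global_model = average_models(local_models)

        reading, predicted, delta = 37.4, 37.1, 0.5
        if should_transmit(reading, predicted, delta):
            print("transmit raw reading")       # prediction too far off
        else:
            print("suppress transmission")      # fog prediction is good enough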

    Heterogeneous data reduction in WSN: Application to Smart Grids

    The transformation of existing power grids into Smart Grids (SGs) aims to facilitate grid energy automation for a better quality of service by providing fault tolerance and integrating renewable energy resources into the power market. This evolution towards a smarter electricity grid requires the ability to transmit, in real time, as much data as possible about network usage. A Wireless Sensor Network (WSN) distributed across the power grid is a promising solution, given the reduced cost and ease of deployment of such networks. These advantages are offset by the unstable radio links and limited resources of WSNs. To reduce the amount of data sent over the network, and thus the energy consumption, data prediction is a potent data reduction technique. It consists of predicting the values sensed by sensor nodes within a certain error threshold, and resides both at the sensors and at the sink. The raw data is sent only if the desired accuracy is not satisfied, thereby reducing data transmission. We focus on time series estimation with Least Mean Square (LMS) filters for data prediction in WSNs, in a Smart Grid context where several applications with different data types and Quality of Service (QoS) requirements coexist on the same network. LMS has proved its simplicity and robustness for a wide variety of applications, but the parameter selection (step size and filter length) directly affects its overall performance, so choosing the right values is crucial. Having no clear and robust method for optimizing these parameters across a variety of applications, we propose a modification of the original LMS that trains the filter for a certain time on the data itself in order to customize the aforementioned parameters. We consider different types of real data traces from photovoltaic cell monitoring. Our simulation results show better data prediction, with a lower mean square error, compared to an existing solution in the literature.
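
    A generic sketch of the idea, assuming a plain LMS one-step-ahead predictor and a simple training pass that picks the step size from a handful of candidates (the filter length could be tuned the same way); this is not the paper's exact algorithm, only an illustration of training the filter on the data itself to select its parameters.

        import numpy as np

        # Minimal LMS one-step-ahead predictor with a simple training pass for
        # step-size selection; a generic sketch, not the paper's algorithm.

        def lms_predict(series, filter_len=4, mu=0.01):
            """Return one-step-ahead predictions of `series` using an LMS filter."""
            w = np.zeros(filter_len)
            preds = np.zeros(len(series))
            for n in range(filter_len, len(series)):
                x = series[n - filter_len:n][::-1]   # most recent samples first
                preds[n] = w @ x                     # predicted value
                e = series[n] - preds[n]             # prediction error
                w += mu * e * x                      # LMS weight update
            return preds

        def tune_step_size(train_series, candidates=(0.001, 0.005, 0.01, 0.05)):
            """Try a few step sizes on a training window and keep the best one."""
            errors = {mu: np.mean((train_series - lms_predict(train_series, mu=mu)) ** 2)
                      for mu in candidates}
            return min(errors, key=errors.get)

        data = np.sin(np.linspace(0, 20, 400)) + 0.05 * np.random.randn(400)
        mu = tune_step_size(data[:100])              # train on an initial window
        print("selected step size:", mu)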

    Optimization of Adaptive Method for Data Reduction in Wireless Sensor Networks

    'Wireless' refers to cordless technology in which nodes interact or exchange information with the sink node without any wired intervention. Present wireless sensor networks build on advances in low-power communications and very-large-scale integration to sustain their sensing functionality. Tremendous numbers of dynamic observations and measurements are gathered from sensors in Wireless Sensor Networks (WSNs) for Internet of Things (IoT) applications such as environmental monitoring. However, continuous dissemination of the sensed data entails high energy consumption. Data reduction allows sensor nodes to cease transmitting data when it is unlikely to contain new information. One way to reduce this energy consumption is to minimize the amount of data exchanged between the sensors; this research work therefore aims to improve the communication model and the spatial prediction between the sensor nodes and the sink nodes, which is the limitation of the base paper. In this work, an Optimization of the Adaptive Method for Data Reduction in Wireless Sensor Networks (OAM-DR) was proposed and implemented. The work adopts a convex combination of two decoupled Least-Mean-Square (LMS) windowed filters of varying length for estimating the next sensed values both at the sink and the source node, so that sensor nodes only send values that deviate substantially (beyond a pre-determined threshold) from the predicted values. The experiment was conducted on a real-world dataset of about 2,313,682 readings collected from 54 Mica2Dot sensors, and MATLAB was used as the implementation tool. The results show that our approach (OAM-DR) achieves up to 98% communication reduction while retaining high accuracy (i.e. the predicted values have a deviation of ±0.5 from the actual data values).
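
    A hedged sketch of the dual-prediction pattern that OAM-DR (and the AM-DR scheme below) builds on: source and sink keep identical predictors, so a reading crosses the network only when the predictor misses by more than the agreed threshold. A trivial last-value predictor stands in for the LMS filters here; names and structure are illustrative assumptions, not the authors' code.

        # Illustrative sketch of a dual-prediction link: both ends track the same
        # state, so the sink can substitute its own prediction whenever the
        # source decides not to transmit.

        class DualPredictionLink:
            def __init__(self, threshold=0.5):
                self.threshold = threshold
                self.last_value = None          # both ends track the same state

            def predict(self):
                # Trivial "last value" predictor stands in for the LMS filters.
                return self.last_value

            def source_step(self, reading):
                """Return the reading if it must be transmitted, else None."""
                pred = self.predict()
                if pred is None or abs(reading - pred) > self.threshold:
                    self.last_value = reading   # sink receives and stores the same
                    return reading
                return None                     # sink falls back on its prediction

        source = DualPredictionLink(threshold=0.5)
        readings = [20.0, 20.1, 20.2, 21.5, 21.6]
        sent = [r for r in readings if source.source_step(r) is not None]
        print(f"transmitted {len(sent)} of {len(readings)} readings:", sent)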

    An adaptive method for data reduction in the Internet of Things

    Enormous amounts of dynamic observation and measurement data are collected from sensors in Wireless Sensor Networks (WSNs) for Internet of Things (IoT) applications such as environmental monitoring. However, continuous transmission of the sensed data requires high energy consumption. Data transmission between sensor nodes and cluster heads (sink nodes) consumes much more energy than data sensing in WSNs. One way of reducing such energy consumption is to minimise the number of data transmissions. In this paper, we propose an Adaptive Method for Data Reduction (AM-DR). Our method is based on a convex combination of two decoupled Least-Mean-Square (LMS) windowed filters with differing sizes for estimating the next measured values both at the source and the sink node, such that sensor nodes have to transmit only those sensed values that deviate significantly (beyond a pre-defined threshold) from the predicted values. Experiments conducted on real-world data show that our approach achieves up to 95% communication reduction while retaining high accuracy (i.e. predicted values have a deviation of ±0.5 from the real data values).
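
    A minimal sketch of the convex combination of two LMS filters of different lengths, assuming illustrative parameter values and a simple gradient-style rule for adapting the mixing weight; it shows the general technique named in the abstract, not AM-DR's exact update equations.

        import numpy as np

        # Convex combination of a short and a long LMS filter; the mixing weight
        # is nudged toward whichever filter predicted better (illustrative sketch).

        class LMS:
            def __init__(self, length, mu):
                self.w = np.zeros(length)
                self.mu = mu

            def step(self, x, d):
                y = self.w @ x                       # predict with current weights
                self.w += self.mu * (d - y) * x      # then update
                return y

        def combined_prediction(series, short_len=2, long_len=8, mu=0.01, mu_a=0.1):
            fast, slow = LMS(short_len, mu), LMS(long_len, mu)
            a = 0.5                                  # mixing weight in [0, 1]
            preds = np.zeros(len(series))
            for n in range(long_len, len(series)):
                x_long = series[n - long_len:n][::-1]
                y_fast = fast.step(x_long[:short_len], series[n])
                y_slow = slow.step(x_long, series[n])
                preds[n] = a * y_fast + (1 - a) * y_slow
                # Favour the filter that was more accurate on this sample.
                a += mu_a * ((series[n] - y_slow) ** 2 - (series[n] - y_fast) ** 2)
                a = min(max(a, 0.0), 1.0)
            return preds

        data = np.cos(np.linspace(0, 15, 300)) + 0.05 * np.random.randn(300)
        preds = combined_prediction(data)
        print("mean squared error:", np.mean((data[8:] - preds[8:]) ** 2))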

    LINT: Accuracy-adaptive and Lightweight In-band Network Telemetry

    In-band Network Telemetry (INT) has recently emerged as a means of achieving per-packet, near real-time visibility into the network. INT-capable network devices can directly embed internal device state such as packet processing time, queue occupancy and link utilization in each passing packet. INT is enabling new network monitoring applications and is currently used in production for providing fine-grained feedback to congestion control mechanisms. The microscopic network visibility facilitated by INT comes at the expense of increased data plane overhead. INT piggybacks telemetry information on user data traffic and can significantly increase packet size. A direct consequence of increasing packet size to carry telemetry data is a substantial drop in network goodput. This paper aims at striking a balance between reducing INT data plane overhead and preserving the accuracy of the network view constructed from telemetry data. To this end, we propose LINT, an accuracy-adaptive and Lightweight INT mechanism that can be implemented on commodity programmable devices. Our evaluation of LINT using real network traces on a fat-tree topology demonstrates that LINT can reduce INT data plane overhead by ≈25% while ensuring more than 0.9 recall for monitoring queries trying to identify congested flows and switches in the network.
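
    An illustrative sketch of the general accuracy-versus-overhead trade-off described above, assuming a fixed relative-change threshold in place of LINT's adaptive mechanism: a switch refreshes a telemetry field in a passing packet only when its current value has drifted noticeably from the last reported one. This is not LINT's actual data-plane logic.

        # A switch-side filter for telemetry fields: embed the value only when it
        # has drifted from the last reported value by more than a relative
        # threshold (fixed here for simplicity; LINT adapts this at runtime).

        class AdaptiveTelemetry:
            def __init__(self, rel_threshold=0.1):
                self.rel_threshold = rel_threshold
                self.last_reported = None

            def maybe_report(self, current_value):
                """Return the value to embed in the packet, or None to skip it."""
                if (self.last_reported is None or self.last_reported == 0 or
                        abs(current_value - self.last_reported) / abs(self.last_reported)
                        > self.rel_threshold):
                    self.last_reported = current_value
                    return current_value
                return None

        switch = AdaptiveTelemetry(rel_threshold=0.1)
        queue_depths = [100, 104, 108, 130, 131, 90]
        reports = [switch.maybe_report(q) for q in queue_depths]
        print("embedded per packet:", reports)   # None means the field was omitted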

    Classifier-Based Data Transmission Reduction in Wearable Sensor Network for Human Activity Monitoring

    The recent development of wireless wearable sensor networks offers a spectrum of new applications in healthcare, medicine, activity monitoring, sport, safety, human-machine interfacing, and beyond. Successful use of this technology depends on the lifetime of the battery-powered sensor nodes. This paper presents a new method for extending the lifetime of wearable sensor networks by avoiding unnecessary data transmissions. The introduced method is based on embedded classifiers that allow sensor nodes to decide whether the current sensor readings have to be transmitted to the cluster head or not. In order to train the classifiers, a procedure was elaborated which takes into account the impact of data selection on the accuracy of the recognition system. This approach was implemented in a prototype wearable sensor network for human activity monitoring. Real-world experiments were conducted to evaluate the new method in terms of network lifetime, energy consumption, and accuracy of human activity recognition. Results of the experimental evaluation confirm that the proposed method enables significant prolongation of the network lifetime while preserving high accuracy of the activity recognition. The experiments have also revealed advantages of the method in comparison with state-of-the-art algorithms for data transmission reduction.
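
    A hedged sketch of the general pattern of an embedded transmit/skip classifier, assuming a small decision tree, synthetic accelerometer-style features, and a made-up labelling rule (none of which come from the paper): the classifier is trained offline, and the node queries it to decide whether a window of readings needs to be sent to the cluster head.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical transmit/skip classifier; features, labels and model choice
        # are assumptions for illustration, not the paper's setup.

        rng = np.random.default_rng(0)
        features = rng.normal(size=(500, 3))                       # e.g. per-window accelerometer statistics
        labels = (np.abs(features).max(axis=1) > 1.0).astype(int)  # 1 = transmit

        clf = DecisionTreeClassifier(max_depth=3).fit(features, labels)

        def node_step(window_features):
            """On the sensor node: transmit only if the embedded classifier says so."""
            return bool(clf.predict(window_features.reshape(1, -1))[0])

        new_window = rng.normal(size=3)
        print("transmit this window:", node_step(new_window))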

    Applications of Prediction Approaches in Wireless Sensor Networks

    Wireless Sensor Networks (WSNs) collect data and continuously monitor ambient conditions such as temperature, humidity and light. The continuous data transmission of energy-constrained sensor nodes is a challenge to the lifetime and performance of WSNs. The type of deployment environment and the network topology also contribute to the depletion of nodes, which threatens the lifetime and performance of the network. To overcome these challenges, a number of approaches have been proposed and implemented, among them routing, clustering, prediction, and duty cycling. Prediction approaches may be used to schedule the sleep periods of nodes to improve the lifetime. The chapter discusses WSN deployment environments, energy conservation techniques, mobility in WSNs, prediction approaches, and their applications in scheduling the sleep/wake-up periods of sensor nodes.
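
    A generic illustration, not taken from the chapter, of how prediction can drive sleep scheduling: when recent readings are stable enough for a simple predictor, the node skips the next few sampling rounds and lets the sink rely on predicted values. The tolerance and round counts are arbitrary placeholders.

        # If recent readings have been stable for a last-value predictor, the node
        # may sleep through upcoming sampling rounds; otherwise it stays awake.

        def plan_sleep(recent, tolerance=0.2, max_sleep_rounds=4):
            """Return how many upcoming rounds the node may sleep, based on how
            well a last-value predictor matched the recent readings."""
            errors = [abs(b - a) for a, b in zip(recent, recent[1:])]
            if not errors or max(errors) > tolerance:
                return 0                       # readings are changing: stay awake
            # The more stable the signal, the longer the node can safely sleep.
            stability = 1.0 - max(errors) / tolerance
            return int(round(stability * max_sleep_rounds))

        print(plan_sleep([22.0, 22.1, 22.05, 22.1]))   # stable -> sleep a few rounds
        print(plan_sleep([22.0, 23.5, 21.0, 24.0]))    # volatile -> 0, stay awake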