1,044 research outputs found

    Dynamic max-consensus with local self-tuning

    This work describes a novel control protocol for multi-agent systems that solves the dynamic max-consensus problem. In this problem, each agent has access to an external time-varying scalar signal and aims to estimate and track the maximum among all these signals by exploiting only local communications. The main strength of the proposed protocol is its ability to self-tune its internal parameters so as to achieve an arbitrarily small steady-state error without significantly affecting the convergence time. We employ the proposed protocol in the context of distributed estimation of graph parameters, such as size, diameter, and radius, and provide simulations in the scenario of open multi-agent systems. Copyright (C) 2022 The Authors
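    The abstract's core idea can be illustrated with a minimal max-consensus iteration. This is a generic sketch, not the paper's self-tuning protocol: the update rule, the fixed decay parameter `alpha`, and all names are illustrative assumptions. Each agent keeps an estimate `x[i]`, combines its own signal `u[i]` with its neighbors' estimates, and the decay lets the estimate fall again when the true maximum decreases.

```python
# Hypothetical sketch of one synchronous max-consensus update:
#   x_i <- max(u_i, max_{j in N(i) ∪ {i}} x_j - alpha)
# `alpha` trades steady-state error against tracking of a decreasing max;
# the paper's protocol self-tunes this trade-off, which is not modeled here.

def max_consensus_step(x, u, neighbors, alpha=0.1):
    """One synchronous update over all agents (undirected neighbor lists)."""
    return [max(u[i], max(x[j] for j in neighbors[i] + [i]) - alpha)
            for i in range(len(x))]

# Path graph 0-1-2 with constant local signals; true maximum is u[1] = 5.0.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
u = [1.0, 5.0, 2.0]
x = list(u)
for _ in range(20):
    x = max_consensus_step(x, u, neighbors)
print(x)  # agents 0 and 2 settle at 4.9: steady-state error equals alpha
```

Note how the non-maximal agents converge to `5.0 - alpha` rather than `5.0`, which is exactly the steady-state error the abstract's self-tuning mechanism is designed to shrink.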

    Malicious node detection using machine learning and distributed data storage using blockchain in WSNs

    In the proposed work, blockchain is implemented on the Base Stations (BSs) and Cluster Heads (CHs) to register nodes using their credentials and to tackle various security issues. Moreover, a Machine Learning (ML) classifier, termed Histogram Gradient Boost (HGB), is employed on the BSs to classify nodes as malicious or legitimate. If a node is found to be malicious, its registration is revoked from the network; if it is found to be legitimate, its data is stored in an InterPlanetary File System (IPFS). IPFS stores the data in the form of chunks and generates a hash for the data, which is then stored in the blockchain. In addition, Verifiable Byzantine Fault Tolerance (VBFT) is used instead of Proof of Work (PoW) to perform consensus and validate transactions. Extensive simulations are performed using the Wireless Sensor Network (WSN) dataset referred to as WSN-DS. The proposed model is evaluated on both the original dataset and a class-balanced version of it. Furthermore, HGB is compared with other existing classifiers, namely Adaptive Boost (AdaBoost), Gradient Boost (GB), Linear Discriminant Analysis (LDA), Extreme Gradient Boost (XGB), and Ridge, using performance metrics such as accuracy, precision, recall, micro-F1 score, and macro-F1 score. The evaluation shows that HGB outperforms GB, AdaBoost, LDA, XGB, and Ridge by 2-4%, 8-10%, 12-14%, 3-5%, and 14-16%, respectively. Moreover, the results on the balanced dataset are better than those on the original dataset, and VBFT performs 20-30% better than PoW. Overall, the proposed model performs efficiently in terms of malicious node detection and secure data storage. © 2013 IEEE