BioSec: A biometric authentication framework for secure and private communication among edge devices in IoT and industry 4.0
The rapid expansion of Internet of Things (IoT) deployments brings challenges such as security and privacy. One way to address these in IoT-based systems is user authentication. To date, user authentication has been provided by traditional methods such as PINs and tokens. However, traditional credentials can be forgotten, stolen, or shared with unauthorized users. To address these challenges, we propose a biometric method called BioSec that provides fingerprint-based authentication in IoT systems integrated with edge consumer electronics. Further, we secure the biometric data both in the transmission channel and in the database with standard encryption methods. BioSec thus ensures secure and private communication among edge devices in IoT and Industry 4.0. Finally, we compared three encryption methods used to protect biometric templates in terms of processing time and found that AES with a 128-bit key outperforms the others
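The processing-time comparison described above can be sketched with a small benchmark harness. This is a hedged, stdlib-only illustration: the hash functions below merely stand in for the cipher implementations the paper actually times (AES-128 itself would require a third-party library such as PyCryptodome), and the 512-byte random buffer is a hypothetical placeholder for a real fingerprint template.

```python
import hashlib
import os
import time

def benchmark(transform, template, runs=200):
    """Average wall-clock time of applying `transform` to `template`."""
    start = time.perf_counter()
    for _ in range(runs):
        transform(template)
    return (time.perf_counter() - start) / runs

template = os.urandom(512)  # hypothetical stand-in for a fingerprint template
results = {
    name: benchmark(lambda t, n=name: hashlib.new(n, t).digest(), template)
    for name in ("sha256", "sha512", "blake2b")
}
fastest = min(results, key=results.get)  # the method with the lowest mean time
```

The same harness shape applies to real ciphers: swap the hash call for an encrypt call and the dictionary keys for the cipher names under test.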
BlockFaaS: Blockchain-enabled serverless computing framework for AI-driven IoT healthcare applications
With the development of new sensor technologies, Internet of Things (IoT)-based healthcare applications have gained momentum in recent years. However, IoT devices have limited resources, making them incapable of executing large computational operations. To solve this problem, the serverless paradigm, with advantages such as dynamic scalability and managed infrastructure, can be used to support the requirements of IoT-based applications. However, due to the heterogeneous structure of IoT, user trust must also be taken into account when providing this integration. This problem can be overcome by using a Blockchain, which guarantees data immutability and ensures that any data generated by an IoT device is not modified. This paper proposes the BlockFaaS framework, which supports dynamic scalability and guarantees security and privacy by integrating a serverless platform and Blockchain architecture into latency-sensitive Artificial Intelligence (AI)-based healthcare applications. To do this, we deployed the AIBLOCK framework, which guarantees data immutability in smart healthcare applications, into HealthFaaS, a serverless-based framework for heart disease risk detection. To extend this framework, we used higher-performance AI models and a more efficient Blockchain module. We use the Transport Layer Security (TLS) protocol in all communication channels to ensure privacy within the framework. To validate the proposed framework, we compare its performance with the HealthFaaS and AIBLOCK frameworks. The results show that BlockFaaS achieves an AUC 4.79% higher than HealthFaaS and consumes 162.82 millijoules less energy in the Blockchain module than AIBLOCK. Additionally, we examine the cold start latency observed on Google Cloud Platform, the serverless platform into which BlockFaaS is integrated, and the factors affecting it
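The immutability guarantee that the Blockchain module provides can be illustrated with a minimal hash chain. This is a hedged, stdlib-only sketch, not the AIBLOCK or BlockFaaS implementation: each block commits to its payload and to the previous block's hash, so tampering with any stored health reading breaks verification. The sensor fields are hypothetical.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash commits to its payload and predecessor."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and link; any tampering invalidates the chain."""
    for i, block in enumerate(chain):
        body = json.dumps({"data": block["data"], "prev": block["prev"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# hypothetical readings from an IoT health sensor
chain = [make_block({"heart_rate": 72}, "0" * 64)]
chain.append(make_block({"heart_rate": 95}, chain[-1]["hash"]))
```

Because each hash covers the previous hash, modifying an early block forces every later block to fail verification, which is the property that keeps IoT-generated data unmodified.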
EdgeAISim: A toolkit for simulation and modelling of AI models in edge computing environments
To meet next-generation Internet of Things (IoT) application demands, edge computing moves processing power and storage closer to the network edge to minimize latency and bandwidth utilization. These benefits have made edge computing increasingly popular, but it comes with challenges such as managing resources efficiently. Researchers are utilising Artificial Intelligence (AI) models to solve the challenge of resource management in edge computing systems. However, existing simulation tools support only typical resource management policies, not the adoption and implementation of AI models for resource management. Consequently, researchers still face significant hurdles, making it hard and time-consuming to use AI models when designing novel resource management policies for edge computing with existing simulation tools. To overcome these issues, we propose a lightweight Python-based toolkit called EdgeAISim for simulating and modelling AI models used to design resource management policies in edge computing environments. In EdgeAISim, we extended the basic components of the EdgeSimPy framework and developed new AI-based simulation models for task scheduling, energy management, service migration, network flow scheduling, and mobility support in edge computing environments. We utilized advanced AI models such as the Multi-Armed Bandit with Upper Confidence Bound, Deep Q-Networks, Deep Q-Networks with a Graph Neural Network, and an Actor-Critic Network to optimize power usage while efficiently managing task migration within the edge computing environment. The performance of these models is compared with a baseline that uses a worst-fit algorithm-based resource management policy in different settings.
Experimental results indicate that EdgeAISim achieves a substantial reduction in power consumption, highlighting the success of its power optimization strategies. The development of EdgeAISim is a promising step towards sustainable edge computing, providing eco-friendly and energy-efficient solutions that facilitate efficient task management in edge environments across different large-scale scenarios
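One of the AI models named above, the Multi-Armed Bandit with Upper Confidence Bound, can be sketched in a few lines. The selection rule is the standard UCB1 formula; the per-node reward probabilities are hypothetical stand-ins for power savings, not values from the toolkit.

```python
import math
import random

def ucb_select(counts, rewards, t):
    """UCB1: play each arm once, then maximize mean reward + exploration bonus."""
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    return max(range(len(counts)),
               key=lambda a: rewards[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

random.seed(0)
true_power_saving = [0.2, 0.5, 0.8]  # hypothetical mean reward per edge node
counts = [0] * 3
rewards = [0.0] * 3
for t in range(1, 501):
    arm = ucb_select(counts, rewards, t)
    reward = 1.0 if random.random() < true_power_saving[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += reward
# over time, selections concentrate on the node with the highest saving
```

In a scheduler, each "arm" would be a candidate edge node and the reward a measured power saving, so the policy learns where to migrate tasks without a prior model of the environment.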
ATOM: AI-Powered Sustainable Resource Management for Serverless Edge Computing Environments
Serverless edge computing decreases unnecessary resource usage on end devices with limited processing power and storage capacity. Despite its benefits, the scale-to-zero behaviour of serverless edge computing is the major source of cold start delay, a problem that remains unsolved. This latency is unacceptable for time-sensitive Internet of Things (IoT) applications such as autonomous cars. Most existing approaches keep containers idle and consume extra computing resources. Edge devices have fewer resources than cloud-based systems, requiring new sustainable solutions. Therefore, we propose an AI-powered, sustainable resource management framework called ATOM for serverless edge computing. ATOM utilizes a deep reinforcement learning model to predict exactly when cold start latency will occur. We create a cold start dataset using a heart disease risk scenario and deploy it using Google Cloud Functions. To demonstrate the superiority of ATOM, its performance is compared with two baselines, which use warm-start containers and a two-layer adaptive approach, respectively. The experimental results show that although ATOM required additional calculation time of 118.76 seconds, it predicted cold starts better than the baseline models, with an RMSE ratio of 148.76. Additionally, the energy consumption and CO2 emissions of these models are evaluated and compared for the training and prediction phases
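The RMSE metric used above to compare the cold-start predictors can be reproduced with a short helper. The latency values below are hypothetical placeholders, not the paper's data; they only show how a lower RMSE identifies the better predictor.

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between observed and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# hypothetical cold-start latencies in milliseconds
actual  = [420, 380, 510, 460, 400]
model_a = [430, 375, 500, 470, 395]  # e.g. an RL-based predictor
model_b = [300, 450, 600, 380, 480]  # e.g. a naive baseline
better = "model_a" if rmse(actual, model_a) < rmse(actual, model_b) else "model_b"
```

Because RMSE squares each error, a predictor that is occasionally far off is penalized heavily, which suits cold-start prediction where a single large miss means a visible latency spike.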
CoviDetector: A transfer learning-based semi supervised approach to detect Covid-19 using CXR images
COVID-19 was one of the deadliest and most infectious illnesses of this century. Research has been done to decrease pandemic deaths and slow its spread. COVID-19 detection studies have utilised Chest X-ray (CXR) images with deep learning techniques, given their sensitivity in identifying pneumonic alterations. However, CXR images are often not publicly available due to users' privacy concerns, making it challenging to train a highly accurate deep learning model from scratch. Therefore, we proposed CoviDetector, a new semi-supervised approach based on transfer learning and clustering, which displays improved performance and requires less training data. CXR images are given as input to this model, and individuals are categorised into three classes: (1) COVID-19 positive; (2) viral pneumonia; and (3) normal. The performance of CoviDetector has been evaluated on four different datasets, achieving over 99% accuracy on each. Additionally, we generate heatmaps using Grad-CAM and overlay them on the CXR images to highlight the regions that were deciding factors in detecting COVID-19. Finally, we developed an Android app to offer a user-friendly interface. We release the code, datasets and result scripts of CoviDetector for reproducibility; they are available at: https://github.com/dasanik2001/CoviDetecto
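The semi-supervised step, assigning labels to unlabelled images by grouping features from a pretrained backbone, can be sketched with nearest-centroid pseudo-labeling. This is a stdlib-only illustration, not the CoviDetector pipeline: the 2-D "embeddings" are hypothetical placeholders for real transfer-learned CXR features, and only two of the three classes are shown.

```python
import math

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def nearest(x, centroids):
    """Label of the centroid closest to feature vector x."""
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

# hypothetical 2-D embeddings from a pretrained backbone
labeled = {"covid":  [(0.9, 0.1), (0.8, 0.2)],
           "normal": [(0.1, 0.9), (0.2, 0.8)]}
centroids = {cls: centroid(pts) for cls, pts in labeled.items()}

unlabeled = [(0.85, 0.15), (0.15, 0.85)]
pseudo_labels = [nearest(x, centroids) for x in unlabeled]
```

Pseudo-labelled samples can then be folded back into training, which is how a semi-supervised approach stretches a small labelled CXR set into a larger effective training set.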
AI-based fog and edge computing: a systematic review, taxonomy and future directions
Resource management in computing is a challenging problem that involves making sequential decisions. Resource limitations, resource heterogeneity, the dynamic and diverse nature of workloads, and the unpredictability of fog/edge computing environments make resource management even more challenging in the fog landscape. Recently, Artificial Intelligence (AI) and Machine Learning (ML) based solutions have been adopted to solve this problem. AI/ML methods capable of making sequential decisions, such as reinforcement learning, seem most promising for these types of problems. However, these algorithms come with their own challenges, such as high variance, limited explainability, and the need for online training. The continuously changing dynamics of fog/edge environments require solutions that learn online, adapting to the changing computing environment. In this paper, we used a standard review methodology to conduct a Systematic Literature Review (SLR) analysing the role of AI/ML algorithms and the challenges in applying these algorithms to resource management in fog/edge computing environments. Further, various machine learning, deep learning and reinforcement learning techniques for edge AI management are discussed. We also present the background and current status of AI/ML-based fog/edge computing. Moreover, we propose a taxonomy of AI/ML-based resource management techniques for fog/edge computing and compare existing techniques against it. Finally, open challenges and promising future research directions in AI/ML-based fog/edge computing are identified and discussed