Deployment of Facial Recognition Models at the Edge: A Feasibility Study

Abstract

Model training and inference in Artificial Intelligence (AI) applications are typically performed in the cloud. A paradigm shift is under way toward moving AI closer to the edge, allowing IoT devices to perform AI functions onboard without incurring network latency. With the exponential increase in edge devices and the data they generate, cloud computing will eventually be constrained by network bandwidth and latency. To mitigate these limitations of cloud computing, this paper discusses the feasibility of deploying inference onboard the device where the data is generated. A secure access management system using MobileNet facial recognition was implemented, and the preliminary results showed that the edge deployment outperformed the cloud deployment in overall response speed while maintaining the same recognition accuracy. Management of the automated deployment of inference models at the edge is therefore required.
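
The paper itself does not give implementation details beyond the use of MobileNet for facial recognition, so the sketch below is only one plausible shape of the edge-side inference path, assuming a TensorFlow Lite export of a MobileNet face-embedding model. The model path, preprocessing, and matching threshold are illustrative placeholders, not values taken from the study.

```python
import time
from typing import Dict, Optional

import numpy as np
import tensorflow as tf  # on constrained hardware, tflite_runtime can replace full TF

# Illustrative placeholders, not values from the paper: the model file, input
# preprocessing, and match threshold depend on the actual MobileNet export used.
MODEL_PATH = "mobilenet_face_embedding.tflite"   # assumed float32 TFLite export
MATCH_THRESHOLD = 0.6                            # assumed cosine-similarity cutoff

# Load the on-device model once at startup so each request only pays inference cost.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()


def embed(face_bgr: np.ndarray) -> np.ndarray:
    """Run one local inference pass and return an L2-normalised face embedding."""
    height, width = input_details[0]["shape"][1:3]
    rgb = face_bgr[..., ::-1].astype(np.float32) / 255.0      # BGR -> RGB, scale to [0, 1]
    resized = tf.image.resize(rgb, (height, width)).numpy()
    interpreter.set_tensor(input_details[0]["index"], resized[np.newaxis, ...])
    interpreter.invoke()                                       # runs entirely on the device
    vec = interpreter.get_tensor(output_details[0]["index"])[0]
    return vec / np.linalg.norm(vec)


def authorise(face_bgr: np.ndarray, enrolled: Dict[str, np.ndarray]) -> Optional[str]:
    """Match a live face crop against locally stored enrolment embeddings."""
    start = time.perf_counter()
    query = embed(face_bgr)
    best_id, best_score = None, -1.0
    for person_id, template in enrolled.items():
        score = float(np.dot(query, template))                # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = person_id, score
    elapsed_ms = (time.perf_counter() - start) * 1e3
    print(f"on-device decision in {elapsed_ms:.1f} ms")        # no network round trip incurred
    return best_id if best_score >= MATCH_THRESHOLD else None
```

Because both the model and the enrolment templates live on the device, the access decision involves no round trip to a cloud service, which is the source of the response-time advantage the abstract reports.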
