7 research outputs found

    Digital twin for machine-learning-based vehicle CO2 emissions concentration prediction in embedded system

    In this paper, we describe the design, implementation, and installation of a digital twin version of a physical CO2 monitoring system, with the aim of democratizing access to affordable CO2 emission measurement and enabling the creation of effective pollutant reduction strategies. The presented digital twin acts as a replacement that enables the measurement of CO2 emissions without the use of a physical sensor. The presented work is specifically designed to be installed on a low-powered Microcontroller Unit (MCU), making it accessible to a broader base of users. To this end, an optimized Artificial Neural Network (ANN) model was trained to predict CO2 emission concentrations with 87.15% accuracy when running on the MCU. The ANN model is the result of a compound optimization technique that enhances the speed and accuracy of the model while reducing its computational complexity. The results show that the implementation of the digital twin is 86.4% less expensive than its physical CO2 counterpart, whilst still providing highly accurate and reliable data.
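    A minimal sketch of the kind of pipeline the abstract describes: a small Keras regressor for CO2 concentration, compressed with post-training quantization for an MCU target. The input features, layer sizes, and training data below are illustrative assumptions, not the paper's actual model or compound optimization technique.

    # Sketch only: compact ANN regressor for CO2 concentration, shrunk for an MCU
    # via post-training quantization (TensorFlow Lite). All shapes are assumptions.
    import numpy as np
    import tensorflow as tf

    # Hypothetical surrogate inputs (e.g. engine RPM, speed, throttle, intake temp).
    X_train = np.random.rand(1000, 4).astype("float32")
    y_train = np.random.rand(1000, 1).astype("float32")   # normalised CO2 concentration

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(4,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_train, y_train, epochs=5, verbose=0)

    # Post-training quantization reduces model size for a low-power MCU target.
    def representative_data():
        for sample in X_train[:100]:
            yield [sample.reshape(1, -1)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    tflite_model = converter.convert()
    open("co2_twin_int8.tflite", "wb").write(tflite_model)

    The resulting .tflite flatbuffer is the kind of artefact that would typically be deployed alongside a TensorFlow Lite Micro runtime on the MCU.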

    EFFICIENCY COMPARISON OF NETWORKS IN HANDWRITTEN LATIN CHARACTERS RECOGNITION WITH DIACRITICS

    The aim of the article is to analyze and compare the performance and accuracy of architectures with different numbers of parameters, using a set of handwritten Latin characters from the Polish Handwritten Characters Database (PHCD). It is a database of handwriting scans containing letters of the Latin alphabet as well as diacritics characteristic of the Polish language. Each character class in the PHCD dataset contains 6,000 scans. The research was carried out on six proposed architectures and compared with an architecture from the literature. Each of the models was trained for 50 epochs, and then prediction accuracy was measured on a separate test set. The experiment thus constructed was repeated 20 times for each model. Accuracy, number of parameters, and number of floating-point operations performed by the network were compared. The research was conducted on subsets such as uppercase letters, lowercase letters, lowercase letters with diacritics, and a subset of all available characters. The relationship between the number of parameters and the accuracy of the model was indicated. Among the examined architectures, those that significantly improved prediction accuracy at the expense of a larger network size were identified, as well as a network with prediction accuracy similar to the baseline but with twice as many model parameters.
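    As a rough illustration of the comparison protocol (not the article's PHCD-specific architectures), the sketch below trains CNNs of several widths for 50 epochs and records test accuracy against parameter count. Input size, widths, and the class count passed by the caller are assumptions.

    # Sketch: compare CNNs of different widths by test accuracy and parameter count.
    import tensorflow as tf

    def make_cnn(filters, num_classes, input_shape=(32, 32, 1)):
        # Hypothetical architecture family; 'filters' controls model size.
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(filters, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(filters * 2, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])

    def compare(train_ds, test_ds, num_classes, widths=(8, 16, 32), epochs=50):
        results = []
        for w in widths:
            model = make_cnn(w, num_classes)
            model.compile(optimizer="adam",
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            model.fit(train_ds, epochs=epochs, verbose=0)
            _, acc = model.evaluate(test_ds, verbose=0)
            results.append((w, model.count_params(), acc))
        return results  # (width, parameter count, test accuracy) per model

    In the article's setup this loop would be repeated 20 times per model and run separately for each character subset (uppercase, lowercase, lowercase with diacritics, all characters).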

    Development of the Abnormal Tension Pattern Recognition Module for Twisted Yarn Based on Deep Learning Edge Computing

    This study aims to develop an artificial intelligence module for recognizing abnormal tension in textile weaving. The module can be used to address the time-consuming and inaccurate issues associated with traditional manual methods. The module employs long short-term memory (LSTM) recurrent neural networks as the algorithm for identifying different types of abnormal tension. This study focuses on training and validating the model using five common patterns. Additionally, an approach involving the integration of plug-in modules and edge computing in deep learning is employed to achieve the research objectives without altering the original system architecture. Multiple experiments were conducted to search for the optimal model parameters. According to the experimental results, the average recognition rate for abnormal tension is 97.12%, with an average computation time of 46.2 milliseconds per sample. The results indicate that the recognition accuracy and computation time meet the practical performance requirements of the system.
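    A minimal sketch, under assumed tensor shapes, of an LSTM classifier that maps a window of tension samples to one of five pattern classes. Window length, hidden size, and the absence of training code are illustrative simplifications, not the module's actual configuration.

    # Sketch: LSTM-based classifier for five abnormal-tension patterns.
    import torch
    import torch.nn as nn

    class TensionPatternLSTM(nn.Module):
        def __init__(self, n_features=1, hidden=64, n_classes=5):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):            # x: (batch, time, n_features)
            _, (h_n, _) = self.lstm(x)   # final hidden state summarises the window
            return self.head(h_n[-1])    # logits over the five tension patterns

    # Example inference on one 200-sample tension window (placeholder values).
    model = TensionPatternLSTM()
    window = torch.randn(1, 200, 1)
    pattern = model(window).argmax(dim=-1)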

    Architecture for Enabling Edge Inference via Model Transfer from Cloud Domain in a Kubernetes Environment

    The current approaches for energy consumption optimisation in buildings are mainly reactive or focus on scheduling of daily/weekly operation modes in heating. Machine Learning (ML)-based advanced control methods have been demonstrated to improve energy efficiency when compared to these traditional methods. However, placing ML-based models close to the buildings is not straightforward. Firstly, edge devices typically have lower capabilities in terms of processing power, memory, and storage, which may limit execution of ML-based inference at the edge. Secondly, associated building information should be kept private. Thirdly, network access may be limited for serving a large number of edge devices. The contribution of this paper is an architecture which enables training of ML-based models for energy consumption prediction in a private cloud domain, and transfer of the models to edge nodes for prediction in a Kubernetes environment. Additionally, predictors at the edge nodes can be automatically updated without interrupting operation. Performance results with sensor-based devices (Raspberry Pi 4 and Jetson Nano) indicated that a satisfactory prediction latency (~7–9 s) can be achieved within the research context. However, model switching led to an increase in prediction latency (~9–13 s). Partial evaluation of a Reference Architecture for edge computing systems, which was used as a starting point for the architecture design, may be considered an additional contribution of the paper.
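    The "updated without interrupting operation" idea can be sketched outside Kubernetes as follows, under assumed paths and model format: the edge predictor keeps serving with its current model and atomically swaps in a new one when the cloud side publishes an update to a shared location. The file path, pickle format, and polling interval are assumptions, not the paper's mechanism.

    # Sketch: edge predictor that hot-swaps its model without stopping service.
    import os
    import pickle
    import threading
    import time

    MODEL_PATH = "/models/energy_predictor.pkl"   # hypothetical shared volume path

    class HotSwapPredictor:
        def __init__(self, path=MODEL_PATH):
            self.path = path
            self.mtime = 0.0
            self.model = None
            self._lock = threading.Lock()
            self._reload_if_newer()
            threading.Thread(target=self._watch, daemon=True).start()

        def _reload_if_newer(self):
            mtime = os.path.getmtime(self.path)
            if mtime > self.mtime:
                with open(self.path, "rb") as f:
                    new_model = pickle.load(f)
                with self._lock:              # swap is atomic w.r.t. predict()
                    self.model, self.mtime = new_model, mtime

        def _watch(self, interval=30):
            while True:
                time.sleep(interval)
                try:
                    self._reload_if_newer()
                except OSError:
                    pass                      # keep serving with the old model

        def predict(self, features):
            with self._lock:
                return self.model.predict([features])[0]

    The brief latency increase during model switching reported in the paper (~9–13 s versus ~7–9 s) corresponds to the moment the new model is loaded and swapped in.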

    Experimental implementation of a neural network optical channel equalizer in restricted hardware using pruning and quantization

    The deployment of artificial neural network-based optical channel equalizers on edge-computing devices is critically important for the next generation of optical communication systems. However, this is still a highly challenging problem, mainly due to the computational complexity of the neural networks (NNs) required for the efficient equalization of nonlinear optical channels with large dispersion-induced memory. To implement an NN-based optical channel equalizer in hardware, a substantial complexity reduction is needed while keeping an acceptable performance level for the simplified NN model. In this work, we address the complexity reduction problem by applying pruning and quantization techniques to an NN-based optical channel equalizer. We use an exemplary NN architecture, the multi-layer perceptron (MLP), to mitigate the impairments of 30 GBd transmission over 1000 km of standard single-mode fiber, and demonstrate that it is feasible to reduce the equalizer's memory by up to 87.12% and its complexity by up to 78.34% without noticeable performance degradation. In addition, we accurately define the computational complexity of a compressed NN-based equalizer in the digital signal processing (DSP) sense. Further, we examine the impact of using hardware with different CPU and GPU features on the power consumption and latency of the compressed equalizer. We also verify the developed technique experimentally by implementing the reduced NN equalizer on two standard edge-computing hardware units, Raspberry Pi 4 and Nvidia Jetson Nano, which are used to process the data generated by simulating the signal's propagation down the optical-fiber system.
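    A minimal sketch of a pruning-plus-quantization pipeline on a generic MLP (magnitude pruning of the dense layers followed by dynamic int8 quantization in PyTorch). Layer sizes, the pruning ratio, and the input window are assumptions and do not reproduce the paper's equalizer or its exact compression settings.

    # Sketch: compress a generic MLP equalizer by pruning, then quantization.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Hypothetical MLP equalizer: a window of received symbols in, one symbol out.
    mlp = nn.Sequential(
        nn.Linear(40, 100), nn.Tanh(),
        nn.Linear(100, 100), nn.Tanh(),
        nn.Linear(100, 2),            # real and imaginary parts of the equalized symbol
    )

    # 1) Prune the smallest-magnitude weights in every linear layer.
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            prune.l1_unstructured(layer, name="weight", amount=0.8)
            prune.remove(layer, "weight")     # make the sparsity permanent

    # 2) Quantize weights to int8 for edge hardware (Raspberry Pi / Jetson class).
    quantized = torch.quantization.quantize_dynamic(mlp, {nn.Linear}, dtype=torch.qint8)

    # The compressed model is then exercised exactly like the original one.
    symbols = torch.randn(1, 40)
    equalized = quantized(symbols)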