5 research outputs found

    Edge-Enhanced QoS Aware Compression Learning for Sustainable Data Stream Analytics

    Full text link
    Existing Cloud systems involve large volumes of data streams being sent to a centralised data centre for monitoring, storage and analytics. However, migrating all the data to the cloud is often not feasible due to cost, privacy and performance concerns. At the same time, Machine Learning (ML) algorithms typically require significant computational resources and therefore cannot be deployed directly on resource-constrained edge devices for learning and analytics. Edge-enhanced compressive offloading offers a sustainable alternative: data are compressed at the edge and offloaded to the cloud for further analysis, reducing bandwidth consumption and communication latency. This work describes the design and implementation of a learning method for discovering the compression techniques that offer the best Quality of Service (QoS) for an application. The approach uses a novel modularisation technique that maps features to models and classifies them across a range of QoS features. An automated QoS-aware orchestrator selects the best autoencoder model in real time for compressive offloading in edge-enhanced clouds, based on changing QoS requirements, and has diagnostic capabilities to search for the parameters that yield the best compression. A key novelty of this work is harnessing autoencoders for edge-enhanced compressive offloading through portable encodings, latent-space splitting and fine-tuning of network weights. By considering how combinations of features lead to different QoS models, the system can process a large number of user requests in a given time. The proposed hyperparameter search strategy (over the neural architectural space) reduces the computational cost of searching the entire space by up to 89%. When deployed on an edge-enhanced cloud using an Azure IoT testbed, the approach saves up to 70% in data transfer costs and completes jobs in 32% less time. It also eliminates the additional computational cost of decompression, reducing processing cost by up to 30%.
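The orchestrator's model-selection step can be illustrated with a minimal sketch. The model registry, QoS field names and profiled values below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical registry of pre-trained autoencoder variants, each profiled
# offline for the QoS it delivers (all names and numbers are made up).
MODELS = {
    "ae_small":  {"compression_ratio": 4.0,  "latency_ms": 5,  "accuracy": 0.90},
    "ae_medium": {"compression_ratio": 8.0,  "latency_ms": 12, "accuracy": 0.94},
    "ae_large":  {"compression_ratio": 16.0, "latency_ms": 30, "accuracy": 0.97},
}

def select_model(qos):
    """Pick the model with the highest compression ratio that still
    meets the caller's latency and accuracy constraints."""
    feasible = [
        (name, p) for name, p in MODELS.items()
        if p["latency_ms"] <= qos["max_latency_ms"]
        and p["accuracy"] >= qos["min_accuracy"]
    ]
    if not feasible:
        raise ValueError("no model satisfies the requested QoS")
    return max(feasible, key=lambda item: item[1]["compression_ratio"])[0]

print(select_model({"max_latency_ms": 15, "min_accuracy": 0.92}))  # ae_medium
```

As QoS requirements change at run time, the same lookup can be re-evaluated to swap in a different autoencoder for subsequent offloads.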

    Embedded data imputation for environmental intelligent sensing: A case study

    No full text
    Recent developments in cloud computing and the Internet of Things have enabled smart environments, in terms of both monitoring and actuation. Unfortunately, this often results in unsustainable cloud-based solutions, whereby, in the interest of simplicity, a wealth of raw (unprocessed) data are pushed from sensor nodes to the cloud. Herein, we advocate the use of machine learning at sensor nodes to perform essential data-cleaning operations, to avoid the transmission of corrupted (often unusable) data to the cloud. Starting from a public pollution dataset, we investigate how two machine learning techniques (kNN and missForest) may be embedded on Raspberry Pi to perform data imputation, without impacting the data collection process. Our experimental results demonstrate the accuracy and computational efficiency of edge-learning methods for filling in missing data values in corrupted data series. We find that kNN and missForest correctly impute up to 40% of randomly distributed missing values, with a density distribution of values that is indistinguishable from the benchmark. We also show a trade-off analysis for the case of bursty missing values, with recoverable blocks of up to 100 samples. Computation times are shorter than sampling periods, allowing for data imputation at the edge in a timely manner.
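As a rough illustration of nearest-neighbour imputation on a univariate sensor series, the sketch below fills each gap with the mean of the nearest observed samples. This is a simplified stand-in for the kNN/missForest pipeline above; the window logic and the example readings are assumptions for illustration only:

```python
def knn_impute(series, k=2):
    """Fill None gaps with the mean of the k observed samples that are
    nearest in time -- a simplified 1-D analogue of kNN imputation."""
    observed = [(i, v) for i, v in enumerate(series) if v is not None]
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            # k observed samples closest (by index) to the gap
            nearest = sorted(observed, key=lambda iv: abs(iv[0] - i))[:k]
            out[i] = sum(val for _, val in nearest) / len(nearest)
    return out

# Toy pollution readings with randomly missing values
pm25 = [12.0, None, 14.0, 15.0, None, None, 18.0]
print(knn_impute(pm25, k=2))
```

A production pipeline would typically use a multivariate imputer (e.g. kNN over several correlated sensor channels) rather than a purely temporal one, but the node-local principle is the same.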

    RES: Real-time Video Stream Analytics using Edge Enhanced Clouds

    Full text link
    With the increasing availability and use of Internet of Things (IoT) devices, large amounts of streaming data are now being produced at high velocity. Applications that require low-latency responses, such as video surveillance, demand swift and efficient analysis of this data. Existing approaches employ cloud infrastructure to store and perform machine learning based analytics on this data. This centralized approach has limited ability to support analysis of real-time, large-scale streaming data due to network bandwidth and latency constraints between the data source and the cloud. We propose RealEdgeStream (RES), an edge-enhanced stream analytics system for large-scale, high-performance data analytics. The proposed approach addresses video stream analytics through (i) a filtration phase and (ii) an identification phase. The filtration phase reduces the amount of data by filtering out low-value stream objects using configurable rules. The identification phase uses deep learning inference to perform analytics on the streams of interest. The stages are mapped onto available in-transit and cloud resources using a placement algorithm to satisfy the Quality of Service (QoS) constraints identified by a user. Job completion in the proposed system takes 49% less time and saves 99% bandwidth compared to a centralized, cloud-only approach.
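The filtration phase described above can be sketched as a set of configurable predicates applied to per-frame metadata before any expensive inference runs. The rule set, field names and thresholds below are illustrative assumptions, not RES's actual configuration format:

```python
# Hypothetical configurable rules: each maps a frame's metadata to keep/drop.
RULES = [
    lambda frame: frame["motion_score"] >= 0.2,  # drop near-static scenes
    lambda frame: frame["blur"] < 0.8,           # drop unusably blurry frames
]

def filtration(stream):
    """Forward only frames that satisfy every configured rule; survivors
    would then be sent to the identification (deep learning) phase."""
    return [f for f in stream if all(rule(f) for rule in RULES)]

frames = [
    {"id": 1, "motion_score": 0.05, "blur": 0.1},  # static  -> dropped
    {"id": 2, "motion_score": 0.6,  "blur": 0.3},  # kept
    {"id": 3, "motion_score": 0.7,  "blur": 0.9},  # blurry  -> dropped
]
print([f["id"] for f in filtration(frames)])  # [2]
```

Because the rules are cheap to evaluate near the source, most of the bandwidth saving is realised before any data crosses the network to in-transit or cloud resources.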

    Automatic and Efficient Fault Detection in Rotating Machinery using Sound Signals

    No full text
    Vibration and acoustic emission have received great attention from the research community for condition-based maintenance in rotating machinery. Several signal-processing algorithms have been developed or applied to detect and classify faults in bearings and gears. These signals are recorded using sensors such as tachometers or accelerometers, connected directly to, or mounted very close to, the system under observation. This is not feasible for complex machinery, or under harsh temperature and humidity conditions. It is therefore desirable to sense the signals remotely, in order to reduce installation and maintenance costs. However, installing a sensor far from the intended device may pollute the required signal with other unwanted signals. In an attempt to address these issues, sound-signal-based fault detection and classification in rotating bearings is presented. In this research work, the audible sound of the machine under test is captured using a single microphone, and different statistical, spectral and spectro-temporal features are extracted. The selected features are then analysed using different machine learning techniques, such as the K-nearest neighbour (KNN) classifier, support vector machine (SVM), kernel linear discriminant analysis (KLDA) and sparse discriminant analysis (SDA). Simulation results show successful classification of faults into ball faults and inner- and outer-race faults. The best results were achieved using KLDA, followed by SDA, KNN and SVM. As far as features are concerned, the average FFT outperformed all the other features, followed by the average PSD, the RMS values of the PSD, the PSD and the STFT.
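The feature-extraction step above can be sketched with two toy features on a synthetic tone. The signal, frame length and feature choices are illustrative assumptions (a real pipeline would use `numpy.fft` on microphone recordings and add spectro-temporal features such as STFT frames):

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum, bins 0..N/2 (numpy.fft.rfft in practice)."""
    n = len(signal)
    return [
        abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)))
        for k in range(n // 2 + 1)
    ]

def extract_features(signal):
    """Two of the statistical/spectral features mentioned above:
    the RMS of the waveform and the average FFT magnitude."""
    mags = dft_magnitudes(signal)
    return {
        "rms": math.sqrt(sum(x * x for x in signal) / len(signal)),
        "avg_fft": sum(mags) / len(mags),
    }

# A pure tone (3 cycles over 32 samples): spectral energy concentrates
# in DFT bin 3, which a classifier could use to separate fault signatures.
tone = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
feats = extract_features(tone)
```

Feature vectors like these, computed per recording, are what would then be fed to the KNN/SVM/KLDA/SDA classifiers for fault-type discrimination.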

    Virtual reality based digital twin system for remote laboratories and online practical learning

    Get PDF
    The current pandemic has demonstrated the need for remote learning and virtual learning applications, such as virtual reality (VR) and tablet-based solutions. Creating complex learning scenarios by hand is highly time-consuming for developers and can take over a year. A simple method is therefore needed to enable lecturers to create their own content for their laboratory tutorials. Research is currently being undertaken into developing generic models that enable the semi-automatic creation of a virtual learning application. A case study describing the creation of a virtual learning application for an electrical laboratory tutorial is presented