
    PROBABILISTIC SHORT TERM SOLAR DRIVER FORECASTING WITH NEURAL NETWORK ENSEMBLES

    Commonly utilized space weather indices and proxies drive predictive models for thermosphere density, directly impacting objects in low-Earth orbit (LEO) by influencing atmospheric drag forces. A set of solar proxies and indices (drivers), F10.7, S10.7, M10.7, and Y10.7, are created from a mixture of ground-based radio observations and satellite instrument data. These solar drivers represent heating in various levels of the thermosphere and are used as inputs by the JB2008 empirical thermosphere density model. The United States Air Force (USAF) operational High Accuracy Satellite Drag Model (HASDM) relies on JB2008, and on forecasts of solar drivers made by a linear algorithm, to produce forecasts of density. Density forecasts are useful to the space traffic management community and can be used to determine orbital state and probability of collision for space objects. In this thesis, we aim to provide improved and probabilistic forecasting models for these solar drivers, with a focus on providing the first probabilistic models for S10.7, M10.7, and Y10.7. We introduce auto-regressive methods to forecast solar drivers using neural network ensembles with multi-layer perceptron (MLP) and long short-term memory (LSTM) models in order to improve on the current operational forecasting methods. We investigate input data manipulation methods such as backwards averaging, varied lookback, and PCA rotation for multivariate prediction. We also investigate the differences associated with multi-step and dynamic prediction methods. A novel method for splitting data, referred to as striped sampling, is introduced to produce statistically consistent machine learning data sets. We also investigate the effects of the loss function on forecasting performance and uncertainty estimates, as well as novel ensemble weighting methods. We show the best models for univariate forecasting are ensemble approaches using multi-step or a combination of multi-step and dynamic predictions. Nearly all univariate approaches offer an improvement, with the best models improving relative mean squared error (MSE) by between 48 and 59% with respect to persistence, which is used as the baseline model in this work. We also show that a stacked neural network ensemble approach significantly outperforms the operational linear method. When using MV-MLE (multivariate multi-lookback ensemble), we see improvements in performance error metrics over the operational method on all drivers. The multivariate approach also yields an improvement in root mean squared error (RMSE) for F10.7, S10.7, M10.7, and Y10.7 of 17.7%, 12.3%, 13.8%, and 13.7%, respectively, over the current operational method. We additionally provide the first probabilistic forecasting models for S10.7, M10.7, and Y10.7. Ensemble approaches are leveraged to provide a distribution of predicted values, allowing an investigation into the robustness and reliability (R&R) of uncertainty estimates using the calibration error score (CES) metric and calibration curves. Univariate models provide uncertainty estimates similar to those of other works while improving on performance metrics. We also produce probabilistic forecasts using MV-MLE, which are well calibrated for all drivers, providing an average CES of 5.63%.
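    As a minimal illustration of the ensemble idea described above (not the thesis code), the sketch below trains a small ensemble of MLP regressors on lagged values of a synthetic solar-driver-like series, uses the spread of member predictions as a crude uncertainty estimate, and reports MSE relative to a persistence baseline. The lookback length, ensemble size, and synthetic data are assumptions made for the example.

```python
# Hypothetical sketch: MLP ensemble forecast of a solar-driver-like series,
# with persistence as the baseline (names and synthetic data are illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = 150 + 30 * np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 5, 2000)

lookback = 27                      # days of history used as input features (assumed)
X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
y = series[lookback:]              # one-step-ahead target

split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Ensemble of MLPs differing only in random initialisation.
members = [MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                        random_state=seed).fit(X_tr, y_tr) for seed in range(10)]
preds = np.stack([m.predict(X_te) for m in members])

mean_pred = preds.mean(axis=0)     # point forecast
std_pred = preds.std(axis=0)       # spread of members as an uncertainty estimate

persistence = X_te[:, -1]          # "tomorrow equals today" baseline
rel_mse = np.mean((mean_pred - y_te) ** 2) / np.mean((persistence - y_te) ** 2)
print(f"relative MSE vs persistence: {rel_mse:.3f}")
```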

    Machine learning in portfolio management

    Financial markets are difficult learning environments. The data generation process is time-varying, returns exhibit heavy tails, and the signal-to-noise ratio tends to be low. These factors contribute to the challenge of applying sophisticated, high-capacity learning models in financial markets. Driven by recent advances of deep learning in other fields, we focus on applying deep learning in a portfolio management context. This thesis contains three distinct but related contributions to the literature. First, we consider the problem of neural network training in a time-varying context. This results in a neural network that can adapt to a data generation process that changes over time. Second, we consider the problem of learning in noisy environments. We propose to regularise the neural network using a supervised autoencoder and show that this improves the generalisation performance of the neural network. Third, we consider the problem of quantifying forecast uncertainty in time series with volatility clustering. We propose a unified framework for the quantification of forecast uncertainty that results in uncertainty estimates that closely match actual realised forecast errors in cryptocurrencies and U.S. stocks.
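    A supervised autoencoder, as used here for regularisation, combines a reconstruction loss with a prediction loss on a shared bottleneck. The PyTorch sketch below is a minimal, hypothetical illustration of that idea, not the thesis implementation; the layer sizes, loss weighting, and synthetic data are assumptions.

```python
# Minimal supervised-autoencoder sketch (illustrative; not the thesis code).
import torch
import torch.nn as nn

class SupervisedAE(nn.Module):
    def __init__(self, n_features: int, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))
        self.head = nn.Linear(n_latent, 1)   # return forecast from the bottleneck

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.head(z)

def loss_fn(x, y, recon, pred, alpha: float = 0.5):
    # The reconstruction term acts as a regulariser on the supervised task.
    return (nn.functional.mse_loss(pred.squeeze(-1), y)
            + alpha * nn.functional.mse_loss(recon, x))

model = SupervisedAE(n_features=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 20)                     # synthetic features
y = torch.randn(256)                         # synthetic next-period returns
for _ in range(100):
    opt.zero_grad()
    recon, pred = model(x)
    loss = loss_fn(x, y, recon, pred)
    loss.backward()
    opt.step()
```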

    Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

    This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011198), the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under the ICT Creative Consilience Program (IITP-2021-2020-0-01821), and AI Platform to Fully Adapt and Reflect Privacy-Policy Changes (No. 2022-0-00688). Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. Usually, it is essential to understand the reasoning behind an AI model's decision-making. Thus, the need for eXplainable AI (XAI) methods for improving trust in AI models has arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but there have not been any reviews that have looked at the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area with a case study example. The study starts by explaining the background of XAI, common definitions, and summarizing recently proposed techniques in XAI for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics as well as open-source packages and datasets with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed as XAI concerns. This paper advocates for tailoring explanation content to specific user types. An examination of XAI techniques and evaluation was conducted by looking at 410 critical articles, published between January 2016 and October 2022, in reputed journals and using a wide range of research databases as a source of information. The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
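    The post-hoc explainability axis mentioned above covers methods that explain a trained black-box model after the fact. As one generic, hedged example of that category (not a method proposed in the paper), permutation feature importance measures how much a model's test score degrades when each feature is shuffled:

```python
# Generic post-hoc explanation example: permutation feature importance
# (an illustration of the category, not a method from the surveyed paper).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature and record the drop in held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.3f}")
```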

    Deep Neural Network Compression with Filter Pruning

    The rapid development of convolutional neural networks (CNNs) in computer vision tasks has inspired researchers to apply their potential to embedded or mobile devices. However, CNNs typically require a large amount of computation and memory footprint, limiting their deployment in resource-limited systems. Therefore, how to compress complex networks while maintaining competitive performance has become the focus of attention in recent years. On the subject of network compression, filter pruning methods, which achieve a structured compact model by finding and removing redundant filters, have attracted widespread attention. Inspired by previous dedicated works, this thesis focuses on how to obtain a compact model while maximizing the retention of the original model's performance. In particular, aiming at the limitations of filter selection in existing popular pruning methods, several novel filter pruning strategies are proposed to find and remove redundant filters more accurately and thereby reduce the performance loss caused by pruning: filter pruning with an attention mechanism (Chapter 3), data-dependent filter pruning guided by LSTM (Chapter 4), and filter pruning with a uniqueness mechanism in the frequency domain (Chapter 5). This thesis first addresses the filter pruning issue from a global perspective. To this end, we propose a new scheme, termed Pruning Filter with an Attention Mechanism (PFAM). That is, by establishing the dependency/relationship between filters at each layer, we explore the long-term dependence between filters via an attention module in order to choose the to-be-pruned filters. Unlike prior approaches that identify the to-be-pruned filters simply based on their intrinsic properties, the less correlated filters are first pruned after the pruning step in the current training epoch and then reconstructed and updated during the subsequent training epoch. Thus, the compressed network model can be achieved without the requirement for a pre-trained model, since input data can be manipulated with maximum information maintained when the original training strategy is executed. Next, it is noticed that most existing pruning algorithms seek to prune filters layer by layer. Specifically, they guide filter pruning at each layer by setting a global pruning rate, which means that each convolutional layer is treated equally without regard to its depth and width. In this situation, we argue that the convolutional layers in a network have varying degrees of significance. Accordingly, we propose that selecting the appropriate layers for pruning is more reasonable, since it can yield more complexity reduction with less performance loss by keeping more filters in critical layers and removing more filters in non-significant layers. To do this, long short-term memory (LSTM) is employed to learn the hierarchical properties of a network and to generalize a global network pruning scheme. On top of that, we present a data-dependent soft pruning strategy named Squeeze-Excitation-Pruning (SEP), which does not physically prune any filters but removes specific kernels involved in calculating forward and backward propagations based on the pruning scheme. Doing so can further reduce the performance decline of the model while achieving deep model compression.
Lastly, we transfer the concept of relationship from the filter level to the feature map level, because feature maps reflect comprehensive information about both the input data and the filters. Hence, we propose Filter Pruning with Uniqueness Mechanism in the Frequency Domain (FPUM), which guides the filter pruning strategy by exploiting the correlations between feature maps. Specifically, we first transform features into the frequency domain by the Discrete Cosine Transform (DCT). Then, for each feature map, we compute a uniqueness score, which measures its probability of being replaced by others. Doing so allows us to prune the filters corresponding to low-uniqueness maps without significant performance degradation. In addition, our strategy is more resistant to noise than spatial methods, further enhancing the network's compactness while maintaining performance, as the critical pruning clues are more concentrated following the DCT.
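    A hedged sketch of the general idea behind a frequency-domain uniqueness score: transform each feature map with a DCT, measure how strongly each map correlates with the others, and mark highly correlated (low-uniqueness) maps as pruning candidates. The scoring rule and pruning ratio below are illustrative assumptions, not the FPUM definition from the thesis.

```python
# Illustrative sketch of DCT-based uniqueness scoring for feature maps
# (the exact FPUM score is defined in the thesis; this is an assumed variant).
import numpy as np
from scipy.fft import dctn

def uniqueness_scores(feature_maps: np.ndarray) -> np.ndarray:
    """feature_maps: array of shape (C, H, W) for one layer and one input."""
    # Move each map into the frequency domain and flatten it.
    freq = np.stack([dctn(fm, norm="ortho").ravel() for fm in feature_maps])
    freq = (freq - freq.mean(axis=1, keepdims=True)) / (freq.std(axis=1, keepdims=True) + 1e-8)
    corr = np.abs(freq @ freq.T) / freq.shape[1]   # |correlation| between maps
    np.fill_diagonal(corr, 0.0)
    # A map highly correlated with others is easy to replace -> low uniqueness.
    return 1.0 - corr.max(axis=1)

maps = np.random.rand(64, 14, 14)                  # synthetic activations
scores = uniqueness_scores(maps)
prune_idx = np.argsort(scores)[: int(0.3 * len(scores))]  # prune 30% least unique filters
print(prune_idx[:10])
```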

    Online learning in financial time series

    We wish to understand if additional learning forms can be combined with sequential optimisation to provide superior benefit over batch learning in various tasks operating on financial time series. In chapter 4, Online learning with radial basis function networks, we provide multi-horizon forecasts on the returns of financial time series. Our sequentially optimised radial basis function network (RBFNet) outperforms a random-walk baseline and several powerful supervised learners. Our RBFNets naturally measure the similarity between test samples and prototypes that capture the characteristics of the feature space. In chapter 5, Reinforcement learning for systematic FX trading, we perform feature representation transfer from an RBFNet to a direct, recurrent reinforcement learning (DRL) agent. Earlier academic work saw mixed results. We use better features and second-order optimisation methods, and adapt our model parameters sequentially. As a result, our DRL agents cope better with statistical changes to the data distribution, achieving higher risk-adjusted returns than a funding baseline and a momentum baseline. In chapter 6, The recurrent reinforcement learning crypto agent, we construct a digital assets trading agent that performs feature space representation transfer from an echo state network to a DRL agent. The agent learns to trade the XBTUSD perpetual swap contract on BitMEX. Our meta-model can process data as a stream and learn sequentially; this helps it cope with the nonstationary environment. In chapter 7, Sequential asset ranking in nonstationary time series, we create an online learning long/short portfolio selection algorithm that can detect the best- and worst-performing portfolio constituents as they change over time; in particular, we successfully handle the higher transaction costs associated with using daily-sampled data and achieve higher total and risk-adjusted returns than the long-only holding of the S&P 500 index with hindsight.
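    The radial basis function network described in chapter 4 scores a test sample by its Gaussian similarity to learned prototypes and feeds those similarities into a linear read-out. Below is a minimal, hypothetical sketch of that idea (prototypes from k-means, fixed kernel width, ridge read-out), not the thesis implementation; the data and hyperparameters are assumptions.

```python
# Minimal RBF-network sketch: k-means prototypes, Gaussian activations, linear read-out
# (an assumed, simplified variant of the model described in the chapter).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

class RBFNet:
    def __init__(self, n_prototypes: int = 20, gamma: float = 1.0):
        self.n_prototypes, self.gamma = n_prototypes, gamma

    def _phi(self, X):
        # Gaussian similarity between each sample and each prototype.
        d2 = ((X[:, None, :] - self.prototypes[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.prototypes = KMeans(self.n_prototypes, n_init=10,
                                 random_state=0).fit(X).cluster_centers_
        self.readout = Ridge(alpha=1.0).fit(self._phi(X), y)
        return self

    def predict(self, X):
        return self.readout.predict(self._phi(X))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                # synthetic lagged-return features
y = X[:, 0] * 0.1 + rng.normal(0, 0.02, 500)
model = RBFNet().fit(X[:400], y[:400])
print(np.corrcoef(model.predict(X[400:]), y[400:])[0, 1])
```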

    Exploring the transition from traditional data analysis to machine learning and deep learning methods

    Data analysis methods based on machine and deep learning approaches are continuously replacing traditional methods. Models based on deep learning (DL) are applicable to many problems and often have better prediction performance compared to traditional methods. One major difference between traditional methods and machine learning (ML) approaches is the black-box aspect often associated with ML and DL models. The use of ML and DL models offers many opportunities but also challenges. This thesis explores some of these opportunities and challenges of DL modelling, with a focus on applications in spectroscopy. DL models are based on artificial neural networks (ANNs) and are known to automatically find complex relations in the data. In Paper I, this property is exploited by designing DL models to learn spectroscopic preprocessing based on classical preprocessing techniques. It is shown that the DL-based preprocessing has some merits with regard to prediction performance, but considerable extra effort is required when training and tuning these DL models. The flexibility of ANN architecture design is studied further in Paper II, where a DL model for multiblock data analysis is proposed that can also quantify the importance of each data block. A drawback of DL models is the lack of interpretability. To address this, a different modelling approach is taken in Paper III, where the focus is on using DL models in such a way as to retain as much interpretability as possible. The paper presents the concept of non-linear error modelling, where the DL model is used to model the residuals of a linear model instead of the raw input data. The concept essentially shrinks the black box, since the majority of the data modelling is done by an interpretable linear model. The final topic explored in this thesis is a more traditional modelling approach inspired by DL techniques. Data sometimes contain intrinsic subgroups which might be more accurately modelled separately than with a global model. Paper IV presents a modelling framework based on locally weighted models and fuzzy partitioning that automatically finds relevant clusters and combines the predictions of each local model. Compared to a DL model, the locally weighted modelling framework is more transparent. It is also shown how the framework can utilise DL techniques to be scaled to problems with huge amounts of data.
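    The non-linear error modelling idea from Paper III, fitting an interpretable linear model first and letting a small neural network model only its residuals, can be sketched as below. This is a hypothetical illustration with assumed model sizes and synthetic "spectra", not the paper's code.

```python
# Sketch of non-linear error modelling: linear model on the spectra,
# small neural network on the linear model's residuals (illustrative only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))                      # synthetic "spectra"
y = X[:, :10].sum(axis=1) + 0.5 * np.sin(X[:, 0] * 3) + rng.normal(0, 0.1, 300)

linear = PLSRegression(n_components=5).fit(X, y)     # interpretable base model
residuals = y - linear.predict(X).ravel()

error_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000,
                           random_state=0).fit(X, residuals)

# Final prediction = interpretable linear part + non-linear residual correction.
X_new = rng.normal(size=(5, 100))
y_hat = linear.predict(X_new).ravel() + error_model.predict(X_new)
print(y_hat)
```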

    Statistical/climatic models to predict and project extreme precipitation events dominated by large-scale atmospheric circulation over the central-eastern China

    Global warming has had non-negligible effects on regional extreme precipitation and has increased the uncertainties meteorologists face when predicting such extremes. More importantly, floods, landslides, and waterlogging caused by extreme precipitation have had catastrophic societal impacts and caused severe economic damage across the world, in particular over central-eastern China (CEC), where heavy precipitation due to the Meiyu front and typhoon activity often causes flood disasters. There is mounting evidence that anomalous atmospheric circulation systems and water vapor transport play a dominant role in triggering and maintaining regional extreme precipitation. Both understanding and accurately predicting extreme precipitation events based on these anomalous signals are hot issues in the field of hydrological research. In this thesis, the self-organizing map (SOM) and event synchronization were used to cluster the large-scale atmospheric circulation reflected by geopotential height (GP) at 500 hPa and to quantify the level of synchronization between the identified circulation patterns and extreme precipitation over CEC. With an understanding of which patterns, and which corresponding water vapor transport fields, were associated with extreme precipitation events, a hybrid deep learning model of multilayer perceptron and convolutional neural networks (MLP-CNN) was proposed to achieve binary predictions of extreme precipitation. The inputs to MLP-CNN were the anomalous fields of GP at 500 hPa and vertically integrated water vapor transport (IVT). Compared with the original MLP, CNN, and two other machine learning models (random forest and support vector machine), MLP-CNN showed the best performance. Additionally, because of the coarse spatial resolution of global circulation models (GCMs) and their large biases in extreme precipitation estimates, a new precipitation downscaling framework combining ensemble learning and a nonhomogeneous hidden Markov model (Ensemble-NHMM) was developed to improve the reliability of GCMs in historical simulations and future projections. The performance of downscaled precipitation from reanalysis and GCM datasets was validated against gauge observations and also compared with the results of the traditional NHMM. Finally, the Ensemble-NHMM downscaling model was applied to future scenario data from the GCM. For the projected precipitation trends over CEC in the early, middle, and late 21st century under different emission scenarios, the possible causes were discussed in terms of both thermodynamic and dynamic factors. The main results are as follows. (1) The large-scale atmospheric circulation patterns and associated water vapor transport fields synchronized with extreme precipitation events over CEC were quantitatively identified, as well as the contribution of circulation pattern changes to extreme precipitation changes and their teleconnection with interdecadal modes of the ocean. Firstly, based on the nonparametric Pettitt test, it was found that 23% of rain gauges had significant abrupt changes in annual extreme precipitation from 1960 to 2015. The average change point in annual extreme precipitation frequency and amount occurred near 1989. Complex network analysis showed that the rain gauges highly synchronized in extreme precipitation events can be grouped into four clusters based on modularity information. Secondly, the dominant circulation patterns over CEC were robustly identified based on the SOM.
From the period 1960–1989 to 1990–2015, the categories of identified circulation patterns generally remained almost unchanged. Among these, the circulation patterns characterized by obvious positive anomalies of 500 hPa geopotential height over the eastern Eurasian continent and negative values over the surrounding oceans are highly synchronized with extreme precipitation events. An obvious water vapor channel originating from the northern Indian Ocean, driven by southwesterly airflow, was observed for the representative circulation patterns (those synchronized with extreme precipitation). Finally, the circulation pattern changes produced an increase in extreme precipitation frequency from 1960–1989 to 1990–2015. Empirical mode decomposition of the annual frequency variation signals of the representative circulation patterns showed that the 2–4 yr oscillation in annual frequency was closely related to the phase of the El Niño–Southern Oscillation (ENSO), while the 20–25 yr and 42–50 yr periodic oscillations were responses to the Pacific Decadal Oscillation and the Atlantic Multidecadal Oscillation. (2) A regional extreme precipitation prediction model was constructed. Two deep learning models, MLP and CNN, were linearly stacked and used two atmospheric variables associated with extreme precipitation, namely geopotential height at 500 hPa and IVT. The hybrid model can learn both local-scale information with the MLP and large-scale circulation information with the CNN. Validation results showed that the MLP-CNN model can predict extreme or non-extreme precipitation days with an overall accuracy of 86%. The MLP-CNN also showed excellent seasonal transferability, with 81% accuracy on a testing set drawn from seasons different from those of the training set. MLP-CNN significantly outperformed the other machine learning models, including MLP, CNN, random forest, and support vector machine. Additionally, the MLP-CNN can be used to produce precursor signals 1 to 2 days ahead, though the accuracy drops quickly as the number of precursor days increases. (3) The GCMs seriously underestimated extreme precipitation over CEC but showed convincing results in reproducing large-scale atmospheric circulation patterns. The accuracies of 10 GCMs in extreme precipitation and large-scale atmospheric circulation simulations were evaluated. First, five indices were selected to measure the characteristics of extreme precipitation, and the performances of the GCMs were compared to the gauge-based daily precipitation analysis dataset over the Chinese mainland. The results showed that, except for FGOALS-g3, most GCMs can reproduce the spatial distribution characteristics of average precipitation from 1960 to 2015. However, all GCMs failed to accurately estimate extreme precipitation, with large underestimation (relative bias exceeding 85%). In addition, using the circulation patterns identified from the fifth-generation reanalysis data (ERA5) as benchmarks, the GCMs can reproduce most circulation pattern (CP) types for the periods 1960–1989 and 1990–2015. In terms of the spatial similarity of the identified CPs, MPI-ESM1-2-HR was superior. (4) To improve the reliability of precipitation simulations and future projections from GCMs, a new statistical downscaling framework was proposed. This framework comprises two models: ensemble learning and NHMM. First, extreme gradient boosting (XGBoost) and random forest (RF) were selected as the base and meta classifiers for constructing the ensemble learning model.
Based on the top 50 principal components of GP at 500 hPa and IVT, this model was trained to predict the occurrence probabilities of the different levels of daily precipitation (no rain, very light, light, moderate, and heavy precipitation) aggregated over multiple sites. Confusion matrix results showed that the ensemble learning model had sufficient accuracy in classifying rain versus no-rain days (>88%) and in predicting moderate precipitation events (>83%). Subsequently, precipitation downscaling was done using the probability sequences of daily precipitation as large-scale predictors for the NHMM. Statistical metrics showed that the Ensemble-NHMM downscaled results matched the gauge observations best in precipitation variability and extreme precipitation simulation, compared with the results obtained by directly using circulation variables as predictors. Finally, the downscaling model also performed well in the historical simulations of MPI-ESM1-2-HR, reproducing the trends of annual precipitation and the means of the total extreme precipitation index. (5) Three climate scenarios with different Shared Socioeconomic Pathways and Representative Concentration Pathways (SSPs) were selected to project future precipitation trends. The Ensemble-NHMM downscaling model was applied to the scenario data from MPI-ESM1-2-HR. Projection results showed that the CEC would receive about 30% more precipitation in the future, through the 2075–2100 period. Compared to the recent 26-year epoch (1990–2015), the frequency and magnitude of extreme precipitation would increase by 21.9–48.1% and 12.3–38.3%, respectively, under the worst emission scenario (SSP585). In particular, the southern CEC region is projected to receive more extreme precipitation than the north. Investigations of thermodynamic and dynamic factors showed that climate warming would increase the probability of stronger water vapor convergence over CEC. More wet weather states due to enhanced water vapor transport, together with increasingly favorable large-scale atmospheric circulation and a strengthened pressure gradient, would be the factors behind the increased precipitation.
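    A hedged sketch of the hybrid idea behind the MLP-CNN described above: a CNN branch reads gridded large-scale anomaly fields (GP at 500 hPa and IVT), an MLP branch reads local-scale predictors, and the fused features produce a binary extreme/non-extreme output. The layer sizes, grid dimensions, and fusion choice are assumptions, not the thesis architecture.

```python
# Illustrative hybrid MLP-CNN binary classifier for extreme-precipitation days
# (assumed sizes and fusion; not the thesis architecture).
import torch
import torch.nn as nn

class MLPCNN(nn.Module):
    def __init__(self, n_local_features: int = 16):
        super().__init__()
        # CNN branch reads gridded anomaly fields (2 channels: GP500 and IVT).
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # MLP branch reads local-scale predictors.
        self.mlp = nn.Sequential(nn.Linear(n_local_features, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32 + 32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, fields, local):
        z = torch.cat([self.cnn(fields), self.mlp(local)], dim=1)
        return self.head(z).squeeze(-1)      # logits for extreme vs. non-extreme day

model = MLPCNN()
fields = torch.randn(8, 2, 32, 48)           # batch of GP500/IVT anomaly grids
local = torch.randn(8, 16)
prob_extreme = torch.sigmoid(model(fields, local))
print(prob_extreme)
```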

    Ensembles of Pruned Deep Neural Networks for Accurate and Privacy Preservation in IoT Applications

    The emergence of the AIoT (Artificial Intelligence of Things) represents the powerful convergence of Artificial Intelligence (AI) with the expansive realm of the Internet of Things (IoT). By integrating AI algorithms with the vast network of interconnected IoT devices, we open new doors for intelligent decision-making and edge data analysis, transforming various domains from healthcare and transportation to agriculture and smart cities. However, this integration raises pivotal questions: How can we ensure deep learning models are aptly compressed and quantised to operate seamlessly on devices constrained by computational resources, without compromising accuracy? How can these models be effectively tailored to cope with the challenges of statistical heterogeneity and the uneven distribution of class labels inherent in IoT applications? Furthermore, in an age where data is a currency, how do we uphold the sanctity of privacy for the sensitive data that IoT devices incessantly generate, while also ensuring the unhampered deployment of these advanced deep learning models? Addressing these intricate challenges forms the crux of this thesis, with its contributions delineated as follows. Ensyth: a novel approach designed to synthesise pruned ensembles of deep learning models, which not only makes optimal use of limited IoT resources but also ensures a notable boost in predictability. Experimental evidence gathered from the CIFAR-10, CIFAR-5, and MNIST-FASHION datasets solidifies its merit, especially given its capacity to achieve high predictability. MicroNets: venturing into the realm of efficiency, this is a multi-phase pruning pipeline that fuses the principles of weight pruning and channel pruning. Its objective is clear: foster efficient deep ensemble learning, specially crafted for IoT devices. Benchmark tests conducted on the CIFAR-10 and CIFAR-100 datasets demonstrate its prowess, highlighting a compression ratio of nearly 92%, with the pruned ensembles surpassing the accuracy of conventional models. FedNets: recognising the challenges of statistical heterogeneity in federated learning and the ever-growing concerns of data privacy, this innovative federated learning framework is introduced. It enables edge devices to collaboratively train ensembles of pruned deep neural networks. More than just training, it ensures data privacy remains uncompromised. Evaluations conducted on the Federated CIFAR-100 dataset offer a testament to its efficacy. In this thesis, substantial contributions have been made to the AIoT application domain. Ensyth, MicroNets, and FedNets collaboratively tackle the challenges of efficiency, accuracy, statistical heterogeneity arising from distributed class labels, and privacy concerns inherent in deploying AI applications on IoT devices. The experimental results underscore the effectiveness of these approaches, paving the way for their practical implementation in real-world scenarios. By offering an integrated solution that satisfies multiple key requirements simultaneously, this research brings us closer to the realisation of effective and privacy-preserved AIoT systems.
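    As a hedged illustration of the weight- and channel-pruning ingredients mentioned above (not the Ensyth or MicroNets pipeline itself), PyTorch's pruning utilities can apply unstructured magnitude pruning to individual weights and structured L2 pruning to whole output filters of a convolutional layer:

```python
# Generic illustration of weight (unstructured) and channel (structured) pruning
# with PyTorch utilities; this is not the MicroNets pipeline itself.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 32 * 32, 10))

conv = model[2]
# Unstructured weight pruning: zero the 50% smallest-magnitude weights.
prune.l1_unstructured(conv, name="weight", amount=0.5)
# Structured channel pruning: remove 25% of output filters by L2 norm (dim=0).
prune.ln_structured(conv, name="weight", amount=0.25, n=2, dim=0)
prune.remove(conv, "weight")      # bake the combined masks into the weight tensor

sparsity = (conv.weight == 0).float().mean().item()
print(f"conv layer sparsity after pruning: {sparsity:.2%}")
x = torch.randn(1, 3, 32, 32)
print(model(x).shape)             # forward pass still works on the pruned model
```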