2 research outputs found

    Local federated learning at the network edge for efficient predictive analytics

    No full text
    The ability to perform computation on devices in Internet of Things (IoT) and Edge Computing (EC) environments is constrained by bandwidth, storage, and energy, as most of these devices are resource-limited. Exploiting the computational capacity of such devices, coined Edge Devices (EDs), to perform Machine Learning (ML) and analytics tasks locally enables accurate, real-time predictions at the network edge. The data generated locally in EDs is contextual and, for resource-efficiency reasons, should not be distributed over the network. In this context, the locally trained models need to adapt to occurring concept drifts and potential changes in the data distribution to guarantee high prediction accuracy. We address the importance of personalization and generalization in EDs for adapting to data distributions over evolving environments. In the following work, we propose a methodology that relies on Federated Learning (FL) principles to ensure the generalization capability of the locally trained ML models. Moreover, we extend FL with Optimal Stopping Theory (OST) and adaptive weighting over personalized and generalized models to incorporate optimal model-selection decision making. We contribute a personalized, efficient learning methodology for EC environments that can swiftly select and switch models inside the EDs to provide accurate predictions in changing environments. A theoretical analysis of the optimality and uniqueness of the proposed solution is provided. Additionally, a comprehensive comparative performance evaluation over real contextual data streams tests our methodology against current FL and centralized-learning approaches in the literature with respect to information loss and prediction accuracy metrics. We showcase an improvement in prediction quality of at least 50% over FL-based approaches using our methodology.
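    The abstract's core idea of adaptively weighting a personalized (local) model against a generalized (FL-aggregated) model, with an optimal-stopping rule deciding when to switch, can be sketched as below. This is a minimal illustrative sketch, not the authors' actual algorithm: the functions `predict_blend`, `update_weight`, and `should_switch`, along with the error threshold and horizon, are hypothetical simplifications for exposition.

    ```python
    def predict_blend(local_pred, global_pred, w):
        """Combine personalized (local) and generalized (global) predictions
        with an adaptive weight w in [0, 1]."""
        return w * local_pred + (1 - w) * global_pred

    def update_weight(w, local_err, global_err, lr=0.1):
        """Shift the weight toward whichever model showed lower recent error.
        A positive gradient (global worse than local) raises w, and vice versa."""
        grad = global_err - local_err
        return min(1.0, max(0.0, w + lr * grad))

    def should_switch(err_window, threshold=0.5, horizon=5):
        """Toy optimal-stopping rule: return the stopping time at which the
        cumulative excess error of the currently selected model first exceeds
        a threshold (triggering a model switch), capped at a finite horizon."""
        cum = 0.0
        for t, e in enumerate(err_window, start=1):
            cum += e
            if cum > threshold or t >= horizon:
                return t
        return len(err_window)
    ```

    In an ED's streaming loop, one would blend predictions each step, update `w` from observed per-model errors, and invoke the stopping rule on the recent error window to decide when to switch the active model.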

    AnaCoNGA: Analytical HW-CNN Co-Design Using Nested Genetic Algorithms

    We present AnaCoNGA, an analytical co-design methodology that enables two genetic algorithms to evaluate the fitness of design decisions on layer-wise quantization of a neural network and hardware (HW) resource allocation. We embed a hardware architecture search (HAS) algorithm into a quantization strategy search (QSS) algorithm to evaluate the hardware design Pareto-front of each considered quantization strategy. We harness the speed and flexibility of analytical HW modeling to enable parallel HW-CNN co-design. With this approach, the QSS is focused on seeking high-accuracy quantization strategies that are guaranteed to have efficient hardware designs at the end of the search. Through AnaCoNGA, we improve accuracy by 2.88 p.p. with respect to a uniform 2-bit ResNet20 on CIFAR-10, and achieve 35% and 37% improvements in latency and DRAM accesses, while reducing LUT and BRAM resources by 9% and 59% respectively, compared to a standard edge variant of the accelerator. The nested genetic algorithm formulation also reduces the search time by 51% compared to an equivalent, sequential co-design formulation.
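    The nested structure described above (an inner hardware search evaluated inside an outer quantization-strategy search, with fitness computed from a fast analytical HW model) can be sketched as follows. This is an illustrative simplification, not the AnaCoNGA implementation: the analytical model, the accuracy proxy, and the exhaustive inner loop standing in for the genetic operators are all assumptions made for brevity.

    ```python
    def analytical_latency(bits, pe_count, buffer_kb):
        """Hypothetical analytical HW model: latency estimated from total
        bit-operations divided by processing elements, plus a buffer penalty."""
        ops = sum(bits)  # toy proxy for layer-wise compute cost
        return ops / pe_count + 64.0 / buffer_kb

    def accuracy_proxy(bits):
        """Toy stand-in for network accuracy: higher bit-widths score closer
        to the full-precision (8-bit here) baseline."""
        return sum(bits) / (8.0 * len(bits))

    def inner_has(bits, pe_options=(4, 8, 16), buf_options=(32, 64)):
        """Inner hardware-architecture search (HAS): find the lowest-latency
        HW configuration for a given quantization strategy."""
        return min(
            ((pe, buf, analytical_latency(bits, pe, buf))
             for pe in pe_options for buf in buf_options),
            key=lambda cfg: cfg[2],
        )

    def outer_qss(candidates, latency_weight=0.01):
        """Outer quantization-strategy search (QSS): score each strategy by
        accuracy minus a latency penalty, where latency comes from the
        nested HAS, and return the fittest (strategy, fitness, hw_config)."""
        best = None
        for bits in candidates:
            pe, buf, lat = inner_has(bits)
            fitness = accuracy_proxy(bits) - latency_weight * lat
            if best is None or fitness > best[1]:
                best = (bits, fitness, (pe, buf))
        return best
    ```

    In the actual methodology both loops are genetic algorithms with populations, crossover, and mutation; the point here is only the nesting: every outer candidate's fitness is grounded in the best hardware design the inner search can find for it.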