4 research outputs found

    MODES: model-based optimization on distributed embedded systems

    The predictive performance of a machine learning model depends strongly on the corresponding hyper-parameter setting. Hence, hyper-parameter tuning is often indispensable. Normally, such tuning requires the machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. In a distributed machine learning scenario, however, it is not always possible to collect all the data from all nodes, due to privacy concerns or storage limitations. Moreover, if data has to be transferred over low-bandwidth connections, the time available for tuning is reduced. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning has received little research attention. This work proposes MODES, a framework for deploying MBO on resource-constrained distributed embedded systems. Each node trains an individual model on its local data; the goal is to optimize the combined prediction accuracy. The framework offers two optimization modes: (1) MODES-B treats the whole ensemble as a single black box and optimizes the hyper-parameters of all individual models jointly, and (2) MODES-I treats all models as clones of the same black box, which allows the optimization to be parallelized efficiently in a distributed setting. We evaluate MODES in experiments optimizing the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that MODES outperforms the baseline, i.e., tuning with MBO on each node individually using its local sub-data set, improving mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability (both modes).
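
    As a rough illustration of the MBO loop that MODES builds on, the following sketch fits a surrogate model to already-evaluated hyper-parameter settings and picks the next candidate via an upper-confidence-bound acquisition. The black-box function, the search bounds, and all constants here are illustrative assumptions, not the authors' implementation; in MODES-B the black box would stand for the combined accuracy of the whole ensemble.

```python
# Minimal MBO sketch: surrogate-guided search over a hyper-parameter box.
# All names (black_box, n_init, n_iter, the UCB weight) are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def black_box(x):
    # Stand-in for "ensemble accuracy as a function of hyper-parameters x";
    # in MODES this evaluation would involve the distributed nodes.
    return -np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
dim, n_init, n_iter = 2, 5, 20

X = rng.uniform(0, 1, size=(n_init, dim))          # initial design
y = np.array([black_box(x) for x in X])

for _ in range(n_iter):
    surrogate = GaussianProcessRegressor().fit(X, y)
    candidates = rng.uniform(0, 1, size=(256, dim))
    mu, sigma = surrogate.predict(candidates, return_std=True)
    # Upper-confidence-bound acquisition: exploit high mean, explore high std.
    x_next = candidates[np.argmax(mu + 1.0 * sigma)]
    X = np.vstack([X, x_next])
    y = np.append(y, black_box(x_next))

print("best hyper-parameters:", X[np.argmax(y)], "score:", y.max())
```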

    FeFET-based Binarized Neural Networks Under Temperature-dependent Bit Errors

    Ferroelectric FET (FeFET) is a highly promising emerging non-volatile memory (NVM) technology, especially for binarized neural network (BNN) inference on the low-power edge. The reliability of such devices, however, inherently depends on temperature; hence, changes in temperature during run time manifest themselves as changes in bit error rates. In this work, we reveal the temperature-dependent bit error model of FeFET memories, evaluate its effect on BNN accuracy, and propose countermeasures. We begin at the transistor level and accurately model the impact of temperature on the bit error rates of FeFET. This analysis reveals temperature-dependent asymmetric bit error rates. Afterwards, at the application level, we evaluate the impact of the temperature-dependent bit errors on the accuracy of BNNs. Under such bit errors, BNN accuracy drops to unacceptable levels when no countermeasures are employed. We propose two countermeasures: (1) training BNNs for bit error tolerance by injecting bit flips into the BNN data, and (2) applying a bit error rate assignment algorithm (BERA), which operates layer-wise and does not inject bit flips during training. In experiments, the BNNs to which the countermeasures are applied effectively tolerate temperature-dependent bit errors across the entire operating temperature range.
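
    A minimal sketch of countermeasure (1), bit-flip injection, is given below. The state-dependent flip probabilities mimic the asymmetric bit error rates named in the abstract; the rates, shapes, and function names are hypothetical, and in practice the probabilities would come from the temperature-dependent FeFET error model.

```python
# Sketch of asymmetric bit-flip injection for bit-error-tolerant BNN
# training. Rates and shapes are illustrative assumptions.
import numpy as np

def inject_bit_flips(binary_weights, p_0to1, p_1to0, rng):
    """Flip each {-1, +1} weight with a state-dependent probability,
    modelling asymmetric bit error rates."""
    u = rng.uniform(size=binary_weights.shape)
    flip = np.where(binary_weights > 0, u < p_1to0, u < p_0to1)
    return np.where(flip, -binary_weights, binary_weights)

rng = np.random.default_rng(0)
# Binarized weights in {-1, +1} (mapping to stored '0'/'1' bits).
w = np.where(rng.standard_normal((4, 4)) >= 0, 1.0, -1.0)
w_noisy = inject_bit_flips(w, p_0to1=0.02, p_1to0=0.10, rng=rng)
print("flipped bits:", int(np.sum(w != w_noisy)))
```

    During bit-error-tolerance training, such a corruption step would be applied to the binarized data in each forward pass, so the network learns weights whose predictions survive the flips.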

    Statistical and Stochastic Learning Algorithms for Distributed and Intelligent Systems

    In the big data era, statistical and stochastic learning for distributed and intelligent systems focuses on enhancing the robustness of learning models that have become pervasive and are deployed for decision-making in real-life applications, including general classification, prediction, and sparse sensing. The growing use of statistical learning approaches such as Linear Discriminant Analysis and distributed learning (e.g., community sensing) has raised concerns about the robustness of algorithm design. Recent work on anomaly detection has shown that such learning models can succumb to so-called 'edge cases', where the real-life operational situation presents data that are not well represented in the training data set. Such cases have been the primary cause of several recent misclassification bottlenecks. Although initial research has begun to address scenarios with specific learning models, there remains a significant knowledge gap regarding the detection of, and adaptation of learning models to, 'edge cases' and extremely ill-posed settings in the context of distributed and intelligent systems. With this motivation, this dissertation explores the complexities of several typical applications and their associated algorithms in order to detect and mitigate uncertainty, thereby substantially reducing the risk of using statistical and stochastic learning algorithms in distributed and intelligent systems.

    Fundamentals

    Volume 1 establishes the foundations of this new field. It covers all the steps from data collection, through data summarization and clustering, to the different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are examined with respect to their resource requirements and to how their scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.
