190 research outputs found

    SmrtFridge: IoT-based, user interaction-driven food item & quantity sensing

    Funded by the National Research Foundation (NRF) Singapore under its IDM Futures and International Research Centres in Singapore Funding Initiative.

    LifeLearner: Hardware-Aware Meta Continual Learning System for Embedded Computing Platforms

    Continual Learning (CL) allows applications such as user personalization and household robots to learn on the fly and adapt to context. This is an important feature when context, actions, and users change. However, enabling CL on resource-constrained embedded systems is challenging due to limited labeled data, memory, and computing capacity. In this paper, we propose LifeLearner, a hardware-aware meta continual learning system that drastically optimizes system resources (lower memory, latency, and energy consumption) while ensuring high accuracy. Specifically, we (1) exploit meta-learning and rehearsal strategies to explicitly cope with data scarcity and ensure high accuracy, (2) effectively combine lossless and lossy compression to significantly reduce the resource requirements of CL and of rehearsal samples, and (3) develop a hardware-aware system for embedded and IoT platforms that takes their hardware characteristics into account. As a result, LifeLearner achieves near-optimal CL performance, falling short of an Oracle baseline by only 2.8% in accuracy. Compared with the state-of-the-art (SOTA) Meta CL method, LifeLearner drastically reduces the memory footprint (by 178.7x), end-to-end latency (by 80.8-94.2%), and energy consumption (by 80.9-94.2%). In addition, we successfully deployed LifeLearner on two edge devices and a microcontroller unit, enabling efficient CL on resource-constrained platforms where running SOTA methods would be impractical, and paving the way for the widespread deployment of adaptable CL in a ubiquitous manner. Code is available at https://github.com/theyoungkwon/LifeLearner. (Accepted for publication at SenSys 2023.)
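    The compressed-rehearsal idea in the abstract, quantizing stored activations (lossy step) and then entropy-coding them (lossless step), can be sketched as follows. This is a minimal illustration assuming 8-bit quantization and zlib; the class, buffer layout, and eviction policy are hypothetical and not LifeLearner's actual implementation.

```python
import zlib
import numpy as np

class CompressedRehearsalBuffer:
    """Toy rehearsal buffer: lossy 8-bit quantization + lossless zlib compression."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []  # (compressed_bytes, scale, offset, shape, label)

    def add(self, activation, label):
        # Lossy step: quantize float32 activations to uint8.
        offset = float(activation.min())
        scale = max(float(activation.max()) - offset, 1e-8)
        q = np.round((activation - offset) / scale * 255).astype(np.uint8)
        # Lossless step: entropy-code the quantized bytes.
        blob = zlib.compress(q.tobytes())
        if len(self.samples) >= self.capacity:
            self.samples.pop(0)  # naive FIFO eviction, for illustration only
        self.samples.append((blob, scale, offset, activation.shape, label))

    def sample(self, rng):
        blob, scale, offset, shape, label = self.samples[rng.integers(len(self.samples))]
        q = np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)
        return q.astype(np.float32) / 255.0 * scale + offset, label

# Example: store a latent activation for later rehearsal, then draw it back.
rng = np.random.default_rng(0)
buf = CompressedRehearsalBuffer(capacity=16)
buf.add(rng.standard_normal((8, 8)).astype(np.float32), label=3)
x, y = buf.sample(rng)
```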

    The State of Algorithmic Fairness in Mobile Human-Computer Interaction

    This paper explores the intersection of Artificial Intelligence and Machine Learning (AI/ML) fairness and mobile human-computer interaction (MobileHCI). Through a comprehensive analysis of MobileHCI proceedings published between 2017 and 2022, we first aim to understand the current state of algorithmic fairness in the community. By manually analyzing 90 papers, we found that only a small portion (5%) of them adheres to modern fairness reporting, such as analyses conditioned on demographic breakdowns. At the same time, the overwhelming majority draws its findings from highly educated, employed, and Western populations. We situate these findings within recent efforts to capture the current state of algorithmic fairness in mobile and wearable computing, and envision that our results will serve as an open invitation to the design and development of fairer ubiquitous technologies.
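    As a rough illustration of what "analyses conditioned on demographic breakdowns" means in practice, the sketch below reports a metric per subgroup rather than a single aggregate number; the column names and data are hypothetical, not taken from the paper.

```python
import pandas as pd

# Hypothetical per-user predictions with demographic attributes attached.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 1, 1],
    "gender": ["f", "m", "f", "m", "f", "m"],
    "age_group": ["18-29", "30-44", "18-29", "45+", "30-44", "45+"],
})

# Report accuracy conditioned on each demographic breakdown instead of
# one aggregate score for the whole population.
for attr in ["gender", "age_group"]:
    per_group = (df["y_true"] == df["y_pred"]).groupby(df[attr]).mean()
    print(f"accuracy by {attr}:\n{per_group}\n")
```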

    Experimental Testing and Validation of Adaptive Equalizer Using Machine Learning Algorithm

    Due to the increasing demand for high-speed data transmission, wireless communication has become more advanced. Unfortunately, the various impairments that can occur when carrying data symbols through a wireless channel can degrade network performance. One proposed solution to these issues is channel equalization, which can be addressed through machine learning techniques. In this paper, a hybrid approach is proposed that combines the features of the tracking mode and the training mode of an adaptive equalizer. The method uses machine learning (ML) to classify different environments (highly cluttered, medium cluttered, low cluttered, and open space) based on measurements of their RF signal. The results of the study revealed that the proposed method can perform well in real-time deployments. The performance of ML algorithms, namely Logistic Regression, KNN classifier, SVM classifier, Naive Bayes, Decision Tree classifier, and Random Forest classifier, is analyzed for different numbers of samples (10, 50, and 100) and evaluated by comparing accuracy, sensitivity, specificity, F1 score, and confusion matrix. The objective of this study is to demonstrate that no single ML algorithm performs well in all kinds of environments; to choose the best algorithm for a given environment, the decision device has to analyze the various factors that affect system performance. For instance, the Random Forest classifier performed well in terms of accuracy (100 percent), specificity (100 percent), sensitivity (100 percent), and F1 score (100 percent), whereas the Logistic Regression algorithm did not perform well in the low-cluttered environment.
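    A minimal sketch of the kind of classifier comparison the abstract describes, assuming scikit-learn and synthetic stand-in features for the RF-signal measurements; it is not the authors' pipeline or data, and the environment labels simply mirror the four classes named above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for RF-signal measurements (e.g. RSSI-derived features);
# labels: 0=open space, 1=low, 2=medium, 3=highly cluttered.
rng = np.random.default_rng(42)
X = rng.standard_normal((400, 5))
y = rng.integers(0, 4, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(),
}

# Fit every candidate classifier and compare accuracy and confusion matrices.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    y_hat = model.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, y_hat))
    print(confusion_matrix(y_te, y_hat))
```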

    On Lightweight Privacy-Preserving Collaborative Learning for IoT Objects

    The Internet of Things (IoT) will be a major data generation infrastructure for achieving better system intelligence. This paper considers the design and implementation of a practical privacy-preserving collaborative learning scheme, in which a curious learning coordinator trains a better machine learning model based on data samples contributed by a number of IoT objects, while the confidentiality of the raw forms of the training data is protected against the coordinator. Existing distributed machine learning and data encryption approaches incur significant computation and communication overhead, rendering them ill-suited for resource-constrained IoT objects. We study an approach that applies an independent Gaussian random projection at each IoT object to obfuscate the data and trains a deep neural network at the coordinator based on the projected data from the IoT objects. This approach introduces little computation overhead at the IoT objects and moves most of the workload to the coordinator, which can have sufficient computing resources. Although the independent projections performed by the IoT objects address potential collusion between the curious coordinator and some compromised IoT objects, they significantly increase the complexity of the projected data. We therefore leverage the superior capability of deep learning to capture sophisticated patterns and maintain good learning performance. Extensive comparative evaluation shows that this approach outperforms other lightweight approaches that apply additive noisification for differential privacy and/or support vector machines for learning, in applications with light data pattern complexities.
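    The core mechanism, in which each object obfuscates its samples with its own independently drawn Gaussian random projection and only the coordinator trains a neural network on the projected data, can be sketched as below. This assumes scikit-learn's MLPClassifier as a stand-in for the coordinator's deep network and uses synthetic data; the dimensions and names are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
d_raw, d_proj = 64, 32  # raw feature dimension, projected dimension

def make_object(seed):
    """Each IoT object holds its own, independently drawn Gaussian projection."""
    R = np.random.default_rng(seed).standard_normal((d_raw, d_proj)) / np.sqrt(d_proj)
    return lambda x: x @ R  # only the projected data ever leaves the object

# Simulated raw data held by three objects (never sent to the coordinator).
objects = [make_object(s) for s in (1, 2, 3)]
X_parts = [rng.standard_normal((100, d_raw)) for _ in objects]
y_parts = [rng.integers(0, 2, size=100) for _ in objects]

# The coordinator only sees the obfuscated (projected) samples plus labels.
X_proj = np.vstack([proj(X) for proj, X in zip(objects, X_parts)])
y = np.concatenate(y_parts)

# A neural network at the coordinator learns from the projected data.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
clf.fit(X_proj, y)
```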

    ML-based Secure Low-Power Communication in Adversarial Contexts

    As wireless networking technology becomes more widespread, mutual interference between signals has become increasingly severe and common: a device's own transmissions are often disrupted because the channel is occupied by others. In adversarial settings in particular, jamming causes great harm to the security of information transmission. We therefore propose ML-based secure ultra-low-power communication, an approach that uses machine learning to predict future wireless traffic by capturing patterns in past wireless traffic, so that signals can be transmitted at ultra-low power via backscatter. To better suit the adversarial environment, we use backscatter to achieve ultra-low-power signal transmission and frequency hopping to counter jamming. In the end, we achieved a prediction success rate of 96.19%.
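    A minimal sketch of the traffic-prediction step, assuming a sliding window over a synthetic channel-occupancy trace and a random-forest predictor; the real system's features, model choice, and the backscatter/frequency-hopping control loop are not shown and the names below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic channel-occupancy trace: 1 = slot busy (e.g. jammed), 0 = idle.
rng = np.random.default_rng(3)
trace = (np.sin(np.arange(2000) / 5.0) + 0.3 * rng.standard_normal(2000) > 0).astype(int)

# Turn the trace into (past window -> next slot) training pairs.
W = 16
X = np.array([trace[i:i + W] for i in range(len(trace) - W)])
y = trace[W:]

split = int(0.8 * len(X))
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:split], y[:split])

# The ultra-low-power transmitter would only transmit (or hop to another
# frequency) when the next slot is predicted to be idle.
pred = clf.predict(X[split:])
print("prediction accuracy:", (pred == y[split:]).mean())
```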

    Sens-BERT: Enabling Transferability and Re-calibration of Calibration Models for Low-cost Sensors under Reference Measurements Scarcity

    Low-cost sensor (LCS) measurements are noisy, which limits their large-scale adoption in air-quality monitoring. Calibration is generally used to obtain good estimates of air quality from LCS measurements. To do this, LCS devices are typically co-located with reference stations for some duration, and a calibration model is then developed to map the LCS measurements to the reference station measurements. Existing works implement LCS calibration as an optimization problem in which a model is trained with data obtained from real-time deployments; the trained model is later employed to estimate the air quality measurements of that location. However, this approach is sensor-specific and location-specific and needs frequent re-calibration. Re-calibration also needs massive amounts of data, like the initial calibration, which is cumbersome in practical scenarios. To overcome these limitations, we propose Sens-BERT, a BERT-inspired learning approach to calibrate LCS that achieves calibration in two phases: self-supervised pre-training and supervised fine-tuning. In the pre-training phase, we train Sens-BERT with only LCS data (without reference station observations) to learn the distributional features of the data and produce corresponding embeddings. We then use the Sens-BERT embeddings to learn a calibration model in the fine-tuning phase. Our approach has many advantages over previous works. Since Sens-BERT learns the behaviour of the LCS, it can be transferred to any sensor of the same sensing principle without being explicitly trained on that sensor. It requires only LCS measurements in pre-training to learn the characteristics of the LCS, thus enabling calibration even with a tiny amount of paired data in fine-tuning. We have exhaustively tested our approach with the Community Air Sensor Network (CAIRSENSE) data set, an open repository for LCS.
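    A rough sketch of the two-phase recipe, assuming PyTorch and an autoencoder-style encoder as a stand-in for the BERT-style pre-training; the architecture, objective, and dimensions are illustrative and not Sens-BERT's actual design.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Phase 1: self-supervised pre-training on unlabeled LCS windows only.
# (A reconstruction objective stands in for the paper's pre-training task.)
unlabeled = torch.randn(2000, 16)            # plenty of raw LCS windows
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(unlabeled)), unlabeled)
    loss.backward()
    opt.step()

# Phase 2: supervised fine-tuning with only a tiny amount of co-located
# reference data (the scarce paired measurements).
paired_x = torch.randn(50, 16)
paired_y = torch.randn(50, 1)                # reference-station readings
head = nn.Linear(8, 1)
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(200):
    opt2.zero_grad()
    loss = nn.functional.mse_loss(head(encoder(paired_x)), paired_y)
    loss.backward()
    opt2.step()

# The pre-trained encoder can be reused for another sensor of the same
# sensing principle, with only the calibration head re-learned.
calibrated = head(encoder(torch.randn(5, 16)))
```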

    WCET of OCaml Bytecode on Microcontrollers: An Automated Method and Its Formalisation

    Considering the bytecode representation of a program written in a high-level programming language enables portability of its execution as well as a factorisation of the various possible analyses of this program. In this article, we present a method for computing the worst-case execution time (WCET) of an embedded bytecode program intended to run on a microcontroller. Due to the simple memory model of such a device, this automated WCET computation relies only on a control-flow analysis of the program and can be adapted to multiple models of microcontrollers. The method evaluates the bytecode program over concrete as well as partially unknown values in order to estimate its longest execution time. We present a software tool, based on this method, that computes the WCET of a synchronous embedded OCaml program. One key contribution of this article is a mechanically checked formalisation of the aforementioned method over an idealised bytecode language, together with its proof of correctness.
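    To illustrate the max-over-branches idea behind a control-flow-only WCET bound, here is a toy sketch in Python over an invented, loop-free bytecode with made-up per-opcode cycle costs; it is not the paper's tool (which targets OCaml bytecode and is formally verified), and real programs additionally need loop bounds.

```python
from functools import lru_cache

# Toy, loop-free bytecode: (opcode, argument). The cycle costs are invented;
# a real analysis would take them from the target microcontroller.
PROGRAM = [
    ("CONST", 5),      # 0
    ("BRANCHIF", 4),   # 1: jump to 4 if the (unknown) condition holds
    ("ADD", None),     # 2
    ("JUMP", 5),       # 3
    ("MUL", None),     # 4
    ("HALT", None),    # 5
]
COST = {"CONST": 1, "ADD": 1, "MUL": 3, "BRANCHIF": 2, "JUMP": 1, "HALT": 0}

@lru_cache(maxsize=None)
def wcet_from(pc: int) -> int:
    """Worst-case cycles from program counter `pc` to HALT.

    Branch conditions are treated as unknown values, so both successors
    are explored and the maximum cost is kept.
    """
    op, arg = PROGRAM[pc]
    if op == "HALT":
        return COST[op]
    if op == "JUMP":
        return COST[op] + wcet_from(arg)
    if op == "BRANCHIF":
        return COST[op] + max(wcet_from(pc + 1), wcet_from(arg))
    return COST[op] + wcet_from(pc + 1)

print("WCET (cycles):", wcet_from(0))
```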