
    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Computer Architectures to Close the Loop in Real-time Optimization

    © 2015 IEEE. Many modern control, automation, signal processing and machine learning applications rely on solving a sequence of optimization problems, which are updated with measurements of a real system that evolves in time. The solutions of each of these optimization problems are then used to make decisions, which may be followed by changing some parameters of the physical system, thereby resulting in a feedback loop between the computing and the physical system. Real-time optimization is not the same as fast optimization, because the computation is affected by an uncertain system that evolves in time. The suitability of a design should therefore be judged not from the optimality of a single optimization problem, but from the evolution of the entire cyber-physical system. The algorithms and hardware used for solving a single optimization problem in the office might therefore be far from ideal when solving a sequence of real-time optimization problems. Instead of there being a single, optimal design, one has to trade off a number of objectives, including performance, robustness, energy usage, size and cost. We therefore provide a tutorial introduction to some of the questions and implementation issues that arise in real-time optimization applications. We concentrate on some of the decisions that have to be made when designing the computing architecture and algorithm, and argue that the choice of one informs the other.
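The loop described above can be sketched with a toy example (our own illustration, not from the paper): a solver with a fixed iteration budget per sampling period, warm-started from the previous solution, tracking the minimizer of a time-varying quadratic. The quantity of interest is the tracking error over time, not the optimality of any single solve.

```python
import numpy as np

# Hypothetical sketch: track the minimizer of f_t(x) = 0.5*(x - r(t))^2
# with a fixed budget of gradient steps per sampling period.

def solve_step(x, r, steps=3, lr=0.5):
    """Run a fixed number of gradient steps toward the current target r."""
    for _ in range(steps):
        x = x - lr * (x - r)  # gradient of 0.5*(x - r)^2 is (x - r)
    return x

def run_loop(T=50):
    x = 1.0                        # arbitrary initial iterate
    errors = []
    for t in range(T):
        r = np.sin(0.1 * t)        # the "physical system" drifts over time
        x = solve_step(x, r)       # warm start: reuse the previous iterate
        errors.append(abs(x - r))  # tracking error, not per-solve optimality
    return errors

errors = run_loop()
```

Because each period applies a fixed contraction to the residual, the warm-started loop keeps the tracking error small even though no individual solve is run to convergence.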

    Tiny Machine Learning Environment: Enabling Intelligence on Constrained Devices

    Running machine learning (ML) algorithms on constrained devices at the extreme edge of the network is problematic due to the computational overhead of ML algorithms, the resources available on the embedded platform, and the application budget (i.e., real-time requirements, power constraints, etc.). This has required the development of specific solutions and development tools for what is now referred to as TinyML. In this dissertation, we focus on improving the deployment and performance of TinyML applications, taking into consideration the aforementioned challenges, especially memory requirements. This dissertation contributed to the construction of the Edge Learning Machine (ELM) environment, a platform-independent open-source framework that provides three main TinyML services, namely shallow ML, self-supervised ML, and binary deep learning on constrained devices. In this context, this work includes the following steps, which are reflected in the thesis structure. First, we present a performance analysis of state-of-the-art shallow ML algorithms, including dense neural networks, implemented on mainstream microcontrollers. The comprehensive analysis in terms of algorithms, hardware platforms, datasets, preprocessing techniques, and configurations shows performance similar to that of a desktop machine and highlights the impact of these factors on overall performance. Second, despite the common assumption that the scarcity of resources limits TinyML to model inference only, we have gone a step further and enabled self-supervised on-device training on microcontrollers and tiny IoT devices by developing the Autonomous Edge Pipeline (AEP) system. AEP achieves accuracy comparable to the typical TinyML paradigm, i.e., models trained on resource-abundant devices and then deployed on microcontrollers. Next, we present the development of a memory allocation strategy for convolutional neural network (CNN) layers that optimizes memory requirements. This approach reduces the memory footprint without affecting accuracy or latency. Moreover, e-skin systems share the main requirements of the TinyML field: enabling intelligence with low memory, low power consumption, and low latency. Therefore, we designed an efficient tiny CNN architecture for e-skin applications. The architecture leverages the memory allocation strategy presented earlier and provides better performance than existing solutions. A major contribution of the thesis is CBin-NN, a library of functions for implementing extremely efficient binary neural networks on constrained devices. The library outperforms state-of-the-art NN deployment solutions by drastically reducing memory footprint and inference latency. All the solutions proposed in this thesis have been implemented on representative devices and tested in relevant applications, the results of which are reported and discussed. The ELM framework is open source, and this work is becoming a useful, versatile toolkit for the IoT and TinyML research and development community.
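The memory-planning idea for sequential CNNs can be illustrated with a simple calculation (a hypothetical sketch, not the thesis's actual algorithm): during layer-by-layer execution only the current layer's input and output activations must be live, so the planned peak is a maximum over adjacent layer pairs rather than the sum of all activation tensors.

```python
# Hypothetical sketch of activation-memory planning for a sequential CNN:
# only the input and output buffers of the layer currently executing need
# to coexist, so peak memory is the max over adjacent pairs.

def peak_activation_bytes(layer_sizes, bytes_per_elem=1):
    """layer_sizes[i] = number of elements in activation tensor i."""
    naive = sum(layer_sizes) * bytes_per_elem
    planned = max(layer_sizes[i] + layer_sizes[i + 1]
                  for i in range(len(layer_sizes) - 1)) * bytes_per_elem
    return naive, planned

# Example activation sizes for a small CNN: input -> conv -> pool -> conv -> fc
naive, planned = peak_activation_bytes([3072, 16384, 4096, 2048, 10])
```

On a microcontroller with tens of kilobytes of SRAM, the difference between allocating every tensor and reusing freed buffers is often what decides whether a model fits at all.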

    Developing a wireless distributed embedded machine learning application for the Arduino Portenta H7

    The aim of the project is to develop a wireless distributed embedded machine learning application for the Arduino Portenta H7.

    AnoML-IoT: An End to End Re-configurable Multi-protocol Anomaly Detection Pipeline for Internet of Things

    The rapid development in ubiquitous computing has enabled the use of microcontrollers as edge devices. These devices are used to develop truly distributed IoT-based mechanisms where machine learning (ML) models are utilized. However, integrating ML models into edge devices requires an understanding of various software tools, such as programming languages, as well as domain-specific knowledge. Anomaly detection is one of the domains where a high level of expertise is required to achieve promising results. In this work, we present AnoML, an end-to-end data science pipeline that allows the integration of multiple wireless communication protocols and anomaly detection algorithms, with deployment to edge, fog, and cloud platforms with minimal user interaction. We facilitate the development of IoT anomaly detection mechanisms by reducing the barriers that arise from the heterogeneity of an IoT environment. The proposed pipeline supports four main phases: (i) data ingestion, (ii) model training, (iii) model deployment, and (iv) inference and maintenance. We evaluate the pipeline with two anomaly detection datasets while comparing the efficiency of several machine learning algorithms on different nodes. We also provide the source code (https://gitlab.com/IOTGarage/anoml-iot-analytics) of the developed tools, which are the main components of the pipeline. Comment: Elsevier Internet of Things, Volume 16, 100437, December 202
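The four phases can be sketched end to end with a trivial z-score detector standing in for the pluggable anomaly-detection algorithm. The function names below are our own illustration, not AnoML's API.

```python
import statistics

# Hypothetical sketch of the pipeline phases named in the abstract,
# with a z-score detector as a stand-in algorithm.

def ingest(raw):
    return [float(x) for x in raw]          # (i) data ingestion

def train(samples):
    mu = statistics.fmean(samples)          # (ii) model training
    sigma = statistics.pstdev(samples)
    return mu, sigma

# (iii) model deployment would copy `model` to the edge/fog/cloud node;
# not shown here.

def infer(model, x, k=3.0):
    mu, sigma = model                       # (iv) inference/maintenance:
    return abs(x - mu) > k * sigma          # flag readings beyond k sigma

model = train(ingest(["10", "11", "9", "10", "10", "11", "9"]))
```

Swapping the detector (e.g., for an autoencoder) only changes `train`/`infer`, which is the modularity the pipeline's minimal-user-interaction design relies on.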

    Block-Based Development of Mobile Learning Experiences for the Internet of Things

    The Internet of Things enables experts of given domains to create smart user experiences for interacting with the environment. However, the development of such experiences requires strong programming skills, which are challenging for non-technical users to develop. This paper presents several extensions to the block-based programming language used in App Inventor to make the creation of mobile apps for smart learning experiences less challenging. Such apps are used to process and graphically represent data streams from sensors by applying map-reduce operations. A workshop with students without previous experience with the Internet of Things (IoT) and mobile app programming was conducted to evaluate the propositions. As a result, students were able to create small IoT apps that ingest, process and visually represent data in a simpler form than using App Inventor's standard features. In addition, an experimental study was carried out in a mobile app development course with academics of diverse disciplines. Results showed it was faster and easier for novice programmers to develop the proposed app using the new stream processing blocks. Spanish National Research Agency (AEI) - ERDF fund
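The map-reduce style the abstract describes can be illustrated in a few lines (our sketch in Python; the actual extensions are App Inventor visual blocks): map a raw accelerometer stream to per-reading magnitudes, then reduce the stream to a summary value a chart block could display.

```python
from functools import reduce

# Hypothetical sketch of map-reduce over a sensor stream:
# readings are (x, y, z) accelerometer triples.
readings = [(0.1, 0.2, 9.8), (0.0, 0.1, 9.7), (1.2, 0.3, 9.9)]

# map: raw triple -> scalar magnitude
magnitudes = list(map(lambda v: (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5,
                      readings))

# reduce: stream of magnitudes -> single summary (here, the peak)
peak = reduce(lambda a, b: max(a, b), magnitudes)
```

Each lambda corresponds to one user-configurable block, which is why the pattern ports naturally to a block-based language.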

    Are Microcontrollers Ready for Deep Learning-Based Human Activity Recognition?

    The last decade has seen exponential growth in the field of deep learning, with deep learning on microcontrollers being a new frontier for this research area. This paper presents a case study about machine learning on microcontrollers, with a focus on human activity recognition using accelerometer data. We build machine learning classifiers suitable for execution on modern microcontrollers and evaluate their performance. Specifically, we compare Random Forests (RF), a classical machine learning technique, with Convolutional Neural Networks (CNN) in terms of classification accuracy and inference speed. The results show that RF classifiers achieve similar levels of classification accuracy while being several times faster than a small custom CNN model designed for the task. Both the RF and the custom CNN are several orders of magnitude faster than state-of-the-art deep learning models. On the one hand, these findings confirm the feasibility of using deep learning on modern microcontrollers. On the other hand, they cast doubt on whether deep learning is the best approach for this application, especially if high inference speed and, thus, low energy consumption is the key objective.
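A typical preprocessing step for an RF-based HAR classifier is to window the accelerometer signal and extract cheap per-window statistics. The sketch below is our own illustration of that step, not the paper's code.

```python
import statistics

# Hypothetical sketch of the windowing/feature step feeding an RF-based
# human activity recognition classifier: fixed-size windows, each reduced
# to statistics a microcontroller can compute in place.

def windows(signal, size, step):
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

def features(window):
    # mean and population std dev: O(n) time, O(1) extra memory
    return (statistics.fmean(window), statistics.pstdev(window))

# One axis of a toy accelerometer trace: rest, then movement, then rest
signal = [0.0, 0.1, 0.0, 2.0, 2.1, 2.0, 0.0, 0.1]
feats = [features(w) for w in windows(signal, size=4, step=4)]
```

Because each tree in the RF then only evaluates a handful of threshold comparisons on these features, inference avoids the multiply-accumulate load that dominates CNN latency on a microcontroller.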

    Widening Access to Applied Machine Learning with TinyML

    Broadening access to both computational and educational resources is critical to diffusing machine-learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this paper, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML both leverages low-cost and globally accessible hardware and encourages the development of complete, self-contained applications, from data collection to deployment. To this end, a collaboration between academia (Harvard University) and industry (Google) produced a four-part MOOC that provides application-oriented instruction on how to develop solutions using TinyML. The series is openly available on the edX MOOC platform, has no prerequisites beyond basic programming, and is designed for learners from a global variety of backgrounds. It introduces learners to real-world applications, ML algorithms, dataset engineering, and the ethical considerations of these technologies via hands-on programming and deployment of TinyML applications in both the cloud and their own microcontrollers. To facilitate continued learning, community building, and collaboration beyond the courses, we launched a standalone website, a forum, a chat, and an optional course-project competition. We also released the course materials publicly, hoping they will inspire the next generation of ML practitioners and educators and further broaden access to cutting-edge ML technologies. Comment: Understanding the underpinnings of the TinyML edX course series: https://www.edx.org/professional-certificate/harvardx-tiny-machine-learnin

    Exploring opportunities in TinyML

    The Internet of Things (IoT) has acquired useful and powerful advances thanks to Machine Learning (ML) implementations. However, implementing machine learning on IoT devices through data centers raises serious problems (data privacy, network bottlenecks, etc.). Tiny Machine Learning (TinyML) arose in order to have an independent edge device executing the ML program without the need for any data center. But there is still the need for high-performance computers to train the ML model. Can this situation improve? This project goes through TinyML and two TinyML techniques capable of training the ML model on-device (what we call TinyML On-Device Learning, or TinyODL): TinyML with Online Learning (TinyOL) and Federated Learning (FL). We study both techniques in a theoretical analysis and attempt to develop a TinyODL app.
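The core aggregation step of Federated Learning, one of the two techniques studied, can be sketched as Federated Averaging (FedAvg): each device trains locally and a server averages the weight vectors, weighted by the number of local samples. This is a minimal illustration, not the project's implementation.

```python
# Hypothetical sketch of the FedAvg aggregation rule: the global model is
# the sample-count-weighted average of the clients' weight vectors.

def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two devices with different amounts of local data: the device that saw
# three times more samples pulls the average three times harder.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

Only weight vectors cross the network, never raw sensor data, which is exactly the privacy property that motivates TinyODL over data-center training.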

    On-device training of neural network models on embedded systems

    The popularity of machine learning and artificial intelligence has impacted IoT devices. Despite their limited computing capacity, there is growing interest in training neural networks directly on these devices. The traditional method of training models on powerful machines and deploying them to microcontrollers faces limitations in adaptability and data privacy. However, there are few available implementations for on-device training of deep neural networks. Popular frameworks like Tensorflow and PyTorch allow deploying trained models on microcontrollers, but they do not support ongoing learning from new data. This work introduces a machine learning library written in C++ without third-party dependencies. The library prioritizes flexibility and adaptability for both standalone usage and federated learning on edge devices. To validate the implementation, we compare the performance of models created with our library to a reference implementation in Tensorflow/Keras. Additionally, we deploy the developed neural networks on the Arduino Portenta microcontroller board, achieving promising results with on-device training.
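The kind of dependency-free training loop such a library implements can be sketched as stochastic gradient descent on a single linear neuron, using only language built-ins (shown here in Python for brevity; the actual library is C++, and this is our illustration, not its API).

```python
# Hypothetical sketch of a dependency-free on-device training loop:
# per-sample SGD on a linear neuron with squared-error loss.

def train(samples, epochs=200, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            err = pred - y
            w -= lr * err * x   # gradient of squared error w.r.t. w
            b -= lr * err       # gradient of squared error w.r.t. b
    return w, b

# Learn y = 2x + 1 from four points, as a device might from local data
w, b = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)])
```

Because each update touches one sample at a time and keeps only the current weights in memory, the same structure fits within a microcontroller's RAM budget and lets the model keep adapting to new local data after deployment.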