CODE: description language for wireless collaborating objects
This paper introduces CODE, a Description Language for Wireless Collaborating Objects (WCO), with the specific aim of enabling service management in smart environments. WCO extend the traditional model of wireless sensor networks by transferring additional intelligence and responsibility from the gateway level into the network. WCO are able to offer complex services based on cooperation among sensor nodes. CODE provides the vocabulary for describing the complex services offered by WCO. It enables the description of services offered by groups, on-demand services, service interfaces, and sub-services. The proposed methodology is based on XML, widely used for structured information exchange and collaboration. CODE can be implemented directly on the network gateway, while a lightweight binary version is stored and exchanged among sensor nodes. Experimental results show the feasibility and flexibility of using CODE as a basis for service management in WCO.
TinyML: From Basic to Advanced Applications
TinyML aims to implement machine learning (ML) applications on small, low-powered devices like microcontrollers. Typically, edge devices need to be connected to data centers in order to run ML applications.
However, this approach is not possible in many scenarios, such as when connectivity is lacking. This project investigates the tools and techniques used in TinyML, the constraints of using low-powered devices, and the feasibility of implementing advanced machine learning applications on microcontrollers. To test this feasibility, three TinyML programs were developed. The first is a basic keyword spotting application able to recognize a set of words. The second is a program for training a neural network model on a microcontroller following an online learning approach. The third is a federated learning program able to train a single global model by aggregating local models trained on multiple microcontrollers. The results show strong performance in all three applications once deployed on microcontrollers. The development of basic TinyML applications is straightforward once the machine learning pipeline is understood. However, the development of advanced applications proved very complex, as it requires a deep understanding of both machine learning and embedded systems. These results prove the feasibility of successfully implementing advanced ML applications on microcontrollers and thus unveil a bright future for TinyML.
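The federated learning approach described above, in which a single global model is built by aggregating locally trained models, can be sketched as follows. This is a minimal FedAvg-style illustration, not the project's code; all parameter values and names are invented.

```python
# Minimal FedAvg-style sketch: each microcontroller trains a local model, and
# a single global model is formed as a weighted average of the local models.
# All parameter values below are invented for illustration.

def federated_average(local_models, sample_counts):
    """Average local parameter vectors, weighted by each device's data size."""
    total = sum(sample_counts)
    n_params = len(local_models[0])
    return [
        sum(model[i] * n for model, n in zip(local_models, sample_counts)) / total
        for i in range(n_params)
    ]

# Three "microcontrollers" each contribute a locally trained parameter vector.
local_models = [[0.2, 1.0], [0.4, 2.0], [0.6, 3.0]]
sample_counts = [10, 10, 20]
print(federated_average(local_models, sample_counts))  # → [0.45, 2.25]
```

In real federated deployments the aggregation runs on a coordinating host, with only model parameters (never raw sensor data) leaving each device.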
A Machine Learning-oriented Survey on Tiny Machine Learning
The emergence of Tiny Machine Learning (TinyML) has positively revolutionized
the field of Artificial Intelligence by promoting the joint design of
resource-constrained IoT hardware devices and their learning-based software
architectures. TinyML plays an essential role within the fourth and fifth
industrial revolutions, helping societies, economies, and individuals employ
effective AI-infused computing technologies (e.g., smart cities, automotive,
and medical robotics). Given its multidisciplinary nature, the field of TinyML
has been approached from many different angles: this comprehensive survey
wishes to provide an up-to-date overview focused on all the learning algorithms
within TinyML-based solutions. The survey is based on the Preferred Reporting
Items for Systematic Reviews and Meta-Analyses (PRISMA) methodological flow,
allowing for a systematic and complete literature survey. In particular, we
first examine the three different workflows for implementing a TinyML-based
system, i.e., ML-oriented, HW-oriented, and co-design. Second, we propose a
taxonomy that covers the learning panorama under the TinyML lens, examining in
detail the different families of model optimization and design, as well as
state-of-the-art learning techniques. Third, we present the distinct features
of hardware devices and software tools that
represent the current state-of-the-art for TinyML intelligent edge
applications. Finally, we discuss the challenges and future directions.
Comment: Article currently under review at IEEE Access
Exploring opportunities in TinyML
The Internet of Things (IoT) has gained useful and powerful capabilities thanks to Machine Learning (ML) implementations. However, implementing ML for IoT devices through data centers raises serious problems (data privacy, network bottlenecks, etc.). Tiny Machine Learning (TinyML) arose so that an independent edge device could execute an ML program without any data center. But high-performance computers are still needed to train the ML model. Can this situation be improved? This project reviews TinyML and two TinyML techniques capable of training the ML model on-device (what we call TinyML On-Device Learning, or TinyODL): TinyML with Online Learning (TinyOL) and Federated Learning (FL). We study both techniques in a theoretical analysis and attempt to develop a TinyODL app.
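The on-device online-learning idea behind TinyOL can be illustrated with a small sketch: the device updates its model one sample at a time instead of shipping data to a server for retraining. This is a hypothetical plain-SGD example on a single linear neuron, not code from the project.

```python
# Hedged sketch of on-device online learning (the TinyOL idea): the model is
# updated one sample at a time on the device itself. Plain SGD on a single
# linear neuron; every name and number here is illustrative.

def sgd_step(w, b, x, y, lr=0.05):
    """One online update: predict, compute the error, nudge the parameters."""
    pred = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = pred - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    b = b - lr * err
    return w, b

# Stream samples of y = 2*x one at a time, as a sensor would deliver them.
w, b = [0.0], 0.0
for step in range(2000):
    x = [(step % 10) / 10.0]
    w, b = sgd_step(w, b, x, 2.0 * x[0])
# After streaming, w[0] is close to 2.0 and b is close to 0.0.
```

The appeal for microcontrollers is that each update touches only one sample, so memory stays constant no matter how long the data stream runs.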
A multi-microcontroller-based hardware for deploying Tiny machine learning model
Tiny machine learning (TinyML) is intended for deployment on edge devices built around resource-constrained microcontroller units (MCUs). Finding a good platform on which to deploy TinyML effectively is crucial. This paper proposes a multi-microcontroller hardware platform for running a TinyML model productively. The proposed hardware consists of two dual-core MCUs. The first MCU acquires and processes input data, while the second executes the trained TinyML network. The two MCUs communicate with each other using the universal asynchronous receiver-transmitter (UART) protocol. A multi-tasking programming technique is applied on the first MCU to optimize pre-processing of new data. A TinyML model for classifying three-phase motor faults was deployed on the proposed system to evaluate its effectiveness. The experimental results show that the proposed hardware platform reduced the total inference time of the TinyML model, including data pre-processing, by 34.8% compared with a single-microcontroller hardware platform.
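The gain reported above comes from pipelining: while the second MCU runs inference on sample k, the first MCU can already pre-process sample k+1. A timing sketch (the per-stage costs below are invented, not the paper's measurements) shows the effect:

```python
# Illustrative timing sketch (not the paper's code or measurements) of why a
# two-MCU pipeline cuts total time: the first MCU pre-processes sample k+1
# while the second MCU runs inference on sample k. Stage costs are invented.

PRE_MS, INFER_MS = 4.0, 10.0  # hypothetical per-sample costs in milliseconds

def single_mcu(n):
    """One MCU does pre-processing and inference strictly in sequence."""
    return n * (PRE_MS + INFER_MS)

def dual_mcu(n):
    """Pipelined: after the first pre-processing step, the stages overlap,
    so total time is dominated by the slower stage (here, inference)."""
    return PRE_MS + n * INFER_MS

n = 100
t1, t2 = single_mcu(n), dual_mcu(n)
print(f"speedup: {(t1 - t2) / t1:.1%}")  # → speedup: 28.3%
```

The sketch ignores UART transfer time; in practice the transfer must also be faster than the inference stage for the overlap to pay off.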
TinyML Inference Enablement and Acceleration on Microcontrollers: The Case of Healthcare
Controlling high blood pressure (BP) can eliminate more than half of the deaths caused by cardiovascular diseases (CVDs). Towards this target, continuous BP monitoring is a must. Existing Convolutional Neural Network (CNN)-based solutions rely on server-like infrastructure with huge computation and memory capabilities, which makes them impractical and raises several security, privacy, reliability, and latency concerns. To address these challenges, an alternative solution has emerged: running machine learning algorithms directly on tiny devices. The unprecedented boom in TinyML development also drives the high relevance of optimizing network inference strategies on resource-constrained microcontrollers (MCUs)
The contributions of the thesis are: First, the thesis contributes to the general field of TinyML by proposing novel techniques that enable the fitting of five popular CNNs - AlexNet, LeNet, SqueezeNet, ResNet, and MobileNet - into extremely constrained edge devices with limited computation, memory, and power budgets. The proposed techniques use a combination of novel architecture modifications, pruning, and quantization methods. Second, building on this stepping stone, the thesis proposes a TinyML-based solution to enable accurate and continuous BP estimation using only photoplethysmogram (PPG) signals. Third, the thesis proposes several techniques to accelerate the CNN inference process. From a hardware perspective, we discuss architecture-aware accelerations with cache and multi-core specifications; from the software perspective, we develop application-aware optimizations with an existing real-time-compatible C library to maximize computation and intermediate-buffer reuse. These solutions require only general MCU features, thus demonstrating broad generalization across various networks and devices.
We conduct an extensive evaluation using thousands of real Intensive Care Unit (ICU) patient records, several tiny edge devices, and all five aforementioned CNNs. Results show accuracy comparable to server-based solutions. The proposed acceleration strategies achieve up to a 71% reduction in inference latency.
Thesis: Master of Applied Science (MASc).
Continuous blood pressure (BP) monitoring is a must for preventing cardiovascular diseases (CVDs). This thesis presents a new solution using small, efficient devices and advanced machine learning algorithms to realize real-time BP estimation. Much as traditional BP measurements are taken from the pulse, the small devices use changes in blood volume as input, instantly inferring the BP.
The thesis aims at addressing the challenges when incorporating large network capacities into tiny devices.
The contributions are as follows: First, this thesis explores a variety of optimization strategies to shrink machine learning networks while achieving comparable accuracy. These techniques are not tied to any specific framework, making them flexible and portable. Second, this thesis investigates several acceleration techniques from both software and hardware perspectives. With the novel optimization strategies, the work demonstrates accurate and efficient BP monitoring.
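Two of the optimization strategies named in the thesis, magnitude pruning and linear int8 quantization, can be sketched as follows. This is a pure-Python illustration with invented weights, not the thesis implementation.

```python
# Hedged sketch of two common network-shrinking optimizations: magnitude
# pruning (zero out the smallest weights) and symmetric linear int8
# quantization (store weights as 8-bit integers plus one float scale).
# Weight values are invented for illustration.

def prune(weights, keep_ratio):
    """Keep only the largest-magnitude fraction of weights; zero the rest."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize_int8(weights):
    """Map floats to int8 codes with a single symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

w = [0.02, -0.51, 0.30, -0.04, 0.88, 0.11]
pruned = prune(w, keep_ratio=0.5)   # half the weights become zero
codes, scale = quantize_int8(pruned)  # int8 codes + one float scale
print(pruned)
print(dequantize(codes, scale))     # close to the pruned weights
```

Together, the two steps cut both storage (8-bit codes, and zeros can be stored sparsely) and compute, which is why they recur throughout the TinyML literature.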
Intelligence at the Extreme Edge: A Survey on Reformable TinyML
The rapid miniaturization of Machine Learning (ML) for low powered processing
has opened gateways to provide cognition at the extreme edge (E.g., sensors and
actuators). Dubbed Tiny Machine Learning (TinyML), this upsurging research
field proposes to democratize the use of Machine Learning (ML) and Deep
Learning (DL) on frugal Microcontroller Units (MCUs). MCUs are highly
energy-efficient pervasive devices capable of operating with less than a few
Milliwatts of power. Nevertheless, many solutions assume that TinyML can only
run inference. Despite this, growing interest in TinyML has led to work that
makes them reformable, i.e., work that permits TinyML to improve once deployed.
In line with this, roadblocks in MCU based solutions in general, such as
reduced physical access and long deployment periods of MCUs, deem reformable
TinyML to play a significant part in more effective solutions. In this work, we
present a survey on reformable TinyML solutions with the proposal of a novel
taxonomy for ease of separation. Here, we also discuss the suitability of each
hierarchical layer in the taxonomy for allowing reformability. In addition to
these, we explore the workflow of TinyML and analyze the identified deployment
schemes and the scarcely available benchmarking tools. Furthermore, we discuss
how reformable TinyML can impact a few selected industrial areas and discuss
the challenges and future directions.
A review of TinyML
In this current technological world, the application of machine learning is
becoming ubiquitous. Incorporating machine learning algorithms on extremely
low-power and inexpensive embedded devices at the edge level is now possible
due to the combination of the Internet of Things (IoT) and edge computing. To
estimate an outcome, traditional machine learning demands vast amounts of
resources. The TinyML concept for embedded machine learning attempts to push
such diversity from usual high-end approaches to low-end applications. TinyML
is a rapidly expanding interdisciplinary topic at the convergence of machine
learning, software, and hardware centered on deploying deep neural network
models on embedded (micro-controller-driven) systems. TinyML will pave the way
for novel edge-level services and applications that survive on distributed edge
inferring and independent decision-making rather than server computation. In
this paper, we explore TinyML's methodology, how TinyML can benefit a few
specific industrial fields, its obstacles, and its future scope.
Wet TinyML: Chemical Neural Network Using Gene Regulation and Cell Plasticity
In our earlier work, we introduced the concept of Gene Regulatory Neural
Network (GRNN), which utilizes natural neural network-like structures inherent
in biological cells to perform computing tasks using chemical inputs. We define
this form of chemical-based neural network as Wet TinyML. The GRNN structures
are based on the gene regulatory network and have weights associated with each
link based on the estimated interactions between the genes. The GRNNs can be
used for conventional computing by employing an application-based search
process similar to Network Architecture Search. This study advances the
concept by incorporating cell plasticity, further exploiting the natural
cell's adaptability, in order to diversify the GRNN search so that it can
match a larger spectrum of tasks as well as dynamic computing tasks. As an
example application, we show that, through directed cell plasticity, we can
extract the mathematical regression evolution, enabling it to match dynamic
system applications. We
also conduct energy analysis by comparing the chemical energy of the GRNN to
its silicon counterpart, where this analysis includes both artificial neural
network algorithms executed on von Neumann architecture as well as neuromorphic
processors. The concept of Wet TinyML can pave the way for the new emergence of
chemical-based, energy-efficient and miniature Biological AI.
Comment: Accepted as a full paper by the tinyML Research Symposium 202
