GNSS-free outdoor localization techniques for resource-constrained IoT architectures: a literature review
Large-scale deployments of the Internet of Things (IoT) are adopted for performance
improvement and cost reduction in several application domains. The four main IoT application
domains covered throughout this article are smart cities, smart transportation, smart healthcare, and
smart manufacturing. To increase IoT applicability, data generated by the IoT devices need to be
time-stamped and spatially contextualized. Low-power wide-area networks (LPWANs) have become an attractive
solution for outdoor localization and have received significant attention from the research community due to their
low-power, low-cost, and long-range communication. In addition, their signals can be used for communication
and localization simultaneously. Various localization methods have been proposed to obtain the
relative location of IoT devices. Each category of proposed methods has pros and cons that make it
useful for specific IoT systems. Nevertheless, the proposed localization methods still have limitations
that need to be eliminated to fully meet the needs of the IoT ecosystem. This has motivated
this work, which provides the following contributions: (1) a definition of the main requirements and
limitations of outdoor localization techniques for the IoT ecosystem, (2) a description of the most
relevant GNSS-free outdoor localization methods with a focus on LPWAN technologies, (3) a survey of
the most relevant methods used within the IoT ecosystem for improving GNSS-free localization
accuracy, and (4) a discussion covering the open challenges and future directions in the field.
Important open issues whose requirements differ across IoT systems include
energy consumption, security and privacy, accuracy, and scalability. This paper provides an overview
of research works published between 2018 and July 2021 and made available through
the Google Scholar database.
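One common GNSS-free LPWAN localization technique is RSSI-based ranging followed by trilateration. As a purely illustrative sketch (not a method from any specific surveyed paper; the gateway positions and the calibration constants `tx_power_dbm` and `path_loss_exp` are hypothetical), a node's position could be estimated from the RSSI seen at three gateways:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.7):
    # Log-distance path-loss model: RSSI = P0 - 10*n*log10(d).
    # P0 (tx_power_dbm) and n (path_loss_exp) are assumed calibration values.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    # Linearized least-squares trilateration in 2D.
    # anchors: (x, y) gateway positions (assumed non-collinear);
    # distances: estimated range to each gateway.
    (x0, y0), d0 = anchors[0], distances[0]
    a_rows, b_rows = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        a_rows.append((2 * (xi - x0), 2 * (yi - y0)))
        b_rows.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations (A^T A) p = A^T b directly.
    s11 = sum(r[0] * r[0] for r in a_rows)
    s12 = sum(r[0] * r[1] for r in a_rows)
    s22 = sum(r[1] * r[1] for r in a_rows)
    t1 = sum(r[0] * b for r, b in zip(a_rows, b_rows))
    t2 = sum(r[1] * b for r, b in zip(a_rows, b_rows))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_pos = (30.0, 40.0)
dists = [math.dist(true_pos, a) for a in anchors]
print(trilaterate(anchors, dists))  # (30.0, 40.0) with exact ranges
```

In practice RSSI ranging is noisy, which is one reason the surveyed accuracy-improvement methods (fingerprinting, filtering, machine learning) exist.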
A Survey From Distributed Machine Learning to Distributed Deep Learning
Artificial intelligence has achieved significant success in handling complex
tasks in recent years. This success is due to advances in machine learning
algorithms and hardware acceleration. In order to obtain more accurate results
and solve more complex problems, algorithms must be trained with more data.
This huge amount of data can be time-consuming to process and requires a great
deal of computation. A solution is to distribute the data and the algorithm across
several machines, an approach known as distributed machine learning. Considerable
effort has been put into distributed machine learning algorithms, and different
methods have been proposed so far. In this article, we present a comprehensive
summary of the current state of the art in the field through a review of these
algorithms. We divide the algorithms into classification and clustering
(traditional machine learning), deep learning, and deep reinforcement learning
groups. Distributed deep learning has gained particular attention in recent years,
and most studies have focused on its algorithms; as a result, most of the articles
discussed here belong to this category. Based on our investigation of these
algorithms, we highlight limitations that should be addressed in future research.
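The data-parallel idea described above can be sketched in a few lines: each worker takes a gradient step on its own data shard, and a coordinator averages the resulting models each round. This is a minimal illustration of synchronous parameter averaging on a toy 1-D least-squares problem, not any specific algorithm from the surveyed literature:

```python
import random

def local_sgd_step(w, shard, lr=0.5):
    # One gradient step on this worker's shard for the model y ≈ w * x.
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return w - lr * g

def parameter_average(models):
    # Synchronous aggregation: average the per-worker models each round.
    return sum(models) / len(models)

random.seed(0)
true_w = 3.0
data = [(x, true_w * x) for x in (random.uniform(-1, 1) for _ in range(400))]
shards = [data[i::4] for i in range(4)]  # partition the data over 4 workers

w = 0.0
for _ in range(50):  # one round = a local step on every worker, then average
    w = parameter_average([local_sgd_step(w, s) for s in shards])
print(round(w, 3))  # recovers true_w: 3.0
```

Real systems replace the in-process loop with parameter servers or all-reduce over a network, and must additionally handle stragglers, faults, and communication cost, which are among the limitations such surveys discuss.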
Optimization and Mining Methods for Effective Real-Time Embedded Systems
The Internet of Things (IoT) is the network of interrelated devices or objects, such as self-driving cars, home appliances, smartphones, and other embedded computing systems. It combines hardware, software, and network connectivity, enabling data processing using powerful
cloud data centers. However, the exponential rise of IoT applications has reshaped our beliefs about cloud computing, and long-lasting certainties about its capabilities have had to be
updated. Classical centralized cloud computing is encountering several challenges, such as traffic latency, response time, and data privacy. Thus, the trend in processing the data generated by interconnected IoT embedded devices has shifted towards doing more computation closer to the device, at the edge of the network. This possibility of on-device processing helps reduce latency for critical real-time applications and enables better processing of the massive amounts of data generated by these devices. Succeeding in this transition towards edge computing requires the design of high-performance
embedded systems by efficiently exploring design alternatives (i.e., efficient Design Space
Exploration), optimizing the deployment topology of multi-processor-based real-time embedded systems (i.e., the way the software utilizes the hardware), and light mining
techniques enabling smarter functioning of these devices.
Recent research efforts on embedded systems have led to various automated approaches facilitating their design and improving their functioning. However, existing methods
and techniques present several major challenges, which are especially relevant for real-time embedded systems. Four of the main challenges are: (1) the lack of online data mining techniques that can enhance the functioning of embedded computing systems on the fly; (2) the inefficient usage of the computing resources of multi-processor systems when deploying software on them; (3) the pseudo-random exploration of the design space; and (4) the selection of the suitable implementation after performing the optimization process.
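Challenge (4), selecting a suitable configuration from the list of Pareto-optimal solutions, can be illustrated with a minimal sketch. The design points, the two objectives (latency, energy), and the selection policy below (lowest energy among deadline-feasible designs) are hypothetical, not taken from this thesis:

```python
def dominates(a, b):
    # a dominates b if it is no worse in both objectives and better in one
    # (lower latency and lower energy are both better here).
    return (a["latency"] <= b["latency"] and a["energy"] <= b["energy"]
            and (a["latency"] < b["latency"] or a["energy"] < b["energy"]))

def pareto_front(designs):
    # Keep only the non-dominated design points.
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

def pick(front, max_latency):
    # One plausible selection rule: cheapest design that meets the deadline.
    feasible = [d for d in front if d["latency"] <= max_latency]
    return min(feasible, key=lambda d: d["energy"]) if feasible else None

designs = [
    {"name": "A", "latency": 10, "energy": 90},
    {"name": "B", "latency": 20, "energy": 40},
    {"name": "C", "latency": 25, "energy": 45},  # dominated by B
    {"name": "D", "latency": 40, "energy": 20},
]
front = pareto_front(designs)
print([d["name"] for d in front])           # ['A', 'B', 'D']
print(pick(front, max_latency=30)["name"])  # 'B'
```

A real-time DSE flow would derive the latency numbers from schedulability analysis or simulation rather than constants, and the guided (non-pseudo-random) exploration of challenge (3) concerns how the candidate list itself is generated.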
A federated learning framework for the next-generation machine learning systems
MSc dissertation in Industrial Electronics and Computers Engineering (specialization in Embedded Systems and Computers). The end of Moore's Law, aligned with rising concerns about data privacy, is forcing machine learning
(ML) to shift from the cloud to the deep edge, near the data source. In next-generation ML systems,
inference and part of the training process will be performed right on the edge, while the cloud will be
responsible for major ML model updates. This new computing paradigm, referred to by academia and
industry researchers as federated learning, relieves the cloud and network infrastructure while
increasing data privacy. Recent advances have made it possible to efficiently execute the inference pass
of quantized artificial neural networks on Arm Cortex-M and RISC-V (RV32IMCXpulp) microcontroller units
(MCUs). Nevertheless, training is still confined to the cloud, imposing the transfer of high volumes
of private data over a network.
To tackle this issue, this MSc thesis makes the first attempt to run decentralized training on Arm
Cortex-M MCUs. To port part of the training process to the deep edge, it proposes L-SGD, a lightweight
version of stochastic gradient descent optimized for maximum speed and minimal memory footprint
on Arm Cortex-M MCUs. L-SGD is 16.35x faster than the TensorFlow solution while registering a
memory footprint reduction of 13.72%, at the cost of a negligible accuracy drop of only 0.12%.
To merge the local model updates returned by edge devices, this MSc thesis proposes R-FedAvg, an
implementation of the FedAvg algorithm that reduces the impact of faulty model updates returned by
malicious devices.
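For context, plain FedAvg averages the client model updates, so a single malicious client can skew the global model arbitrarily; robust aggregation rules limit that influence. The abstract does not detail how R-FedAvg works, so the sketch below uses a coordinate-wise median, one common robustification, purely as an illustration of the problem it addresses:

```python
import statistics

def fedavg(updates, weights=None):
    # Plain FedAvg: (weighted) mean of the client model vectors.
    n = len(updates)
    weights = weights or [1.0 / n] * n
    dim = len(updates[0])
    return [sum(w * u[i] for w, u in zip(weights, updates)) for i in range(dim)]

def median_aggregate(updates):
    # Coordinate-wise median: a few faulty or malicious updates cannot
    # drag each coordinate far from the honest majority. (R-FedAvg may
    # use a different rule; this is only an illustration.)
    dim = len(updates[0])
    return [statistics.median(u[i] for u in updates) for i in range(dim)]

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = honest + [[100.0, -100.0]]  # one malicious client update
print(fedavg(poisoned))            # mean is pulled far off: [25.75, -24.25]
print(median_aggregate(poisoned))  # stays near [1.0, 1.0]: [1.05, 0.95]
```

On MCU-class devices the aggregation itself is cheap; the hard part, as the thesis argues, is fitting the local training pass (here abstracted away) into the memory and latency budget of an Arm Cortex-M.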
Machine learning solutions for maintenance of power plants
The primary goal of this work is to present an analysis of the current market for predictive maintenance software solutions applicable to a generic coal/gas-fired thermal power plant, as well as a brief discussion of related developments expected in the near future. This type of solution is, in essence, an advanced condition-monitoring technique used to continuously monitor entire plants and detect sensor-reading deviations via correlative calculations. This approach allows malfunctions to be forecast well in advance of the malfunction itself and of any possible unforeseen consequences.
Predictive maintenance software solutions employ primitive artificial intelligence in the form of machine learning (ML) algorithms to provide early detection of signal deviations. Before analyzing existing ML-based solutions, the structure and theory behind the processes of coal/gas-driven power plants are discussed to emphasize the necessity of predictive maintenance for optimal and reliable operation. The subjects discussed are: basic theory (thermodynamics and electrodynamics), primary machinery types, automation systems and data transmission, and typical faults and condition-monitoring techniques that are often used in tandem with ML. Additionally, the basic theory of the main machine learning techniques related to malfunction prediction is briefly presented.
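The correlative-calculation idea described above, predicting one sensor from a correlated one and flagging readings whose residual deviates, can be sketched as follows. The sensor names, numbers, and threshold are invented for illustration; commercial tools fit far richer multivariate models:

```python
import statistics

def fit_linear(xs, ys):
    # Least-squares fit y ≈ a*x + b from healthy training data.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def deviation_alarms(a, b, xs, ys, threshold):
    # Flag sample indices whose residual |y - (a*x + b)| exceeds threshold.
    return [i for i, (x, y) in enumerate(zip(xs, ys))
            if abs(y - (a * x + b)) > threshold]

# Hypothetical healthy data: turbine load (%) vs bearing temperature (°C).
load = [40, 50, 60, 70, 80, 90]
temp = [61, 65, 71, 74, 80, 85]
a, b = fit_linear(load, temp)

# New readings: the last one runs hot for its load -> early warning.
new_load = [55, 65, 75, 75]
new_temp = [68, 72, 78, 92]
print(deviation_alarms(a, b, new_load, new_temp, threshold=5.0))  # [3]
```

The appeal for plant operators is that the alarm fires on a deviation from the learned correlation, potentially long before the temperature reaches any fixed absolute trip limit.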
Intelligent Circuits and Systems
ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academics, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.