    MATLAB

    A well-known statement says that the PID controller is the "bread and butter" of the control engineer. This is indeed true from a scientific standpoint. However, nowadays, in the era of computer science, when paper and pencil have been replaced by the keyboard and the display of computers, one may equally say that MATLAB is the "bread" in the above statement. MATLAB has become a de facto tool for the modern system engineer. This book is written for engineering students as well as for practicing engineers. The wide range of applications in which MATLAB is the working framework shows that it is a powerful, comprehensive and easy-to-use environment for performing technical computations. The book includes various excellent applications in which MATLAB is employed: from pure algebraic computations to data acquisition in real-life experiments, from control strategies to image processing algorithms, from graphical user interface design for educational purposes to Simulink embedded systems.
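
    To make the PID reference concrete, the following is a minimal sketch of a discrete-time PID loop. The book itself works in MATLAB; this illustration is in Python, and the gains and first-order plant are arbitrary choices for demonstration, not taken from the book.

```python
# Minimal discrete-time PID controller driving a first-order plant.
# All gains and the plant model are illustrative, not from the book.

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One PID update; state is (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Closed-loop simulation of the plant dx/dt = -x + u toward setpoint 1.0.
setpoint, x, state, dt = 1.0, 0.0, (0.0, 0.0), 0.01
for _ in range(1000):
    u, state = pid_step(setpoint - x, state, dt=dt)
    x += (-x + u) * dt
print(f"output after 10 s: {x:.3f}")  # settles near the setpoint
```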

    Applications in Electronics Pervading Industry, Environment and Society

    This book features the manuscripts accepted for the Special Issue “Applications in Electronics Pervading Industry, Environment and Society—Sensing Systems and Pervasive Intelligence” of the MDPI journal Sensors. Most of the papers come from a selection of the best papers of the 2019 edition of the “Applications in Electronics Pervading Industry, Environment and Society” (APPLEPIES) Conference, which was held in November 2019. All these papers have been significantly enhanced with novel experimental results. The papers give an overview of the trends in research and development activities concerning the pervasive application of electronics in industry, the environment, and society. The focus of these papers is on cyber-physical systems (CPS), with research proposals for new sensor acquisition and ADC (analog-to-digital converter) methods, high-speed communication systems, cybersecurity, big data management, and data processing, including emerging machine learning techniques. Physical implementation aspects are discussed, as well as the trade-offs between functional performance and hardware/system costs.

    Tiny Machine Learning Environment: Enabling Intelligence on Constrained Devices

    Running machine learning (ML) algorithms on constrained devices at the extreme edge of the network is problematic due to the computational overhead of ML algorithms, the limited resources of the embedded platform, and the application budget (i.e., real-time requirements, power constraints, etc.). This has required the development of specific solutions and development tools for what is now referred to as TinyML. In this dissertation, we focus on improving the deployment and performance of TinyML applications, taking into consideration the aforementioned challenges, especially memory requirements. This dissertation contributed to the construction of the Edge Learning Machine (ELM) environment, a platform-independent open-source framework that provides three main TinyML services, namely shallow ML, self-supervised ML, and binary deep learning on constrained devices. In this context, this work includes the following steps, which are reflected in the thesis structure. First, we present a performance analysis of state-of-the-art shallow ML algorithms, including dense neural networks, implemented on mainstream microcontrollers. The comprehensive analysis in terms of algorithms, hardware platforms, datasets, preprocessing techniques, and configurations shows performance comparable to a desktop machine and highlights the impact of these factors on overall performance. Second, despite the assumption that the scarcity of resources limits TinyML to model inference only, we have gone a step further and enabled self-supervised on-device training on microcontrollers and tiny IoT devices by developing the Autonomous Edge Pipeline (AEP) system. AEP achieves accuracy comparable to the typical TinyML paradigm, i.e., models trained on resource-abundant devices and then deployed on microcontrollers. Next, we present the development of a memory allocation strategy for convolutional neural network (CNN) layers that optimizes memory requirements. This approach reduces the memory footprint without affecting accuracy or latency. Moreover, e-skin systems share the main requirements of the TinyML field: enabling intelligence with low memory, low power consumption, and low latency. Therefore, we designed an efficient Tiny CNN architecture for e-skin applications. The architecture leverages the memory allocation strategy presented earlier and provides better performance than existing solutions. A major contribution of the thesis is CBin-NN, a library of functions for implementing extremely efficient binary neural networks on constrained devices. The library outperforms state-of-the-art NN deployment solutions by drastically reducing memory footprint and inference latency. All the solutions proposed in this thesis have been implemented on representative devices and tested in relevant applications, whose results are reported and discussed. The ELM framework is open source and is becoming a useful, versatile toolkit for the IoT and TinyML research and development community.
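
    The abstract does not spell out the allocation strategy itself, so the sketch below only illustrates the general principle such strategies exploit in strictly sequential CNNs: because only the executing layer's input and output tensors must be resident at once, two reusable buffers bound the peak activation memory, rather than the sum of all layer outputs. The layer sizes are hypothetical.

```python
# Why buffer reuse shrinks CNN activation memory on a microcontroller.
# Hypothetical per-layer output sizes in bytes for a small sequential CNN.
layer_output_bytes = [16384, 32768, 32768, 8192, 4096, 1024]

# Naive allocation: every activation tensor stays alive for the whole run.
naive_peak = sum(layer_output_bytes)  # 95232 B

# Ping-pong reuse: only the current layer's input/output pair is live.
reuse_peak = max(
    layer_output_bytes[i] + layer_output_bytes[i + 1]
    for i in range(len(layer_output_bytes) - 1)
)  # 65536 B

print(f"naive peak:  {naive_peak} bytes")
print(f"reused peak: {reuse_peak} bytes")
```

    Real allocators must additionally account for weights, branches, and in-place operators, but the pairwise bound above is the core reason layer-by-layer buffer reuse cuts the footprint.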

    Investigating the use of ray tracing for signal-level radar simulation in space monitoring applications: a comparison of radio propagation models

    This thesis presents the design and development of an accelerated signal-level radar simulator with an emphasis on space debris monitoring in the Low Earth Orbit. Space surveillance represents a major topic of concern to astronomers as the threat of space debris and orbital overpopulation looms – particularly due to the lack of effective mitigation techniques and the limitations of modern space-monitoring sensors. This work thus aimed to investigate and design possible tools that could be used for training, testing and research purposes, and thereby aid further study in the field. At present, there exist no three-dimensional, ray-traced, signal-level radar simulators available for public use. As such, this thesis proposes an open-source, ray-traced radar simulator that models the interactions between spaceborne targets and terrestrial radar systems. The simulator utilises a ray-tracing algorithm to simulate the effects of debris size, shape, orientation, and material properties when computing radar signals in a typical simulation. The generated received signals, produced at the output of the simulator, were verified against systems theory and validated with an existing, well-established simulator. The developed software was designed to aid astronomers and researchers in space situational awareness applications through the simulation of radar designs for orbital surveillance experiments. Due to its open-source nature, it is also expected to be used in training and research environments involving the testing of space-monitoring systems under various simulation conditions. The software offers native support for measured Two-Line Element datasets and the Simplified General Perturbations 4 (SGP4) orbit propagation model, enabling the accurate modelling of targets and the dynamic orbital forces acting upon them. As a result, the software has aptly been named the Space Object Astrodynamics and Radar Simulator – or SOARS. SOARS was built upon the foundations of a general-purpose radar simulator known as the Flexible Extensible Radar Simulator – or FERS – which provided integrated radar models for propagation loss, antenna shapes, Doppler and phase shifts, Radar Cross Section modelling, pulse waveforms, high-accuracy clock mechanisms, and interpolation algorithms. While FERS lacked various features required for space-monitoring applications, many of its implementations were used in SOARS to minimise simulation limits and maximise signal rendering accuracy by supporting an arbitrary number of transmitters, receivers, and targets. The goal was thus to have the simulator limited only by the end-user's system, and to specialise the operation of the software towards space surveillance by integrating additional features – such as built-in models for environmental and system noise, multiscatter effects, and target modelling using meshes composed of triangular primitives. After completing the software's development, the ray-traced simulator was compared against a more streamlined version of SOARS that made use of point-model approximations for quick-look simulations, and the trade-offs between the two simulators (including software runtime, memory utilisation and simulation accuracy) were investigated and evaluated. This assessed the value of implementing ray tracing in a radar simulator operating primarily within space contexts and evaluated the results of both simulators using detection processing as a demonstrated application of the system.
While the use of ray tracing incurred significant costs in speed and memory, the investigation found that the ray-traced simulator generated more reliable results than the point-model version – providing various advantages in test scenarios involving shadowing and multiscatter. The design of the SOARS software, as well as its point-model “baseline” alternative and the investigation into each simulator's advantages and disadvantages, is thus presented in this thesis. The developed programs were released as open-source tools under the GNU General Public Licence and are freely available for public use, modification, and distribution.
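
    As background for the signal-level claims above, the sketch below computes the basic per-target quantities any such simulator must render: round-trip delay, Doppler shift, and received power from the monostatic radar equation for a point target. It is written in Python with hypothetical parameter values and is not part of the SOARS or FERS code bases.

```python
import math

# Per-target echo parameters from the monostatic radar equation.
# All numerical values below are hypothetical.

C = 299_792_458.0  # speed of light, m/s

def point_target_echo(r, v_radial, f_c, p_tx, g, rcs):
    """Return (delay s, Doppler Hz, received power W) for one point target.

    r: slant range (m); v_radial: range rate (m/s, positive = receding);
    f_c: carrier frequency (Hz); p_tx: transmit power (W);
    g: antenna gain (linear, shared Tx/Rx antenna); rcs: cross section (m^2).
    """
    wavelength = C / f_c
    delay = 2.0 * r / C                      # two-way propagation time
    doppler = -2.0 * v_radial / wavelength   # receding target: negative shift
    p_rx = (p_tx * g**2 * wavelength**2 * rcs) / ((4.0 * math.pi)**3 * r**4)
    return delay, doppler, p_rx

# Example: 1 m^2 debris object at 800 km receding at 7 km/s, L-band, 40 dBi.
delay, doppler, p_rx = point_target_echo(
    r=800e3, v_radial=7e3, f_c=1.3e9, p_tx=1e6, g=10**(40 / 10), rcs=1.0)
print(f"delay {delay * 1e3:.2f} ms, Doppler {doppler / 1e3:.1f} kHz, "
      f"Prx {10 * math.log10(p_rx / 1e-3):.1f} dBm")
```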

    Supercomputing Frontiers

    This open access book constitutes the refereed proceedings of the 6th Asian Supercomputing Conference, SCFA 2020, which was planned for February 2020 but whose physical meeting was cancelled due to the COVID-19 pandemic. The 8 full papers presented in this book were carefully reviewed and selected from 22 submissions. They cover a range of topics including file systems, memory hierarchy, HPC cloud platforms, container image configuration workflows, large-scale applications, and scheduling.

    The SAR Handbook: Comprehensive Methodologies for Forest Monitoring and Biomass Estimation

    This Synthetic Aperture Radar (SAR) handbook of applied methods for forest monitoring and biomass estimation has been developed by SERVIR in collaboration with SilvaCarbon to address pressing needs in the development of operational forest monitoring services. Although SAR technology with all-weather capability has existed for over 30 years, the applied use of this technology for operational purposes has proven difficult. This handbook seeks to provide understandable, easy-to-assimilate technical material to remote sensing specialists who may not have expertise in SAR but are interested in leveraging SAR technology in the forestry sector.

    Techniques and Data Structures to Extend Database Management Systems into Genomics Platforms

    The recent coronavirus pandemic has shown that there is a great need for data velocity and collaboration between institutions and stakeholders regarding the sequencing and identification of variants in organisms. This data collaboration is hindered by the disparate nature of data schemas, metadata recording methods, and pipelines between labs. This could be solved if there were an easy way to share and query data using methods and technologies that are common to most people involved in this field. This thesis aims to provide a guideline on how to adapt an off-the-shelf database system into a genomics platform. Leaning on the concept of data and processing co-location, we propose a list of requirements and a prototype implementation of those requirements in the scope of a next-generation sequencing (NGS) pipeline. The data and the processes involved can be easily queried and invoked using a commonly known language such as SQL. Our implementation builds bio-data types and user-defined indexes to develop NGS-related algorithmic logic inside a database system. We then leverage these algorithms to build a complete sequencing pipeline – from data loading to consensus sequence generation and variant identification. We also assess each stage of the pipeline to show how effective our methods are compared to existing command-line tools.
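
    The thesis's specific implementation is not reproduced here, but as a loose analogue of its data/processing co-location idea, the sketch below registers sequence logic as a user-defined SQL function in SQLite from Python, so that a variant-style query runs inside the database engine. The schema, the reads, and the hamming helper are hypothetical examples.

```python
import sqlite3

def hamming(a: str, b: str) -> int:
    """Mismatch count between equal-length sequences (-1 if lengths differ)."""
    if len(a) != len(b):
        return -1
    return sum(x != y for x, y in zip(a, b))

conn = sqlite3.connect(":memory:")
conn.create_function("hamming", 2, hamming)  # expose the helper to SQL

conn.execute("CREATE TABLE reads (id INTEGER PRIMARY KEY, seq TEXT)")
conn.executemany("INSERT INTO reads(seq) VALUES (?)",
                 [("ACGTACGT",), ("ACGTTCGT",), ("TTTTACGT",)])

# Variant-style query: reads within one mismatch of a reference k-mer,
# evaluated inside the database rather than in an external tool.
ref = "ACGTACGT"
rows = conn.execute(
    "SELECT id, seq, hamming(seq, ?) AS d FROM reads "
    "WHERE hamming(seq, ?) <= 1", (ref, ref)).fetchall()
print(rows)  # [(1, 'ACGTACGT', 0), (2, 'ACGTTCGT', 1)]
```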

    Analysis and Mitigation of Remote Side-Channel and Fault Attacks on the Electrical Level

    In the ongoing miniaturization of integrated circuits, physical limits are being reached; single-atom transistors, for example, represent a possible lower bound on feature sizes. Moreover, manufacturing the latest generations of microchips is nowadays financially feasible only for large, multinational companies. As a result of this development, miniaturization is no longer the driving force behind further increases in the performance of electronic components. Instead, classical computer architectures with generic processors are evolving into heterogeneous systems with high parallelism and specialized accelerators. In these heterogeneous systems, however, protecting private data against attackers also becomes increasingly difficult. New kinds of hardware components, new kinds of applications, and a generally increased complexity are some of the factors that make security in such systems a challenge. Cryptographic algorithms are often truly secure only under certain assumptions about the attacker. For example, it is often assumed that the attacker can access only the inputs and outputs of a module, while internal signals and intermediate values remain hidden. In real implementations, however, side-channel and fault attacks expose the limits of this so-called black-box model. Whereas in side-channel attacks the attacker exploits data-dependent measurable quantities such as power consumption or electromagnetic radiation, fault attacks actively interfere with the computation, and the faulty output values are used to recover the secret data. This class of implementation attacks was originally considered only in the context of a local attacker with access to the target device. However, attacks based on timing measurements of certain memory accesses have already shown that the threat also exists from attackers with remote access. This thesis addresses the threat of remote side-channel and fault attacks, which is closely tied to the trend toward more heterogeneous systems. One example of novel hardware in heterogeneous computing is Field-Programmable Gate Arrays (FPGAs), with which almost arbitrary circuits can be realized in programmable logic. These logic chips are already deployed as accelerators both in the cloud and in end devices. However, it has been shown how the flexibility of these accelerators can be exploited to implement sensors that estimate the supply voltage. Furthermore, through a particular way of activating large amounts of logic, computations in other circuits can be disturbed for fault attacks. This threat is analyzed further here, for example by extending existing attacks, and protection strategies against it are developed.
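
    As a generic illustration of the side-channel principle described above (and not of the thesis's FPGA-sensor attacks specifically), the sketch below performs a minimal correlation power analysis on simulated traces: it recovers a key byte by correlating Hamming-weight hypotheses against noisy, data-dependent "power" measurements. All data are simulated; a real attack would typically target a nonlinear function such as the AES S-box output for sharper key discrimination.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_KEY = 0x3C          # secret to recover (simulation only)
N = 2000                 # number of traces

# Hamming weight of every possible byte value.
hw = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

# Simulated leakage: HW(plaintext XOR key) plus Gaussian measurement noise.
plaintexts = rng.integers(0, 256, N)
traces = hw[plaintexts ^ TRUE_KEY] + rng.normal(0.0, 1.0, N)

# CPA: correlate each key hypothesis with the traces; the true key
# maximizes the (signed) Pearson correlation.
corrs = [np.corrcoef(hw[plaintexts ^ k], traces)[0, 1] for k in range(256)]
best = int(np.argmax(corrs))
print(f"recovered key byte: {best:#04x} (true: {TRUE_KEY:#04x})")
```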