
    A review of TinyML

    Full text link
    Machine learning applications are becoming ubiquitous. The combination of the Internet of Things (IoT) and edge computing now makes it possible to run machine learning algorithms on extremely low-power, inexpensive embedded devices at the edge. Traditional machine learning demands vast computational resources to produce a prediction; the TinyML concept for embedded machine learning attempts to push such workloads from the usual high-end platforms down to low-end devices. TinyML is a rapidly expanding interdisciplinary topic at the convergence of machine learning, software, and hardware, centered on deploying deep neural network models on embedded (microcontroller-driven) systems. TinyML will pave the way for novel edge-level services and applications that rely on distributed edge inference and independent decision-making rather than server-side computation. In this paper, we explore TinyML's methodology, how it can benefit specific industrial fields, its obstacles, and its future scope.
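
    A typical TinyML pipeline compresses a trained network until it fits within a microcontroller's memory budget. The sketch below shows one common route, post-training quantization with the TensorFlow Lite converter; the tiny model architecture and all sizes are illustrative placeholders, not details taken from the survey.

        import tensorflow as tf

        # A deliberately tiny stand-in model; a real application model
        # would be trained before conversion.
        inputs = tf.keras.Input(shape=(32,))
        hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
        outputs = tf.keras.layers.Dense(4, activation="softmax")(hidden)
        model = tf.keras.Model(inputs, outputs)

        # Post-training quantization shrinks weights toward 8-bit integers,
        # which is what makes microcontroller deployment feasible.
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        tflite_model = converter.convert()

        # The resulting flatbuffer is small enough to be compiled into
        # firmware as a C array and run with an on-device interpreter
        # such as TensorFlow Lite Micro.
        with open("model.tflite", "wb") as f:
            f.write(tflite_model)
        print(f"model size: {len(tflite_model)} bytes")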

    Introduction to the Selected Papers from ICCPS 2016

    Get PDF
    Since their inception more than a decade ago, terms such as “cyber-physical systems” (CPS) or “cooperating objects” have come to describe research and engineering efforts that tightly conjoin real-world physical processes and computing systems. The integration of physical processes and computing is not new; embedded computing systems have controlled physical processes for decades. The revolution is stemming from the extensive networking of embedded computing devices and the holistic cyber-physical co-design that integrates sensing, actuation, computation, networking, and physical processes. Such systems pose many broad scientific and technical challenges, ranging from distributed programming paradigms to networking protocols, as well as systems theory that combines physical models and networked embedded systems. Notably, since the physical interactions impose timing requirements, real-time computing methodologies and technologies are also pivotal in many of these systems. Moreover, many of these systems are safety-critical, and it is therefore fundamental to guarantee other nonfunctional properties (such as safety, security, and reliability), which often interact with each other and with timeliness requirements. CPS is a growing key strategic research, development, and innovation area, and it is becoming pivotal for boosting the development of the future generation of highly complex and automated computing systems, which will be pervasive in virtually all application domains. Notable examples are aeronautics, aerospace and defence systems, robotics, autonomous transportation systems, the Internet of Things, energy-aware and green computing, smart factory automation, smart grids, and advanced medical devices and applications. This special issue contains a selection of extended versions of the best papers presented at the Seventh ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS 2016), which was held with the Cyber-Physical Systems Week in Vienna, Austria, on 11–14 April 2016. This selection effectively reflects the growing pervasiveness of these systems in various application domains. These papers excel at describing the diversity of methodologies used to design and verify various non-functional properties of these complex systems.

    New data structures, models, and algorithms for real-time resource management

    Get PDF
    Real-time resource management is the core and critical task in real-time systems. This dissertation explores new data structures, models, and algorithms for real-time resource management. First, novel data structures, a class of Testing Interval Trees (TITs), are proposed to help build efficient scheduling modules in real-time systems. With a general data structure, the TIT* tree, the average cost of the schedulability tests in a wide variety of real-time systems can be reduced. With the Testing Interval Tree for Vacancy analysis (TIT-V), the complexity of the schedulability tests in a class of parallel/distributed real-time systems can be effectively reduced from O(m²n log n) to O(m log n + m log m), where m is the number of processors and n is the number of tasks. Similarly, with the Testing Interval Tree for Release time and Laxity analysis (TIT-RL), the complexity of online admission control in a uni-processor real-time system can be reduced from O(n²) to O(n log n), where n is the number of tasks. The TIT-RL tree can also be applied to a class of parallel/distributed real-time systems. The TIT trees are therefore an effective route to efficient real-time scheduling modules. Secondly, a new utility accrual model, UAM+, is established for resource management in distributed real-time systems. UAM+ is constructed on the timeliness of computation and communication; most importantly, the interplay between computation and communication is captured and characterized in the model. Under UAM+, resource managers are guided towards maximizing system-wide utility by exploiting the interplay between computation and communication. This is in sharp contrast to traditional approaches, which attempt to meet the timing constraints on computation and communication separately. To validate the effectiveness of UAM+, a resource allocation algorithm called IAUASA is developed. Simulation results reveal that IAUASA is far superior to two other resource allocation algorithms developed according to the traditional utility accrual model and conventional ideas. Furthermore, an online algorithm called IDRSA is developed under UAM+, and a Dynamic Deadline Adjustment (DDA) technique is incorporated into IDRSA to exploit the interplay between computation and communication. The simulation results show that the performance of IDRSA is very promising, especially when the interplay between computation and communication is tight. The new utility accrual model therefore provides a more effective approach to resource allocation in distributed real-time systems. Thirdly, a general task model, which adapts the concept of the calculus curve from the network calculus domain, is established for embedded real-time systems with random event/task arrivals. Under this model, a prediction technique based on a history window and calculus curves is established, providing the foundation for dynamic voltage-frequency scaling in such systems. Based on this prediction technique, novel energy-efficient algorithms are developed that dynamically adjust the operating voltage-frequency according to the predicted workload. These algorithms aim to reduce energy consumption while meeting hard deadlines, and they accommodate and adapt well to the variation between the predicted and actual arrivals of tasks, as well as between the predicted and actual execution times. Simulation results validate the effectiveness of these algorithms in saving energy.
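
    As a point of reference for the complexity claims above, the sketch below implements the naive style of online admission control that the TIT-RL tree improves upon: each arriving job is accepted only if an EDF feasibility scan over the deadline-sorted job set still meets every deadline. It assumes all jobs are released at the same instant; it is an illustration of the baseline, not the dissertation's TIT-RL structure.

        def admit(jobs, new_job):
            # jobs, new_job: (wcet, absolute_deadline) pairs, all released now.
            # EDF feasibility: after sorting by deadline, cumulative demand
            # must never exceed any deadline. Each call costs O(n log n), so
            # a stream of n admissions costs roughly O(n^2) overall -- the
            # baseline that the TIT-RL tree reduces to O(n log n).
            candidate = sorted(jobs + [new_job], key=lambda j: j[1])
            demand = 0.0
            for wcet, deadline in candidate:
                demand += wcet
                if demand > deadline:      # this job would miss its deadline
                    return None            # reject the newcomer
            return candidate               # admit: adopt the new job set

        taskset = []
        for job in [(2.0, 5.0), (1.0, 4.0), (3.0, 20.0)]:
            updated = admit(taskset, job)
            if updated is not None:
                taskset = updated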

    Analytic real-time analysis and timed automata: a hybrid methodology for the performance analysis of embedded real-time systems

    Get PDF
    This paper presents a compositional and hybrid approach for the performance analysis of distributed real-time systems. The developed methodology abstracts system components either by flow-oriented, purely analytic descriptions or by state-based models in the form of timed automata. The interaction among the heterogeneous components is modeled by streams of discrete events. In total, this yields a hybrid framework for the compositional analysis of embedded systems. It supplements contemporary techniques for the following reasons: (a) the state-space explosion intrinsic to formal verification is limited to the level of isolated components; (b) computed performance metrics such as buffer sizes, delays, and utilization rates are not overly pessimistic, because coarse-grained analytic models are used only for components that conform to the stateless model of computation. To demonstrate the usefulness of the presented ideas, a corresponding tool-chain has been implemented. It is used to investigate the performance of a two-staged computing system in which one stage exhibits state-dependent behavior that is only coarsely coverable by a purely analytic, stateless component abstraction. Finally, experiments are performed to ascertain the scalability and the accuracy of the proposed approach.
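
    The analytic side of such a hybrid framework typically rests on real-time calculus bounds. The sketch below computes the classical delay and backlog bounds for a token-bucket arrival curve α(t) = b + r·t served by a rate-latency service curve β(t) = R·(t − T)⁺; these are standard textbook formulas, not the paper's specific tool-chain.

        def delay_bound(burst, rate, service_rate, latency):
            # Worst-case delay is the maximum horizontal deviation between
            # arrival and service curves: T + b / R, valid when r <= R.
            assert rate <= service_rate, "system would be overloaded"
            return latency + burst / service_rate

        def backlog_bound(burst, rate, service_rate, latency):
            # Worst-case buffer fill is the maximum vertical deviation:
            # b + r * T, again valid when r <= R.
            assert rate <= service_rate, "system would be overloaded"
            return burst + rate * latency

        print(delay_bound(burst=4.0, rate=1.0, service_rate=2.0, latency=0.5))    # 2.5
        print(backlog_bound(burst=4.0, rate=1.0, service_rate=2.0, latency=0.5))  # 4.5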

    PRISE: An Integrated Platform for Research and Teaching of Critical Embedded Systems

    Get PDF
    In this paper, we present PRISE, an integrated workbench for research and teaching of critical embedded systems at ISAE, the French Institute for Space and Aeronautics Engineering. PRISE is built around state-of-the-art technologies for the engineering of systems in the space and avionics domains. It aims at demonstrating key aspects of the critical, real-time, embedded systems used in the transport industry, and at validating new scientific contributions for the engineering of software functions. PRISE combines embedded and simulation platforms with modeling tools, and is available for both research and teaching. Built around widely used commercial and open-source software, PRISE aims at being a reference platform for our teaching and research activities at ISAE.

    An Adaptive Design Methodology for Reduction of Product Development Risk

    Full text link
    An embedded system's interaction with its environment inherently complicates the understanding of requirements and their correct implementation, and product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially an embedded one. This paper introduces a novel adaptive design methodology, which incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is enhanced while the level of product uncertainty decreases, thereby reducing the overall costs. The backbone of this framework is the development of a Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, developed on the basis of the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of this method is sketched with the help of a case study, a 'Multimode Detection Subsystem'. Design methodologies can be compared by defining and computing a generic performance criterion such as Average design-cycle Risk; a hypothetical sketch of such a criterion is given below. For the case study, computing the Average design-cycle Risk shows that the adaptive method reduces the product development risk for a small increase in the total design-cycle time. Comment: 21 pages, 9 figures.
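
    The paper's definition of Average design-cycle Risk is not reproduced in this abstract, so the following sketch assumes a common reading: residual uncertainty times failure cost, averaged over the prototyping steps of one design cycle. The formula and all numbers are hypothetical illustrations, not the authors' criterion.

        def average_design_cycle_risk(stages):
            # stages: list of (residual_uncertainty in [0, 1], cost_of_failure)
            # pairs, one per prototyping/verification step of the cycle.
            # Assumed reading: risk per stage = uncertainty * cost, averaged.
            risks = [u * c for u, c in stages]
            return sum(risks) / len(risks)

        # In an adaptive flow each verification step lowers the residual
        # uncertainty, so later (more expensive) stages contribute less risk.
        adaptive = [(0.6, 10.0), (0.3, 20.0), (0.1, 40.0)]
        print(average_design_cycle_risk(adaptive))   # (6 + 6 + 4) / 3 = 5.33...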

    Efficient Embedded Speech Recognition for Very Large Vocabulary Mandarin Car-Navigation Systems

    Get PDF
    Automatic speech recognition (ASR) for a very large vocabulary of isolated words is a difficult task on a resource-limited embedded device. This paper presents a novel fast decoding algorithm for a Mandarin speech recognition system that can simultaneously process hundreds of thousands of items while maintaining high recognition accuracy. The proposed algorithm constructs a semi-tree search network based on Mandarin pronunciation rules to avoid duplicate syllable matching and save redundant memory. On top of a two-stage, fixed-width beam-search baseline system, the algorithm employs a variable beam-width pruning strategy and a frame-synchronous word-level pruning strategy to significantly reduce recognition time. The algorithm is aimed at an in-car navigation system in China and was simulated on a standard PC workstation. The experimental results show that the proposed method reduces recognition time nearly 6-fold and memory size nearly 2-fold compared to the baseline system, while causing less than 1% accuracy degradation on a 200,000-word recognition task.
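
    The sketch below illustrates the general shape of frame-synchronous beam pruning with a variable beam width: a score-based beam is applied first, and a rank cutoff then narrows the surviving set whenever it grows too large. It is a generic illustration, not the paper's exact pruning strategy; all names and thresholds are assumptions.

        def beam_prune(hyps, beam, max_active=500):
            # hyps: dict mapping search-network state -> log score at the
            # current frame. States more than `beam` below the best score
            # are dropped; a rank cutoff then caps the active set, which
            # in effect tightens the beam on crowded frames.
            if not hyps:
                return hyps
            best = max(hyps.values())
            survivors = {s: v for s, v in hyps.items() if v >= best - beam}
            if len(survivors) > max_active:
                ranked = sorted(survivors.items(), key=lambda kv: kv[1], reverse=True)
                survivors = dict(ranked[:max_active])
            return survivors

        frame_scores = {"ba_1": -10.2, "bo_1": -11.0, "pa_1": -25.7}
        print(beam_prune(frame_scores, beam=10.0))   # drops the distant "pa_1"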

    An Algebraic Framework for the Real-Time Solution of Inverse Problems on Embedded Systems

    Full text link
    This article presents a new approach to the real-time solution of inverse problems on embedded systems. The class of problems addressed corresponds to ordinary differential equations (ODEs) with generalized linear constraints, whereby the data from an array of sensors forms the forcing function. The solution of the equation is formulated as a least squares (LS) problem with linear constraints. The LS approach makes the method suitable for the explicit solution of inverse problems where the forcing function is perturbed by noise. The algebraic computation is partitioned into an initial preparatory step, which precomputes the matrices required for the run-time computation, and a cyclic run-time computation, which is repeated with each acquisition of sensor data. The cyclic computation consists of a single matrix-vector multiplication; in this manner the computational complexity is known a priori, fulfilling the definition of a real-time computation. Numerical testing of the new method is presented on perturbed as well as unperturbed problems; the results are compared with known analytic solutions and with solutions acquired from state-of-the-art implicit solvers. The solution is implemented with model-based design and uses only fundamental linear algebra; consequently, this approach supports automatic code generation for deployment on embedded systems. The targeting concept was tested via software- and processor-in-the-loop verification on two systems with different processor architectures. Finally, the method was tested on a laboratory prototype with real measurement data for the monitoring of flexible structures. The problem solved is the real-time, overconstrained reconstruction of a curve from measured gradients; such systems are commonly encountered in the monitoring of structures and/or ground subsidence. Comment: 24 pages, journal article.
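
    The split between a preparatory step and a cyclic run-time step can be illustrated with plain linear algebra. The sketch below reconstructs a curve from measured gradients by stacking a first-difference forward model with a linear constraint, precomputing the pseudo-inverse offline, and reducing each run-time update to a single matrix-vector product. The operators, sizes, and constraint are assumptions for illustration, not the article's exact formulation.

        import numpy as np

        n, dx = 50, 0.1   # sample count and sensor spacing (assumed)

        # Forward model: first-difference operator mapping curve samples to gradients.
        D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / dx

        # Generalized linear constraint: pin the first sample of the curve to zero.
        C = np.zeros((1, n)); C[0, 0] = 1.0

        # Preparatory step (offline): precompute the pseudo-inverse once.
        M = np.linalg.pinv(np.vstack([D, C]))

        # Cyclic run-time step: one matrix-vector product per sensor acquisition,
        # so the cost of each cycle is fixed and known a priori.
        def reconstruct(gradients, constraint_rhs=np.zeros(1)):
            return M @ np.concatenate([gradients, constraint_rhs])

        # Synthetic test: noisy gradients of a sine curve.
        x = np.linspace(0.0, (n - 1) * dx, n)
        g_meas = np.diff(np.sin(x)) / dx + 0.01 * np.random.randn(n - 1)
        z_est = reconstruct(g_meas)   # least-squares curve estimate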