
    Machine Learning and Neural Networks for Real-Time Scheduling

    This paper is a survey of the processes, problems, and methodologies surrounding the use of neural networks, specifically Hopfield-type networks, to solve hard real-time scheduling problems. Our primary goal is to demystify neural network research and to describe how real-time scheduling problems may be approached with neural networks, providing an introduction to this niche topic within a niche field. The survey is derived from four main papers: “A Neurodynamic Approach for Real-Time Scheduling via Maximizing Piecewise Linear Utility”, “Scheduling Multiprocessor Job with Resource and Timing Constraints Using Neural Networks”, “Solving Real Time Scheduling Problems with Hopfield-type Neural Networks”, and “Neural Networks for Multiprocessor Real-Time Scheduling”.

    Analysis Literatures of Machine Learning and Neural Networks for Real Time Scheduling

    Real-time scheduling problems are present in every aspect of software development, and an optimized real-time scheduling scheme determines the performance of an operating system. Researchers have developed many approaches to scheduling problems in the computer systems that keep our modern society running smoothly, and neural-network real-time scheduling is one approach that can solve many of them. As computing technology advances, more real-time scheduling problems arise that need new solutions to keep up with the demand for faster computer systems. In this literature review, we analyze four research papers that propose solutions to particular scheduling problems. The first is “A Neurodynamic Approach for Real-Time Scheduling via Maximizing Piecewise Linear Utility” by Zhishan Guo and Sanjoy K. Baruah (2016). The second is “Scheduling Multiprocessor Job with Resource and Timing Constraints Using Neural Networks” by Y. Huang and R. Chen (1999). The third is “Solving Real Time Scheduling Problems with Hopfield-type Neural Networks” by M. Silva, C. Cardeira, and Z. Mammeri (1997). The last is “Neural Networks for Multiprocessor Real-Time Scheduling” by C. Cardeira and Z. Mammeri (1994).
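
    A common thread in these four papers is mapping a scheduling problem onto the energy function of a Hopfield-type network, so that low-energy states correspond to feasible schedules. The Python sketch below is a minimal illustration of that encoding, not the formulation of any of the surveyed papers: it assumes one processor, unit-length tasks, and hypothetical deadlines and penalty weights, and it uses greedy energy descent in place of the analytically derived connection weights the papers employ.

        import numpy as np

        # Hopfield-style encoding: neuron v[i, t] = 1 means "task i runs in slot t".
        # Penalty weights A, B, C are illustrative, not taken from the papers.
        rng = np.random.default_rng(0)
        N, T = 4, 6                                  # 4 unit-length tasks, 6 time slots
        deadline = [3, 5, 6, 4]                      # hypothetical deadlines (slot index, exclusive)
        A, B, C = 2.0, 2.0, 4.0
        late = np.array([[1.0 if t >= deadline[i] else 0.0 for t in range(T)]
                         for i in range(N)])         # mask of slots past each deadline

        def energy(v):
            row = ((v.sum(axis=1) - 1.0) ** 2).sum() # each task runs exactly once
            s = v.sum(axis=0)
            col = (s * (s - 1.0)).sum()              # at most one task per slot
            return A * row + B * col + C * (v * late).sum()

        v = rng.integers(0, 2, size=(N, T)).astype(float)  # random initial state
        for _ in range(2000):                        # asynchronous updates: flip one
            i, t = rng.integers(N), rng.integers(T)  # neuron, keep the flip only if
            before = energy(v)                       # it does not raise the energy
            v[i, t] = 1.0 - v[i, t]
            if energy(v) > before:
                v[i, t] = 1.0 - v[i, t]
        print(energy(v), "\n", v.astype(int))        # rows: tasks, columns: slots

    An energy of zero means every constraint is satisfied; the papers instead derive neuron connection weights directly from such an energy function and let the network dynamics perform the descent.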

    On the design of multimedia architectures : proceedings of a one-day workshop, Eindhoven, December 18, 2003


    On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures

    As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems becomes increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the “edge”. The realization of machine learning, and deep learning in particular, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which alleviate some of these constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework designed to provide a dynamic on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. Such mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and verified that the most time-critical applications, such as the control tasks, maintained low-latency, deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
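
    The schedulability analysis mentioned above can be illustrated with the classic response-time test for fixed-priority preemptive scheduling; the abstract does not specify the exact analysis used, so the following Python sketch, with hypothetical task parameters, shows one standard form such a test can take.

        import math

        # Response-time analysis for fixed-priority preemptive scheduling:
        # R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j,
        # iterated to a fixed point. Each task is (C = worst-case execution
        # time, T = period, implicit deadline D = T), highest priority first.
        tasks = [(1, 4), (2, 6), (3, 12)]            # hypothetical task set

        def response_time(i):
            C_i, T_i = tasks[i]
            R = C_i
            while True:
                R_next = C_i + sum(math.ceil(R / T_j) * C_j
                                   for C_j, T_j in tasks[:i])
                if R_next == R:
                    return R                         # fixed point: worst case found
                if R_next > T_i:
                    return None                      # deadline missed: unschedulable
                R = R_next

        for i, (C, T) in enumerate(tasks):
            print(f"task {i}: C={C}, T={T} -> worst-case response time {response_time(i)}")

    For this set the computed response times (1, 3, and 10) all stay within the periods, so every deadline is met; a framework like the one described can run such a test over each candidate task set before admitting it.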

    Analyses and optimizations of timing-constrained embedded systems considering resource synchronization and machine learning approaches

    Nowadays, embedded systems have become ubiquitous, powering a vast array of applications from consumer electronics to industrial automation. Concurrently, statistical and machine learning algorithms are being increasingly adopted across application domains such as medical diagnosis, autonomous driving, and environmental analysis, offering sophisticated data analysis and decision-making capabilities. As the demand for intelligent and time-sensitive applications continues to surge, accompanied by growing concerns regarding data privacy, the deployment of machine learning models on embedded devices has emerged as an indispensable requirement. However, this integration introduces both significant opportunities for performance enhancement and complex challenges in deployment optimization. On the one hand, deploying machine learning models on embedded systems with limited computational capacity, power budgets, and stringent timing requirements necessitates additional adjustments to ensure optimal performance and meet the imposed timing constraints. On the other hand, the inherent capabilities of machine learning, such as self-adaptation during runtime, prove invaluable in addressing challenges encountered in embedded systems, aiding optimization and decision-making processes. This dissertation makes two primary contributions to the analysis and optimization of timing-constrained embedded systems. First, it addresses the relatively long access times that machine learning tasks require for shared resources. Second, it considers the limited communication resources and data privacy concerns that arise when deploying machine learning models on distributed embedded systems. Additionally, this work provides a use case that employs a machine learning method to tackle challenges specific to embedded systems. By addressing these key aspects, this dissertation contributes to the analysis and optimization of timing-constrained embedded systems, considering resource synchronization and machine learning models to enable improved performance and efficiency in real-time applications with stringent constraints.
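
    One concrete way the shared-resource concern enters timing analysis is as a blocking term: under resource-synchronization protocols such as the priority ceiling protocol, each task's worst-case response time gains a term B_i bounding how long lower-priority critical sections can block it. The Python sketch below extends the response-time test shown earlier with such a term; all task and blocking values are hypothetical.

        import math

        # Response-time analysis with a blocking term B_i, as used alongside
        # resource-synchronization protocols (e.g., the priority ceiling
        # protocol): R_i = C_i + B_i + sum_{j < i} ceil(R_i / T_j) * C_j.
        tasks = [(1, 5, 2), (2, 8, 2), (4, 20, 0)]   # (C, T, B), highest priority first

        def response_time(i):
            C_i, T_i, B_i = tasks[i]
            R = C_i + B_i
            while True:
                R_next = C_i + B_i + sum(math.ceil(R / T_j) * C_j
                                         for C_j, T_j, _ in tasks[:i])
                if R_next == R:
                    return R                         # fixed point: worst case found
                if R_next > T_i:
                    return None                      # deadline missed: unschedulable
                R = R_next

        for i in range(len(tasks)):
            print(f"task {i}: worst-case response time {response_time(i)}")

    A task that holds a shared resource for a long critical section inflates B_i for every higher-priority task that uses the same resource, which is the kind of effect the analysis above makes visible.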

    Machine Learning and Neural Networks for Real-Time Scheduling

    Using neural networks to find optimal solutions to real-time scheduling problems is a common technique, and many different models have been put forth to accomplish this goal. This paper is an academic literature review of six such designs that use neural networks for real-time scheduling. The models are compared on feasibility and time complexity, and common themes and trends in the topic are identified.

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that would guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior, which can be exploited by novel optimization techniques spanning all layers of the computing stack. In this survey paper, we present a discussion of the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
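
    The DVFS example above can be made concrete with one of the simplest predictors in this family: an exponentially weighted moving average (EWMA) of recent CPU load that forecasts the next interval and selects a frequency mode before the interval begins. The Python sketch below is illustrative only; the load trace, smoothing factor, and mode thresholds are all hypothetical.

        # Proactive DVFS driven by an EWMA load forecast: pick the frequency
        # mode for each interval from the prediction, then update the
        # prediction from what was actually observed.
        ALPHA = 0.5                                  # EWMA smoothing factor
        MODES = [(0.3, "low"), (0.7, "medium"), (1.0, "high")]  # (max load, mode)

        def pick_mode(predicted_load):
            for threshold, mode in MODES:
                if predicted_load <= threshold:
                    return mode
            return MODES[-1][1]                      # saturate at the highest mode

        trace = [0.20, 0.25, 0.60, 0.80, 0.75, 0.30, 0.10]  # observed loads
        prediction = 0.5                             # initial guess
        for observed in trace:
            mode = pick_mode(prediction)             # act proactively on the forecast
            print(f"predicted={prediction:.2f} -> mode={mode}, observed={observed:.2f}")
            prediction = ALPHA * observed + (1 - ALPHA) * prediction

    In the more sophisticated schemes the survey covers, a learned classifier would take the place of the EWMA while the proactive mode-selection step stays the same.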