
    Effective And Efficient Preemption Placement For Cache Overhead Minimization In Hard Real-Time Systems

    Schedulability analysis for real-time systems has been the subject of prominent research over the past several decades. One of the key foundations of schedulability analysis is an accurate worst-case execution time (WCET) for each task. In preemption-based real-time systems, the cache-related preemption delay (CRPD) can represent a significant component (up to 44%, as documented in the research literature) of the variability in overall task WCET. Several methods have been employed to calculate CRPD, but with significant levels of pessimism that may result in a task set being erroneously declared non-schedulable. Furthermore, they do not take into account that the CRPD cost is inherently a function of where preemptions actually occur. Our approach for computing CRPD via loaded cache blocks (LCBs) is more accurate in the sense that the modeled cache state reflects which cache blocks are reloaded and the specific program locations where the reloads occur. Limited preemption models attempt to minimize preemption overhead (CRPD) by reducing the number of allowed preemptions and/or allowing preemption only at program locations where the CRPD effect is minimized. These algorithms rely heavily on accurate CRPD measurements or estimation models in order to identify an optimal set of preemption points. Our approach improves the effectiveness of optimal preemption point placement algorithms for limited preemption scheduling by calculating the LCBs for each pair of adjacent preemption points, thereby modeling task WCET more accurately and maximizing schedulability compared to existing preemption point placement approaches. We utilize a dynamic programming technique to develop an optimal preemption point placement algorithm. Lastly, we demonstrate, using a case study, improved task set schedulability and optimal preemption point placement via our new LCB characterization. We propose a new CRPD metric, called loaded cache blocks (LCB), which accurately characterizes the CRPD a real-time task may be subjected to due to the preemptive execution of higher-priority tasks. We show how to integrate our new LCB metric into our newly developed algorithms that automatically place preemption points, supporting linear control flow graphs (CFGs) for limited preemption scheduling applications. We extend the derivation of loaded cache blocks (LCBs), originally proposed for linear CFGs, to conditional CFGs, and show how to integrate the revised LCB metric into newly developed algorithms that automatically place preemption points in conditional CFGs for limited preemption scheduling applications. For future work, we will verify the correctness of our framework against other measurable physical and hardware constraints. We also plan to complete our work on developing a generalized framework that can be seamlessly integrated into real-time schedulability analysis.
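
    The placement step can be pictured as a shortest-path style dynamic program over candidate preemption points. Below is a minimal sketch under assumed inputs, not the dissertation's implementation: per-basic-block WCETs `c`, a pairwise CRPD table `crpd[j][i]` standing in for the LCB cost of the region between adjacent preemption points j and i, and the maximum non-preemptive region length `Q` obtained from schedulability analysis.

```python
# Minimal sketch of optimal preemption point placement on a linear CFG.
# Assumed inputs (illustrative, not the dissertation's data structures):
#   c       - list of basic-block WCETs
#   crpd    - table where crpd[j][i] is the CRPD (LCB cost) charged when the
#             preemption points adjacent to the region are after blocks j and i
#   Q       - maximum allowed non-preemptive region length
import math

def place_preemption_points(c, crpd, Q):
    """Return (minimal WCET including CRPD, chosen preemption points).

    A point k means "a preemption point after basic block k"; index 0 is the
    task start and index n the task end. Returns math.inf and an empty list
    if no feasible placement exists.
    """
    n = len(c)
    prefix = [0] * (n + 1)                      # prefix[i] = c[0] + ... + c[i-1]
    for i in range(n):
        prefix[i + 1] = prefix[i] + c[i]

    best = [math.inf] * (n + 1)                 # best[i]: min cost of blocks 1..i
    pred = [None] * (n + 1)                     # with a point placed after block i
    best[0] = 0
    for i in range(1, n + 1):
        for j in range(i):                      # previous preemption point
            region = prefix[i] - prefix[j] + crpd[j][i]
            if region <= Q and best[j] + region < best[i]:
                best[i] = best[j] + region
                pred[i] = j

    points, k = [], pred[n]
    while k:                                    # walk predecessors back to the start
        points.append(k)
        k = pred[k]
    return best[n], sorted(points)
```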

    Explicit Preemption Placement For Real-Time Conditional Code Via Graph Grammars And Dynamic Programming

    Traditional worst-case execution time (WCET) analysis must make very pessimistic assumptions regarding the cost of preemptions for a real-time job. For every potential preemption point, the analysis must add to the WCET of a job the cache-related preemption delay (CRPD) incurred due to the contention for memory resources with other jobs in the system. However, recent work has shown that CRPD can vary at each preemption point (due to the cache lines that must be reloaded for the code executed after the preemption). Using this observation and information obtained from schedulability analysis on the maximum length of the non-preemptive region of a job, we seek to find the optimal set of explicit preemption points (EPPs) that minimize the WCET and ensure system schedulability. Utilizing graph grammars and dynamic programming, we develop a pseudo-polynomial-time algorithm that is capable of analyzing jobs that can be represented by control flow graphs with arbitrarily nested conditional structures. This algorithm extends previous work that could only handle sequential flow graphs. Exhaustive experiments are included to show that the proposed approach is able to significantly improve the bounds on the worst-case execution times of limited-preemptive tasks.
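
    As a hedged illustration of the structural view behind the graph-grammar approach, the sketch below models a conditional CFG as nested regions (a hypothetical `Block`/`Seq`/`Cond` representation, not the paper's) and shows the composition rule such an analysis builds on: costs add along sequences and the maximum is taken across conditional branches. The placement algorithm itself layers a dynamic-programming table over this same recursion.

```python
# Minimal structural sketch of a conditional CFG and its WCET composition rule.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Block:
    wcet: int                       # WCET of a single basic block

@dataclass
class Seq:
    parts: List["Region"]           # regions executed one after another

@dataclass
class Cond:
    branches: List["Region"]        # mutually exclusive alternatives

Region = Union[Block, Seq, Cond]

def wcet(r: Region) -> int:
    """Structural WCET bound: sum over sequences, max over conditionals."""
    if isinstance(r, Block):
        return r.wcet
    if isinstance(r, Seq):
        return sum(wcet(p) for p in r.parts)
    return max(wcet(b) for b in r.branches)

# Example: a straight-line prologue, an if-then-else, and an epilogue.
body = Seq([Block(5), Cond([Seq([Block(3), Block(4)]), Block(6)]), Block(2)])
assert wcet(body) == 5 + 7 + 2
```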

    Response-time analysis for fixed-priority systems with a write-back cache

    This paper introduces analyses of write-back caches integrated into response-time analysis for fixed-priority preemptive and non-preemptive scheduling. For each scheduling paradigm, we derive four different approaches to computing the additional costs incurred due to write-backs. We show the dominance relationships between these different approaches and note how they can be combined to form a single state-of-the-art approach in each case. The evaluation explores the relative performance of the different methods using a set of benchmarks, as well as making comparisons with no cache and a write-through cache. We also explore the effect of write buffers used to hide the latency of write-through caches. We show that depending upon the depth of the buffer used and the policies employed, such buffers can result in domino effects. Our evaluation shows that even ignoring domino effects, a substantial write buffer is needed to match the guaranteed performance of write-back caches.
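
    For context, the sketch below shows the shape of the fixed-priority preemptive response-time recurrence into which such costs are folded; the per-job write-back penalty `gamma` is an assumed placeholder for whichever of the paper's four bounding approaches is used, and implicit deadlines are assumed.

```python
# Minimal sketch of fixed-priority preemptive response-time analysis with an
# extra per-job cost term standing in for write-back overheads.
import math

def response_time(i, C, T, gamma):
    """Smallest fixed point of R = C[i] + sum_j ceil(R/T[j]) * (C[j] + gamma[j])
    over higher-priority tasks j < i (tasks indexed by priority, 0 = highest)."""
    R = C[i]
    while True:
        interference = sum(math.ceil(R / T[j]) * (C[j] + gamma[j]) for j in range(i))
        R_next = C[i] + interference
        if R_next == R:
            return R                      # converged: worst-case response time
        if R_next > T[i]:
            return None                   # deadline (= period) exceeded; unschedulable
        R = R_next
```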

    Real-Time Wireless Sensor-Actuator Networks for Cyber-Physical Systems

    A cyber-physical system (CPS) employs tight integration of, and coordination between, computational, networking, and physical elements. Wireless sensor-actuator networks provide a new communication technology for a broad range of CPS applications such as process control, smart manufacturing, and data center management. Sensing and control in these systems need to meet stringent real-time performance requirements on communication latency in challenging environments. There have been limited results on real-time scheduling theory for wireless sensor-actuator networks. Real-time transmission scheduling and analysis for wireless sensor-actuator networks require new methodologies to deal with the unique characteristics of wireless communication. Furthermore, the performance of a wireless control system involves intricate interactions between real-time communication and control. This thesis research tackles these challenges and makes a series of contributions to the theory and systems for wireless CPS. (1) We establish a new real-time scheduling theory for wireless sensor-actuator networks. (2) We develop a scheduling-control co-design approach for the holistic optimization of control performance in a wireless control system. (3) We design and implement a wireless sensor-actuator network for CPS in data center power management. (4) We expand our research to develop scheduling algorithms and analyses for real-time parallel computing to support computation-intensive CPS.

    Bounding Worst-Case Response Times of Tasks under PIP

    Schedulability theory in real-time systems requires prior knowledge of the worst-case execution time (WCET) of every task in the system. One method to determine the WCET is known as static timing analysis. Determining the priorities among tasks in such a system requires a scheduling policy, which can be either preemptive or non-preemptive. While static timing analysis and data cache analysis are simplified by using a fully non-preemptive scheduling policy, such a policy results in decreased schedulability. In prior work, a methodology was proposed to bound the data-cache-related delay for real-time tasks that, besides having a non-preemptive region (critical section), can otherwise be scheduled preemptively. While the prior approach improves schedulability in comparison to fully non-preemptive methods, it is still conservative due to its fundamental assumption that a task executing in a critical section may not be preempted by any other task. In this paper, we propose a methodology that incorporates resource sharing policies, such as the Priority Inheritance Protocol (PIP), into the calculation of the data-cache-related delay. In this approach, access to shared resources, which is the primary reason for critical sections within tasks, is controlled by the resource sharing policy. In addition to maintaining correctness of access, such policies strive to limit resource access conflicts, thereby improving the responsiveness of tasks. To the best of our knowledge, this is the first framework that integrates data-cache-related delay calculations with resource sharing policies in the context of real-time systems.
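
    As a hedged illustration of where the resource sharing policy enters such an analysis, the sketch below computes the classic PIP blocking bound (each lower-priority task and each resource can block at most once, so the smaller of the two per-source sums is taken); the paper's data-cache-related delay would then be charged on top of the blocking and preemption structure this bound implies. The inputs `uses` and `cs` are assumed, illustrative names.

```python
# Minimal sketch of the classic PIP blocking bound.
# Assumed inputs: uses[k] is the set of resources task k locks, and
# cs[(k, r)] is the longest critical section of task k on resource r.

def pip_blocking(i, n_tasks, uses, cs):
    """Upper bound on blocking of task i under PIP (0 = highest priority)."""
    # Resources whose critical sections can block task i: those used by
    # task i itself or by any higher-priority task.
    hp_or_i = set().union(*(uses[k] for k in range(i + 1)))

    per_task = 0
    for low in range(i + 1, n_tasks):
        lens = [cs[(low, r)] for r in uses[low] & hp_or_i]
        if lens:
            per_task += max(lens)       # at most one blocking section per task

    per_resource = 0
    for r in hp_or_i:
        lens = [cs[(low, r)] for low in range(i + 1, n_tasks) if r in uses[low]]
        if lens:
            per_resource += max(lens)   # at most one blocking section per resource

    return min(per_task, per_resource)
```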

    Improving Routing Efficiency, Fairness, Differentiated Services And Throughput In Optical Networks

    Wavelength division multiplexed (WDM) optical networks are rapidly becoming the technology of choice in next-generation Internet architectures. This dissertation addresses the important issues of improving four aspects of optical networks, namely routing efficiency, fairness, differentiated quality of service (QoS), and throughput. A new approach for implementing efficient routing and wavelength assignment in WDM networks is proposed and evaluated. In this approach, the state of a multiple-fiber link is represented by a compact bitmap computed as the logical union of the bitmaps of the free wavelengths in the fibers of this link. A modified Dijkstra's shortest path algorithm and a wavelength assignment algorithm are developed using fast logical operations on the bitmap representation. In optical burst switched (OBS) networks, the burst dropping probability increases as the number of hops in the lightpath of the burst increases. Two schemes are proposed and evaluated to alleviate this unfairness. The two schemes have simple logic and alleviate the beat-down unfairness problem without negatively impacting the overall throughput of the system. Two similar schemes to provide differentiated services in OBS networks are introduced. A new scheme to improve the fairness of OBS networks based on burst preemption is presented. The scheme uses carefully designed constraints to avoid excessive wasted channel reservations, reduce cascaded useless preemptions, and maintain healthy throughput levels. A new scheme to improve the throughput of OBS networks based on burst preemption is also presented. An analytical model is developed to compute the throughput of the network for the special case in which the network has a ring topology and the preemption weight is based solely on burst size. The analytical model is quite accurate and gives results close to those obtained by simulation. Finally, a preemption-based scheme for the concurrent improvement of throughput and burst fairness in OBS networks is proposed and evaluated. The scheme uses a preemption weight consisting of two terms: the first term is a function of the size of the burst, and the second term is the product of the hop count and the length of the lightpath of the burst.
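
    A minimal sketch of the bitmap representation described above, with assumed helper names: a multi-fiber link's state is the bitwise OR of its fibers' free-wavelength bitmaps, a candidate path's usable wavelengths are the bitwise AND of its link bitmaps, and a first-fit assignment picks the lowest set bit.

```python
# Minimal sketch of bitmap-based wavelength bookkeeping (illustrative names).

def link_bitmap(fiber_bitmaps):
    """State of a multi-fiber link: a wavelength is free if free on any fiber."""
    state = 0
    for b in fiber_bitmaps:
        state |= b
    return state

def path_bitmap(link_bitmaps):
    """Wavelengths usable end to end: free on every link of the path."""
    state = ~0
    for b in link_bitmaps:
        state &= b
    return state

def first_fit(path_bits, num_wavelengths):
    """First-fit wavelength assignment: index of the lowest set bit, or None."""
    for w in range(num_wavelengths):
        if path_bits & (1 << w):
            return w
    return None

# Example: two links, each with two fibers, 8 wavelengths per fiber.
l1 = link_bitmap([0b00001111, 0b00110000])   # wavelengths 0-3 and 4-5 free
l2 = link_bitmap([0b00000110, 0b01000000])   # wavelengths 1-2 and 6 free
assert first_fit(path_bitmap([l1, l2]), 8) == 1
```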

    Bundle: Taming The Cache And Improving Schedulability Of Multi-Threaded Hard Real-Time Systems

    For hard real-time systems, schedulability of a task set is paramount. If a task set is not deemed schedulable under all conditions, the system may fail during operation and cannot be deployed in a high-risk environment. Schedulability testing has typically been separated from worst-case execution time (WCET) analysis. Each task's WCET value is calculated independently and provided as input to a schedulability test. However, a task's WCET value is influenced by scheduling decisions and the impact of cache memory. Thus, schedulability tests have been augmented to include cache-related preemption delay (CRPD). From this classical perspective, the effect of cache memory on WCET and schedulability is always negative, increasing execution times and demand. In this work we propose a new, positive perspective, in which cache memory benefits multi-threaded tasks when threads are scheduled in a manner that shares cached values predictably. This positive perspective is reached by integrating, rather than separating, the disciplines of schedulability analysis and worst-case execution time analysis. These integrated techniques are referred to as the BUNDLE family of worst-case execution time and cache overhead (WCETO) analyses and scheduling algorithms. WCETO calculation divides the task's structure into conflict-free regions and calculates a bound utilizing explicit understanding of the thread-level scheduling algorithm. Conflict-free regions are utilized by the scheduling algorithm, which associates with each region a thread container called a bundle. At any time only one bundle may be active, and only threads of the active bundle may execute on the processor. The BUNDLE family of scheduling algorithms developed in this work increases in scope from BUNDLE through ITCB-DAG. As the fundamental contribution, BUNDLE and BUNDLEP apply to a single multi-threaded task running on a uniprocessor architecture with a single-level direct-mapped instruction cache. NPM-BUNDLE expands the positive perspective to multiple tasks on a uniprocessor system, and ITCB-DAG brings BUNDLE's analysis and scheduling techniques to multi-processor systems. Each of the scheduling algorithms requires a novel hardware mechanism to anticipate execution and make scheduling decisions. To support anticipation of execution, a novel XFLICT interrupt is proposed. It is a simple mechanism that emulates the behavior of hardware breakpoints. An implementation of the BUNDLEP analytical techniques, scheduling algorithm, and XFLICT interrupt is available as a simulated platform for further research and extension. Future work is planned to expand BUNDLE's positive perspective and increase adoption. The most significant barrier to adoption is the ability to deploy BUNDLE's scheduling algorithm, which mandates a viable and available hardware or software mechanism to anticipate execution. NPM-BUNDLE is limited to non-preemptive multi-task scheduling and analysis; support for preemptive scheduling will increase the positive impact of BUNDLE's integrated perspective.
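
    As a hedged, much simplified illustration of the scheduling invariant described above (not the BUNDLE implementation), the sketch below keeps one thread container per conflict-free region and only dispatches threads from a single active bundle, switching bundles only when the active one drains; the policy for choosing the next bundle is an assumption made for illustration.

```python
# Minimal sketch of bundle-based thread dispatching (assumed simplification).
from collections import deque

class Bundle:
    """Thread container associated with one conflict-free region."""
    def __init__(self, region_id):
        self.region_id = region_id
        self.threads = deque()           # threads currently parked at this region

class BundleScheduler:
    def __init__(self, bundles):
        self.bundles = bundles           # region_id -> Bundle
        self.active = None               # at most one bundle is active at a time

    def enqueue(self, thread, region_id):
        """A thread arriving at a region joins that region's bundle."""
        self.bundles[region_id].threads.append(thread)

    def pick_next(self):
        """Dispatch only threads of the active bundle; switch bundles only when
        the active bundle has drained (the point where cache reuse is lost).
        Choosing the fullest bundle is an illustrative policy, not BUNDLE's."""
        if self.active is None or not self.active.threads:
            candidates = [b for b in self.bundles.values() if b.threads]
            self.active = max(candidates, key=lambda b: len(b.threads)) if candidates else None
        return self.active.threads.popleft() if self.active else None
```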

    Performance evaluation of HEVC RCL applications mapped onto NoC-based embedded platforms

    Today, many applications running on embedded systems have to fulfill soft or hard timing constraints. Video applications, such as the modern High Efficiency Video Coding (HEVC), most often have soft real-time constraints. However, in specific scenarios, such as robotic surgery or the coupling of satellites, harder timing constraints arise and become a significant challenge. Although implementing such applications on Networks-on-Chip (NoCs) is an alternative for managing their algorithmic complexity and meeting real-time constraints, a performance evaluation of the mapped NoC and a schedulability analysis for the given application are mandatory. In this work we perform a performance evaluation of the HEVC Residual Coding Loop (RCL) mapped onto a NoC-based embedded platform, considering the encoding of a single 1920x1080-pixel frame. A set of analyses exploring combinations of different NoC sizes and task mapping strategies was performed, showing, for the typical and upper-bound workload scenarios, when the application is schedulable and meets its real-time constraints.
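
    As a hedged illustration of the kind of communication bound such an evaluation rests on (assumed parameters, not the evaluated platform), the sketch below estimates the zero-load traversal latency of a message between two tiles of a 2D mesh NoC under dimension-ordered (XY) routing.

```python
# Minimal sketch of a zero-load NoC traversal latency estimate (assumed timing).

def xy_hops(src, dst):
    """Manhattan distance between two (x, y) tiles under XY routing."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def traversal_latency(src, dst, flits, router_delay=3, link_delay=1):
    """Latency in cycles without contention: per-hop router and link delay for
    the head flit, plus pipelined serialization of the remaining flits."""
    hops = xy_hops(src, dst)
    return hops * (router_delay + link_delay) + (flits - 1)

# Example: a 32-flit packet sent 3 hops across the mesh.
assert traversal_latency((0, 0), (2, 1), 32) == 3 * 4 + 31
```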

    A graph based process model measurement framework using scheduling theory

    Software development processes, as a means of ensuring software quality and productivity, have been widely accepted within the software development community; software process modeling, on the other hand, continues to be a subject of interest in the research community. Even in organizations that have achieved higher SEI maturity levels, processes are by and large described in documents and reinforced as guidelines or laws governing software development activities. The lack of industry-wide adoption of software process modeling as part of development activities can be attributed to two major reasons: the lack of forecasting power in (software) process modeling and the lack of an integration mechanism for the described process to interact seamlessly with daily development activities. This dissertation describes research through which a framework has been established where processes can be manipulated, measured, and dynamically modified by interacting with project management techniques and activities in an integrated process modeling environment, thus closing the gap between process modeling and software development. In this research, processes are described using directed graphs, similar to the techniques used in the Critical Path Method (CPM). This way, the graphs can be manipulated visually while the properties of the graphs can be used to check their validity. The partial ordering and the precedence relationships of the tasks in the graphs are similar to those studied in other research [Delcambre94] [Mills96]. Measurements of the effectiveness of the processes are added in this research. These measurements provide a basis for judgment when manipulating the graphs to produce or modify a process. Software development can be considered as activities related to three sets: a set of tasks (τ), a set of resources (ρ), and a set of constraints (γ). The process, P, is then a function of all these sets interacting with each other: P = (τ, ρ, γ). The interactions of these sets can be described in terms of different machine models using scheduling theory. While trying to produce an optimal solution satisfying a set of prescribed conditions by analytical methods would lead to a practically infeasible formulation, many heuristic algorithms from scheduling theory, combined with manual manipulation of the tasks, can help produce a reasonably good process, the effectiveness of which is reflected through a set of measurement criteria, in particular the make-span, the float, and the bottlenecks. Through an integrated process modeling environment, these measurements can be obtained in real time, thus providing a feedback loop during process execution. This feedback loop is essential for risk management and control.
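
    As a hedged illustration of the measurement criteria named above, the sketch below computes the make-span, per-task float, and zero-float critical path of a task graph with CPM-style forward and backward passes; the task names and durations are assumptions for the example.

```python
# Minimal sketch of CPM-style make-span, float, and critical-path measurements.
from graphlib import TopologicalSorter

def cpm_measures(duration, preds):
    """duration: task -> effort; preds: task -> set of predecessor tasks."""
    order = list(TopologicalSorter(preds).static_order())
    est, eft = {}, {}                           # earliest start / finish
    for t in order:                             # forward pass
        est[t] = max((eft[p] for p in preds[t]), default=0)
        eft[t] = est[t] + duration[t]
    makespan = max(eft.values())

    succs = {t: set() for t in order}
    for t, ps in preds.items():
        for p in ps:
            succs[p].add(t)
    lft, lst = {}, {}                           # latest finish / start
    for t in reversed(order):                   # backward pass
        lft[t] = min((lst[s] for s in succs[t]), default=makespan)
        lst[t] = lft[t] - duration[t]

    total_float = {t: lst[t] - est[t] for t in order}
    critical = [t for t in order if total_float[t] == 0]   # bottleneck candidates
    return makespan, total_float, critical

# Example: a small process with a join on the coding task.
dur = {"spec": 2, "design": 3, "review": 1, "code": 4}
pre = {"spec": set(), "design": {"spec"}, "review": {"spec"}, "code": {"design", "review"}}
makespan, fl, crit = cpm_measures(dur, pre)
assert makespan == 9 and fl["review"] == 2 and crit == ["spec", "design", "code"]
```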