357 research outputs found

    Sources of unbounded priority inversions in real-time systems and a comparative study of possible solutions

    In the design of real-time systems, tasks are often assigned priorities, and preemptive priority-driven schedulers are used to schedule tasks so that they meet their timing requirements. Priority inversion is the term used to describe the situation in which a higher-priority task's execution is delayed by lower-priority tasks. Priority inversion can occur when there is contention for resources among tasks of different priorities, and its duration can be long enough to cause tasks to miss their deadlines. Priority inversion cannot be completely eliminated; it is therefore important to identify its sources and to minimize its duration. This paper presents a comprehensive review of the problem of unbounded priority inversion and of possible solutions.
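    One widely studied remedy surveyed in this line of work is the priority inheritance protocol, which bounds the duration of priority inversion by letting a lock holder temporarily inherit the priority of its highest-priority waiter. A minimal sketch of enabling it through the standard POSIX mutex attribute follows (assuming a pthreads implementation with PTHREAD_PRIO_INHERIT support; the skeleton program around it is hypothetical):

```c
/* Minimal sketch: a pthread mutex configured with the priority
 * inheritance protocol.  A low-priority task holding shared_lock
 * temporarily inherits the priority of any higher-priority waiter,
 * which bounds the blocking seen by the high-priority task.
 * Assumes _POSIX_THREAD_PRIO_INHERIT support; build with -lpthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t shared_lock;

int main(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* Ask for priority inheritance instead of the default protocol. */
    if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
        fprintf(stderr, "PTHREAD_PRIO_INHERIT not supported on this system\n");
        return 1;
    }
    pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);

    /* Real-time tasks of different priorities would take shared_lock
     * around the contended resource; no tasks are spawned in this sketch. */
    pthread_mutex_lock(&shared_lock);
    pthread_mutex_unlock(&shared_lock);

    pthread_mutex_destroy(&shared_lock);
    return 0;
}
```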

    The java.util.concurrent synchronizer framework

    Most synchronizers (locks, barriers, etc.) in the J2SE 5.0 java.util.concurrent package are constructed using a small framework based on class AbstractQueuedSynchronizer. This framework provides common mechanics for atomically managing synchronization state, blocking and unblocking threads, and queuing. The paper describes the rationale, design, implementation, usage, and performance of this framework.
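    Keeping all examples in this listing in one language, the queued-acquisition idea behind such a framework can be hinted at with a hypothetical C11 sketch of a CLH-style queued lock: an atomic tail pointer serializes enqueueing, and each acquirer waits on its predecessor. This is only a stand-in, not the Java framework's code; AbstractQueuedSynchronizer uses a variant of this queue but parks threads instead of spinning and generalizes the state beyond a simple locked flag.

```c
/* Hypothetical C11 sketch of a CLH-style queued lock (a stand-in for the
 * queuing-plus-atomic-state pattern, not the Java framework itself). */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>
#include <sched.h>

struct clh_node {
    _Atomic bool locked;              /* true while its owner holds or waits */
};

struct clh_lock {
    _Atomic(struct clh_node *) tail;  /* last node in the implicit queue */
};

void clh_init(struct clh_lock *lk)
{
    struct clh_node *dummy = malloc(sizeof *dummy);
    atomic_store(&dummy->locked, false);
    atomic_store(&lk->tail, dummy);
}

/* Enqueue our node and wait for the predecessor; returns the predecessor
 * node, which the caller may recycle for its next acquisition. */
struct clh_node *clh_acquire(struct clh_lock *lk, struct clh_node *me)
{
    atomic_store(&me->locked, true);
    struct clh_node *pred = atomic_exchange(&lk->tail, me);
    while (atomic_load(&pred->locked))
        sched_yield();                /* AQS would block/park the thread here */
    return pred;
}

void clh_release(struct clh_node *me)
{
    atomic_store(&me->locked, false); /* hands the lock to our successor */
}
```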

    Implementation of RTOS to the WSN node

    Wireless sensor networks mostly use event-driven operating systems. This work discusses the pros and cons of using an RTOS in wireless sensor networks. The most suitable RTOS is selected, and all necessary steps are taken to demonstrate the ability of EnergyMicro Gecko microcontrollers to run this RTOS with low energy consumption and to demonstrate simple wireless communication with Atmel AT86RF212 radios.

    An open framework for highly concurrent hardware-in-the-loop simulation

    Hardware-in-the-loop (HIL) simulation is becoming a significant tool in prototyping complex, highly available systems. The HIL approach allows an engineer to build a physical system incrementally by enabling real components of the system to interface seamlessly with simulated components. It also permits testing of hardware prototypes of components that would be extremely costly to test in the deployed environment. Key issues are the ability to wrap the systems of equations (such as partial differential equations) describing the deployed environment into real-time software models, to provide low synchronization overhead between the hardware and software, and to reduce reliance on proprietary platforms. This thesis introduces an open-source HIL simulation framework that can be ported to any standard Unix-like system on any shared-memory multiprocessor computer, requires minimal operating system scheduler controls, provides a soft real-time guarantee for any constituent simulation that does likewise, enables an asynchronous user interface, and allows for an arbitrary number of secondary control components --Abstract, page iii
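    The soft real-time loop and "minimal operating system scheduler controls" mentioned above can be illustrated with a generic POSIX periodic-step sketch; step_simulation and the 1 ms period are hypothetical placeholders, not the framework's actual API.

```c
/* Hypothetical sketch of a soft real-time simulation step loop on a
 * standard Unix-like system: absolute-deadline sleeps via clock_nanosleep,
 * with no platform-specific scheduler features required. */
#include <time.h>

#define PERIOD_NS 1000000L                 /* 1 ms per simulation step */

static void step_simulation(void) { /* advance the model one step */ }

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        step_simulation();

        /* Sleep until an absolute deadline so jitter in one iteration
         * does not accumulate over the run. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```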

    A Decentralized Scheduler for On-line Self-test Routines in Multi-core Automotive System-on-Chips

    Modern System-on-Chips (SoCs) deployed in safety-critical applications typically embed one or more processing cores along with a variable number of peripherals. The compliance of such designs with functional safety standards is achieved by a combination of techniques based on hardware redundancy and in-field test mechanisms. Among these, Software Test Libraries (STLs) are rapidly being adopted for testing the CPU and peripheral modules. An STL is usually composed of two sets of self-test procedures: boot-time and run-time tests. The former set is typically executed during the boot or power-on phase of the SoC, since it requires full access to the available hardware (e.g., these programs need to manipulate the Interrupt Vector Table and to access the system RAM). The latter set, instead, is designed to coexist with the user application and can be executed without special constraints. When the STL is intended for testing the different cores within a multi-core SoC, the concurrent execution of the boot-time self-tests becomes an issue, since it could lead to a longer power-up phase and excessive utilization of system resources. The main intent of this work is to present the architecture of a decentralized software scheduler conceived for the concurrent execution of the STL on the available cores. The proposed solution considers the typical constraints of an STL deployed in the field in a multi-core scenario, namely minimal usage of system resources (i.e., code and data memory). The effectiveness of the proposed scheduler was experimentally evaluated on an industrial STL developed for a multi-core SoC manufactured by STMicroelectronics.
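    The abstract does not disclose the scheduler's internals, but the decentralized idea (every core independently claims pending self-test routines from a small shared table, with no central coordinator) can be sketched with C11 atomics; all names and the two placeholder tests below are hypothetical.

```c
/* Hypothetical sketch of decentralized self-test scheduling: each core
 * walks a shared table and claims entries with an atomic test-and-set,
 * so the only shared state is one flag per routine. */
#include <stdatomic.h>
#include <stddef.h>

typedef void (*selftest_fn)(void);

struct selftest_entry {
    selftest_fn  run;       /* one boot-time self-test routine */
    atomic_flag  claimed;   /* set by the first core that takes it */
};

static void test_cpu_alu(void)   { /* placeholder self-test body */ }
static void test_cpu_cache(void) { /* placeholder self-test body */ }

static struct selftest_entry selftest_table[] = {
    { test_cpu_alu,   ATOMIC_FLAG_INIT },
    { test_cpu_cache, ATOMIC_FLAG_INIT },
};

/* Executed independently by every core during power-up. */
void run_pending_selftests(void)
{
    size_t n = sizeof selftest_table / sizeof selftest_table[0];
    for (size_t i = 0; i < n; i++) {
        /* test_and_set returns the previous value, so only the first
         * core to reach an unclaimed entry runs that routine. */
        if (!atomic_flag_test_and_set(&selftest_table[i].claimed))
            selftest_table[i].run();
    }
}
```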

    Turning Futexes Inside-Out: Efficient and Deterministic User Space Synchronization Primitives for Real-Time Systems with IPCP

    In Linux and other operating systems, futexes (fast user space mutexes) are the underlying synchronization primitives used to implement POSIX synchronization mechanisms such as blocking mutexes, condition variables, and semaphores. Futexes allow one to implement mutexes with excellent performance by avoiding system calls in the fast path. However, futexes are fundamentally limited to synchronization mechanisms that are expressible as atomic operations on 32-bit variables. At the operating system kernel level, futex implementations require complex mechanisms to look up internal wait queues, making them susceptible to determinism issues. In this paper, we present an alternative design for futexes that completely moves the complexity of wait queue management from the operating system kernel into user space, i.e., we turn futexes "inside out". The enabling mechanisms for "inside-out futexes" are an efficient implementation of the immediate priority ceiling protocol (IPCP) to achieve non-preemptive critical sections in user space, spinlocks for mutual exclusion, and interwoven services to suspend or wake up threads. The design allows us to implement common thread synchronization mechanisms in user space and to move determinism concerns out of the kernel while keeping the performance properties of futexes. The presented approach is suitable for multi-processor real-time systems with partitioned fixed-priority (P-FP) scheduling on each processor. We evaluate the approach with an implementation of mutexes and condition variables in a real-time operating system (RTOS). Experimental results on 32-bit ARM platforms show that the approach is feasible and that overheads are driven by the low-level synchronization primitives.
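    For context on the conventional baseline that this design reworks, a minimal Linux futex mutex in the style popularized by Drepper's "Futexes Are Tricky" looks roughly as follows: the uncontended path is a single compare-and-swap in user space, and only the contended path enters the kernel, where the wait queue lives. The paper's inside-out variant additionally moves that wait queue into user space and relies on IPCP, which is not shown here.

```c
/* Minimal sketch of a conventional Linux futex-based mutex (the baseline,
 * not the paper's inside-out design).  States: 0 = unlocked, 1 = locked,
 * 2 = locked with possible waiters.  Uncontended paths avoid syscalls. */
#define _GNU_SOURCE
#include <stdatomic.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

static long futex(atomic_uint *uaddr, int op, unsigned val)
{
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

void futex_mutex_lock(atomic_uint *m)
{
    unsigned c = 0;
    if (atomic_compare_exchange_strong(m, &c, 1))
        return;                          /* fast path: one CAS, no syscall */

    do {
        unsigned expect_locked = 1;
        /* Mark the mutex contended (state 2) and sleep unless it just
         * became free (old value 0), in which case we retry the grab. */
        if (c == 2 ||
            atomic_compare_exchange_strong(m, &expect_locked, 2) ||
            expect_locked == 2)
            futex(m, FUTEX_WAIT_PRIVATE, 2);
        c = 0;
    } while (!atomic_compare_exchange_strong(m, &c, 2));
}

void futex_mutex_unlock(atomic_uint *m)
{
    /* Old value 2 means someone may be sleeping in the kernel: wake one. */
    if (atomic_exchange(m, 0) == 2)
        futex(m, FUTEX_WAKE_PRIVATE, 1);
}
```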

    Fault-free performance validation of fault-tolerant multiprocessors

    A validation methodology for testing the performance of fault-tolerant computer systems was developed and applied to the Fault-Tolerant Multiprocessor (FTMP) at NASA Langley's AIRLAB facility. This methodology was claimed to be general enough to apply to any ultrareliable computer system. The goal of this research was to extend the validation methodology and to demonstrate its robustness through more extensive application to NASA's Fault-Tolerant Multiprocessor (FTMP) and to the Software Implemented Fault-Tolerance (SIFT) computer system. Furthermore, the performance of these two multiprocessors was compared by conducting similar experiments on both. An analysis of the results shows that high-level language instruction execution times for both SIFT and FTMP were consistent and predictable, with SIFT having greater throughput. At the operating system level, FTMP consumes 60% of its throughput for the real-time dispatcher and 5% for fault-handling tasks. In contrast, SIFT consumes 16% of its throughput for the dispatcher, but 66% for fault-handling software overhead.

    Energy-efficient and high-performance lock speculation hardware for embedded multicore systems

    Embedded systems are becoming increasingly common in everyday life and, like their general-purpose counterparts, have shifted towards shared-memory multicore architectures. However, they are much more resource constrained, and as they often run on batteries, energy efficiency becomes critically important. In such systems, achieving high concurrency is a key demand for delivering satisfactory performance at low energy cost. In order to achieve this high concurrency, consistency across the shared memory hierarchy must be accomplished in a cost-effective manner in terms of performance, energy, and implementation complexity. In this article, we propose Embedded-Spec, a hardware solution for supporting transparent lock speculation, without the requirement for special supporting instructions. Using this approach, we evaluate the energy consumption and performance of a suite of benchmarks, exploring a range of contention management and retry policies. We conclude that for resource-constrained platforms, lock speculation can provide real benefits in terms of improved concurrency and energy efficiency, as long as the underlying hardware support is carefully configured. This work is supported in part by NSF under Grants CCF-0903384, CCF-0903295, CNS-1319495, and CNS-1319095, as well as by the Semiconductor Research Corporation under grant number 1983.001.
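    Embedded-Spec performs the speculation transparently in hardware, but the lock-speculation pattern itself can be illustrated with its software-visible analogue, lock elision with x86 RTM intrinsics; this is a stand-in for exposition only (Embedded-Spec targets embedded cores and needs no such instructions), and critical_section plus the shadow lock_taken flag are hypothetical.

```c
/* Illustration only: software-visible lock elision using x86 RTM
 * intrinsics (_xbegin/_xend).  Embedded-Spec achieves the equivalent
 * speculation transparently in hardware.  Build with: gcc -mrtm ... */
#include <immintrin.h>
#include <pthread.h>

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile int lock_taken;        /* read inside the transaction */

static void critical_section(void) { /* hypothetical shared-data update */ }

void elided_lock_region(void)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        /* Speculative path: reading lock_taken puts it in the read set,
         * so a real acquisition by another thread aborts the transaction. */
        if (lock_taken)
            _xabort(0xff);
        critical_section();
        _xend();
        return;
    }
    /* Fallback path: speculation failed, take the lock non-speculatively. */
    pthread_mutex_lock(&fallback_lock);
    lock_taken = 1;
    critical_section();
    lock_taken = 0;
    pthread_mutex_unlock(&fallback_lock);
}
```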

    Optimizing message-passing performance within symmetric multiprocessor systems

    The Message Passing Interface (MPI) has been widely used in the area of parallel computing due to its portability, scalability, and ease of use. Message passing within Symmetric Multiprocessor (SMP) systems is an important part of any MPI library, since it enables parallel programs to run efficiently on SMP systems, or on clusters of SMP systems when combined with other communication methods such as TCP/IP. Most message-passing implementations use a shared memory pool as an intermediate buffer to hold messages, lock mechanisms to protect the pool, and a synchronization mechanism for coordinating the processes. However, performance varies significantly depending on how these are implemented. The work here implements two SMP message-passing modules, using lock-based and lock-free approaches, for MP_Lite, a compact library that implements a subset of the most commonly used MPI functions. Various optimization techniques have been used to improve the performance. These two modules are evaluated using a communication performance analysis tool called NetPIPE and compared with the implementations in other MPI libraries such as MPICH, MPICH2, LAM/MPI, and MPI/PRO. Performance tools such as PAPI and VTune are used to gather runtime information at the hardware level. This information, together with cache theory and the hardware configuration, is used to explain various performance phenomena. Tests using a real application show the performance of the different implementations in practice. These results all show the improvements of the new techniques over existing implementations.
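    The lock-free approach mentioned above is commonly built on a single-producer/single-consumer ring buffer placed in the shared memory pool, so the sender and receiver on the same node never contend on a lock. A hypothetical C11 sketch of that idea (not MP_Lite's actual code):

```c
/* Hypothetical sketch of lock-free SMP message passing: a single-producer /
 * single-consumer ring of fixed-size slots in shared memory.  Each index
 * is written by exactly one side, so no lock is needed. */
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

#define RING_SLOTS 64           /* power of two so index wraparound is safe */
#define SLOT_BYTES 256

struct spsc_ring {
    _Atomic unsigned head;      /* advanced only by the sender   */
    _Atomic unsigned tail;      /* advanced only by the receiver */
    char slots[RING_SLOTS][SLOT_BYTES];
};

/* Sender side: returns false if the ring is full or the message too big. */
bool ring_send(struct spsc_ring *r, const void *msg, size_t len)
{
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SLOTS || len > SLOT_BYTES)
        return false;
    memcpy(r->slots[head % RING_SLOTS], msg, len);
    /* Publish the slot only after the payload is fully copied. */
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

/* Receiver side: copies one slot into buf (SLOT_BYTES in size);
 * returns false if no message is pending. */
bool ring_recv(struct spsc_ring *r, void *buf)
{
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (head == tail)
        return false;
    memcpy(buf, r->slots[tail % RING_SLOTS], SLOT_BYTES);
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}
```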