
    Temporal analysis and scheduling of hard real-time radios running on a multi-processor

    On a multi-radio baseband system, multiple independent transceivers must share the resources of a multi-processor, while each meets its own hard real-time requirements. Not all possible combinations of transceivers are known at compile time, so a solution must be found that either allows for independent timing analysis or relies on runtime timing analysis. This thesis proposes a design flow and software architecture that meet these challenges, while enabling features such as independent transceiver compilation and dynamic loading, and taking into account other challenges such as ease of programming, efficiency, and ease of validation. We take data flow as the basic model of computation, as it fits the application domain, and several static variants (such as Single-Rate, Multi-Rate and Cyclo-Static) have been shown to possess strong analytical properties. Traditional temporal analysis of data flow can provide minimum throughput guarantees for a self-timed implementation of data flow. Since transceivers may need to guarantee strictly periodic execution and meet latency requirements, we extend the analysis techniques to show that we can enforce strict periodicity for an actor in the graph; we also provide maximum latency analysis techniques for periodic, sporadic and bursty sources. We propose a scheduling strategy and an automatic scheduling flow that enable the simultaneous execution of multiple transceivers with hard real-time requirements, described as Single-Rate Data Flow (SRDF) graphs. Each transceiver has its own execution rate and starts and stops independently from other transceivers, at times unknown at compile time, on a multiprocessor. We show how to combine scheduling and mapping decisions with the input application data flow graph to generate a worst-case temporal analysis graph. We propose algorithms to find a mapping per transceiver in the form of clusters of statically-ordered actors, and a budget for either a Time Division Multiplex (TDM) or Non-Preemptive Non-Blocking Round Robin (NPNBRR) scheduler per cluster per transceiver. The budget is computed such that, if the platform can provide it, the desired minimum throughput and maximum latency of the transceiver are guaranteed, while the required processing resources are minimized. We illustrate the use of these techniques to map a combination of WLAN and TDS-CDMA receivers onto a prototype Software-Defined Radio platform. The functionality of transceivers for standards with very dynamic behavior, such as WLAN, cannot be conveniently modeled as an SRDF graph, since SRDF cannot express variations of actor firing rules that depend on the values of input data. Because of this, we propose a restricted, customized data flow model of computation, Mode-Controlled Data Flow (MCDF), that can capture the data-value-dependent behavior of a transceiver, while allowing rigorous temporal analysis and tight resource budgeting. We develop a number of analysis techniques to characterize the temporal behavior of MCDF graphs, in terms of maximum latencies and throughput. We also provide an extension to MCDF of our scheduling strategy for SRDF. The capabilities of MCDF are then illustrated with a WLAN 802.11a receiver model. Having computed budgets for each transceiver, we propose a way to use these budgets for run-time resource mapping and admissibility analysis. At transceiver start time, the budget for each cluster of statically-ordered actors is allocated by a resource manager to platform resources.
The resource manager enforces strict admission control, to prevent transceivers from interfering with each other's worst-case temporal behavior. We propose algorithms adapted from Vector Bin-Packing to enable the start-time mapping of transceivers to the multi-processor architecture, also considering the case where the processors are connected by a network-on-chip with resource reservation guarantees, in which case we also find routing and resource allocation on the network-on-chip. In our experiments, our resource allocation algorithms can keep 95% of the system resources occupied, while suffering an allocation failure rate of less than 5%. An implementation of the framework was carried out on a prototype board. We present performance and memory utilization figures for this implementation, as they provide insights into the costs of adopting our approach. The scheduling and synchronization overhead of an unoptimized implementation of the framework, with no hardware support for synchronization, is 16.3% of the cycle budget for a WLAN receiver on an EVP processor at 320 MHz. However, this overhead is less than 1% for mobile standards such as TDS-CDMA or LTE, which have lower rates and thus larger cycle budgets. Considering that clock speeds will increase and that the synchronization primitives can be optimized to exploit the addressing modes available in the EVP, these results are very promising.
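
    As a point of reference for the traditional temporal analysis mentioned above: for an SRDF graph, the guaranteed throughput of a self-timed implementation is the inverse of the maximum cycle mean (MCM), i.e., the maximum over all cycles of total actor execution time divided by the number of initial tokens on the cycle. The sketch below (Python; the toy graph and all names are illustrative, not taken from the thesis) computes this bound by simple cycle enumeration, which suffices for small analysis graphs.

        # Hypothetical SRDF graph: per-actor worst-case execution times, and edges
        # annotated with their number of initial tokens.
        WCET = {"src": 2, "fir": 5, "snk": 1}
        EDGES = [("src", "fir", 0), ("fir", "snk", 0), ("snk", "src", 2)]  # (u, v, tokens)

        def simple_cycles(edges):
            """Enumerate simple cycles by DFS; each cycle is reported once,
            rooted at its lexicographically smallest node."""
            adj = {}
            for u, v, tok in edges:
                adj.setdefault(u, []).append((v, tok))
            cycles = []
            def dfs(start, node, path, toks):
                for v, tok in adj.get(node, []):
                    if v == start:
                        cycles.append((list(path), toks + tok))
                    elif v not in path and v > start:
                        dfs(start, v, path + [v], toks + tok)
            for s in list(adj):
                dfs(s, s, [s], 0)
            return cycles

        def maximum_cycle_mean(wcet, edges):
            mcm = 0.0
            for nodes, tokens in simple_cycles(edges):
                assert tokens > 0, "a cycle without initial tokens deadlocks"
                mcm = max(mcm, sum(wcet[n] for n in nodes) / tokens)
            return mcm

        mcm = maximum_cycle_mean(WCET, EDGES)  # (2 + 5 + 1) / 2 tokens = 4.0
        print(f"MCM = {mcm}, guaranteed throughput = {1 / mcm:.3f} iterations per time unit")

    The thesis builds its strict-periodicity enforcement and maximum-latency analyses on top of this kind of throughput bound.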

    Hybrid scheduling of dataflow applications on multi-core embedded systems

    Embedded systems are increasingly present in industry and in everyday life. A large share of these systems comprises applications that perform intensive data processing: they use many digital filters, in which the operations on data are repetitive and involve little control. Dataflow graphs, thanks to their inherent functional determinism, are widely used to model the embedded systems known as data-driven systems. Static and periodic scheduling of dataflow graphs has been widely studied, especially for two particular models: SDF and CSDF. In this thesis, we focus on the periodic scheduling of CSDF graphs. The problem consists in identifying infinite periodic firing sequences of the actors that lead to complete executions with bounded buffers. The objective is to be able to tackle this problem from different angles: throughput maximization, latency minimization, and buffer capacity minimization. Most existing work proposes solutions for throughput optimization, neglects latency optimization, and in some cases even proposes schedules that affect latency negatively in order to preserve the periodicity properties. In this thesis we propose a hybrid schedule, called Self-Timed Periodic (STP), which preserves the properties of a periodic schedule while considerably improving its latency.

    One of the most important aspects of parallel computing is its close relation to the underlying hardware and programming models. In this PhD thesis, we take dataflow as the basic model of computation, as it fits the streaming application domain. Cyclo-Static Dataflow (CSDF) is particularly interesting because this variant is one of the most expressive dataflow models while still being analyzable at design time. Describing the system at higher levels of abstraction is not sufficient, however; for example, dataflow has no direct means to optimize communication channels, which are generally based on shared buffers. Therefore, we need to link the dataflow MoCs used for performance analysis of the programs, the real-time task models used for timing analysis, and the low-level model used to derive communication times. This thesis proposes a design flow that meets these challenges, while enabling features such as temporal isolation and taking into account other challenges such as predictability and ease of validation. To this end, we propose a new scheduling policy, called Self-Timed Periodic (STP), an execution model that combines Self-Timed Scheduling (STS) with periodic scheduling. Under STP scheduling, actors are no longer strictly periodic but self-timed, assigned to periodic levels: the period of each actor under periodic scheduling is replaced by its worst-case execution time. STP thus retains some of the performance and flexibility of self-timed scheduling, in which execution times of actors need only be estimates, while at the same time exploiting the fact that with a periodic schedule we can derive a tight estimate of the required performance metrics.
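
    For reference, here is a minimal sketch (Python; toy numbers and hypothetical names) of the baseline that STP relaxes: under a strictly periodic schedule of an SDF/CSDF graph, all actors share one iteration period alpha with q_i * T_i = alpha, where q is the repetition vector; assuming one processor per actor, alpha must at least cover the busiest actor and be divisible by every q_i so that the periods are integral.

        from math import ceil, lcm

        # Hypothetical SDF graph: repetition vector q (firings per iteration) and
        # worst-case execution times w; one processor per actor is assumed.
        q = {"a": 4, "b": 2, "c": 1}
        w = {"a": 1, "b": 3, "c": 5}

        Q = lcm(*q.values())                 # alpha must be divisible by every q_i
        alpha = max(q[i] * w[i] for i in q)  # the busiest actor sets the minimum
        alpha = ceil(alpha / Q) * Q          # round up so every period is integral

        periods = {i: alpha // q[i] for i in q}  # q_i * T_i == alpha for all i
        print("iteration period:", alpha)        # 8
        print("per-actor periods:", periods)     # {'a': 2, 'b': 4, 'c': 8}

    STP replaces these fixed per-actor periods by self-timed firing within periodic levels, which is where its latency improvement comes from.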

    From Dataflow Specification to Multiprocessor Partitioned Time-triggered Real-time Implementation

    Our objective is to facilitate the development of complex time-triggered systems by automating the allocation and scheduling steps. We show that full automation is possible while taking into account the elements of complexity needed by a complex embedded control system. More precisely, we consider deterministic functional specifications provided (as often in an industrial setting) by means of synchronous data-flow models with multiple modes and multiple relative periods. We first extend this functional model with an original real-time characterization that takes advantage of our time-triggered framework to provide a simpler representation of complex end-to-end flow requirements. We also extend our specifications with additional non-functional properties specifying partitioning, allocation, and preemptability constraints. Then, we provide novel algorithms for the off-line scheduling of these extended specifications onto partitioned time-triggered architectures à la ARINC 653. The main originality of our work is that it takes into account multiple complexity elements at the same time: various types of non-functional properties (real-time, partitioning, allocation, preemptability) and functional specifications with conditional execution and multiple modes. Allocation of time slots/windows to partitions can be fully or partially provided, or synthesized by our tool. Our algorithms allow automatic allocation and scheduling onto multi-processor (distributed) systems with a global time base, taking into account communication costs. We demonstrate our technique on a model of a space flight software system with strong real-time determinism requirements.
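
    A minimal sketch of the kind of artifact such off-line scheduling produces, under simplifying assumptions (one processor, unit-time slots, partition windows already given, no preemption costs; all names and numbers below are hypothetical): each periodic task is placed into free slots of its own partition's windows, yielding a static ARINC 653-style schedule table over the major frame.

        # Hypothetical ARINC 653-style configuration: the major frame is divided
        # into fixed partition windows; a partition's tasks run only inside them.
        MAJOR_FRAME = 20
        PARTITIONS = {"P1": [(0, 5), (10, 15)], "P2": [(5, 10), (15, 20)]}
        TASKS = [("nav", "P1", 10, 3),   # (name, partition, period, wcet)
                 ("ctrl", "P2", 20, 4)]

        def build_table(partitions, tasks):
            """Greedy off-line scheduler: place every job of every task into the
            earliest free unit-time slots of its partition, before its deadline."""
            free = {p: [t for lo, hi in ws for t in range(lo, hi)]
                    for p, ws in partitions.items()}
            table = []
            for name, part, period, wcet in tasks:
                for release in range(0, MAJOR_FRAME, period):
                    slots = [t for t in free[part] if release <= t < release + period]
                    if len(slots) < wcet:
                        raise ValueError(f"{name}: job released at t={release} does not fit")
                    for t in slots[:wcet]:
                        free[part].remove(t)
                        table.append((t, name))
            return sorted(table)

        for t, name in build_table(PARTITIONS, TASKS):
            print(f"t={t:2d}: run {name}")

    The paper's algorithms go well beyond this sketch: they can synthesize the partition windows themselves and handle preemptability constraints, conditional execution, multiple modes, and distributed targets with communication costs.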

    Automated competitive analysis of real time scheduling with graph games

    This paper is devoted to automatic competitive analysis of real-time scheduling algorithms for firm-deadline tasksets, where only completed tasks contribute some utility to the system. Given such a taskset T, the competitive ratio of an on-line scheduling algorithm A for T is the worst-case utility ratio of A over the utility achieved by a clairvoyant algorithm. We leverage the theory of quantitative graph games to address the competitive analysis and competitive synthesis problems. For the competitive analysis case, given any taskset T and any finite-memory on-line scheduling algorithm A, we show that the competitive ratio of A in T can be computed in polynomial time in the size of the state space of A. Our approach is flexible, as it also provides ways to model meaningful constraints on the released task sequences that determine the competitive ratio. We provide an experimental study of many well-known on-line scheduling algorithms, which demonstrates the feasibility of our competitive analysis approach that effectively replaces human ingenuity (required for finding worst-case scenarios) by computing power. For the competitive synthesis case, we are just given a taskset T, and the goal is to automatically synthesize an optimal on-line scheduling algorithm A, i.e., one that guarantees the largest competitive ratio possible for T. We show how the competitive synthesis problem can be reduced to a two-player graph game with partial information, and establish that the computational complexity of solving this game is NP-complete. The competitive synthesis problem is hence in NP in the size of the state space of the non-deterministic labeled transition system encoding the taskset. Overall, the proposed framework assists in the selection of suitable scheduling algorithms for a given taskset, which is in fact the most common situation in real-time systems design.
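
    The quantity being analyzed can be illustrated with a small brute-force experiment (Python; the taskset and horizon are toy values, and this exhaustive enumeration is exactly the kind of search that the paper's graph-game machinery replaces with a polynomial-time analysis): for every adversarial release sequence over a bounded horizon, compare the utility accrued by an on-line policy such as EDF against an exhaustively computed clairvoyant optimum, and take the worst ratio.

        from itertools import product

        TASKS = [(1, 1, 1), (2, 3, 4)]   # hypothetical (wcet, deadline, utility) per task
        HORIZON = 6                      # slotted time; adversary releases <= 1 job/slot

        def feasible(jobs, t):
            return [j for j in jobs if j[1] - t >= j[0]]   # drop jobs that cannot finish

        def online_utility(releases):
            """Simulate EDF on a uniprocessor; only completed jobs earn utility."""
            utility, jobs = 0, []
            for t in range(HORIZON):
                if releases[t] is not None:
                    c, d, u = TASKS[releases[t]]
                    jobs.append([c, t + d, u])
                jobs = feasible(jobs, t)
                if jobs:
                    job = min(jobs, key=lambda j: j[1])    # earliest deadline first
                    job[0] -= 1
                    if job[0] == 0:
                        utility += job[2]
                        jobs.remove(job)
            return utility

        def clairvoyant_utility(releases, t=0, jobs=(), acc=0):
            """Offline optimum by trying every per-slot choice (idle or any job)."""
            if t == HORIZON:
                return acc
            jobs = list(jobs)
            if releases[t] is not None:
                c, d, u = TASKS[releases[t]]
                jobs.append((c, t + d, u))
            jobs = feasible(jobs, t)
            best = clairvoyant_utility(releases, t + 1, tuple(jobs), acc)   # idle
            for i, (c, dl, u) in enumerate(jobs):
                rest = jobs[:i] + jobs[i + 1:]
                if c == 1:   # finishing this slot earns the job's utility
                    best = max(best, clairvoyant_utility(releases, t + 1, tuple(rest), acc + u))
                else:
                    best = max(best, clairvoyant_utility(
                        releases, t + 1, tuple(rest + [(c - 1, dl, u)]), acc))
            return best

        ratio = 1.0
        for rel in product([None, 0, 1], repeat=HORIZON):
            opt = clairvoyant_utility(rel)
            if opt:
                ratio = min(ratio, online_utility(rel) / opt)
        print(f"empirical competitive ratio of EDF (horizon {HORIZON}): {ratio:.3f}")

    Even for this tiny taskset the enumeration grows as 3^H in the horizon H, which is the kind of blow-up the automated graph-game analysis avoids.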

    Dynamic Behavior Specification and Design Space Exploration Techniques for Real-Time Embedded Systems

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2016 (advisor: Soonhoi Ha). As the number of processors in a chip increases and more functions are integrated, the system status changes dynamically due to various factors such as workload variation, QoS requirements, and unexpected component failures. On the other hand, the computational complexity of user applications is also steadily increasing; video and graphics applications are two major driving forces in smart mobile devices, which define the main application domain of interest in this dissertation. So, a systematic design methodology is highly required to implement such complex systems, which contain dynamically changing behavior as well as computation-intensive workload that can be parallelized. A model-based approach is one of the representative approaches for parallel embedded software development. In particular, the HOPES framework has been proposed as a design environment for parallel embedded software that supports the overall design steps: system specification, performance estimation, design space exploration, and automatic code generation. Distinguished from other design environments, it introduces a novel concept of a programming platform, called CIC (Common Intermediate Code), that can be understood as a generic execution model of a heterogeneous multiprocessor architecture.
The CIC task model is based on a process network model, but it can be refined to the SDF (Synchronous Data Flow) model, since the latter has very desirable features for static analyzability as well as parallel processing. However, the SDF model has a typical weakness in expression capability, especially for system-level specification and the dynamically changing behavior of an application. To overcome this weakness, in this dissertation we propose an extended CIC task model based on dataflow and FSM models to specify the dynamic behavior of the system, distinguishing inter- and intra-application dynamism. At the top level, each application is specified by a dataflow task, and the dynamic behavior is modeled as a control task that supervises the execution of applications. Inside a dataflow task, the dynamic behavior is specified in a way similar to FSM-based SADF: an SDF task may have multiple behaviors, and a tabular specification of an FSM, called MTM (Mode Transition Machine), describes the mode transition rules for the SDF graph. We call this the MTM-SDF model, which is classified as a multi-mode dataflow model in the dissertation. It assumes that an application has a finite number of behaviors (or modes) and that each behavior (mode) is represented by an SDF graph. This enables us to perform compile-time scheduling of each graph to maximize the throughput while varying the number of allocated processors, and to store the scheduling information. Also, a multiprocessor scheduling technique is proposed for multi-mode dataflow graphs. While there exist several scheduling techniques for multi-mode dataflow models, none allows task migration between modes. Observing that the resource requirement can be further reduced if task migration is allowed, we propose a multiprocessor scheduling technique for a multi-mode dataflow graph that considers task migration between modes. Based on a genetic algorithm, the proposed technique schedules all SDF graphs in all modes simultaneously to minimize the resource requirement. To satisfy the throughput constraint, the technique calculates the actual throughput requirement of each mode and the output buffer size needed to tolerate throughput jitter. From the specified task graph and the scheduling results, the CIC translator generates parallelized code for the target architecture; it has therefore been extended to support the extended features of the CIC task model. At the application level, it supports multiprocessor code generation for an MTM-SDF graph following the given static scheduling results. Multiprocessor code generation is supported for four different scheduling policies: fully-static, self-timed, static-assignment, and fully-dynamic. At the system level, the CIC translator is extended to generate code implementing the system request APIs, as well as data structures for the static scheduling results and configurable task parameters.
Through preliminary experiments with a multi-mode multimedia terminal example, the viability of the proposed methodology is verified.
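
    A minimal sketch of what a tabular Mode Transition Machine specification might look like (Python; modes, events, and numbers are invented for illustration, not the dissertation's actual data structures): the control task keeps the current mode, consults the table on each event, and switches the active per-mode schedule at an iteration boundary, which is also where inter-mode task migration would take place.

        # Hypothetical MTM-SDF style specification: each mode is an SDF graph with
        # a precomputed schedule; the MTM table lists the legal mode transitions.
        MODES = {
            "low":  {"schedule": "sched_low",  "processors": 1},
            "high": {"schedule": "sched_high", "processors": 3},
        }
        MTM = {  # (current mode, event) -> next mode
            ("low", "load_up"):    "high",
            ("high", "load_down"): "low",
        }

        class ControlTask:
            """Supervises the dataflow task; transitions take effect only at
            iteration boundaries of the currently running SDF graph."""
            def __init__(self, initial_mode):
                self.mode = initial_mode

            def on_event(self, event):
                nxt = MTM.get((self.mode, event))
                if nxt is not None:
                    print(f"{self.mode} -> {nxt}: switch to {MODES[nxt]['schedule']} "
                          f"on {MODES[nxt]['processors']} processor(s)")
                    self.mode = nxt
                return self.mode

        ctrl = ControlTask("low")
        ctrl.on_event("load_up")     # low -> high
        ctrl.on_event("load_up")     # not in the table: stays in high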

    SCALABLE TECHNIQUES FOR SCHEDULING AND MAPPING DSP APPLICATIONS ONTO EMBEDDED MULTIPROCESSOR PLATFORMS

    A variety of multiprocessor architectures has proliferated, even for off-the-shelf computing platforms. To make use of these platforms, traditional implementation frameworks focus on implementing Digital Signal Processing (DSP) applications using special platform features to achieve high performance. However, due to the fast evolution of the underlying architectures, solution redevelopment is error prone and re-usability of existing solutions and libraries is limited. In this thesis, we facilitate an efficient migration of DSP systems to multiprocessor platforms while systematically leveraging previous investment in optimized library kernels using dataflow design frameworks. We make these library elements, which are typically tailored to specialized architectures, more amenable to extensive analysis and optimization using an efficient and systematic process. In this thesis we provide techniques to allow such migration through four basic contributions:
    1. We propose and develop a framework to explore efficient utilization of Single Instruction Multiple Data (SIMD) cores and accelerators available in heterogeneous multiprocessor platforms consisting of General Purpose Processors (GPPs) and Graphics Processing Units (GPUs). We also propose new scheduling techniques by applying extensive block processing in conjunction with appropriate task mapping and task ordering methods that match efficiently with the underlying architecture. The approach gives the developer the ability to prototype a GPU-accelerated application and explore its design space efficiently and effectively.
    2. We introduce the concept of Partial Expansion Graphs (PEGs) as an implementation model and associated class of scheduling strategies. PEGs are designed to help realize DSP systems in terms of forms and granularities of parallelism that are well matched to the given applications and targeted platforms. PEGs also facilitate derivation of both static and dynamic scheduling techniques, depending on the amount of variability in task execution times and other operating conditions. We show how to implement efficient PEG-based scheduling methods using real-time operating systems, and how to re-use pre-optimized libraries of DSP components within such implementations.
    3. We develop new algorithms for scheduling and mapping systems implemented using PEGs. Collectively, these algorithms operate in three steps. First, the amount of data parallelism in the application graph is tuned systematically over many iterations to profit from the available cores in the target platform. Then a mapping algorithm that uses graph analysis is developed to distribute data- and task-parallel instances over different cores while trying to balance the load of all processing units to make use of pipeline parallelism. Finally, we use a novel technique for performance evaluation by implementing the scheduler and a customizable solution on the programmable platform. This allows accurate fitness functions to be measured and used to drive runtime adaptation of schedules.
    4. In addition to providing scheduling techniques for the mentioned applications and platforms, we also show how to integrate the resulting solution in the underlying environment. This is achieved by leveraging existing libraries and applying the GPP-GPU scheduling framework to augment a popular existing Software Defined Radio (SDR) development environment, GNU Radio, with a dataflow foundation and a stand-alone GPU-accelerated library. We also show how to realize the PEG model on real-time operating system libraries, such as the Texas Instruments DSP/BIOS. A code generator that accepts a manual system designer's solution as well as automatically configured solutions is provided to complete the design flow, starting from the application model and ending with a running system.
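
    The partial expansion idea can be sketched as follows (Python; the names and the scatter/gather structure are a simplified illustration, not the thesis' actual data structures): instead of fully unrolling an actor with N units of data parallelism into N copies, only k <= N instances are instantiated, with distributor and collector actors around them; the scheduling algorithms then tune k per actor to match the available cores.

        # Hypothetical sketch of a Partial Expansion Graph: expand one actor with
        # N-way data parallelism into k <= N instances plus scatter/gather actors.
        def partially_expand(actor, n_parallel, k):
            assert 1 <= k <= n_parallel
            instances = [f"{actor}_{i}" for i in range(k)]
            nodes = [f"{actor}_scatter"] + instances + [f"{actor}_gather"]
            edges = ([(f"{actor}_scatter", inst) for inst in instances] +
                     [(inst, f"{actor}_gather") for inst in instances])
            return nodes, edges

        # The tuning step in contribution 3 would search over k per actor; here we
        # simply expand an 8-way data-parallel FIR actor onto 3 instances.
        nodes, edges = partially_expand("fir", n_parallel=8, k=3)
        print(nodes)
        print(edges)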

    Scheduling Problems

    Scheduling is defined as the process of assigning operations to resources over time to optimize a criterion. A scheduling problem comprises both a set of resources and a set of consumers; managing a scheduling problem therefore means managing the use of the resources by several consumers. This book presents some new applications and trends related to task and data scheduling. In particular, chapters focus on data science, big data, high-performance computing, and Cloud computing environments. In addition, this book presents novel algorithms and literature reviews that will guide current and new researchers who work with load balancing, scheduling, and allocation problems.