    Modeling & analysis of an LTE-advanced receiver using mode-controlled dataflow

    Current multi-functional embedded systems such as smartphones and tablets support multiple 2G/3G/4G radio standards, including Long Term Evolution (LTE) and LTE-Advanced. LTE-Advanced is the latest industry standard; it extends LTE with several feature-rich but complex enhancements. Both LTE and LTE-Advanced have real-time requirements, and their receivers are typically scheduled on a heterogeneous multi-core processor to satisfy real-time and low-power constraints.

    Manual, simulation-based real-time analysis of such applications is infeasible; dataflow can be used instead. Static dataflow supports a rich set of analysis techniques for real-time scheduling, but is too restrictive to accurately and conveniently model the dynamic, data-dependent behavior of many practical applications, including LTE-Advanced. Dynamic dataflow can model such applications, but in general does not support rigorous real-time analysis.

    Mode-controlled Dataflow (MCDF) is a restricted form of dynamic dataflow that captures realistic features of such applications while allowing rigorous timing analysis. We stepwise refine and develop complete, fine-grained MCDF models of an LTE-Advanced receiver that include two key features: (1) carrier aggregation and (2) enhanced physical downlink control channel (EPDCCH) processing. We schedule the MCDF models on an industrial platform to benchmark them against (static) Single-rate Dataflow (SRDF) using existing buffer allocation techniques, demonstrating that these models are analyzable and practically applicable. We also develop latency analysis techniques for single-rate and mode-controlled dataflow. For our LTE-Advanced receiver, relative to SRDF models, MCDF models offer (1) up to 15% smaller memory consumption and (2) up to 1.6% smaller LTE-Advanced sub-frame processing latency.
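    The self-timed execution semantics underlying SRDF analysis can be illustrated with a small sketch: each actor fires as soon as the tokens on its input edges are available, and edges with initial tokens create dependencies on earlier iterations. The three-stage pipeline, actor names, and execution times below are illustrative only, not taken from the receiver models in the abstract.

```python
# Minimal sketch of self-timed execution of a Single-rate Dataflow (SRDF)
# graph. Hypothetical example, not the paper's LTE-Advanced receiver model.

def self_timed_finish_times(actors, edges, firings):
    """actors: {name: exec_time}, listed in topological order for
    zero-token edges; edges: list of (src, dst, initial_tokens).
    Returns the finish time of each (actor, iteration) pair when every
    actor fires as soon as all of its input tokens are available."""
    finish = {}
    for k in range(firings):
        for a in actors:
            start = 0
            for (src, dst, tokens) in edges:
                # firing k of `dst` needs firing k - tokens of `src`
                if dst == a and k - tokens >= 0:
                    start = max(start, finish[(src, k - tokens)])
            finish[(a, k)] = start + actors[a]
    return finish

# Toy receiver pipeline with a single-token back edge (iteration bound).
pipeline = {'fft': 2, 'demap': 3, 'decode': 4}          # exec times
deps = [('fft', 'demap', 0), ('demap', 'decode', 0), ('decode', 'fft', 1)]
finish = self_timed_finish_times(pipeline, deps, firings=2)
print(finish[('decode', 1)])  # 18: second iteration fully processed
```

    MCDF extends this execution model with a mode-control actor that selects, per iteration, which subgraph fires, which is what makes data-dependent receiver features expressible while keeping the analysis static.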

    Buffer allocation for real-time streaming applications running on heterogeneous multi-processors without back-pressure

    The goal of buffer allocation for real-time streaming applications is to minimize total memory consumption while reserving sufficient space for each data production, without overwriting any live data and while guaranteeing the satisfaction of real-time constraints. Previous research has mostly focused on buffer allocation for systems with back-pressure. This paper addresses the problem of buffer allocation for systems without back-pressure. Since systems without back-pressure lack blocking behavior on the producer side, buffer allocation requires both best- and worst-case timing analysis.

    Our contributions are (1) an extension of the available dataflow techniques with best-case analysis; (2) closest-common-dominator-based and closest-common-predecessor-based lifetime analysis techniques; and (3) techniques to model the initialization behavior and enable token reuse.

    Our benchmark set includes an MP3 decoder, a WLAN receiver, an LTE receiver and an LTE-Advanced receiver. We consider two key features of LTE-Advanced: (1) carrier aggregation and (2) EPDCCH processing. Through our experiments, we demonstrate that our techniques are effective in handling the complexities of real-world applications. For the LTE-Advanced receiver case study, our techniques enable us to compare the buffer allocation required under different scheduling policies, which has a direct impact on architectural decisions. A key insight from this comparison is that our improved techniques show a different scheduling policy to be superior in terms of buffer sizes than our previous technique did, which dramatically changes the trade-off among scheduling policies for the LTE-Advanced receiver.
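    The core reason both best- and worst-case analysis are needed can be sketched concretely: without back-pressure the producer never blocks, so a safe buffer must be sized for the peak number of simultaneously live tokens, counting each token as live from its earliest (best-case) production until its latest (worst-case) consumption. The token timings below are invented for illustration; they are not from the paper's benchmarks.

```python
# Illustrative buffer bound for a channel without back-pressure.
# prod_bc[i]: best-case (earliest) production time of token i.
# cons_wc[i]: worst-case (latest) completion of the read of token i.

def buffer_bound(prod_bc, cons_wc):
    """Max number of simultaneously live tokens: a token occupies its
    buffer slot from earliest production to latest consumption."""
    events = [(t, +1) for t in prod_bc] + [(t, -1) for t in cons_wc]
    # at equal times, count the production before the consumption
    # (pessimistic, i.e. safe)
    events.sort(key=lambda e: (e[0], -e[1]))
    live = peak = 0
    for _, delta in events:
        live += delta
        peak = max(peak, live)
    return peak

print(buffer_bound([0, 2, 4, 6], [5, 7, 9, 11]))  # 3 token slots
```

    With back-pressure, by contrast, the producer would simply block when the buffer is full, so worst-case analysis alone suffices; dropping that blocking behavior is what forces the two-sided analysis described in the abstract.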

    Buffer allocation for dynamic real-time streaming applications running on a multi-processor without back-pressure

    Buffer allocation for real-time streaming applications, modeled as dataflow graphs, minimizes the total memory consumption while reserving sufficient space for each data production, without overwriting any live data and while guaranteeing the satisfaction of real-time constraints. We focus on the problem of buffer allocation for systems without back-pressure. Since systems without back-pressure lack blocking behavior on the producer side, buffer allocation requires both best- and worst-case timing analysis. Moreover, the dynamic (data-dependent) behavior of these applications makes buffer allocation challenging from both timing-analysis perspectives. We argue that static dataflow cannot conveniently express the dynamic behavior of these applications, leading to overallocation of memory resources. Mode-controlled Dataflow (MCDF) is a restricted form of dynamic dataflow that allows mode switching at runtime while supporting static analysis of real-time constraints. In this paper, we address the problem of buffer allocation for MCDF graphs scheduled on systems without back-pressure. We consider practically relevant applications that can be modeled in MCDF using recurrent-choice mode sequences, which consist of mode sub-sequences of equal length; this keeps the analysis tractable. Our contribution is a buffer allocation algorithm that achieves up to 36% reduction in total memory consumption compared to the current state of the art for LTE and LTE-Advanced receiver use cases.
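    The intuition behind mode-aware buffer sizing can be sketched as follows: because the mode sequence recurs periodically, the peak channel occupancy over one period bounds the buffer, and modes that produce or consume nothing on a channel need not inflate that bound the way a mode-oblivious (static) model would. The modes, token counts, and sequence below are hypothetical.

```python
# Illustrative sketch: channel occupancy over a recurrent mode sequence.
# mode_io: {mode: (tokens_produced, tokens_consumed)} on one channel
# per execution of that mode; `sequence` is one period of the
# recurrent-choice mode sequence. All values are invented examples.

def peak_occupancy(mode_io, sequence, periods=2):
    """Peak token occupancy on the channel while the mode sequence
    repeats; productions are counted before consumptions (safe)."""
    occ = peak = 0
    for _ in range(periods):
        for mode in sequence:
            produced, consumed = mode_io[mode]
            occ += produced
            peak = max(peak, occ)
            occ -= consumed
    return peak

# Mode 'A' produces two tokens and reads one; mode 'B' only drains.
print(peak_occupancy({'A': (2, 1), 'B': (0, 1)}, ['A', 'B']))  # 2
```

    A static model that conservatively merged both modes into every iteration would have to budget for the union of their productions, which is one source of the overallocation the abstract mentions.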

    Mode-controlled data-flow modeling of real-time memory controllers

    SDRAM is a shared resource in modern multi-core platforms executing multiple real-time (RT) streaming applications. It is crucial to analyze the minimum guaranteed SDRAM bandwidth to ensure that the requirements of the RT streaming applications are always satisfied. However, deriving the worst-case bandwidth (WCBW) is challenging because of the diverse memory traffic with variable transaction sizes. In fact, existing RT memory controllers either do not efficiently support variable transaction sizes or do not provide an analysis that tightly bounds WCBW in their presence. We propose a new mode-controlled data-flow (MCDF) model to capture the command-scheduling dependencies of memory transactions with variable sizes. The WCBW can be obtained by employing an existing tool to automatically analyze our MCDF model rather than using existing static analysis techniques, which, in contrast to our model, are hard to extend to cover different RT memory controllers. Moreover, the MCDF analysis can exploit static information about known transaction sequences provided by the applications or by the memory arbiter. Experimental results show that a 77% improvement in WCBW can be achieved compared to the case without known transaction sequences. In addition, the results demonstrate that the proposed MCDF model outperforms state-of-the-art analysis approaches and improves the WCBW by 22% without known transaction sequences.
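    Why knowing the transaction sequence tightens the bound can be shown with a toy calculation: if the sequence is unknown, every slot must be assumed to carry the transaction size with the worst bytes-per-cycle ratio, whereas a known sequence lets per-transaction worst cases be summed. The sizes, cycle counts, and clock below are invented and much simpler than real DDR command-scheduling timings.

```python
# Toy WCBW comparison; numbers are illustrative, not real SDRAM timings.
# wc_cycles: {transaction_size_bytes: worst-case scheduling cycles}.

def wcbw_known_sequence(traffic, wc_cycles, clock_hz):
    """Bound when the exact transaction sequence is known: sum each
    transaction's own worst-case cycle count. Returns bytes/second."""
    total_bytes = sum(traffic)
    total_cycles = sum(wc_cycles[size] for size in traffic)
    return total_bytes * clock_hz / total_cycles

def wcbw_unknown_sequence(sizes, wc_cycles, clock_hz):
    """Pessimistic bound when the mix is unknown: assume every slot
    carries the size with the worst bytes-per-cycle ratio."""
    return min(size / wc_cycles[size] for size in sizes) * clock_hz

timings = {64: 16, 32: 12}          # cycles per 64 B / 32 B transaction
known = wcbw_known_sequence([64, 64, 32], timings, 800e6)
unknown = wcbw_unknown_sequence([64, 32], timings, 800e6)
print(known > unknown)  # True: sequence knowledge tightens the bound
```

    The same effect, computed over the MCDF model's mode sequences rather than a flat cycle table, is what produces the 77% WCBW improvement reported in the abstract.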

    Cooperative automated maneuvering at the 2016 Grand Cooperative Driving Challenge

    Cooperative adaptive cruise control and platooning are well-known applications in the field of cooperative automated driving. However, an extension towards maneuvering is desired to accommodate common highway maneuvers, such as merging, and to enable urban applications. To this end, a layered control architecture is adopted. In this architecture, the tactical layer hosts the interaction protocols, describing the wireless information exchange that initiates the vehicle maneuvers, supported by a novel wireless message set, whereas the operational layer comprises the vehicle controllers that realize the desired maneuvers. This hierarchical approach was the basis for the Grand Cooperative Driving Challenge (GCDC), held in May 2016 in The Netherlands. The GCDC provided the opportunity for participating teams to cooperatively execute a highway lane-reduction scenario and an urban intersection-crossing scenario. The GCDC was set up as a competition and hence also involved assessment of the teams' individual performance in a cooperative setting. The hierarchical architecture proved to be a viable approach, and the GCDC appeared to be an effective instrument to advance the field of cooperative automated driving.