
    Modeling & analysis of an LTE-advanced receiver using mode-controlled dataflow

    Current multi-functional embedded systems such as smartphones and tablets support multiple 2G/3G/4G radio standards, including Long Term Evolution (LTE) and LTE-Advanced. LTE-Advanced is the latest industry standard that improves upon LTE by introducing several feature-rich and complex enhancements. Moreover, both LTE and LTE-Advanced have real-time requirements. LTE and LTE-Advanced receivers are typically scheduled on a heterogeneous multi-core processor to satisfy real-time and low-power requirements. Manual, simulation-based real-time analysis of such applications is infeasible. Dataflow can be used for real-time analysis. Static dataflow allows a rich set of analysis techniques to support real-time scheduling, but is too restrictive to accurately and conveniently model the dynamic, data-dependent behavior of many practical applications, including LTE-Advanced. Dynamic dataflow allows modeling of such applications, but in general does not support rigorous real-time analysis. Mode-Controlled Dataflow (MCDF) is a restricted form of dynamic dataflow that captures the realistic features of such applications and allows rigorous timing analysis. Through stepwise refinement, we develop complete and fine-grained MCDF models of an LTE-Advanced receiver that include two key features: 1) carrier aggregation and 2) enhanced physical downlink control channel (EPDCCH) processing. We schedule the MCDF models on an industrial platform and benchmark them against (static) Single-rate Dataflow (SRDF) models using existing buffer allocation techniques, demonstrating that the MCDF models are analyzable and practically applicable. We also develop latency analysis techniques for single-rate and mode-controlled dataflow. For our LTE-Advanced receiver, relative to SRDF models, MCDF models offer 1) up to 15% lower memory consumption and 2) up to 1.6% lower LTE-Advanced sub-frame processing latency.
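    Below is a minimal sketch of the kind of worst-case latency analysis referred to above, applied to a (static) single-rate dataflow graph under self-timed execution; the actor names, worst-case execution times, and graph structure are illustrative assumptions, not the receiver model developed in this work.

```python
# Minimal sketch of worst-case latency analysis via self-timed execution
# of a Single-rate Dataflow (SRDF) graph. Actor names, worst-case
# execution times (WCETs), and the graph below are illustrative only.

WCET = {"src": 1, "demod": 4, "decode": 6, "sink": 1}
# Edges: (producer, consumer, initial_tokens). Firing k of the consumer
# needs firing (k - d) of the producer on an edge carrying d initial tokens.
EDGES = [("src", "demod", 0), ("demod", "decode", 0),
         ("decode", "sink", 0), ("sink", "src", 2)]   # back-edge: 2-deep pipeline
EDGES += [(a, a, 1) for a in WCET]                    # self-edges: no auto-concurrency

def self_timed_schedule(wcet, edges, iterations):
    """Worst-case start/finish time of every actor firing, assuming each
    actor fires as soon as all of its input tokens have arrived."""
    incoming = {}
    for prod, cons, d in edges:
        incoming.setdefault(cons, []).append((prod, d))
    start, finish = {}, {}
    for k in range(iterations):
        pending = set(wcet)
        while pending:                 # fixed-point pass over this iteration
            progressed = False
            for a in list(pending):
                t, ready = 0, True
                for prod, d in incoming.get(a, []):
                    j = k - d
                    if j < 0:
                        continue       # satisfied by an initial token
                    if (prod, j) not in finish:
                        ready = False
                        break
                    t = max(t, finish[(prod, j)])
                if ready:
                    start[(a, k)] = t
                    finish[(a, k)] = t + wcet[a]
                    pending.remove(a)
                    progressed = True
            if not progressed:
                raise ValueError("deadlock: cycle without initial tokens")
    return start, finish

start, finish = self_timed_schedule(WCET, EDGES, iterations=8)
# Per-iteration latency: from the source firing to the sink finishing.
print([finish[("sink", k)] - start[("src", k)] for k in range(8)])
```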

    Mode-controlled dataflow based modeling & analysis of a 4G-LTE receiver

    Today's smartphones and tablets contain multiple cellular modems to support 2G/3G/4G standards, including Long Term Evolution (LTE). They run on complex multi-processor hardware platforms and have to meet hard real-time constraints. Dataflow modeling can be used to design an LTE receiver. Static dataflow allows a rich set of analysis techniques, but is too restrictive to model the dynamic behavior of many realistic applications, including LTE receivers. Dynamic dataflow allows modeling of many realistic applications, but does not support rigorous temporal analysis. Mode-Controlled Dataflow (MCDF) is a restricted form of dynamic dataflow that, in principle, allows the same analysis techniques as static dataflow. We demonstrate that MCDF is sufficiently expressive to capture the dynamic behavior of a realistic LTE receiver by systematically developing, in a stepwise manner, a complete MCDF model of such a receiver.
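    As a rough illustration of the mode-controlled idea, the sketch below lets a mode-controller actor pick one mode per graph iteration so that only the actors of that mode (plus mode-independent actors) fire; the mode names, actors, and control rule are hypothetical placeholders, not the paper's receiver model.

```python
# Minimal sketch of Mode-Controlled Dataflow (MCDF) execution: a mode
# controller selects one mode per graph iteration, and only the actors of
# that mode (plus mode-independent actors) fire. Mode names, actors, and
# the control rule are hypothetical placeholders.

from collections import defaultdict

MODE_INDEPENDENT = ["sync", "mode_controller"]
MODE_ACTORS = {
    "control": ["ctrl_demap", "ctrl_decode"],   # control-channel processing
    "data": ["data_demap", "data_decode"],      # data-channel processing
}

def choose_mode(iteration):
    """Stand-in for the data-dependent mode decision; alternates modes here."""
    return "control" if iteration % 2 == 0 else "data"

def run(iterations):
    fired = defaultdict(list)
    for k in range(iterations):
        mode = choose_mode(k)
        for actor in MODE_INDEPENDENT:          # fire every iteration
            fired[actor].append(k)
        # Mode switches route this iteration's tokens only to the actors of
        # the selected mode; the actors of the other mode stay idle.
        for actor in MODE_ACTORS[mode]:
            fired[actor].append(k)
    return dict(fired)

print(run(4))
```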

    Buffer allocation for real-time streaming on a multi-processor without back-pressure

    The goal of buffer allocation for real-time streaming applications, modeled as dataflow graphs, is to minimize total memory consumption while reserving sufficient space for each production, without overwriting any live tokens and while guaranteeing the satisfaction of real-time constraints. We present a buffer allocation solution for dataflow graphs scheduled on a system without back-pressure. Our contributions are: 1) We extend the available dataflow techniques by applying best-case analysis. 2) We introduce dominator-based relative lifetime analysis; for our benchmark set, it yields up to 12% savings in memory consumption compared to traditional absolute lifetime analysis. 3) We investigate the effect of variation in execution times on the buffer sizes for systems without back-pressure; it turns out that reducing the variation in execution times reduces the buffer sizes. 4) We compare the buffer allocation techniques for systems with and without back-pressure; for our benchmark set, we show that the system with back-pressure reduces the total memory consumption by as much as 28% compared to the system without back-pressure. Our benchmark set includes wireless communication and multimedia applications.
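    The following sketch illustrates the core sizing argument for a single edge on a platform without back-pressure: because the producer never blocks, each token must be assumed live from its best-case (earliest) production time until its worst-case (latest) consumption time, and the buffer must accommodate the peak number of simultaneously live tokens. The schedules used are illustrative numbers, not the paper's benchmarks.

```python
# Minimal sketch of sizing one buffer on a platform WITHOUT back-pressure:
# the producer never blocks, so every token must be assumed live from its
# best-case (earliest) production time until its worst-case (latest)
# consumption time. The schedules below are illustrative numbers.

def buffer_size(best_case_prod, worst_case_cons):
    """Peak number of simultaneously live tokens, i.e. overlapping
    lifetimes [best-case production, worst-case consumption]."""
    events = []
    for p, c in zip(best_case_prod, worst_case_cons):
        events.append((p, +1))    # token becomes live
        events.append((c, -1))    # its space may be reclaimed
    live = peak = 0
    # At equal time stamps, releases (-1) sort before acquisitions (+1).
    for _, delta in sorted(events):
        live += delta
        peak = max(peak, live)
    return peak

# Token k produced (earliest) every 2 time units, consumed (latest) 7 later.
prod = [2 * k for k in range(10)]
cons = [p + 7 for p in prod]
print(buffer_size(prod, cons))    # -> 4 token slots
```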

    Mode-controlled dataflow based buffer allocation for real-time streaming applications running on a multi-processor without back-pressure

    Trickle is a polite gossip algorithm for managing communication traffic. It is of particular interest in low-power wireless networks for reducing the amount of control traffic, as in routing protocols (RPL), or for reducing network congestion, as in multicast protocols (MPL). Trickle is used at the network or application level, and relies on up-to-date information on the activity of neighbors. This makes it vulnerable to interference from the media access control (MAC) layer, which we explore in this paper. We present several scenarios in which the MAC layer in low-power radios violates Trickle's timing. As a case study, we analyze the impact of CSMA/CA with ContikiMAC on Trickle's performance. Additionally, we propose a solution, called Cleansing, that resolves these issues.
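    For reference, a minimal sketch of the standard Trickle timer (as specified in RFC 6206) is given below; the parameter values and the tick-based interface are illustrative simplifications, not the implementation analyzed in the paper.

```python
# Minimal sketch of the Trickle timer (RFC 6206). Parameter values and the
# tick-based interface are illustrative simplifications.

import random

class Trickle:
    def __init__(self, i_min=1.0, max_doublings=10, k=2):
        self.i_min = i_min
        self.i_max = i_min * (2 ** max_doublings)
        self.k = k                                   # redundancy constant
        self._new_interval(i_min)

    def _new_interval(self, length):
        self.interval = length
        self.c = 0                                   # consistency counter
        self.t = random.uniform(length / 2, length)  # transmission point
        self.sent = False

    def hear_consistent(self):
        self.c += 1                                  # a neighbor said the same

    def hear_inconsistent(self):
        if self.interval > self.i_min:
            self._new_interval(self.i_min)           # reset to fast gossip

    def tick(self, elapsed):
        """Advance the timer; `elapsed` is time since the interval started."""
        if not self.sent and elapsed >= self.t:
            self.sent = True
            # Transmit only if too few consistent messages were overheard.
            return "transmit" if self.c < self.k else "suppress"
        if elapsed >= self.interval:
            self._new_interval(min(2 * self.interval, self.i_max))
        return None
```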

    Buffer allocation for real-time streaming applications running on heterogeneous multi-processors without back-pressure

    The goal of buffer allocation for real-time streaming applications is to minimize total memory consumption while reserving sufficient space for each data production, without overwriting any live data and while guaranteeing the satisfaction of real-time constraints. Previous research has mostly focused on buffer allocation for systems with back-pressure. This paper addresses the problem of buffer allocation for systems without back-pressure. Since systems without back-pressure lack blocking behavior at the side of the producer, buffer allocation requires both best- and worst-case timing analysis. Our contributions are (1) extension of the available dataflow techniques with best-case analysis; (2) closest common dominator-based and closest common predecessor-based lifetime analysis techniques; (3) techniques to model the initialization behavior and enable token reuse. Our benchmark set includes an MP3 decoder, a WLAN receiver, an LTE receiver and an LTE-Advanced receiver. We consider two key features of LTE-Advanced: (1) carrier aggregation and (2) enhanced physical downlink control channel (EPDCCH) processing. Through our experiments, we demonstrate that our techniques are effective in handling the complexities of real-world applications. For the LTE-Advanced receiver case study, our techniques enable us to compare the buffer allocation required under different scheduling policies, which has a direct impact on architectural decisions. A key insight from this comparison is that our improved techniques identify a different scheduling policy as superior in terms of buffer sizes than our previous technique did, which dramatically changes the trade-off among scheduling policies for the LTE-Advanced receiver.
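    As an illustration of the dominator-based ingredient, the sketch below computes the closest common dominator of two actors in a single-entry dataflow graph, which can serve as a common reference point for relative lifetime analysis; the graph and node names are illustrative, and the sketch does not reproduce the paper's full technique.

```python
# Minimal sketch of a "closest common dominator" computation on a
# single-entry dataflow graph; such a node executes before both actors of
# interest and can serve as a common reference point for relative lifetime
# analysis. The graph and node names are illustrative.

def dominators(nodes, edges, entry):
    """For each node, the set of nodes that dominate it (iterative scheme)."""
    preds = {n: set() for n in nodes}
    for u, v in edges:
        preds[v].add(u)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry:
                continue
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def closest_common_dominator(dom, topo_index, a, b):
    """Deepest node that dominates both a and b."""
    return max(dom[a] & dom[b], key=lambda n: topo_index[n])

NODES = ["src", "fft", "ctrl", "data", "merge"]          # topological order
EDGES = [("src", "fft"), ("fft", "ctrl"), ("fft", "data"),
         ("ctrl", "merge"), ("data", "merge")]
TOPO = {n: i for i, n in enumerate(NODES)}

dom = dominators(NODES, EDGES, "src")
print(closest_common_dominator(dom, TOPO, "ctrl", "data"))   # -> fft
```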

    Buffer allocation for dynamic real-time streaming applications running on a multi-processor without back-pressure

    Buffer allocation for real-time streaming applications, modeled as dataflow graphs, minimizes the total memory consumption while reserving sufficient space for each data production, without overwriting any live data and while guaranteeing the satisfaction of real-time constraints. We focus on the problem of buffer allocation for systems without back-pressure. Since systems without back-pressure lack blocking behavior at the side of the producer, buffer allocation requires both best- and worst-case timing analysis. Moreover, the dynamic (data-dependent) behavior of these applications makes buffer allocation challenging from a best- and worst-case timing analysis perspective. We argue that static dataflow cannot conveniently express the dynamic behavior of these applications, leading to overallocation of memory resources. Mode-Controlled Dataflow (MCDF) is a restricted form of dynamic dataflow that allows mode switching at run time and static analysis of real-time constraints. In this paper, we address the problem of buffer allocation for MCDF graphs scheduled on systems without back-pressure. We consider practically relevant applications that can be modeled in MCDF using a recurrent-choice mode sequence consisting of mode sequences of equal length, which keeps the analysis tractable. Our contribution is a buffer allocation algorithm that achieves up to 36% reduction in total memory consumption compared to the current state of the art for LTE and LTE-Advanced receiver use cases.
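    The sketch below illustrates one way to exploit a recurrent choice of equal-length mode sequences: each candidate sequence is analyzed separately and, per buffer, the worst requirement over all candidate sequences is reserved. The modes, timings, and candidate sequences are hypothetical placeholders rather than the paper's algorithm.

```python
# Minimal sketch of buffer sizing under a recurrent choice of equal-length
# mode sequences: every candidate sequence is analyzed separately and, per
# buffer, the worst requirement over all candidates is reserved. Modes,
# timings, and the candidate sequences are hypothetical placeholders.

def live_token_peak(lifetimes):
    """Peak number of overlapping [production, consumption] intervals."""
    events = sorted((t, d) for p, c in lifetimes for t, d in ((p, 1), (c, -1)))
    live = peak = 0
    for _, delta in events:          # at ties, -1 (release) sorts first
        live += delta
        peak = max(peak, live)
    return peak

# Token lifetimes on one buffer per mode, relative to the iteration start.
LIFETIMES_PER_MODE = {
    "control_only": [(0, 3)],
    "control_data": [(0, 3), (1, 6)],
}
CANDIDATE_SEQUENCES = [              # recurrent choice: equal-length sequences
    ["control_only", "control_data"],
    ["control_data", "control_data"],
]
ITERATION_PERIOD = 4

def buffer_for_sequence(seq):
    lifetimes = []
    for i, mode in enumerate(seq):
        offset = i * ITERATION_PERIOD
        lifetimes += [(p + offset, c + offset) for p, c in LIFETIMES_PER_MODE[mode]]
    return live_token_peak(lifetimes)

print(max(buffer_for_sequence(s) for s in CANDIDATE_SEQUENCES))   # -> 3 slots
```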