
    Applying Dataflow Analysis to Dimension Buffers for Guaranteed Performance in Networks on Chip

    A Network on Chip (NoC) with end-to-end flow control is modelled by a cyclo-static dataflow graph. Using the proposed model together with state-of-the-art dataflow analysis algorithms, we size the buffers in the network interfaces. We show, for a range of NoC designs, that buffer sizes are determined with a run time comparable to that of existing analytical methods and with results comparable to exhaustive simulation.
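
    As a minimal, illustrative sketch (not the paper's implementation), the modelling step described above can be pictured as follows: a FIFO of capacity d between two network interfaces becomes a forward data edge plus a backward "space" edge carrying d initial tokens, so that buffer sizing reduces to choosing enough initial tokens to meet a throughput target. All names below are invented for the example; the same (src, dst, tokens) edge representation is reused in the sketches that follow.

```python
# Illustrative sketch: a bounded FIFO channel modelled as two dataflow edges.
# An edge is a tuple (src, dst, initial_tokens); all names are hypothetical.

def add_channel(edges, producer, consumer, capacity):
    """Model a FIFO of the given capacity between two actors."""
    edges.append((producer, consumer, 0))         # data tokens, buffer starts empty
    edges.append((consumer, producer, capacity))  # free slots, buffer starts full of space
    return edges

# One end-to-end flow-controlled connection between two network interfaces.
exec_time = {"sending_ni": 2, "receiving_ni": 3}  # firing durations in time units
edges = add_channel([], "sending_ni", "receiving_ni", capacity=4)
```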

    Architecture Design Space Exploration for Streaming Applications Through Timing Analysis

    In this paper we compare the maximum achievable throughput of different memory organisations of the processing elements that constitute a multiprocessor system on chip. This is done by modelling the mapping of a task with input and output channels onto a processing element as a homogeneous synchronous dataflow graph and by using maximum cycle mean analysis to derive the throughput. In a HiperLAN2 case study we show how these techniques can be used to derive the clock frequency and communication latencies required to meet the application's throughput requirement on a multiprocessor system on chip with one of the investigated memory organisations.
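
    The core of the analysis mentioned above is maximum cycle mean (MCM) computation: on a homogeneous synchronous dataflow graph, the attainable throughput is bounded by 1/MCM, where each cycle's mean is the sum of the execution times of its actors divided by the number of initial tokens on its edges. The sketch below is a brute-force version for small graphs, using the edge representation introduced above; it is an assumption-laden illustration, not the tooling used in the paper.

```python
# Brute-force maximum cycle mean (MCM) for a homogeneous SDF graph.
# exec_time: actor -> firing duration; edges: list of (src, dst, initial_tokens).
# Cycles without initial tokens indicate deadlock and are ignored here.

def max_cycle_mean(exec_time, edges):
    best = 0.0

    def extend(path, start, current):
        nonlocal best
        for edge in edges:
            src, dst, _ = edge
            if src != current or edge in path:
                continue
            cycle = path + [edge]
            if dst == start:
                time = sum(exec_time[s] for (s, _, _) in cycle)
                tokens = sum(t for (_, _, t) in cycle)
                if tokens > 0:
                    best = max(best, time / tokens)
            elif all(d != dst for (_, d, _) in path):
                extend(cycle, start, dst)     # dst not visited yet: keep walking

    for actor in exec_time:                   # each cycle may be found more than
        extend([], actor, actor)              # once; harmless for a sketch
    return best

# Example: a task pair with a FIFO of capacity 2 modelled as a back-edge.
exec_time = {"src_task": 2, "dsp_task": 3}
edges = [("src_task", "dsp_task", 0), ("dsp_task", "src_task", 2)]
print(1.0 / max_cycle_mean(exec_time, edges))   # throughput bound: 1 / 2.5 = 0.4
```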

    A Framework for Agile Development of Component-Based Applications

    Agile development processes and component-based software architectures are two software engineering approaches that help enable the rapid building and evolution of applications. Nevertheless, few approaches have proposed a framework that combines agile and component-based development and allows an application to be tested throughout the entire development cycle. To address this problem, we have built CALICO, a model-based framework that allows applications to be developed safely in an iterative and incremental manner. The CALICO approach relies on the synchronization of a model view, which specifies the application properties, and a runtime view, which contains the application in its execution context. Tests on the application specifications that require values known only at runtime are automatically integrated by CALICO into the running application, and the needed values, captured at execution time, are reified to resume the tests and inform the architect of potential problems. Any modification at the model level that does not introduce new errors is automatically propagated to the running system, allowing the safe evolution of the application. In this paper, we illustrate the CALICO development process with a concrete example and provide information on the current implementation of our framework.
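
    The interplay between the two views can be illustrated with a small, entirely hypothetical sketch (not the CALICO API): a model change is accepted only if static checks report no new errors, in which case it is propagated to the running system, while checks that need runtime values are deferred and resumed once those values have been captured.

```python
# Hypothetical sketch of a model view / runtime view synchronization loop.
# None of these names come from CALICO; they only illustrate the idea.

class ModelRuntimeSync:
    def __init__(self, static_checks, runtime_checks, runtime):
        self.static_checks = static_checks    # functions: model -> list of errors
        self.runtime_checks = runtime_checks  # functions: (model, values) -> list of errors
        self.runtime = runtime                # object exposing apply(change)

    def propose_change(self, model, change):
        candidate = change(model)
        errors = [e for check in self.static_checks for e in check(candidate)]
        if errors:
            return model, errors              # reject: keep the current model, report problems
        self.runtime.apply(change)            # propagate the safe change to the running system
        return candidate, []

    def resume_runtime_tests(self, model, captured_values):
        # Re-run the deferred checks once the needed runtime values are available.
        return [e for check in self.runtime_checks for e in check(model, captured_values)]
```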

    Blazes: Coordination Analysis for Distributed Programs

    Distributed consistency is perhaps the most discussed topic in distributed systems today. Coordination protocols can ensure consistency, but in practice they carry an undesirable performance cost unless used judiciously. Scalable distributed architectures avoid coordination whenever possible, but under-coordinated systems can exhibit behavioral anomalies under fault, which are often extremely difficult to debug. This raises significant challenges for distributed system architects and developers. In this paper we present Blazes, a cross-platform program analysis framework that (a) identifies program locations that require coordination to ensure consistent executions, and (b) automatically synthesizes application-specific coordination code that can significantly outperform general-purpose techniques. We present two case studies, one using annotated programs in the Twitter Storm system, and another using the Bloom declarative language.
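
    As a rough, hand-written illustration of the general idea (not Blazes' actual derivation rules), coordination analysis of this kind can be thought of as label propagation over a dataflow topology: components are annotated as confluent (their output does not depend on input arrival order) or not, and a location where a non-confluent component consumes an unordered stream is flagged as needing coordination. The component names and annotations below are invented for the example.

```python
# Invented example of annotation-driven coordination analysis (not the real
# Blazes rules): flag every stream on which an order-sensitive component
# receives input over an unordered channel.

def coordination_points(components, streams):
    """components: name -> {"confluent": bool}
    streams: list of (producer, consumer, ordered_delivery)."""
    return [
        (producer, consumer)
        for producer, consumer, ordered in streams
        if not ordered and not components[consumer]["confluent"]
    ]

# A word-count style streaming topology.
components = {
    "spout":    {"confluent": True},
    "splitter": {"confluent": True},    # the set of emitted words is order-insensitive
    "counter":  {"confluent": False},   # emits running counts, so order matters
}
streams = [("spout", "splitter", False), ("splitter", "counter", False)]
print(coordination_points(components, streams))   # [('splitter', 'counter')]
```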

    Buffer Capacity Computation for Throughput Constrained Streaming Applications with Data-Dependent Inter-Task Communication

    Streaming applications are often implemented as task graphs in which data is communicated from task to task over buffers. Techniques currently exist to compute buffer capacities that guarantee satisfaction of a throughput constraint if the amounts of data produced and consumed by the tasks are known at design time. However, applications such as audio and video decoders have tasks that produce and consume an amount of data that depends on the decoded stream. This paper introduces a dataflow model that allows for data-dependent communication, together with an algorithm that computes buffer capacities guaranteeing satisfaction of a throughput constraint. The applicability of this algorithm is demonstrated by computing buffer capacities for an H.263 video decoder.
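
    For the design-time setting described in the first sentences (fixed production and consumption rates, not the data-dependent case this paper addresses), the basic computation can be sketched by combining the two earlier snippets: grow the capacity of the back-edge until the MCM-derived throughput meets the constraint. The function below reuses max_cycle_mean from the sketch above; the search bound and names are illustrative.

```python
# Illustrative buffer-capacity search for the static-rate case, reusing
# max_cycle_mean from the earlier sketch. Not the algorithm of the paper,
# which additionally handles data-dependent production and consumption.

def min_buffer_capacity(exec_time, fixed_edges, producer, consumer,
                        required_throughput, max_capacity=1024):
    for capacity in range(1, max_capacity + 1):
        edges = fixed_edges + [
            (producer, consumer, 0),          # data edge of the FIFO
            (consumer, producer, capacity),   # space (back) edge of the FIFO
        ]
        mcm = max_cycle_mean(exec_time, edges)
        if mcm > 0 and 1.0 / mcm >= required_throughput:
            return capacity                   # smallest capacity meeting the constraint
    return None                               # constraint not met within the search bound

# Example: a decoder task feeding a renderer at >= 0.4 firings per time unit.
exec_time = {"decode": 2, "render": 3}
print(min_buffer_capacity(exec_time, [], "decode", "render", 0.4))   # 2
```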