
    Buffer-aware Worst Case Timing Analysis of Wormhole Network On Chip

    A buffer-aware worst-case timing analysis of wormhole NoCs is proposed in this paper to capture the impact of buffer size on the dependency relationships between flows, i.e. direct and indirect blocking flows, and consequently on timing performance. First, more accurate definitions of the direct and indirect blocking flow sets are introduced to account for the buffer size. Then, the modeling and worst-case timing analysis of the wormhole NoC are detailed, based on the Network Calculus formalism and the newly defined blocking flow sets. The approach is illustrated on a realistic NoC case study to show the trade-off between latency and buffer size. A comparative analysis of the proposed buffer-aware timing analysis against conventional approaches is conducted, and noticeable improvements in maximum latency are demonstrated.
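
    As a point of reference for the Network Calculus formalism used here, the following is a minimal sketch of the textbook delay bound for a single flow with a token-bucket arrival curve served by a rate-latency node. It illustrates the formalism only, not the paper's buffer-aware analysis, and the parameter names (b, r, R, T) are assumptions for the example.

```python
# Minimal Network Calculus sketch (illustrative, not the paper's buffer-aware analysis):
# a flow with token-bucket arrival curve alpha(t) = b + r*t crossing a rate-latency
# server beta(t) = R * max(t - T, 0). Parameter names and values are assumptions.

def delay_bound(b: float, r: float, R: float, T: float) -> float:
    """Classic worst-case delay T + b/R, valid under the stability condition r <= R."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R

# Example: 32-flit burst, 0.1 flit/cycle arrival rate, 1 flit/cycle link, 3-cycle router latency.
print(delay_bound(b=32, r=0.1, R=1.0, T=3.0))  # 35.0 cycles
```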

    Power-performance assessment of different DVFS control policies in NoCs

    We analyze the power-delay trade-off in a Network-on-Chip (NoC) under three Dynamic Voltage and Frequency Scaling (DVFS) policies. The first, rate-based policy sets the frequency and voltage of the NoC to the minimum values that sustain the injection rate without reaching saturation. The second, queue-based policy uses a feedback-loop approach to throttle the NoC frequency and voltage so that the average backlog of the injection queues tracks a target value. The third, delay-based policy uses a closed-loop strategy that targets a given NoC end-to-end average delay. We first show that, despite their different mechanisms and implementations, the rate-based and queue-based policies obtain very similar results in terms of power and delay, and we propose a theoretical interpretation of this similarity. We then show that the delay-based policy generally offers a better power-delay trade-off. We obtained our results with an extensive set of experiments on synthetic traffic, as well as multimedia, communications and PARSEC benchmarks. For all experiments, we report both cycle-accurate simulation results for the analysis of NoC delay and accurate power figures obtained targeting a standard-cell library in an advanced 28-nm FDSOI CMOS technology.
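
    For illustration, the sketch below shows one way a queue-based feedback step could be written: a simple proportional controller that nudges a normalized NoC frequency so the average injection-queue backlog tracks a target. It is an assumed toy controller, not the policy implemented in the paper; the gain and frequency limits are illustrative.

```python
# Toy queue-based DVFS step (an assumption for illustration, not the paper's controller):
# a proportional controller adjusts a normalized frequency so the measured average
# injection-queue backlog tracks a target value; voltage would be scaled along with it.

def next_frequency(f_current: float, backlog_avg: float, backlog_target: float,
                   k_p: float = 0.05, f_min: float = 0.2, f_max: float = 1.0) -> float:
    """Return the next normalized frequency, clamped to the supported DVFS range."""
    error = backlog_avg - backlog_target        # positive -> queues growing, so speed up
    f_next = f_current + k_p * error
    return min(f_max, max(f_min, f_next))

# Example control step: backlog above target, so the frequency is raised.
print(next_frequency(f_current=0.5, backlog_avg=8.0, backlog_target=4.0))  # 0.7
```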

    Computing Accurate Performance Bounds for Best Effort Networks-on-Chip

    Real-time (RT) communication support is a critical requirement for many complex embedded applications that are currently targeted to Network-on-Chip (NoC) platforms. In this paper, we present novel methods to efficiently calculate worst-case bandwidth and latency bounds for RT traffic streams on wormhole-switched NoCs with arbitrary topology. The proposed methods apply to best-effort NoC architectures with no extra hardware dedicated to RT traffic support. By applying our methods to several realistic NoC designs, we show substantial improvements (more than 30% in bandwidth and 50% in latency, on average) in bound tightness with respect to existing approaches.
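
    For intuition only, the sketch below computes a deliberately naive contention-based latency bound for one wormhole-switched flow: the no-load latency of the route plus one maximum-length packet of blocking per flow that shares a link. It is a simplified illustration under assumed parameters, not the tighter method proposed in the paper.

```python
# Deliberately simplified worst-case latency bound for a wormhole-switched flow
# (illustrative only, not the paper's method). Assumed parameters: hops on the route,
# packet length in flits, per-hop header delay, and the packet lengths of flows that
# share at least one link with the flow under analysis.

def naive_latency_bound(route_hops: int, packet_flits: int, hop_delay: int,
                        interfering_packet_flits: list[int]) -> int:
    """No-load latency plus one worst-case blocking term per interfering flow."""
    no_load = route_hops * hop_delay + packet_flits   # header pipeline + serialization
    blocking = sum(interfering_packet_flits)          # one maximum packet per contender
    return no_load + blocking

print(naive_latency_bound(route_hops=4, packet_flits=16, hop_delay=2,
                          interfering_packet_flits=[16, 8]))  # 48 cycles
```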

    A Verilog-HDL implementation of virtual channels in a network-on-chip router

    As feature sizes continue to decrease and integration density increases, interconnections have become a dominant factor in determining the overall quality of a chip. Due to its limited scalability, the system bus cannot meet the requirements of current System-on-Chip (SoC) implementations, since it can support only a limited number of functional units. Long global wires also cause many design problems, such as routing congestion, noise coupling, and difficult timing closure. Network-on-Chip (NoC) architectures have been proposed as an alternative that addresses these problems by using a packet-based communication network. The processing elements (PEs) communicate with each other by exchanging messages over the network, and these messages pass through buffers in each router. Buffers are one of the major resources used by routers in virtual-channel flow control. In this thesis, we analyze two kinds of buffer allocation approaches, static and dynamic buffer allocation. Both aim to increase throughput and minimize latency by means of virtual-channel flow control. In a statically allocated buffer architecture, buffer size and organization are design-time decisions and therefore do not perform optimally under all traffic conditions. In addition, statically allocated virtual channels waste area and consume significant leakage power. Dynamic buffer allocation schemes, in contrast, claim that buffer utilization can be increased using dynamic virtual channels. The dynamic virtual channel regulator (ViChaR) has been proposed; it uses a centralized buffer architecture that dynamically allocates virtual channels and buffer slots at run time depending on traffic conditions. ViChaR's dynamic buffer management scheme increases buffer utilization, but it also increases design complexity. In this research, we re-examine the performance, power consumption, and area of ViChaR's buffer architecture through implementation. We implement a generic router and a ViChaR architecture in Verilog-HDL. The RTL code is verified by dynamic simulation and synthesized with Design Compiler to obtain area and power consumption; latency is obtained through static timing analysis. The results show that ViChaR's dynamic buffer management scheme increases latency and power consumption significantly even though it can increase buffer utilization. A novel design is therefore needed to achieve high buffer utilization without these penalties.
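
    To make the contrast concrete, the following toy model (an assumption for illustration, not ViChaR's RTL or the thesis code) compares static and dynamic buffer-slot allocation at one router input port: the static design blocks as soon as a virtual channel exhausts its private slots, while the dynamic design blocks only when the shared pool is empty.

```python
# Toy comparison of static vs. dynamic virtual-channel buffer allocation at one
# router input port (illustrative assumption only, not ViChaR's implementation).

class StaticVCBuffer:
    """Each virtual channel owns a fixed share of slots; unused slots cannot be borrowed."""
    def __init__(self, num_vcs: int, slots_per_vc: int):
        self.free = [slots_per_vc] * num_vcs

    def alloc(self, vc: int) -> bool:
        if self.free[vc] == 0:
            return False            # this VC is full even if other VCs sit empty
        self.free[vc] -= 1
        return True

class DynamicVCBuffer:
    """All slots sit in one shared pool; any virtual channel may claim a free slot."""
    def __init__(self, total_slots: int):
        self.free = total_slots

    def alloc(self, vc: int) -> bool:
        if self.free == 0:
            return False            # blocked only when the whole pool is exhausted
        self.free -= 1
        return True

static = StaticVCBuffer(num_vcs=4, slots_per_vc=2)
dynamic = DynamicVCBuffer(total_slots=8)
print(all(static.alloc(0) for _ in range(3)))   # False: VC0's two private slots run out
print(all(dynamic.alloc(0) for _ in range(3)))  # True: the shared pool still has slots
```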

    Exploring resource/performance trade-offs for streaming applications on embedded multiprocessors

    Embedded system design is challenged by the gap between ever-increasing customer demands and limited resource budgets. Tough competition demands ever-shorter time-to-market and product lifecycles. To solve, or at least alleviate, these issues, designers and manufacturers need model-based quantitative analysis techniques for early design-space exploration to study the trade-offs between different implementation candidates. Moreover, modern embedded applications, especially the streaming applications addressed in this thesis, face increasingly dynamic input content, and the platforms they run on are more flexible and allow runtime configuration. Quantitative analysis techniques for embedded system design must be able to handle such dynamic, adaptable systems. This thesis makes the following contributions:
    - A resource-aware extension to the Synchronous Dataflow (SDF) model of computation.
    - Trade-off analysis techniques, both in the time domain and in the iteration domain (i.e., on an SDF iteration basis), with support for resource sharing.
    - Bottleneck-driven design-space exploration techniques for resource-aware SDF.
    - A game-theoretic approach to controller synthesis, guaranteeing performance under dynamic input.
    As a first contribution, we propose a new model, an extension of static synchronous dataflow (SDF) graphs, that allows the explicit modeling of resources with consistency checking. The model is called resource-aware SDF (RASDF). The extension enables us to investigate resource sharing and to explore different scheduling options (ways to allocate resources to the different tasks) using state-space exploration techniques. Consistent SDF and RASDF graphs have the property that execution occurs in so-called iterations. An iteration typically corresponds to the processing of a meaningful piece of data, and it returns the graph to its initial state. On multiprocessor platforms, iterations may be executed in a pipelined fashion, which makes performance analysis challenging.
    As the second contribution, this thesis develops trade-off analysis techniques for RASDF, both in the time domain and in the iteration domain (i.e., on an SDF iteration basis), to dimension resources on platforms. The time-domain analysis allows interleaving of different iterations, but the size of the explored state space grows quickly. The iteration-based technique trades the potential interleaving of iterations for a compact iteration state space. An efficient bottleneck-driven design-space exploration technique for streaming applications, the third main contribution of this thesis, is derived from analysis of the critical cycle of the state space to reveal the bottleneck resources that limit throughput. All techniques are based on state-based exploration. They enable system designers to tailor their platform to the required applications, based on their own specific performance requirements. Pruning techniques for efficient exploration of the state space have been developed: Pareto dominance in terms of performance and resource usage is used for exact pruning, and approximation techniques are used for heuristic pruning.
    Finally, the thesis investigates dynamic scheduling techniques that respond to dynamic changes in input streams. The fourth contribution of this thesis is a game-theoretic approach to controller synthesis that selects appropriate schedules in response to dynamic inputs from the environment. The approach transforms the explored iteration state space of a scenario- and resource-aware SDF (SARA SDF) graph into a bipartite game graph and maps the controller synthesis problem to the problem of finding a winning positional strategy in a classical mean payoff game. A winning strategy of the game can be used to synthesize a controller of schedules for the system that is guaranteed to satisfy the throughput requirement given by the designer.
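
    As background for the notion of an iteration, the following is a minimal sketch (illustrative, not the thesis's RASDF tooling) of the SDF balance equations: the repetition vector gives how often each actor fires in one iteration, i.e. the execution that returns the graph to its initial state.

```python
# Minimal SDF repetition-vector computation from the balance equations
# (illustrative sketch for a connected, consistent graph; not the thesis's tooling).
from fractions import Fraction
from math import lcm

def repetition_vector(actors, edges):
    """edges: list of (src, dst, prod_rate, cons_rate). Returns {actor: firings per iteration}."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:                                   # propagate the balance equations
        changed = False
        for src, dst, p, c in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * p / c
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * c / p
                changed = True
    for src, dst, p, c in edges:                     # consistency check: q[src]*p == q[dst]*c
        assert q[src] * p == q[dst] * c, "inconsistent SDF graph"
    scale = lcm(*(f.denominator for f in q.values()))
    return {a: int(f * scale) for a, f in q.items()}

# Example: actor A produces 2 tokens per firing on an edge from which B consumes 3 tokens,
# so one iteration fires A three times and B twice (3*2 == 2*3 tokens balance out).
print(repetition_vector(["A", "B"], [("A", "B", 2, 3)]))  # {'A': 3, 'B': 2}
```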

    Evaluation of cloud computing modelling tools: simulators and predictive models

    Experimenting with novel algorithms and configurations for the automatic management of Cloud Computing infrastructures is expensive and time-consuming on real systems. Cloud Computing brings the benefits of virtualisation to data centres, so customers no longer need their own physical servers. However, it is still complex for researchers to test and run their experiments on a data centre due to the cost of repeating the experiments. To address this, researchers use various alternatives, including simulators, emulators, mathematical models, statistical models and benchmarking, to avoid the difficulty of conducting Cloud Computing research on actual large data-centre infrastructure. However, it is still difficult to choose the best tool for evaluating the proposed research. This research investigates the level of accuracy of existing, well-known simulators in the field of Cloud Computing. Simulation tools are generally developed for particular experiments, so there is little assurance that using them with different workloads will be reliable. Moreover, since the simulators lack sufficient accuracy, a predictive model based on a data set from a realistic data centre is developed as an alternative. This work therefore addresses the problem of investigating the accuracy of different modelling tools by developing and validating a procedure based on the performance of a target micro data centre. Key insights and contributions are:
    - Three alternative models of a real Cloud Computing infrastructure, showing the level of accuracy of the selected simulation tools.
    - A predictive model developed and validated against a Raspberry Pi small-scale data centre.
    - Evidence that predictive models based on Linear Regression and Artificial Neural Networks, trained on a data set drawn from a Raspberry Pi Cloud infrastructure, provide better accuracy.
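
    As a small illustration of the kind of predictive model described, the sketch below fits a linear regression with scikit-learn on hypothetical measurements from a small data centre, predicting a response-time metric from a request rate. The feature, target and values are assumptions for the example, not the thesis's data set or pipeline.

```python
# Illustrative predictive model (assumed feature/target, not the thesis's data):
# linear regression mapping an offered request rate to a measured response time.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: request rate (req/s) vs. measured response time (ms).
X_train = np.array([[10], [20], [40], [80], [160]])
y_train = np.array([12.0, 13.5, 17.0, 26.0, 45.0])

model = LinearRegression().fit(X_train, y_train)
print(model.predict(np.array([[100]])))   # predicted response time at 100 req/s
```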