
    Fuse-N: Framework for unified simulation environment for network-on-chip

    Steady advances in semiconductor technology over the past few decades have marked the emergence of Multi-Processor Systems-on-Chip (MPSoCs). Because traditional bus-based communication does not scale well with improving microchip technologies, researchers have proposed the Network-on-Chip (NoC) as the on-chip communication model. Current uni-processor-centric modeling methodologies do not address the new design challenges introduced by MPSoCs, calling for efficient simulation frameworks capable of capturing the interplay between the application, the architecture, and the network. Addressing these challenges requires a framework that assists the designer at different abstraction levels of system design.

    This thesis develops a framework for a unified simulation environment for NoCs (fuse-N) that simplifies design space exploration for NoCs by offering comprehensive simulation support. The framework synthesizes the network infrastructure and the communication model and optimizes application mapping for design constraints. The proposed framework is a hardware-software co-design implementation using SystemC 2.1 and C++. Simulation results show the architectural, network, and resource allocation behavior and highlight the quantitative relationships between various design choices.

    In addition, a novel off-line, non-preemptive, static Traffic Aware Scheduling (TAS) policy is proposed for hard real-time NoC platforms. The proposed policy maps the application onto the NoC architecture while keeping track of the network traffic generated with every resource and communication-path allocation. TAS has been evaluated on design metrics such as application completion time, resource utilization, and task throughput; simulation results show significant improvements over traditional approaches.
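    The abstract summarizes TAS only at a high level, so the C++ sketch below illustrates the general idea of traffic-aware mapping: each task is greedily placed on the free tile of a small 2D mesh that adds the least estimated link traffic (data volume times hop count under XY routing). All names and structures here (Task, Placement, mapTasks, the hop metric, the 2x2 mesh) are assumptions made for illustration; this is not the TAS policy or the fuse-N code, and it covers only the mapping step, not the non-preemptive schedule construction.

#include <cstdio>
#include <cstdlib>
#include <vector>

// Hypothetical task/tile types for a small 2D-mesh NoC; the real fuse-N/TAS
// data structures are not described in the abstract.
struct Task { int src_task; int volume; };   // volume = bytes received from src_task (-1 if none)
struct Placement { int x, y; };

// Manhattan hop count between two tiles under XY routing (a common NoC assumption).
static int hops(Placement a, Placement b) {
    return std::abs(a.x - b.x) + std::abs(a.y - b.y);
}

// Greedy traffic-aware mapping: place each task on the free tile that adds the least
// traffic (volume x hops) for the edge from its producer. Assumes tasks are listed so
// that each producer precedes its consumers, and that there are at most meshX*meshY tasks.
std::vector<Placement> mapTasks(const std::vector<Task>& tasks, int meshX, int meshY) {
    std::vector<Placement> placement(tasks.size());
    std::vector<std::vector<bool>> used(meshX, std::vector<bool>(meshY, false));
    for (size_t i = 0; i < tasks.size(); ++i) {
        int bestCost = -1;
        Placement best{0, 0};
        for (int x = 0; x < meshX; ++x) {
            for (int y = 0; y < meshY; ++y) {
                if (used[x][y]) continue;
                Placement cand{x, y};
                // Cost = traffic injected onto the path from the producer task's tile.
                int cost = 0;
                if (tasks[i].src_task >= 0)
                    cost = tasks[i].volume * hops(placement[tasks[i].src_task], cand);
                if (bestCost < 0 || cost < bestCost) { bestCost = cost; best = cand; }
            }
        }
        placement[i] = best;
        used[best.x][best.y] = true;
    }
    return placement;
}

int main() {
    // Toy 3-task chain: t0 -> t1 (8 KB), t1 -> t2 (2 KB), mapped onto a 2x2 mesh.
    std::vector<Task> tasks = { {-1, 0}, {0, 8192}, {1, 2048} };
    auto p = mapTasks(tasks, 2, 2);
    for (size_t i = 0; i < p.size(); ++i)
        std::printf("task %zu -> tile (%d,%d)\n", i, p[i].x, p[i].y);
}

    In the thesis, such traffic estimates would also feed back into the scheduling decisions; the sketch simply reports the chosen tiles.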

    New data structures, models, and algorithms for real-time resource management

    Real-time resource management is the core and critical task in real-time systems. This dissertation explores new data structures, models, and algorithms for real-time resource management.

    First, novel data structures, a class of Testing Interval Trees (TITs), are proposed to help build efficient scheduling modules in real-time systems. With a general data structure, the TIT* tree, the average cost of the schedulability tests in a wide variety of real-time systems can be reduced. With the Testing Interval Tree for Vacancy analysis (TIT-V), the complexity of the schedulability tests in a class of parallel/distributed real-time systems can be reduced from O(m²n log n) to O(m log n + m log m), where m is the number of processors and n is the number of tasks. Similarly, with the Testing Interval Tree for Release time and Laxity analysis (TIT-RL), the complexity of online admission control in a uni-processor real-time system can be reduced from O(n²) to O(n log n), where n is the number of tasks. The TIT-RL tree can also be applied to a class of parallel/distributed real-time systems. The TIT trees are therefore an effective basis for efficient real-time scheduling modules.

    Second, a new utility accrual model, UAM+, is established for resource management in distributed real-time systems. UAM+ is constructed on the timeliness of both computation and communication; most importantly, the interplay between computation and communication is captured and characterized in the model. Under UAM+, resource managers are guided towards maximizing system-wide utility by exploiting this interplay, in sharp contrast to traditional approaches that try to meet the timing constraints on computation and communication separately. To validate the effectiveness of UAM+, a resource allocation algorithm called IAUASA is developed. Simulation results reveal that IAUASA is far superior to two other resource allocation algorithms developed according to the traditional utility accrual model and traditional ideas. Furthermore, an online algorithm called IDRSA is developed under UAM+, and a Dynamic Deadline Adjustment (DDA) technique is incorporated into IDRSA to exploit the interplay between computation and communication. Simulation results show that the performance of IDRSA is very promising, especially when the interplay between computation and communication is tight. The new utility accrual model thus provides a more effective approach to resource allocation in distributed real-time systems.

    Third, a general task model, which adapts the concept of calculus curves from the network calculus domain, is established for embedded real-time systems with random event/task arrivals. Under this model, a prediction technique based on a history window and calculus curves is established, providing the foundation for dynamic voltage-frequency scaling in such systems. Based on this prediction technique, novel energy-efficient algorithms are developed that dynamically adjust the operating voltage and frequency according to the predicted workload. These algorithms aim to reduce energy consumption while meeting hard deadlines, and they can accommodate and adapt well to the variation between the predicted and actual arrivals of tasks, as well as between the predicted and actual execution times. Simulation results validate the effectiveness of these algorithms in saving energy.
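    The abstract gives only the outline of the prediction-driven voltage-frequency scaling, so the following C++ sketch illustrates the general pattern of a history-window predictor driving frequency selection: observe recent per-task cycle demands, predict the next demand conservatively, and pick the lowest operating point that still meets the deadline. The class and method names (DvfsGovernor, observe, chooseFrequency), the max-over-window predictor, and the operating points are assumptions for illustration only; the dissertation's actual technique is built on arrival/service calculus curves and explicitly handles mismatches between predicted and actual arrivals and execution times, which this sketch does not model.

#include <algorithm>
#include <cstdio>
#include <deque>
#include <vector>

// Hypothetical sketch of history-window workload prediction driving DVFS.
// Here the "prediction" is simply the worst demand seen in a sliding window,
// standing in for the calculus-curve-based prediction described in the abstract.
class DvfsGovernor {
public:
    DvfsGovernor(std::vector<double> freqs, size_t window)
        : freqs_(std::move(freqs)), window_(window) {
        std::sort(freqs_.begin(), freqs_.end());
    }

    // Record the cycle demand of the most recent task instance.
    void observe(double cycles) {
        history_.push_back(cycles);
        if (history_.size() > window_) history_.pop_front();
    }

    // Predict the next demand as the maximum over the history window (conservative),
    // then pick the lowest frequency that still finishes within the relative deadline.
    double chooseFrequency(double deadlineSec) const {
        double predicted = 0.0;
        for (double c : history_) predicted = std::max(predicted, c);
        for (double f : freqs_)
            if (predicted / f <= deadlineSec) return f;   // slowest feasible speed
        return freqs_.back();                             // fall back to the maximum frequency
    }

private:
    std::vector<double> freqs_;   // available operating points, in cycles per second (Hz)
    size_t window_;               // history window length, in task instances
    std::deque<double> history_;  // recent per-task cycle demands
};

int main() {
    DvfsGovernor gov({2e8, 4e8, 8e8}, 4);   // 200/400/800 MHz, window of 4 tasks
    for (double cycles : {1.0e6, 1.5e6, 1.2e6, 3.0e6})
        gov.observe(cycles);
    std::printf("chosen frequency: %.0f Hz\n", gov.chooseFrequency(0.01));  // 10 ms deadline
}

    With the toy history above, the predicted demand is 3.0e6 cycles, so the governor selects 400 MHz, the lowest operating point that still meets the 10 ms deadline.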