    The effect of real workloads and stochastic workloads on the performance of allocation and scheduling algorithms in 2D mesh multicomputers

    The performance evaluation of existing non-contiguous processor allocation strategies has traditionally been carried out by means of simulation, using a stochastic workload model to generate a stream of incoming jobs. To validate these results, the algorithms also need to be evaluated against a real workload trace. In this paper, we evaluate the performance of several well-known processor allocation and job scheduling strategies using a real workload trace and compare the results against those obtained with a stochastic workload. Our results reveal that the conclusions reached about the relative performance merits of the allocation strategies under a real workload trace are, in general, compatible with those obtained under a stochastic workload.
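
    The comparison above hinges on how the two kinds of workload are fed to the simulator. As a hedged illustration of that setup (the paper does not publish its simulator; the names, fields, and distributions below are our assumptions), a stochastic stream can be drawn from interarrival and size distributions, while a real trace is simply replayed:

```python
import random

# Illustrative sketch, not the paper's actual simulator: generate a
# synthetic job stream for a 2D mesh, or replay one from a trace file.

def stochastic_workload(n_jobs, mesh_side=16, mean_interarrival=5.0, seed=42):
    """Yield (arrival_time, width, height, service_time) tuples."""
    rng = random.Random(seed)
    t = 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(1.0 / mean_interarrival)  # Poisson arrivals
        w = rng.randint(1, mesh_side)                  # requested sub-mesh width
        h = rng.randint(1, mesh_side)                  # requested sub-mesh height
        service = rng.expovariate(1.0 / 30.0)          # job run time
        yield (t, w, h, service)

def trace_workload(path):
    """Replay a real trace: one 'arrival width height service' per line (assumed format)."""
    with open(path) as f:
        for line in f:
            arrival, w, h, service = line.split()
            yield (float(arrival), int(w), int(h), float(service))
```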

    Design issues for the Generic Stream Encapsulation (GSE) of IP datagrams over DVB-S2

    The DVB-S2 standard has brought an unprecedented degree of novelty and flexibility to the way IP datagrams and other network-level packets can be transmitted over DVB satellite links, through the introduction of an IP-friendly link layer - the continuous Generic Streams - and the adaptive combination of advanced error coding, modulation, and spectrum management techniques. Recently approved by the DVB, the Generic Stream Encapsulation (GSE) used for carrying IP datagrams over DVB-S2 implements solutions stemming from a design rationale quite different from the one behind the IP encapsulation schemes of its predecessor, DVB-S. This paper highlights GSE's original design choices from the perspective of DVB-S2's innovative features and possibilities.
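
    To make the encapsulation concrete, the sketch below packs an unfragmented GSE packet around one IP datagram. It follows our reading of the GSE base header (Start/End flags, a 2-bit Label Type, and a 12-bit GSE Length); the normative bit layout is defined in ETSI TS 102 606, so treat this as illustrative rather than authoritative:

```python
import struct

# Hedged sketch: pack an unfragmented GSE packet (Start=1, End=1,
# 6-byte label) carrying one IPv4 datagram. Consult the GSE spec for
# the normative format; this only illustrates the general layout.

def gse_pack_unfragmented(pdu: bytes, label: bytes, protocol_type: int = 0x0800) -> bytes:
    assert len(label) == 6, "6-byte label assumed (Label Type = 00)"
    # GSE Length counts everything after the 2-byte fixed header:
    # Protocol Type (2) + Label (6) + PDU.
    gse_length = 2 + len(label) + len(pdu)
    assert gse_length < (1 << 12), "GSE Length is a 12-bit field"
    s, e, lt = 1, 1, 0b00                      # complete PDU, 6-byte label
    first16 = (s << 15) | (e << 14) | (lt << 12) | gse_length
    return struct.pack("!HH", first16, protocol_type) + label + pdu

# Toy usage: a 20-byte dummy "datagram" with an all-zero label.
frame = gse_pack_unfragmented(b"\x45" + b"\x00" * 19, label=bytes(6))
```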

    Operational and Performance Issues of a CBQ router

    The use of scheduling mechanisms such as Class Based Queueing (CBQ) is expected to play a key role in next-generation multiservice IP networks. In this paper we present an experimental evaluation of ALTQ/CBQ, demonstrating its sensitivity to a wide range of parameters and to link-layer driver design issues. We pay attention to several CBQ internal parameters that affect performance drastically, particularly to “borrowing”, a key feature for flexible and efficient link sharing. We also investigate cases where the link-sharing rules are violated, explaining and correcting these effects whenever possible. Finally, we evaluate CBQ performance and make suggestions for effective deployment in real networks.
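
    The “borrowing” feature discussed above can be pictured with a toy model: a class that has spent its own allocation may still transmit if an ancestor in the link-sharing tree is under its limit. The sketch below is a deliberate simplification of ALTQ/CBQ's actual estimator, with parameter names of our own choosing:

```python
import time

# Toy sketch of CBQ-style link sharing with "borrowing"; not ALTQ's
# real implementation. Each class holds a token bucket sized to one
# second of its allocated rate.

class CBQClass:
    def __init__(self, rate_bps, parent=None, borrow=True):
        self.rate = rate_bps          # allocated share of the link
        self.parent = parent
        self.borrow = borrow
        self.tokens = 0.0
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + self.rate * (now - self.last))
        self.last = now

    def may_send(self, pkt_bits):
        self._refill()
        if self.tokens >= pkt_bits:       # within own allocation
            self.tokens -= pkt_bits
            return True
        if self.borrow and self.parent:   # overlimit: try to borrow spare
            return self.parent.may_send(pkt_bits)
        return False                      # overlimit and cannot borrow
```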

    An efficient processor allocation strategy that maintains a high degree of contiguity among processors in 2D mesh connected multicomputers

    Two strategies are used for the allocation of jobs to processors connected by mesh topologies: contiguous allocation and non-contiguous allocation. In non-contiguous allocation, a job request can be split into smaller parts that are allocated to non-adjacent free sub-meshes, rather than always waiting until a single sub-mesh of the requested size and shape is available. Lifting the contiguity condition is expected to reduce processor fragmentation and increase system utilization. However, the distances traversed by messages can be long, and as a result the communication overhead, especially contention, is increased. The extra communication overhead depends on how the allocation request is partitioned and assigned to free sub-meshes. This paper presents a new non-contiguous allocation algorithm, referred to as Greedy-Available-Busy-List (GABL for short), which can decrease the communication overhead among processors allocated to a given job. The simulation results show that the new strategy can reduce the communication overhead and substantially improve performance in terms of parameters such as job turnaround time and system utilization. Moreover, the results reveal that the Shortest-Service-Demand-First (SSD) scheduling strategy is much better than the First-Come-First-Served (FCFS) scheduling strategy.
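
    The paper's exact GABL procedure is not reproduced here, but its greedy flavor can be sketched: satisfy a request from the largest free sub-meshes first, so a job splits into few large fragments and message paths stay short. Everything below (function names, data layout) is an illustrative assumption:

```python
# Hedged sketch of a greedy non-contiguous allocation in the spirit of
# GABL: cover a job's processor request from the largest free sub-meshes
# first. The real GABL algorithm differs in detail.

def greedy_allocate(request_size, free_submeshes):
    """free_submeshes: list of (width, height) free rectangular blocks.
    Returns the blocks assigned to the job, or None if capacity is short."""
    remaining = request_size
    assigned = []
    # Largest-first keeps the number of fragments (and thus the
    # inter-fragment communication distance) small.
    for block in sorted(free_submeshes, key=lambda wh: wh[0] * wh[1], reverse=True):
        if remaining <= 0:
            break
        assigned.append(block)
        remaining -= block[0] * block[1]
    return assigned if remaining <= 0 else None

print(greedy_allocate(20, [(2, 2), (4, 4), (3, 2)]))  # -> [(4, 4), (3, 2)]
```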

    Evolving SDN for Low-Power IoT Networks

    Software Defined Networking (SDN) offers a flexible and scalable architecture that abstracts decision making away from individual devices and provides a programmable network platform. However, implementing a centralized SDN architecture within the constraints of a low-power wireless network faces considerable challenges. Not only is controller traffic subject to jitter due to unreliable links and network contention, but the overhead generated by SDN can severely affect the performance of other traffic. This paper addresses the challenge of bringing the high-overhead SDN architecture to IEEE 802.15.4 networks. We explore how traditional SDN needs to evolve to overcome the constraints of low-power wireless networks, and discuss the protocol and architectural optimizations necessary to reduce SDN control overhead, the main barrier to successful implementation. We argue that interoperability with the existing protocol stack is necessary to provide a platform for controller discovery and coexistence with legacy networks. We consequently introduce µSDN, a lightweight SDN framework for Contiki with both IPv6 and underlying routing protocol interoperability, which optimizes a number of elements within the SDN architecture to reduce control overhead to practical levels. We evaluate µSDN in terms of latency, energy, and packet delivery. Through this evaluation we show how the cost of SDN control overhead (both bootstrapping and management) can be reduced to a point where performance and scalability comparable to an IEEE 802.15.4-2012 RPL-based network are achieved. Additionally, we demonstrate µSDN in simulation through a use case where SDN configurability provides Quality of Service (QoS) for critical network flows experiencing interference, achieving considerable reductions in delay and jitter compared with a scenario without SDN.
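
    One way to picture the interoperability argument is a minimal node-side flow table that falls back to the legacy RPL stack when no entry matches, so SDN-managed and legacy traffic coexist. This sketch is not the actual µSDN code; the entry fields and the fallback hook are assumptions:

```python
# Illustrative sketch (not uSDN's implementation): a tiny flow table for
# a constrained 802.15.4 node. Unmatched traffic falls through to the
# legacy RPL stack, reflecting the coexistence argument above.

FLOW_TABLE = [
    # (match field, match value, action, action argument)
    ("dst_addr", "fd00::10", "forward", "fe80::2"),  # pin a critical flow
    ("src_port", 5683,       "drop",    None),       # e.g. filter CoAP traffic
]

def handle_packet(pkt, rpl_forward):
    for field, value, action, arg in FLOW_TABLE:
        if pkt.get(field) == value:
            if action == "forward":
                return ("tx", arg)     # SDN-chosen next hop
            if action == "drop":
                return ("drop", None)
    return rpl_forward(pkt)            # no match: fall back to RPL routing

result = handle_packet({"dst_addr": "fd00::10"}, rpl_forward=lambda p: ("rpl", None))
```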

    The Xpress Transfer Protocol (XTP): A tutorial (expanded version)

    The Xpress Transfer Protocol (XTP) is a reliable, real-time, lightweight transfer layer protocol. Current transport layer protocols such as DoD's Transmission Control Protocol (TCP) and ISO's Transport Protocol (TP) were not designed for the next generation of high-speed, interconnected, reliable networks such as the Fiber Distributed Data Interface (FDDI) and gigabit-per-second wide area networks. Unlike all previous transport layer protocols, XTP is designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers, and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high-speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and, in particular, its error, flow, and rate control; its internetwork addressing mechanisms; and its multicast support features, as defined in the XTP Protocol Definition Revision 3.4.
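
    XTP's rate control grants a sender a sustained rate and a maximum burst. A hedged sketch of such a pacer appears below; it illustrates the rate/burst idea only and is not the XTP 3.4 state machine:

```python
import time

# Sketch of XTP-style rate control: the sender paces output so it never
# exceeds a negotiated sustained RATE or per-interval BURST. Parameter
# handling is illustrative, not the protocol's actual mechanism.

class RatePacer:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.credit = burst_bytes     # start with one full burst of credit
        self.last = time.monotonic()

    def send(self, n_bytes, tx):
        """Block until n_bytes can be sent within the rate/burst limits."""
        assert n_bytes <= self.burst, "segment must fit within one burst"
        while True:
            now = time.monotonic()
            self.credit = min(self.burst, self.credit + self.rate * (now - self.last))
            self.last = now
            if n_bytes <= self.credit:
                self.credit -= n_bytes
                return tx(n_bytes)
            time.sleep((n_bytes - self.credit) / self.rate)  # wait for credit
```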