
    Just Queuing: Policy-Based Scheduling Mechanism for Packet Switching Networks

    The pervasiveness of the Internet and its applications leads to growing user demand for more services at economical prices. The diversity of Internet traffic requires classification and prioritisation, since some traffic deserves more attention, with less delay and loss, than other traffic. Current scheduling mechanisms face a trade-off between three major properties, namely fairness, complexity and protection. The question therefore remains how to improve fairness and protection with a less complex implementation. This research is designed to enhance scheduling by sustaining the fairness and protection properties while keeping the implementation simple, and hence to deliver higher service quality, particularly for real-time applications. Extra elements are applied to the main fairness equation to improve the fairness property. The research adopts the restricted charge policy, which enforces the protection of normal users. In terms of the complexity property, the genetic algorithm has the advantage of holding the fitness score of each queue in separate storage, which potentially minimises the complexity of the algorithm. The combination of conceptual, analytical and experimental approaches verifies the efficiency of the proposed mechanism, which is validated through emulation experiments involving real router flow data. The evaluation showed a fair bandwidth distribution similar to that of the popular Weighted Fair Queuing (WFQ) mechanism. Furthermore, the results exhibited better protection than WFQ and two other scheduling mechanisms, and the complexity of the proposed mechanism reached O(log(n)), which is considered low. The mechanism is currently limited to wired networks, so future work could adapt it to mobile ad-hoc or other wireless networks, and further improvements could support its deployment in virtual circuit switching networks such as Asynchronous Transfer Mode (ATM) networks.
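
    The abstract reports O(log(n)) scheduling complexity and a fairness comparison against WFQ, but does not spell out the internals of the proposed mechanism. As a point of reference only, the sketch below shows how the WFQ baseline can be implemented with per-flow virtual finish times kept in a binary heap, which is what yields an O(log n) per-packet cost; the virtual-time update is simplified and the class and parameter names are illustrative, not taken from the thesis.

```python
import heapq
from itertools import count

class WFQScheduler:
    """Simplified Weighted Fair Queuing sketch: packets are served in order of
    virtual finish time, and a binary heap keeps enqueue/dequeue at O(log n)."""

    def __init__(self, weights):
        self.weights = weights                      # flow_id -> positive weight
        self.virtual_time = 0.0                     # simplified global virtual clock
        self.last_finish = {f: 0.0 for f in weights}
        self.heap = []                              # (finish, tie_breaker, flow_id, packet)
        self.seq = count()

    def enqueue(self, flow_id, packet, size):
        # A packet starts no earlier than the current virtual time or the flow's
        # previous finish, then occupies size / weight units of virtual time.
        start = max(self.virtual_time, self.last_finish[flow_id])
        finish = start + size / self.weights[flow_id]
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, next(self.seq), flow_id, packet))

    def dequeue(self):
        # Serve the packet with the smallest virtual finish time.
        if not self.heap:
            return None
        finish, _, flow_id, packet = heapq.heappop(self.heap)
        self.virtual_time = finish
        return flow_id, packet

# A flow with weight 3 receives roughly three times the bandwidth of a weight-1 flow.
sched = WFQScheduler({"voice": 3, "bulk": 1})
```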

    Performance analysis of downlink shared channels in a UMTS network

    In light of the expected growth in wireless data communications and the commonly anticipated up/downlink asymmetry, we present a performance analysis of downlink data transfer over Downlink Shared Channels (DSCHs), arguably the most efficient UMTS transport channel for medium-to-large data transfers. Our objective is to provide qualitative insight into the different aspects that influence the data Quality of Service (QoS). As the most principal factor, the data traffic load affects the data QoS in two distinct ways: (i) a heavier data traffic load implies greater competition for DSCH resources and thus longer transfer delays; and (ii) since each data call served on a DSCH must maintain an Associated Dedicated Channel (A-DCH) for signalling purposes, a heavier data traffic load implies a higher interference level, a higher frame error rate and thus a lower effective aggregate DSCH throughput: the greater the demand for service, the smaller the aggregate service capacity. The latter effect is further amplified in a multicellular scenario, where a DSCH experiences additional interference from the DSCHs and A-DCHs in surrounding cells, causing a further degradation of its effective throughput. Following an insightful two-stage performance evaluation approach, which segregates the interference aspects from the traffic dynamics, a set of numerical experiments is executed in order to demonstrate these effects and obtain qualitative insight into the impact of various system aspects on the data QoS.
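
    The two-stage approach separates the interference model (which determines the effective DSCH capacity as a function of the number of active data calls and their A-DCH overhead) from the traffic dynamics (which turn that capacity into transfer delays). The toy calculation below mimics that structure purely for illustration; the frame error rate curve, nominal rate and processor-sharing delay approximation are invented assumptions, not values or formulas from the paper.

```python
def effective_dsch_throughput(nominal_rate_kbps, active_calls,
                              base_fer=0.02, fer_per_call=0.005):
    """Stage 1 (illustrative): effective aggregate DSCH throughput shrinks as
    A-DCH interference from each active data call raises the frame error rate.
    base_fer and fer_per_call are hypothetical parameters."""
    fer = min(0.5, base_fer + fer_per_call * active_calls)
    return nominal_rate_kbps * (1.0 - fer)

def mean_transfer_delay_s(flow_size_kbit, arrival_rate_per_s, mean_active_calls,
                          nominal_rate_kbps=384.0):
    """Stage 2 (illustrative): feed the reduced capacity into a processor-sharing
    queue approximation to obtain a mean flow transfer delay."""
    capacity = effective_dsch_throughput(nominal_rate_kbps, mean_active_calls)
    load = arrival_rate_per_s * flow_size_kbit / capacity     # offered load rho
    if load >= 1.0:
        return float("inf")                                    # unstable regime
    return (flow_size_kbit / capacity) / (1.0 - load)          # M/G/1-PS mean delay

# More concurrent calls -> lower effective capacity -> longer transfer delays.
for calls in (2, 10, 20):
    print(calls, round(mean_transfer_delay_s(300.0, 0.5, calls), 3))
```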

    A Priority-based Fair Queuing (PFQ) Model for Wireless Healthcare System

    Healthcare is a very active research area, primarily due to the growth of the elderly population, which leads to an increasing number of emergency situations requiring urgent action. In recent years, wireless networked medical devices have been equipped with different sensors to measure and report a patient's vital signs remotely; the most important are heart beat rate (ECG), pressure and glucose sensors. However, the strict requirements and real-time nature of medical applications dictate the need for appropriate Quality of Service (QoS) and for fast, accurate delivery of a patient's measurements in a reliable e-health ecosystem. As the older adult population (65 years and above) has increased due to advances in medicine and medical care over the last two decades, a high-QoS, reliable e-health ecosystem has become a major challenge in healthcare, especially for patients who require continuous monitoring and attention. Predictions indicate that the elderly population in developing countries will reach approximately 2 billion by 2050, and the availability of medical staff will be unable to cope with this growth and with emergency cases that need immediate intervention. In addition, limited network capacity, congestion and the enormous increase in devices, applications and IoT traffic on the available communication networks add an extra layer of challenges to the e-health ecosystem, such as time constraints and the quality of the measurements and signals reaching healthcare centres. Hence, this research has tackled the delay and jitter parameters in e-health M2M wireless communication and succeeded in reducing them in comparison with currently available models. The novelty of this research lies in a new priority queuing model, Priority-based Fair Queuing (PFQ), in which a new priority level based on the concept of the Patient's Health Record (PHR) is developed and integrated with the Priority Parameters (PP) values of each sensor to add a second level of priority. The results and data analysis performed on the PFQ model under different scenarios simulating a real M2M e-health environment have revealed that PFQ outperforms the widely used current models such as First In First Out (FIFO) and Weighted Fair Queuing (WFQ). The PFQ model has improved the transmission of ECG sensor data by decreasing delay and jitter in emergency cases by 83.32% and 75.88% respectively in comparison with FIFO, and by 46.65% and 60.13% with respect to WFQ. Similarly, for the pressure sensor the improvements were 82.41% and 71.5% in comparison with FIFO and 68.43% and 73.36% in comparison with WFQ. Data transmission was also improved for the glucose sensor, by 80.85% and 64.7% in comparison with FIFO and by 92.1% and 83.17% in comparison with WFQ. However, data transmission for non-emergency cases using the PFQ model was negatively impacted, scoring higher delay and jitter than FIFO and WFQ, since PFQ tends to give higher priority to emergency cases. Thus, a derivative of the PFQ model has been developed, namely Priority-based Fair Queuing with Tolerated Delay (PFQ-TD), to balance data transmission between emergency and non-emergency cases by allowing a tolerated delay in emergency cases. PFQ-TD has succeeded in balancing this trade-off, reducing the total average delay and jitter of emergency and non-emergency cases across all sensors and keeping them within acceptable standards. PFQ-TD has improved the overall average delay and jitter in emergency and non-emergency cases across all sensors by 41% and 84% respectively in comparison with the PFQ model.
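
    The abstract describes two priority levels (a PHR-derived emergency level and the per-sensor Priority Parameters) but not the exact queueing discipline, so the sketch below should be read only as an illustration of how such two-level prioritisation can be expressed as a lexicographic key in a priority queue; the class name, PP values and PHR flag are hypothetical placeholders rather than the thesis's actual design.

```python
import heapq
from itertools import count

# Hypothetical per-sensor Priority Parameters (lower value = higher priority);
# the real PP values and PHR scoring are defined in the thesis, not reproduced here.
SENSOR_PP = {"ECG": 0, "pressure": 1, "glucose": 2}

class PFQSketch:
    """Two-level priority queue in the spirit of PFQ: the PHR-derived emergency
    status is the first key, the sensor's Priority Parameter the second, and
    arrival order breaks ties."""

    def __init__(self):
        self._heap = []
        self._arrival = count()

    def enqueue(self, sensor, reading, phr_emergency):
        level = 0 if phr_emergency else 1          # PHR-based priority level
        key = (level, SENSOR_PP[sensor], next(self._arrival))
        heapq.heappush(self._heap, (key, sensor, reading))

    def dequeue(self):
        if not self._heap:
            return None
        _, sensor, reading = heapq.heappop(self._heap)
        return sensor, reading

q = PFQSketch()
q.enqueue("glucose", 5.8, phr_emergency=False)
q.enqueue("ECG", "arrhythmia sample", phr_emergency=True)
print(q.dequeue())   # the emergency ECG reading is served first
```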

    Delay Considerations for Opportunistic Scheduling in Broadcast Fading Channels

    We consider a single-antenna broadcast block fading channel with n users where the transmission is packet-based. We define the (packet) delay as the minimum number of channel uses that guarantees all n users successfully receive m packets. This is a more stringent notion of delay than average delay and corresponds to the worst-case (access) delay among the users. A delay-optimal scheduling scheme, such as round-robin, achieves a delay of mn. For opportunistic scheduling (which is throughput-optimal), where the transmitter sends the packet to the user with the best channel conditions at each channel use, we derive the mean and variance of the delay for any m and n. For large n and in a homogeneous network, it is proved that the expected delay in receiving one packet by all the receivers scales as n log n, as opposed to n for round-robin scheduling. We also show that when m grows faster than (log n)^r, for some r > 1, the delay scales as mn. This roughly determines the timescale required for the system to behave fairly in a homogeneous network. We then propose a scheme to significantly reduce the delay at the expense of a small throughput hit. We further look into the advantage of multiple transmit antennas on the delay. For a system with M antennas at the transmitter, where at each channel use packets are sent to M different users, we obtain the expected delay in receiving one packet by all the users.
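
    The scaling claims can be checked numerically: in a homogeneous network the user with the best channel in a given slot is, by symmetry, uniformly distributed over the n users, so the delay until every user has m packets reduces to a coupon-collector-style experiment. The Monte Carlo sketch below relies only on that symmetry assumption (the function name and trial count are illustrative); for m = 1 its estimates grow like n log n rather than the n achieved by round-robin.

```python
import random

def opportunistic_delay(n, m, trials=2000, rng=random.Random(0)):
    """Monte Carlo estimate of the delay (channel uses until every one of the n
    users has received m packets) under opportunistic scheduling, assuming the
    'best channel' user in each slot is uniform over the users (homogeneous case)."""
    total = 0
    for _ in range(trials):
        received = [0] * n
        done = 0
        slots = 0
        while done < n:
            slots += 1
            u = rng.randrange(n)             # user with the best channel this slot
            received[u] += 1
            if received[u] == m:
                done += 1
        total += slots
    return total / trials

# For m = 1 the estimate grows like n*log(n) (coupon collector), versus m*n = n
# for round-robin, illustrating the delay cost of purely opportunistic scheduling.
for n in (8, 32, 128):
    print(n, round(opportunistic_delay(n, 1), 1))
```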

    On the impact of heterogeneity and back-end scheduling in load balancing designs

    Load balancing is a common approach for task assignment in distributed architectures. In this paper, we show that the degree of inefficiency in load balancing designs is highly dependent on the scheduling discipline used at each of the back-end servers. Traditionally, the back-end scheduler can be modeled as Processor Sharing (PS), in which case the degree of inefficiency grows linearly with the number of servers. However, if the back-end scheduler is changed to Shortest Remaining Processing Time (SRPT), the degree of inefficiency can be independent of the number of servers, instead depending only on the heterogeneity of the server speeds. Further, switching the back-end scheduler to SRPT can provide significant improvements in the overall mean response time of the system as long as the heterogeneity of the server speeds is small.
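
    The mechanism behind the comparison is the back-end discipline: under SRPT each server always works on the job with the least remaining processing time, which lets short jobs overtake long ones and drives down the mean response time. The sketch below simulates a single back-end server under SRPT for a given arrival trace; it is an illustrative single-server fragment only (the paper's setting dispatches jobs across heterogeneous servers), and the function name and example trace are assumptions.

```python
import heapq

def srpt_response_times(jobs):
    """Simulate one server under Shortest Remaining Processing Time (SRPT).
    jobs: list of (arrival_time, size). Returns each job's response time.
    Illustrative sketch only; the multi-server load-balancing layer is omitted."""
    jobs = sorted(jobs)                      # order by arrival time
    pending = []                             # heap of [remaining_size, job_id]
    responses = {}
    t, i = 0.0, 0
    while i < len(jobs) or pending:
        if not pending:                      # server idles until the next arrival
            t = max(t, jobs[i][0])
        # Admit every job that has arrived by time t.
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(pending, [jobs[i][1], i])
            i += 1
        remaining, jid = pending[0]
        # Run the shortest remaining job until it finishes or the next arrival.
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(remaining, horizon - t)
        t += run
        pending[0][0] -= run                 # decreasing the minimum keeps the heap valid
        if pending[0][0] <= 1e-12:
            heapq.heappop(pending)
            responses[jid] = t - jobs[jid][0]
    return [responses[k] for k in sorted(responses)]

# A short job arriving behind a long one finishes quickly under SRPT.
print(srpt_response_times([(0.0, 10.0), (1.0, 1.0)]))   # [11.0, 1.0]
```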
