
    A Priority-based Fair Queuing (PFQ) Model for Wireless Healthcare System

    Healthcare is a very active research area, primarily due to the growth of the elderly population, which leads to an increasing number of emergency situations requiring urgent action. In recent years, wireless networked medical devices have been equipped with different sensors to measure and report patients' vital signs remotely. The most important sensors are the heart rate (ECG), pressure and glucose sensors. However, the strict requirements and real-time nature of medical applications make appropriate Quality of Service (QoS) and the fast, accurate delivery of a patient's measurements essential to a reliable e-health ecosystem. As the older adult population (65 years and above) grows, owing to advances in medicine and medical care over the last two decades, high QoS and a reliable e-health ecosystem have become a major challenge in healthcare, especially for patients who require continuous monitoring and attention. Predictions indicate that the elderly population will reach approximately 2 billion in developing countries by 2050, where the availability of medical staff will be unable to cope with this growth and with emergency cases that need immediate intervention. In addition, limitations in communication network capacity, congestion, and the enormous increase in devices, applications and IoT traffic on the available networks add an extra layer of challenges to the e-health ecosystem, such as time constraints and the quality of measurements and signals reaching healthcare centres. Hence this research tackles the delay and jitter parameters in e-health M2M wireless communication and succeeds in reducing them in comparison to currently available models. The novelty of this research lies in the development of a new priority queuing model, Priority-based Fair Queuing (PFQ), in which a new priority level based on the concept of a Patient's Health Record (PHR) is integrated with the Priority Parameter (PP) values of each sensor to add a second level of priority. The results and data analysis performed on the PFQ model under different scenarios simulating a real M2M e-health environment reveal that PFQ outperforms widely used models such as First In First Out (FIFO) and Weighted Fair Queuing (WFQ). The PFQ model improved the transmission of ECG sensor data in emergency cases by decreasing delay and jitter by 83.32% and 75.88% respectively in comparison to FIFO, and by 46.65% and 60.13% with respect to WFQ. Similarly, for the pressure sensor the improvements were 82.41% and 71.5% in comparison to FIFO, and 68.43% and 73.36% in comparison to WFQ. Data transmission was also improved for the glucose sensor, by 80.85% and 64.7% in comparison to FIFO, and 92.1% and 83.17% in comparison to WFQ. However, data transmission for non-emergency cases using the PFQ model was negatively impacted and scored higher delay and jitter than FIFO and WFQ, since PFQ gives higher priority to emergency cases. Thus, a derivative of the PFQ model, Priority-based Fair Queuing with Tolerated Delay (PFQ-TD), was developed to balance data transmission between emergency and non-emergency cases by allowing a tolerated delay in emergency cases. PFQ-TD succeeds in balancing this trade-off fairly, reducing the total average delay and jitter of emergency and non-emergency cases across all sensors and keeping them within the acceptable standards. PFQ-TD improved the overall average delay and jitter in emergency and non-emergency cases across all sensors by 41% and 84% respectively in comparison to the PFQ model.
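
    As a rough illustration of the two-level prioritisation idea described above, the sketch below shows a scheduler that first separates emergency traffic (identified via a Patient's Health Record flag) from routine traffic, then applies per-sensor priority parameters within each level. This is a minimal sketch, not the thesis's actual model; the class name, the numeric priority parameters and the PHR flag are assumptions made for illustration.

```python
import heapq
import itertools

# Assumed per-sensor Priority Parameters (smaller = more urgent); illustrative only.
SENSOR_PP = {"ECG": 1, "pressure": 2, "glucose": 3}

class TwoLevelPriorityQueue:
    """Level 1: emergency flag derived from the Patient's Health Record (PHR).
    Level 2: the sensor's Priority Parameter. FIFO order breaks remaining ties."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # monotonically increasing tie-breaker

    def enqueue(self, packet, sensor, phr_emergency):
        level = 0 if phr_emergency else 1                 # emergencies served first
        pp = SENSOR_PP.get(sensor, max(SENSOR_PP.values()) + 1)
        heapq.heappush(self._heap, (level, pp, next(self._seq), packet))

    def dequeue(self):
        if not self._heap:
            return None
        _, _, _, packet = heapq.heappop(self._heap)
        return packet

# Example: an emergency ECG reading overtakes an earlier routine glucose reading.
q = TwoLevelPriorityQueue()
q.enqueue("glucose reading #1", "glucose", phr_emergency=False)
q.enqueue("ECG reading #1", "ECG", phr_emergency=True)
print(q.dequeue())  # -> "ECG reading #1"
```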

    A methodological approach to BISDN signalling performance

    Sophisticated signalling protocols are required to properly handle the complex multimedia, multiparty services supported by the forthcoming BISDN. The implementation feasibility of these protocols should be evaluated during their design phase, so that possible performance bottlenecks are identified and removed. In this paper we present a methodology for evaluating the performance of BISDN signalling systems under design. New performance parameters are introduced and their network-dependent values are extracted through a message flow model capable of describing the impact of separating call and bearer control on signalling performance. Signalling protocols are modelled through a modular decomposition of the seven OSI layers, including the service user, into three submodels. The workload model is user descriptive in the sense that it does not approximate the direct input traffic required for evaluating the performance of a layer protocol; instead, through a multi-level approach, it describes the actual implications of user signalling activity for the overall signalling traffic. The signalling protocol model is derived from the global functional model of the signalling protocols and information flows using a network of queues incorporating synchronization and dependency functions. The same queueing approach is followed for the signalling transfer network, which is used to define processing speed and signalling bandwidth requirements and to identify possible performance bottlenecks stemming from the realization of the related protocols.
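
    The kind of bottleneck identification described above can be illustrated with a toy open network of queues: each signalling processing stage is approximated as an M/M/1 queue, and per-stage utilisation and sojourn time flag the bottleneck. This is only a sketch under strong independence assumptions; the stage names and rates are invented for illustration and are not values from the paper.

```python
# Toy evaluation of a signalling message flow as a chain of M/M/1 stages.
stages = {
    "call_control":     {"service_rate": 200.0},  # messages/s the stage can process
    "bearer_control":   {"service_rate": 150.0},
    "transfer_network": {"service_rate": 300.0},
}
arrival_rate = 120.0  # signalling messages/s offered to every stage in the chain

total_delay = 0.0
for name, s in stages.items():
    mu, lam = s["service_rate"], arrival_rate
    rho = lam / mu                       # utilisation of this stage
    if rho >= 1.0:
        print(f"{name}: overloaded (rho={rho:.2f}) -> bottleneck")
        continue
    sojourn = 1.0 / (mu - lam)           # mean time in an M/M/1 queue
    total_delay += sojourn
    print(f"{name}: rho={rho:.2f}, mean sojourn={1000 * sojourn:.2f} ms")

print(f"end-to-end signalling delay ~ {1000 * total_delay:.2f} ms")
```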

    Generalized load sharing for packet-switching networks

    In this paper, we propose a framework for studying how to effectively perform load sharing in multipath communication networks. A generalized load sharing (GLS) model is developed to conceptualize how traffic is ideally split over a set of active paths. A simple traffic splitting algorithm, called weighted fair routing (WFR), is developed at two granularity levels, namely the packet level and the call level, to approximate GLS for a given routing weight vector. Packet-by-packet WFR (PWFR) mimics GLS by transmitting each packet as a whole, whereas call-by-call WFR (CWFR) imitates GLS by sending all packets belonging to a single flow on the same path. We derive performance bounds for PWFR and show that PWFR is a deterministically fair traffic splitting algorithm. This attractive property is useful for providing services with guaranteed performance when multiple paths can be used simultaneously to transmit packets belonging to the same flow. Our simulation studies, based on a collection of Internet backbone traces, reveal that WFR outperforms two other traffic splitting algorithms, namely generalized round robin routing (GRR) and probabilistic routing (PRR). These promising results form a basis for designing future adaptive constraint-based multipath routing protocols.
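
    A packet-level splitter in the spirit of PWFR can be sketched as follows: each arriving packet is routed whole onto the path whose weighted cumulative volume would remain smallest, so per-path byte counts track the routing weight vector. This is a simplified illustration, not the paper's exact algorithm; the selection rule, data structures and example weights are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Path:
    name: str
    weight: float      # share of traffic this path should carry
    bytes_sent: int = 0

def pwfr_like_pick(paths: List[Path], pkt_len: int) -> Path:
    """Send the whole packet on the path whose weighted byte count stays smallest,
    keeping cumulative per-path volumes close to the routing weight vector."""
    best = min(paths, key=lambda p: (p.bytes_sent + pkt_len) / p.weight)
    best.bytes_sent += pkt_len
    return best

# Example: a 2:1 weight vector over two paths.
paths = [Path("path-A", weight=2.0), Path("path-B", weight=1.0)]
for pkt_len in [1500, 1500, 40, 1500, 1500, 40, 1500]:
    chosen = pwfr_like_pick(paths, pkt_len)
    print(f"{pkt_len:5d} B -> {chosen.name}")
print({p.name: p.bytes_sent for p in paths})
```

    A call-level variant in the spirit of CWFR would apply the same selection once per flow (e.g. at flow arrival) and then pin all packets of that flow to the chosen path.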

    Delay-oriented active queue management in TCP/IP networks

    Internet-based applications and services pervade everyday life, and the growing popularity of real-time, time-critical and mission-critical applications sets new challenges for the Internet community. The requirement to reduce response time, and therefore to control latency, is increasingly emphasized. This thesis seeks to reduce queueing delay through active queue management. While mathematical studies and simulations reveal complex trade-off relationships among performance indices such as throughput, packet loss ratio and delay, this thesis aims to find an improved active queue management algorithm that emphasizes delay control without trading away much of the other performance indices such as throughput and packet loss ratio. The thesis observes that in a TCP/IP network, packet loss ratio is a major reflection of congestion severity or load. With a properly functioning active queue management algorithm, traffic load will in general push the feedback system to an equilibrium point in terms of packet loss ratio and throughput. On the other hand, queue length is a determining factor in system delay performance while having only a slight influence on the equilibrium. This observation suggests the possibility of reducing delay while keeping throughput and packet loss ratio relatively unchanged. The thesis also observes that queue length fluctuation reflects both load changes and natural fluctuation in the arriving bit rate. Monitoring queue length alone cannot distinguish between the two or identify the congestion status, yet identifying this difference is crucial for finding situations where the average queue size, and hence queueing delay, can be properly controlled and reasonably reduced. However, many existing active queue management algorithms only monitor queue length, and their control policies are based solely on this measurement. The novel finding of our studies is that the arriving bit rate distribution of all sources contains information which can be a better indication of congestion status and is correlated with traffic burstiness. This thesis develops a simple and scalable way to measure its two most important characteristics, namely the mean and the variance of the arriving rate distribution. The measuring mechanism is based on the zombie list mechanism originally proposed and deployed in Stabilized RED to estimate the number of flows and identify misbehaving flows. This thesis modifies the original zombie list measuring mechanism, making it capable of measuring additional variables. Based on these additional measurements, this thesis proposes a novel modification to the RED algorithm. It uses a robust adaptive mechanism to ensure that the system reaches proper equilibrium operating points in terms of packet loss ratio and queueing delay under various loads. Furthermore, it identifies congestion states in which traffic is less bursty and adapts the RED parameters to reduce the average queue size, and hence queueing delay, accordingly. Using the ns-2 simulation platform, this thesis runs simulations of a single bottleneck link scenario, which represents an important and popular application scenario such as a home access network or SoHo. Simulation results indicate that there are complex trade-off relationships among throughput, packet loss ratio and delay, and that within these relationships delay can be substantially reduced whereas the trade-offs on throughput and packet loss ratio are negligible. Simulation results also show that the proposed active queue management algorithm can identify circumstances where traffic is less bursty and actively reduce queueing delay with hardly noticeable sacrifice of throughput and packet loss ratio. In conclusion, this approach enables the application of adaptive techniques to more RED parameters, including those affecting queue occupancy and hence queueing delay. The new modification to the RED algorithm is scalable and does not introduce additional protocol overhead. In general it brings the benefit of substantially reduced delay at the cost of limited processing overhead and negligible degradation in throughput and packet loss ratio. However, the new algorithm is only tested on responsive flows and a single bottleneck scenario; its effectiveness with a mixture of responsive and non-responsive flows, and in more complicated network topologies, is left for future work.
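
    To make the adaptation idea concrete, the sketch below shows a RED-style queue that estimates the mean and variance of the arrival rate and lowers its thresholds when the variance suggests traffic is less bursty. This is a schematic of the general approach only, not the thesis's algorithm or its zombie-list implementation; all constants, thresholds and update rules here are assumptions.

```python
import random

class AdaptiveREDSketch:
    """RED-style AQM sketch: EWMA estimates of arrival-rate mean/variance steer
    the queue thresholds, so the average queue (and delay) shrinks when traffic
    is judged less bursty. Illustrative constants throughout."""

    def __init__(self, min_th=20, max_th=80, max_p=0.02):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.avg_q = 0.0       # EWMA of instantaneous queue length (packets)
        self.rate_mean = 0.0   # EWMA of arrival-rate samples
        self.rate_var = 0.0    # EWMA of squared deviation (burstiness proxy)

    def observe_rate(self, rate_sample, w=0.05):
        d = rate_sample - self.rate_mean
        self.rate_mean += w * d
        self.rate_var += w * (d * d - self.rate_var)
        bursty = self.rate_var > (0.25 * self.rate_mean) ** 2
        # Less bursty traffic -> smaller thresholds -> lower queueing delay.
        self.min_th, self.max_th = (20, 80) if bursty else (5, 30)

    def drop(self, queue_len, w=0.002):
        self.avg_q += w * (queue_len - self.avg_q)
        if self.avg_q < self.min_th:
            return False
        if self.avg_q >= self.max_th:
            return True
        p = self.max_p * (self.avg_q - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

# Usage sketch: feed periodic rate samples and per-packet queue lengths.
aqm = AdaptiveREDSketch()
aqm.observe_rate(rate_sample=950.0)   # e.g. packets/s measured this interval
print(aqm.drop(queue_len=42))         # True if the arriving packet should be dropped
```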

    IP-based virtual private networks and proportional quality of service differentiation

    IP-based virtual private networks (VPNs) have the potential of delivering cost-effective, secure, and private network-like services. Having surveyed current enabling techniques, an overall picture of IP VPN implementations is presented. In order to provide the equivalent quality of service (QoS) of legacy connection-oriented layer 2 VPNs (e.g., Frame Relay and ATM), IP VPNs have to overcome the intrinsically best-effort character of the Internet. A hierarchical QoS guarantee framework for IP VPNs is therefore proposed, stitching together developments from recent research and engineering work. To differentiate IP VPN QoS, the proportional QoS differentiation model, whose QoS specification granularity lies between that of IntServ and DiffServ, emerges as a potential solution. Its claimed capability of providing predictable and controllable QoS differentiation is then investigated. With respect to loss rate differentiation, the packet shortage phenomenon exhibited by two classical proportional loss rate (PLR) dropping schemes is studied. In the pursuit of a feasible solution, the option of compromising the system resource, that is, the buffer, is ruled out; instead, an enhanced debt-aware mechanism is suggested to relieve the negative effects of packet shortage. Simulation results show that the debt-aware mechanism partially curbs the biased loss rate ratios and improves queueing delay performance as well. With respect to delay differentiation, the dynamic behaviour of the average delay difference between successive classes is first analysed to gain insight into the system dynamics. Two classical delay differentiation mechanisms, proportional average delay (PAD) and waiting time priority (WTP), are then simulated and discussed. Based on observations of their differentiation performance over both short and long time periods, a combined delay differentiation (CDD) scheme is introduced and validated by simulation. Both loss and delay differentiation are based on a series of differentiation parameters. Although previous work has addressed the selection of delay differentiation parameters, the choice of loss differentiation parameters has mostly relied on network operators' experience. A quantitative guideline, based on the principles of queueing and optimization, is therefore proposed for computing loss differentiation parameters. Aside from the analysis, the new approach is substantiated by numerical results.
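
    For readers unfamiliar with the WTP discipline referred to above, the sketch below shows its textbook form: at each service opportunity the class whose head-of-line packet has the largest waiting time normalised by its delay differentiation parameter is served. This is a minimal illustration, not the dissertation's simulation code; the data structures and parameter names are assumptions.

```python
import time
from collections import deque

class WTPScheduler:
    """Waiting Time Priority: serve the class whose head-of-line packet has the
    largest waiting time divided by its delay differentiation parameter, which
    under heavy load drives per-class delays towards the chosen ratios."""

    def __init__(self, delta):
        # delta[i]: delay differentiation parameter of class i
        # (a larger delta means the class tolerates proportionally more delay).
        self.delta = delta
        self.queues = {i: deque() for i in delta}

    def enqueue(self, cls, packet):
        self.queues[cls].append((time.monotonic(), packet))

    def dequeue(self):
        now = time.monotonic()
        backlogged = [c for c, q in self.queues.items() if q]
        if not backlogged:
            return None
        # Normalised head-of-line waiting time decides which class is served next.
        cls = max(backlogged,
                  key=lambda c: (now - self.queues[c][0][0]) / self.delta[c])
        _, packet = self.queues[cls].popleft()
        return packet
```

    For example, with delta = {1: 1.0, 2: 2.0}, a backlogged class-2 packet is served only once it has waited roughly twice as long as the class-1 head-of-line packet.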

    Performance and Analysis of Transfer Control Protocol Over Voice Over Wireless Local Area Network

    A thesis presented to the faculty of the College of Science and Technology at Morehead State University in partial fulfillment of the requirements for the degree of Master of Science by Rajendra Patil in August 2008.

    Real-time communication in packet-switched networks


    Performance modeling of web servers

    A general model of a web server system, comprising the interactions between World Wide Web users and web sites (servers), is analyzed and evaluated. Incoming requests, once admitted for processing, compete for the available resources (HTTP threads). An efficient approximate solution is provided; its accuracy is evaluated by comparing the model estimates with those obtained from simulations. The effect of several controllable parameters on the performance of the system is examined in a series of numerical and simulation experiments. In trying to understand the interactions between web users and web servers, we attempt to answer three key questions. How can we model user and server behavior on the World Wide Web? How do users and web servers interact? Can we improve the ways in which web servers process incoming requests from users? We formulate a queueing model for the web server and from it obtain expressions for web server performance metrics such as average response time, throughput and blocking probability. The model is intended to help prospective users of web server systems evaluate their suitability. The foreseen end users of the model are corporate decision makers who, faced with a variety of web server systems, are interested in evaluating the suitability of the servers on the market. We envision a situation in which a manager, given his or her own requirements or an analysis of the business requirements, needs to purchase a web server that can meet the demands of the situation at hand. Hence, given the user's requirements and the server specifications, the model can predict the best web server for those requirements. We model the web server as an M/M/1/K queue with a first-come, first-served (FCFS) queueing discipline. The arrival process of HTTP requests is assumed to be Poisson, the service time distribution is assumed to be exponential, and the total number of requests that can be processed at one time is limited to K. We obtain closed-form expressions for web server performance metrics such as average response time, throughput and blocking probability.
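
    The standard closed-form M/M/1/K expressions mentioned above are easy to evaluate numerically. The sketch below computes the blocking probability, throughput and mean response time for a given buffer limit K; the arrival rate, service rate and K used in the example are illustrative assumptions, not values from the thesis.

```python
def mm1k_metrics(lam: float, mu: float, K: int):
    """M/M/1/K metrics: blocking probability P_K, effective throughput
    lam * (1 - P_K), and mean response time via Little's law."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        probs = [1.0 / (K + 1)] * (K + 1)                 # degenerate case rho == 1
    else:
        norm = (1 - rho) / (1 - rho ** (K + 1))
        probs = [norm * rho ** n for n in range(K + 1)]   # P_n, n = 0..K
    p_block = probs[K]
    throughput = lam * (1 - p_block)
    mean_in_system = sum(n * p for n, p in enumerate(probs))
    mean_response = mean_in_system / throughput           # W = L / lambda_effective
    return p_block, throughput, mean_response

# Illustrative numbers: 80 req/s offered, 100 req/s service capacity, K = 50.
p_block, thr, resp = mm1k_metrics(lam=80.0, mu=100.0, K=50)
print(f"blocking={p_block:.4%}, throughput={thr:.1f} req/s, response={1000 * resp:.2f} ms")
```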

    Adaptive Capacity Management in Bluetooth Networks

