
    Modelling the IEEE 802.11 wireless MAC layer under heterogeneous VoIP traffic to evaluate and dimension QoE

    As computers become more popular in the home and workplace, sharing resources and Internet access locally is a necessity. The simplest option is to deploy a Wireless Local Area Network (WLAN): WLANs are inexpensive, easy to configure and require minimal infrastructure. The WLAN technology of choice is the IEEE 802.11 standard; IEEE 802.11, however, is now being deployed on scales well beyond its original scope. Realistic usage spans from small-scale home solutions to commercial ‘hot spots’ providing access within medium-sized areas such as cafés, and, more recently, blanket coverage of metropolitan areas. With increasing Internet availability and faster network access, both wireless and wired, using such networks for real-time services such as Internet telephony is also becoming popular. IEEE 802.11 wireless access is shared among many clients on a single channel, and only three non-overlapping channels are available. As more stations communicate on a single channel, contention increases, resulting in longer delays due to the backoff overhead of the IEEE 802.11 protocol, and hence in loss and delay variation; this is undesirable for time-critical traffic. Simulation of such networks demands supercomputing resources, particularly where there are over a dozen clients on a given channel. Fortunately, the author has access to the UK’s supercomputers, providing both a clear motivation to develop a state-of-the-art analytical model and the resources required to validate it. The goal was to develop an analytical model to deal with realistic IEEE 802.11 deployments and derive results without the need for supercomputers. A network analytical model is derived to capture the characteristics of the IEEE 802.11 protocol for a given scenario, including the number of clients and the traffic load of each. The model is augmented from an existing published saturated case, in which each client is assumed to always have traffic to transmit. The augmented model allows stations to carry a variable load, achieved by modifying the existing models, and then allows stations to operate with different traffic profiles. The different per-station traffic profiles are achieved by using the augmented model’s state machine for each station and distributing the probabilities to each station’s state machine accordingly. To bridge the gap between the analytical model’s medium access delay and standard network metrics, which include the effects of buffering traffic, a queueing model is identified and augmented that transforms the medium access delay into standard network metrics: delay, loss and jitter. A Quality of Experience framework, for both computational and analytical results, is investigated to allow the results to be represented as user perception scores and the acceptable voice call carrying capacity to be found. To find the acceptable call carrying capacity, the ITU-T G.107 E-Model is employed, which gives each client a perception rating in terms of user satisfaction. With the use of a novel framework, benchmarking results show that there is potential to maximise the number of calls carried by the network with an acceptable user perception rating.
Dimensioning of the network is undertaken, again compared with simulation on the supercomputers, to highlight the usefulness of the analytical model and framework and to provide recommendations for network configurations, particularly for the latest Wireless Multimedia extensions available in IEEE 802.11. Dimensioning shows an overall increase in acceptable capacity of 43%, from 7 to 10 bidirectional calls per Access Point, by using a tuned transmission opportunity that allows each station to send 4 packets per transmission. It is found that, although the analytical model's results are not precise, the model achieves a speed-up of roughly 13,000 times compared to simulation. Results show that the point of maximum calls predicted by the analytical model and framework comes close to simulation and can be used as a guide to configure the network. Alternatively, for specific capacity figures, the model can be used to home in on the optimal region for further experiments, which are therefore achievable with standard computational resources, i.e. desktop machines.
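
    To make the final QoE step concrete, here is a minimal sketch of how an E-Model rating factor R is mapped to a user perception score (MOS) and how R can be approximated from delay and loss. The R-to-MOS mapping follows ITU-T G.107; the delay and loss impairment terms use a common simplified VoIP approximation with illustrative codec parameters, and are an assumption rather than the thesis's actual implementation.

```python
def r_to_mos(r):
    """Map an E-Model rating factor R to a Mean Opinion Score (ITU-T G.107 Annex B)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def r_factor(one_way_delay_ms, loss_ratio, ie=0.0, bpl=4.3):
    """Simplified R-factor: default basic quality minus delay and loss impairments.

    ie/bpl are codec-specific equipment-impairment parameters; the defaults here
    are illustrative, roughly in line with published G.711 figures.
    """
    id_delay = 0.024 * one_way_delay_ms          # delay impairment (simplified form)
    if one_way_delay_ms > 177.3:
        id_delay += 0.11 * (one_way_delay_ms - 177.3)
    ppl = 100.0 * loss_ratio                     # packet loss percentage
    ie_eff = ie + (95.0 - ie) * ppl / (ppl + bpl)
    return 93.2 - id_delay - ie_eff

# Example: 80 ms one-way delay and 1% loss
print(round(r_to_mos(r_factor(80, 0.01)), 2))
```

    A rating around R ≈ 70 (roughly MOS 3.6) is a commonly used acceptability threshold; sweeping the number of calls per Access Point through a delay/loss model and stopping where R falls below such a threshold is the kind of capacity search the framework describes.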

    Fault detection and correction modeling of software systems


    Statistical Analysis of a Telephone Call Center: A Queueing-Science Perspective

    A call center is a service network in which agents provide telephone-based services. Customers that seek these services are delayed in tele-queues. This paper summarizes an analysis of a unique record of call center operations. The data comprise a complete operational history of a small banking call center, call by call, over a full year. Taking the perspective of queueing theory, we decompose the service process into three fundamental components: arrivals, customer abandonment behavior and service durations. Each component involves different basic mathematical structures and requires a different style of statistical analysis. Some of the key empirical results are sketched, along with descriptions of the varied techniques required. Several statistical techniques are developed for analysis of the basic components. One of these is a test that a point process is a Poisson process. Another involves estimation of the mean function in a nonparametric regression with lognormal errors. A new graphical technique is introduced for nonparametric hazard rate estimation with censored data. Models are developed and implemented for forecasting of Poisson arrival rates. We then survey how the characteristics deduced from the statistical analyses form the building blocks for theoretically interesting and practically useful mathematical models for call center operations. Key Words: call centers, queueing theory, lognormal distribution, inhomogeneous Poisson process, censored data, human patience, prediction of Poisson rates, Khintchine-Pollaczek formula, service times, arrival rate, abandonment rate, multiserver queues.
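
    As a concrete illustration of one of the keyword formulas, here is a minimal sketch of the Khintchine-Pollaczek (Pollaczek-Khinchine) mean waiting time for a single-server M/G/1 queue, evaluated on simulated lognormal service durations. The inputs are illustrative rather than the paper's call-center data, and a real call center is a multiserver system with abandonment, so this is only the textbook single-server form.

```python
import numpy as np

def mg1_mean_wait(arrival_rate, service_samples):
    """Pollaczek-Khinchine mean waiting time in queue for an M/G/1 system.

    arrival_rate: Poisson arrival rate (lambda); service_samples: empirical
    service durations (any distribution); requires utilization < 1.
    """
    es = service_samples.mean()          # E[S]
    es2 = (service_samples ** 2).mean()  # E[S^2]
    rho = arrival_rate * es              # server utilization
    if rho >= 1:
        raise ValueError("system is unstable (rho >= 1)")
    return arrival_rate * es2 / (2 * (1 - rho))

# Illustrative inputs: lognormal service times (the shape observed in call-center data)
rng = np.random.default_rng(0)
services = rng.lognormal(mean=np.log(180), sigma=0.7, size=100_000)   # seconds
print(mg1_mean_wait(arrival_rate=1 / 300, service_samples=services))  # one call every 300 s
```

    The formula shows how delay grows sharply as the utilization λE[S] approaches one, which is why accurate arrival-rate forecasting and service-time modeling matter for staffing decisions.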

    Analysis of costs and delivery intervals for multiple-release software

    Project managers of large software projects, particularly those associated with Internet Business-to-Business (B2B) or Business-to-Customer (B2C) applications, are under pressure to capture market share by delivering reliable software within cost and timing constraints. An earlier delivery time may help E-commerce software capture a larger market share; however, early delivery sometimes means lower quality. In addition, the scale of the software is usually so large that incremental multiple releases are warranted. A Multiple-Release methodology has been developed to determine the most efficient and effective delivery intervals for the various releases of software products, taking into consideration software costs and reliability. The Multiple-Release methodology extends existing software cost and reliability models, meets the needs of large software development firms, and gives a navigation guide to software industry managers. The main decision factors for the multiple releases include the delivery interval of each release, the market value of the features in the release, and the software costs associated with testing and error penalties. The input of these factors was assessed using Design of Experiments (DOE). The costs included in the research are based on a salary survey of software staff at companies in the New Jersey area and on budgets of software development teams. The Activity Based Cost (ABC) method was used to determine costs on the basis of job functions associated with the development of the software. It is assumed that the error data behavior follows a Non-Homogeneous Poisson Process (NHPP)
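
    Since the methodology assumes NHPP error behavior, a minimal sketch of one common NHPP reliability growth form, the Goel-Okumoto model, illustrates the quantities a release-interval optimization trades off: expected faults found by the end of a test interval versus faults remaining as an error penalty. The model choice and parameter values are illustrative assumptions, not necessarily the dissertation's exact formulation.

```python
import numpy as np

def goel_okumoto_mean_faults(t, a, b):
    """Expected cumulative faults detected by test time t under the
    Goel-Okumoto NHPP model: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - np.exp(-b * t))

def remaining_faults(t, a, b):
    """Expected faults still latent at time t (drives the error-penalty cost)."""
    return a - goel_okumoto_mean_faults(t, a, b)

# Illustrative parameters: a = total expected faults, b = detection rate per week
a, b = 120.0, 0.15
for week in (4, 8, 12, 16):
    print(week,
          round(goel_okumoto_mean_faults(week, a, b), 1),
          round(remaining_faults(week, a, b), 1))
```

    Shortening a release interval lowers testing cost but leaves more latent faults as an error penalty, which is the cost/reliability tension the DOE analysis explores.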

    Developing service supply chains by using agent based simulation

    This Master's thesis presents a novel approach to modelling a service supply chain with agent-based simulation. The case study concerns healthcare services, and the research problem is the facility location of healthcare centers in the Vaasa region, considering demand, resource units and service quality. A geographical information system is used to locate the population, agent-based simulation models the patients and their illness status probability, and discrete-event simulation models the healthcare services. Health centers are located on predefined sites based on managers' preferences; each patient then moves to the nearest center to receive healthcare services. To evaluate cost and service conditions, various key performance indicators are defined in the model, such as the number of patients in queue, patient waiting time, resource utilization, and the ratio of patients given by the difference between inflow and outflow. Healthcare managers can experiment with different scenarios by changing the number of resource units or the locations of healthcare centers, and evaluate the results without having to implement them in real life.
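
    To make the discrete-event part concrete, here is a minimal sketch of a single health center with a few care resource units, tracking two of the KPIs named above (patients served and waiting time). It assumes the SimPy simulation package is available; the capacity, arrival and service parameters are illustrative and not taken from the Vaasa case study.

```python
import random
import simpy  # assumes the SimPy discrete-event simulation package is installed

WAITS = []  # KPI: time each patient spent queueing before care began

def patient(env, clinic, service_mean):
    arrive = env.now
    with clinic.request() as slot:      # wait for a free care resource unit
        yield slot
        WAITS.append(env.now - arrive)
        yield env.timeout(random.expovariate(1.0 / service_mean))  # care duration

def arrivals(env, clinic, interarrival_mean, service_mean):
    while True:
        yield env.timeout(random.expovariate(1.0 / interarrival_mean))
        env.process(patient(env, clinic, service_mean))

random.seed(1)
env = simpy.Environment()
clinic = simpy.Resource(env, capacity=3)   # three resource units at one health center
env.process(arrivals(env, clinic, interarrival_mean=5.0, service_mean=12.0))
env.run(until=8 * 60)                      # simulate one 8-hour day (minutes)
print(f"patients served: {len(WAITS)}, mean wait: {sum(WAITS) / len(WAITS):.1f} min")
```

    Running the same model with different capacities or at different candidate sites, and comparing the resulting KPIs, is the kind of scenario experiment the thesis describes for healthcare managers.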

    Identification of cost-effective pavement management systems strategies: a reliable tool to enhance pavement management implementations

    Modeling asset deterioration is a key business process within Transportation Asset Management. Road agencies must budget a large amount of public money to reduce the number of accidents and achieve a high level of service on the road system. Managing and preserving those investments is crucial, even more so in the current climate of limited funding. Therefore, roadway agencies have to increase their efforts in monitoring pavement networks and implementing data processing tools to promote cost-effective Pavement Management System (PMS) strategies. A comprehensive PMS database, in fact, ensures reliable decisions based on survey data and sets rules and procedures to analyze data systematically. However, the development of adequate pavement deterioration prediction models has proven to be difficult, because of the high variability and uncertainty in data collection and interpretation, and because of the large quantity of data from a wide variety of sources to be processed. This research proposes a comprehensive methodology to design and implement pavement management strategies at the network level, based on road agency local conditions. The methodology includes the identification of suitable indexes for pavement condition assessment, the design of strategies to collect pavement data for the agency's maintenance systems, the development of data quality and data cleansing criteria to support data processing and, lastly, the implementation of spatial location procedures to integrate the pavement data involved in the comprehensive PMS. This work develops network-level pavement deterioration models and reviews road agency preservation policies to evaluate the effectiveness of maintenance treatments, which is essential for a cost-effective PMS. It is expected that the resulting methodology and the developed applications, products of this research, will constitute a reliable tool to support agencies in their effort to implement their PMS

    Development of Hotzone Identification Models for Simultaneous Crime and Collision Reduction

    This research contributes to developing macro-level crime and collision prediction models using a new method designed to handle the problems of spatial dependency and over-dispersion in zonal data. A geographically weighted Poisson regression (GWPR) model and a geographically weighted negative binomial regression (GWNBR) model were used for crime and collision prediction. Five years (2009-2013) of crime, collision, traffic, socio-demographic, road inventory, and land use data for Regina, Saskatchewan, Canada were used. The need for geographically weighted models became clear when Moran's I local indicator test showed statistically significant levels of spatial dependency. A bandwidth is a required input for geographically weighted regression models; this research tested two bandwidths, 1) fixed Gaussian and 2) adaptive bi-square, and investigated which was better suited to the study's database. Three crime models were developed: violent, non-violent and total crimes. Three collision models were developed: fatal-injury, property damage only and total collisions. The models were evaluated using several goodness of fit (GOF) tests: 1) Akaike Information Criterion, 2) Bayesian Information Criterion, 3) Mean Square Error, 4) Mean Square Prediction Error, 5) Mean Prediction Bias, and 6) Mean Absolute Deviation. As the GOF tests did not produce consistent results, the cumulative residual (CURE) plot was explored. The CURE plots showed that the GWPR and GWNBR models using the fixed Gaussian bandwidth were the better approach for predicting zonal-level crimes and collisions in Regina. The GWNBR model has the important advantage that it can be used with the empirical Bayes technique to further enhance prediction accuracy. The GWNBR crime and collision prediction models were used to identify crime and collision hotzones for simultaneous crime and collision reduction in Regina. The research used total collisions and total crimes to demonstrate the determination of priority zones for focused law enforcement in Regina. Four enforcement priority zones were identified; these zones cover only 1.4% of the City's area but account for 10.9% of total crimes and 5.8% of total collisions. The research advances knowledge by examining hotzones at a macro-level and suggesting zones where enforcement and planning for enforcement are likely to be most effective and efficient
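
    Two of the building blocks named above can be sketched compactly: the fixed Gaussian kernel that geographically weighted regression uses to down-weight distant zones, and the global Moran's I statistic used to detect spatial dependency. The coordinates, counts and bandwidth below are synthetic and purely illustrative; they are not the Regina data.

```python
import numpy as np

def gaussian_weights(coords, bandwidth):
    """Fixed Gaussian kernel weights used in geographically weighted regression:
    w_ij = exp(-0.5 * (d_ij / bandwidth)**2), where d_ij is the distance between zones."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def morans_i(values, weights):
    """Global Moran's I statistic for spatial autocorrelation of zonal counts."""
    w = weights.copy()
    np.fill_diagonal(w, 0.0)            # a zone is not its own neighbour
    z = values - values.mean()
    n, s0 = len(values), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Illustrative zonal centroids (km) and crime counts; not the Regina data
rng = np.random.default_rng(0)
coords = rng.uniform(0, 20, size=(50, 2))
counts = rng.poisson(30 + 2 * coords[:, 0])   # counts that trend across the study area
w = gaussian_weights(coords, bandwidth=5.0)
print(round(morans_i(counts.astype(float), w), 3))
```

    A Moran's I well above its expected value of roughly -1/(n-1) signals the positive spatial autocorrelation that motivates fitting a separate, locally weighted Poisson or negative binomial model at each zone rather than one global model.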

    DECISION SUPPORT MODEL IN FAILURE-BASED COMPUTERIZED MAINTENANCE MANAGEMENT SYSTEM FOR SMALL AND MEDIUM INDUSTRIES

    A maintenance decision support system is crucial to ensure the maintainability and reliability of equipment in production lines. This thesis investigates several decision support models to aid maintenance management activities in small and medium industries. In order to improve the reliability of resources in production lines, this study introduces a conceptual framework to be used in failure-based maintenance. Maintenance strategies are identified using the Decision-Making Grid model, based on two important factors: the machines' downtime and their frequency of failures. The machines are categorized into three levels of downtime and frequency of failures: high, medium and low. This research derived a formula, based on maintenance cost, to re-position the machines prior to Decision-Making Grid analysis. Subsequently, the formula for clustering analysis in the Decision-Making Grid model is improved to solve the multiple-criteria problem. This research also introduces a formula to estimate contractors' response and repair times. The estimates are used as input parameters in the Analytical Hierarchy Process model. The decisions are synthesized using models based on the contractors' technical skills, such as experience in maintenance, skill in diagnosing machines and ability to take prompt action during troubleshooting activities. Another important criterion considered in the Analytical Hierarchy Process is the business principles of the contractors, which include maintenance quality, tools, equipment and enthusiasm in problem-solving. Raw data were collected through observation, interviews and surveys in the case studies to understand risk factors in small and medium food processing industries. The risk factors are analysed with the Ishikawa Fishbone diagram to reveal delay time in machinery maintenance. The experimental studies are conducted using maintenance records from food processing industries. The Decision-Making Grid model can detect the ten worst production machines on the production lines. The Analytical Hierarchy Process model is used to rank the contractors and their best maintenance practices. This research recommends displaying the results on the production indicator boards and implementing the strategies on the production shop floor. The proposed models can be used by decision makers to identify maintenance strategies and enhance competitiveness among contractors in failure-based maintenance. The models can be programmed as decision support sub-procedures in computerized maintenance management systems
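
    To illustrate the AHP synthesis step, here is a minimal sketch of how contractor priorities and a consistency ratio can be derived from a pairwise comparison matrix, using the principal-eigenvector method with Saaty's published random consistency indices. The comparison values below are illustrative, not the thesis's survey data.

```python
import numpy as np

# Saaty's random consistency index by matrix size (standard published values)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_priorities(pairwise):
    """Priority weights and consistency ratio from an AHP pairwise comparison matrix
    (principal right-eigenvector method)."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                   # normalized priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                   # consistency index
    return w, ci / RI[n]                           # (weights, consistency ratio)

# Illustrative comparison of three contractors on "skill to diagnose machines"
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
weights, cr = ahp_priorities(pairwise)
print(np.round(weights, 3), round(cr, 3))   # CR < 0.10 is conventionally acceptable
```

    Repeating this for each criterion (response time, repair time, maintenance quality, and so on) and weighting the criteria in the same way yields the overall contractor ranking described in the thesis.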

    Spatial-temporal Distribution of Mosquito Larval Hot Spots in Papoli, Uganda: A Community-Based Approach to Mosquito Control

    Mosquito species of the Anopheles gambiae complex are the predominant vectors of malaria transmission throughout sub-Saharan Africa. These mosquitoes tend to be endophilic as well as anthropophilic, making them prime candidates for disease transmission. Within the same region, related mosquito vectors play a significant role in the transmission of additional human and zoonotic diseases. Furthermore, mosquito nuisance biting is an immense issue that cannot be ignored in terms of its impact on African communities. Depending on the respective factors involved, mosquito control programs throughout the continent have attempted to tackle these issues in a multitude of ways. This research approached the issue by developing and integrating an American-style mosquito control district within the eastern Ugandan community of Papoli. The basic structure of such a district was blended with a community-based approach, employing local community members and leaders, thus ensuring an effective and sustainable program. A guide detailing all aspects and steps needed to properly develop and implement such a program is outlined