
    Interference Queueing Networks on Grids

    Full text link
    Consider a countably infinite collection of interacting queues, with a queue located at each point of the d-dimensional integer grid, having independent Poisson arrivals but dependent service rates. The service discipline is of the processor-sharing type, with the service rate in each queue slowed down when the neighboring queues have a larger workload. The interactions are translation invariant in space and are neither of the Jackson network type nor of the mean-field type. Coupling and percolation techniques are first used to show that this dynamics has well-defined trajectories. Coupling-from-the-past techniques are then proposed to build its minimal stationary regime. The rate conservation principle of Palm calculus is then used to identify the stability condition of this system, where the notion of stability is appropriately defined for an infinite-dimensional process. We show that the identified condition is also necessary in certain special cases and conjecture it to be true in all cases. Remarkably, the rate conservation principle also provides a closed-form expression for the mean queue size. When the stability condition holds, this minimal solution is the unique translation-invariant stationary regime. In addition, there exists a range of small initial conditions for which the dynamics is attracted to the minimal regime. Nevertheless, there exists another range of larger though finite initial conditions for which the dynamics diverges, even though the stability condition holds.
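    The neighbor-slowed processor-sharing dynamics can be illustrated with a toy discrete-time simulation on a small one-dimensional torus. The particular rate form below (each queue served at rate x[i] divided by the total workload in its closed neighborhood) is an illustrative assumption, not necessarily the paper's exact dynamics:

```python
import random

def simulate(width=5, lam=0.3, steps=10_000, dt=0.01, seed=1):
    """Toy 1-D torus sketch of interference queueing dynamics.

    Each site sees Bernoulli(lam * dt) arrivals per step (a discrete
    approximation of Poisson arrivals); its service rate is slowed
    when neighbours hold more work:
        rate(i) = x[i] / (x[i-1] + x[i] + x[i+1])
    Returns the final workload vector.
    """
    random.seed(seed)
    x = [0.0] * width
    for _ in range(steps):
        new = x[:]
        for i in range(width):
            if random.random() < lam * dt:   # arrival of one unit of work
                new[i] += 1.0
            total = x[i - 1] + x[i] + x[(i + 1) % width]
            if total > 0:                    # neighbour-slowed service
                new[i] = max(0.0, new[i] - (x[i] / total) * dt)
        x = new
    return x
```

    Running the sketch with a small arrival rate keeps workloads bounded, while large initial workloads illustrate the divergent regime described in the abstract.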

    Estimating Self-Sustainability in Peer-to-Peer Swarming Systems

    Full text link
    Peer-to-peer swarming is one of the \emph{de facto} solutions for distributed content dissemination in today's Internet. By leveraging resources provided by clients, swarming systems reduce the load on and costs to publishers. However, there is a limit to how much cost savings can be gained from swarming; for example, for unpopular content, peers will always depend on the publisher in order to complete their downloads. In this paper, we investigate this dependence. For this purpose, we propose a new metric, namely \emph{swarm self-sustainability}. A swarm is referred to as self-sustaining if all its blocks are collectively held by peers; the self-sustainability of a swarm is the fraction of time in which the swarm is self-sustaining. We pose the following question: how does the self-sustainability of a swarm vary as a function of content popularity, the service capacity of the users, and the size of the file? We present a model to answer the posed question. We then propose efficient solution methods to compute self-sustainability. The accuracy of our estimates is validated against simulation. Finally, we also provide closed-form expressions for the fraction of time that a given number of blocks is collectively held by peers. Comment: 27 pages, 5 figures
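    Given a trace of peers' block holdings over time, the self-sustainability metric defined above can be estimated directly as the fraction of snapshots in which the union of peers' blocks covers the whole file. A minimal sketch (the `trace` format here is a hypothetical representation, not the paper's):

```python
def self_sustainability(trace, num_blocks):
    """Fraction of snapshots in which peers collectively hold all blocks.

    `trace` is a list of snapshots; each snapshot is a list of per-peer
    sets of block indices.  A snapshot is self-sustaining when the union
    of the peers' blocks covers every block of the file.
    """
    if not trace:
        return 0.0
    full = set(range(num_blocks))
    good = sum(1 for peers in trace if set().union(*peers) >= full)
    return good / len(trace)
```

    For example, a two-snapshot trace where only the first snapshot covers all blocks yields a self-sustainability of 0.5.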

    Finding and Mitigating Geographic Vulnerabilities in Mission Critical Multi-Layer Networks

    Get PDF
    Title from PDF of title page, viewed on June 20, 2016. Dissertation advisor: Cory Beard. Vita. Includes bibliographical references (pages 232-257). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2016.
    In Air Traffic Control (ATC), communications outages may lead to immediate loss of communications or radar contact with aircraft. In the short term, there may be safety-related issues, as important services including power systems, ATC, or communications for first responders during a disaster may be out of service. Significant financial damage from airline delays and cancellations may occur in the long term. This highlights the different types of impact that may occur after a disaster or other geographic event. The question is: how do we evaluate and improve the ability of a mission-critical network to perform its mission during geographically correlated failures? To answer this question, we consider several large and small networks, including a multi-layer ATC Service Oriented Architecture (SOA) network known as SWIM. This research presents a number of tools to analyze and mitigate both long- and short-term geographic vulnerabilities in mission-critical networks. To provide context for the tools, a disaster planning approach is presented that focuses on Resiliency Evaluation, Provisioning Demands, Topology Design, and Mitigation of Vulnerabilities. In the Resiliency Evaluation, we propose a novel metric known as the Network Impact Resilience (NIR) metric and a reduced-state based algorithm to compute the NIR known as the Self-Pruning Network State Generation (SP-NSG) algorithm. These tools not only evaluate the resiliency of a network with a variety of possible network tests, but they also identify geographic vulnerabilities.
Related to the Demand Provisioning and Mitigation of Vulnerabilities, we present methods that focus on provisioning in preparation for rerouting of demands immediately following an event, based on Service Level Agreements (SLAs), and on fast rerouting of demands around geographic vulnerabilities using Multi-Topology Routing (MTR). The Topology Design area focuses on adding nodes to improve topologies to be more resistant to geographic vulnerabilities. Additionally, a set of network performance tools is proposed for use with mission-critical networks that can model at least up to 2nd-order network delay statistics. The first is an extension of the Queueing Network Analyzer (QNA) to model multi-layer networks (and specifically SOA networks). The second is a network decomposition tool based on Linear Algebraic Queueing Theory (LAQT). This is one of the first extensive uses of LAQT for network modeling. Benefits, results, and limitations of both methods are described.
Introduction -- SWIM Network - Air Traffic Control example -- Performance analysis of mission critical multi-layer networks -- Evaluation of geographically correlated failures in multi-layer networks -- Provisioning and restoral of mission critical services for disaster resilience -- Topology improvements to avoid high impact geographic events -- Routing of mission critical services during disasters -- Conclusions and future research -- Appendix A. Pub/Sub simulation model description -- Appendix B. ME Random Number Generation
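    QNA-style analysis characterizes each traffic flow by its rate and squared coefficient of variation, and the GI/G/1 waiting-time approximation at its core uses only these second-order statistics. A minimal sketch of that standard Kingman-type approximation (the textbook baseline, not the dissertation's multi-layer extension):

```python
def kingman_wait(rho, ca2, cs2, mean_service):
    """Kingman/QNA-style mean waiting time approximation for a GI/G/1 queue:

        W ~= (rho / (1 - rho)) * ((ca2 + cs2) / 2) * E[S]

    rho          : server utilization (must satisfy 0 <= rho < 1)
    ca2, cs2     : squared coefficients of variation of interarrival
                   and service times (2nd-order statistics)
    mean_service : mean service time E[S]
    """
    assert 0 <= rho < 1, "queue must be stable"
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2.0) * mean_service
```

    For the M/M/1 case (ca2 = cs2 = 1) the approximation reduces to the exact value rho * E[S] / (1 - rho).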

    Twentieth conference on stochastic processes and their applications

    Get PDF

    Prediction of ATM multiplexer performance by simulation and analysis of a model of packetized voice traffic

    Get PDF

    Information-theoretic analysis of human-machine mixed systems

    Get PDF
    Many recent information technologies such as crowdsourcing and social decision-making systems are designed based on (near-)optimal information processing techniques for machines. However, in such applications, some parts of the systems that process information are humans, so the systems are affected by the bounded rationality of human behavior and overall performance is suboptimal. In this dissertation, we consider systems that include humans and study their information-theoretic limits. We investigate four problems in this direction and show fundamental limits in terms of capacity, Bayes risk, and rate-distortion. A system with queue-length-dependent service quality, motivated by crowdsourcing platforms, is investigated. Since human service quality changes depending on workload, a job designer must take the level of work into account. We model the workload using queueing theory and characterize Shannon's information capacity for single-user and multiuser systems. We also investigate social learning as sequential binary hypothesis testing. We find, somewhat counterintuitively, that unlike in basic binary hypothesis testing, the decision threshold determined by the true prior probability is no longer optimal, and a biased perception of the true prior could outperform the unbiased-perception system. The fact that the optimal belief curve resembles the Prelec weighting function from cumulative prospect theory gives insight, in the era of artificial intelligence (AI), into how to design machine AI that supports a human decision. The traditional CEO problem models a collaborative decision-making problem well. We extend the CEO problem to two continuous-alphabet settings with general r-th power of difference and logarithmic distortions, and study matching asymptotics of distortion as the number of agents and the sum rate grow without bound.
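    For contrast with the social-learning finding above, the classic single-agent MAP rule for binary hypothesis testing, where the true prior directly sets the likelihood-ratio threshold, can be sketched as follows (a textbook baseline, not the dissertation's sequential model):

```python
def map_decision(likelihood_ratio, prior_h1):
    """Classic MAP rule for binary hypothesis testing.

    Decide H1 when P(x|H1)/P(x|H0) exceeds P(H0)/P(H1); this
    prior-determined threshold minimizes the error probability in the
    single-observer setting.  The dissertation shows that in social
    learning this threshold is no longer optimal.
    """
    threshold = (1 - prior_h1) / prior_h1
    return 1 if likelihood_ratio > threshold else 0
```

    With a uniform prior the threshold is 1, so any likelihood ratio above 1 yields a decision for H1.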

    Scalable Load Balancing Algorithms in Networked Systems

    Get PDF
    A fundamental challenge in large-scale networked systems such as data centers and cloud networks is to distribute tasks to a pool of servers, using minimal instantaneous state information, while providing excellent delay performance. In this thesis we design and analyze load balancing algorithms that aim to achieve a highly efficient distribution of tasks, optimize server utilization, and minimize communication overhead. Comment: Ph.D. thesis
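    A canonical example of a scalable load-balancing rule of this kind is the power-of-d-choices policy, which samples only d servers per task instead of querying the full pool. A minimal sketch (the thesis analyzes a family of such algorithms; this particular policy is shown only as a representative example):

```python
import random

def jsq_d(queues, d, rng=random):
    """Power-of-d-choices dispatching.

    Sample d distinct servers uniformly at random, send the task to the
    least-loaded of the sampled servers, and return its index.  Only d
    queue lengths are inspected per task, keeping communication
    overhead low.
    """
    sampled = rng.sample(range(len(queues)), d)
    target = min(sampled, key=lambda i: queues[i])
    queues[target] += 1
    return target
```

    Sampling d = 2 already yields a dramatic delay improvement over random assignment in large systems, which is why this family of policies scales well.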

    Glosarium Matematika

    Get PDF
    273 p.; 24 cm