
    Estimation of the statistics of rare events in data communications systems

    There are many examples of rare events in telecommunications systems, including buffer overflows in queueing systems, cycle slipping in phase-locked loops and the escape of adaptive equalizers from local (possibly incorrect) equilibria. In many cases, factors such as the high cost of the occurrence of these events mean that their statistics are of interest in spite of their rarity. The estimation of the statistics of these rare events via direct simulation is very time consuming, simply because of their rarity. In fact, the required simulation time can be so high as to make simulation not just difficult, but impossible. One technique that can be used to speed up simulations of rare events is importance sampling, in which the statistics of the event in which we are interested are inferred from the statistics (obtained by simulation) of some (less rare) event in a different system. Because the events are less rare, the simulation time is reduced. However, there remains the problem of maximizing the speedup to ensure that the simulation time is minimized. It has been shown previously that as the rarity of the events increases, large deviations theory can be used to create a simulation system that is optimal in the sense of minimizing the variance of a probability estimator in the simulation of a rare event. In this thesis, we extend these results, and also apply them to a number of specific applications for which we obtain analytic expressions for an asymptotically optimal simulation system. Examples studied include multiple-priority data streams and a number of queues with deterministic servers, which can be used in the modeling of asynchronous transfer mode (ATM) switches. In the case of buffer overflows in queueing systems, it will be shown that the required simulation time is reduced from being exponential in the buffer size for direct simulation, to being linear in the buffer size using the asymptotically optimal simulation system, and that this holds even for relatively small buffer sizes. While much of the previous work on fast simulation of rare events has concentrated on the use of large deviations and exponential changes of measure, we look beyond this class, and show that it is possible to obtain larger increases in simulation speed, using, for example, the reverse-time model of the system being studied. In fact, it is possible to obtain an infinite speedup. However, doing this may require omniscience, i.e. effectively knowing the answer before we start. In addition to the investigation of methods for performing fast simulation, the relationship between optimal control, large deviations and reverse-time modeling is explored, with particular reference to rare events. It is shown that, in addition to the previously known relationship between optimal control and large deviations, a similar relationship exists between optimal control and reverse-time modeling, in which the trajectory defining the solution of the optimal control problem in which control energy is minimized defines the mean path of the reverse-time model of the process.
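    As an illustration of the change of measure described above, the sketch below estimates the probability that an M/M/1 queue, starting with one customer, fills a buffer of size B before emptying. It simulates the embedded random walk under the swapped (exponentially twisted) arrival and service rates and reweights each sample by its path likelihood ratio. The rates, buffer size and sample count are illustrative assumptions, not values taken from the thesis.

```python
import math
import random

def overflow_prob_is(lam, mu, B, n_samples=100_000, seed=1):
    """Importance-sampling estimate of P(queue hits B before emptying | start at 1)."""
    rng = random.Random(seed)
    p = lam / (lam + mu)       # original measure: next event is an arrival w.p. p
    p_star = mu / (lam + mu)   # twisted measure: arrival and service rates swapped
    total = 0.0
    for _ in range(n_samples):
        level, log_lr = 1, 0.0
        while 0 < level < B:
            if rng.random() < p_star:                      # up-step (arrival)
                level += 1
                log_lr += math.log(p / p_star)
            else:                                          # down-step (departure)
                level -= 1
                log_lr += math.log((1 - p) / (1 - p_star))
        if level == B:                                     # overflow reached:
            total += math.exp(log_lr)                      # weight by likelihood ratio
    return total / n_samples

if __name__ == "__main__":
    lam, mu, B = 0.5, 1.0, 30                       # illustrative rates and buffer size
    est = overflow_prob_is(lam, mu, B)
    exact = (1 - mu / lam) / (1 - (mu / lam) ** B)  # gambler's-ruin benchmark
    print(f"importance-sampling estimate: {est:.3e}   exact: {exact:.3e}")
```

    Under the twisted measure the walk drifts towards the buffer limit, so nearly every sample reaches B and carries the same small weight; the number of samples needed no longer grows with the rarity of the event, which is the speedup the abstract refers to.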

    Message-driven dynamics

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (p. 251-260). By Richard Anton Lethin. Ph.D.

    Performance Modelling and Optimisation of Multi-hop Networks

    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, that are susceptible to random failures, security threats and malicious behaviours which compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity regarding routing and the loss rate that a packet experiences as it passes successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rate in areas remote from the source and destination, variable rate of advancement towards destination over the route, as well as of defending against malicious packets within a certain distance from the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet’s travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC, in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
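    The toy discrete model below (a simplification of my own, not the jump-diffusion model of the thesis) captures the flavour of the analysis: a packet advances hop by hop towards a destination D hops away, the loss probability peaks mid-route, and sending k independent copies stands in for the multi-path redundancy discussed above. Delivery ratio, mean delay and a crude energy proxy (total transmissions) can then be compared across k; all parameter values are invented.

```python
import random

def travel_time(D, p_fwd=0.8, loss_base=0.01, rng=random):
    """Simulate one packet copy; return (delivered, time steps spent in the network)."""
    pos, t = 0, 0
    while pos < D:
        t += 1
        p_loss = loss_base * min(pos, D - pos)   # losses peak in the middle of the route
        if rng.random() < p_loss:
            return False, t                      # packet lost en route
        if rng.random() < p_fwd:
            pos += 1                             # one hop of progress this step
    return True, t

def send_with_redundancy(D, k, **kw):
    """Send k independent copies; return (delivery time of the first arrival, energy proxy)."""
    outcomes = [travel_time(D, **kw) for _ in range(k)]
    energy = sum(t for _, t in outcomes)         # crude proxy: one energy unit per step
    arrivals = [t for ok, t in outcomes if ok]
    return (min(arrivals) if arrivals else None), energy

if __name__ == "__main__":
    random.seed(0)
    D, trials = 20, 10_000
    for k in (1, 2, 4):
        runs = [send_with_redundancy(D, k) for _ in range(trials)]
        delays = [d for d, _ in runs if d is not None]
        mean_energy = sum(e for _, e in runs) / trials
        print(f"k={k}: delivered {len(delays) / trials:.2%}, "
              f"mean delay {sum(delays) / len(delays):.1f} steps, "
              f"mean energy {mean_energy:.1f} units")
```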

    System Stability Under Adversarial Injection of Dependent Tasks

    Technological changes (NFV, Osmotic Computing, Cyber-physical Systems) make it very important to devise techniques for efficiently running a flow of jobs formed by dependent tasks on a set of servers. These problems can be seen as generalizations of the dynamic job-shop scheduling problem, with very rich dependency patterns and arrival assumptions. In this work, we consider a computational model of a distributed system formed by a set of servers in which continuously arriving jobs have to be executed. Every job is formed by a set of dependent tasks (i.e., each task may have to wait for others to be completed before it can be started), each of which has to be executed on one of the servers. The arrival of jobs and their properties are assumed to be controlled by a bounded adversary, whose only restriction is that it cannot overload any server. This model is a non-trivial generalization of the Adversarial Queuing Theory model of Borodin et al., and, like that model, focuses on the stability of the system: whether the number of jobs pending to be completed is bounded at all times. We show multiple results of stability and instability for this adversarial model under different combinations of the scheduling policy used at the servers, the arrival rate, and the dependence between tasks in the jobs.
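    To make the model concrete, the sketch below (a toy instance of my own, not one of the paper's constructions) injects jobs as chains of dependent tasks, one task per server in a random order, at a rate below the overload threshold, and lets every server complete one eligible head-of-line task per slot under a FIFO policy. Tracking the backlog over time shows what stability means operationally; the injection rate, number of servers and scheduling policy are illustrative assumptions.

```python
import random
from collections import deque

def simulate(num_servers=3, rate=0.8, slots=5_000, seed=0):
    """Slotted simulation; returns the trace of tasks injected but not yet completed."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(num_servers)]    # per-server FIFO of eligible tasks
    pending, trace = 0, []
    for _ in range(slots):
        # Adversary: with probability `rate`, inject a job = a chain of dependent tasks,
        # one task per server in a random order (so no single server is overloaded).
        if rng.random() < rate:
            job = deque(rng.sample(range(num_servers), num_servers))
            queues[job[0]].append(job)                # only the first task is eligible now
            pending += num_servers
        # Servers: each completes its head-of-line eligible task; finishing a task may
        # release the job's next task at another server.
        finished = [q.popleft() for q in queues if q]
        for job in finished:
            job.popleft()                             # current task of this job is done
            pending -= 1
            if job:
                queues[job[0]].append(job)            # successor task becomes eligible
        trace.append(pending)
    return trace

if __name__ == "__main__":
    trace = simulate()
    print("max backlog:", max(trace), " final backlog:", trace[-1])
```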

    Queues with Congestion-dependent Feedback

    This dissertation expands the theory of feedback queueing systems and applies a number of these models to a performance analysis of the Transmission Control Protocol, a flow control protocol commonly used in the Internet.
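    As a minimal example of what congestion-dependent feedback can mean in a queueing model (an assumed toy instance, not one of the dissertation's models), the sketch below lets the source transmit in each slot with a probability that decreases as the queue grows, loosely mimicking a TCP-style source backing off under congestion, while service completes with a fixed probability per slot.

```python
import random

def simulate(mu=0.5, alpha=0.2, slots=100_000, seed=0):
    """Slotted queue whose arrival probability is throttled by the current queue length."""
    rng = random.Random(seed)
    q, q_sum, served = 0, 0, 0
    for _ in range(slots):
        p_send = 1.0 / (1.0 + alpha * q)    # feedback: back off as the queue builds up
        if rng.random() < p_send:
            q += 1                          # source transmits a packet this slot
        if q > 0 and rng.random() < mu:
            q -= 1                          # one service completion this slot
            served += 1
        q_sum += q
    return q_sum / slots, served / slots

if __name__ == "__main__":
    mean_q, throughput = simulate()
    print(f"mean queue length: {mean_q:.2f}, throughput: {throughput:.2f} packets/slot")
```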

    The Terror Risk to Current Water Infrastructure Systems

    Unquestionably, water maintains a critical role within society. It is precisely this role that makes it an attractive target for potential adversaries. As it currently stands, water infrastructures are significantly vulnerable to attacks; their risk, however, is questionable. As such, this work will analyze the security of water infrastructure systems. It will discuss the systems involved in the treatment of water and waste water, and how various processes can be vulnerable to four main threats: biological, chemical, cyber and physical threats. Additionally, this work will challenge the conventional view of terrorism through the perspective of Critical Terrorism Studies as a means to discuss how non-traditional forces such as privatization and neoliberalization may also be seen as threats. Moreover, this work will also explore how each of these threats may be realized, and it will furthermore utilize case studies and professional interviews to achieve this. Attacks upon water infrastructure systems are not new. In fact, such attacks have been reported as far back as 500 BCE. What is new, however, is the evolving threat landscape. Given the convenience of the Internet, a single individual can research almost any topic to his or her desire, including vulnerabilities within critical infrastructure systems. To add to this, one does not have to search deep into the web to find information on how to inflict serious damage. Certainly, the twenty-first century has its prospects, but it certainly has its perils as well. This work will attempt to address these vulnerabilities and, furthermore, what is at stake if nothing is done.

    Software Assurance Best Practices for Air Force Weapon and Information Technology Systems - Are We Bleeding?

    In the corporate world, bits mean money, and as the Department of Defense (DoD) becomes more and more reliant on net-centric warfare, bits mean national security. Software security threats are very real, as demonstrated by the constant barrage of Internet viruses, worms, Trojans, and hackers seeking to exploit the latest vulnerability. Most organizations focus their resources on reactive defenses such as firewalls, antivirus software, and encryption; however, as demonstrated by the numerous attacks that succeed, those post facto measures are not enough to stop the bleeding. The DoD defines software assurance (SwA) as the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software. SwA focuses on baking in security versus bolting it on afterwards. The Department of Homeland Security and the DoD have each had SwA programs for a few years; however, the Air Force (AF) only recently formed the Application Software Assurance Center of Excellence at Maxwell AFB-Gunter Annex, AL. This research seeks to identify common issues that present challenges to the development of secure software, and best practices that the AF could adopt as it proactively begins to heal the SwA problem.

    Modelling tools for cost-effective water management

    As a region, Flanders is confronted with a whole range of water management issues, such as inadequate surface water and groundwater quality, an increasing risk of flooding, sediment management and poor ecological quality. Identifying cost-effective programmes of measures that can resolve these issues, wholly or in part, at the lowest possible cost is an important step in addressing them. The objective of this research is to develop models and tools that support policy makers in assembling cost-effective programmes of measures for water policy. It is important that the models are, on the one hand, suitable for supporting decision-making at the national or regional scale (macro scale) and, on the other hand, detailed enough to provide insights at the local project level (micro scale). Applications covered include, among others, a cost-effectiveness analysis for surface water quality, a cost-benefit analysis for risk-based flood management, and how the valuation of ecosystem services can be used to identify win-win situations for several water aspects simultaneously.
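    As a toy illustration of the cost-effectiveness idea (not taken from the thesis), the snippet below ranks hypothetical measures by cost per unit of pollutant-load reduction and selects them greedily until a water-quality target is met. All measure names, costs, reductions and the target are invented, and a real analysis would use an optimisation model rather than this greedy heuristic.

```python
def cheapest_programme(measures, target_reduction):
    """Greedy selection by cost-effectiveness ratio (cost per unit of load reduction)."""
    chosen, achieved, total_cost = [], 0.0, 0.0
    for name, cost, reduction in sorted(measures, key=lambda m: m[1] / m[2]):
        if achieved >= target_reduction:
            break
        chosen.append(name)
        achieved += reduction
        total_cost += cost
    return chosen, achieved, total_cost

if __name__ == "__main__":
    candidates = [   # (measure, annual cost in EUR, nutrient-load reduction in kg N/yr)
        ("upgrade treatment plant", 120_000, 900),
        ("buffer strips",            15_000, 200),
        ("constructed wetland",      40_000, 450),
        ("sewer disconnection",      60_000, 300),
    ]
    plan, achieved, cost = cheapest_programme(candidates, target_reduction=1000)
    print(plan, f"-> {achieved:.0f} kg N/yr reduction at EUR {cost:,.0f}/yr")
```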

    Urban and Rural-residential Land Uses: Their Role in Watershed Health and the Rehabilitation of Oregon’s Wild Salmonids

    This technical report by the Independent Multidisciplinary Science Team (IMST) is a comprehensive review of how human activities in urban and rural-residential areas can alter aquatic ecosystems, and of the resulting implications for salmonid recovery, with a geographic focus on the state of Oregon. The following topics are considered in the form of science questions, and comprise the major components of this report: (1) the effects of urban and rural-residential development on Oregon’s watersheds and native wild salmonids; (2) actions that can be used to avoid or mitigate undesirable changes to aquatic ecosystems near developing urban and rural-residential areas; (3) the benefits and pitfalls of salmonid habitat rehabilitation within established urban or rural-residential areas; and (4) suggested research and monitoring focus areas that will facilitate the recovery of salmonid populations affected by development. The fundamental concepts presented in this report should be applicable to most native salmonid populations across the state. IMST encourages managers and policy-makers with interest in a specific species or geographic region to carefully research local ecological conditions, as well as specific life history characteristics of salmonids in the region. Conserving watershed condition and salmonids in the face of increasing development requires consideration of two distinct sets of processes. First are the human social and economic processes that drive patterns in land use change. Second are the ecological processes, altered by land use, that underlie salmonid habitat changes. This report focuses on the latter and summarizes the effects of rural-residential and urban development on native, wild salmonid populations and the watersheds upon which they depend.