
    The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms

    Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average-case analysis but does not cover boundary-point (worst- or best-case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for the automatic synthesis of worst- and best-case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. The algorithms used in our method utilize implicit backward search using branch-and-bound techniques and start from given target events, which aims to reduce the search complexity drastically. As a case study, we use our method to evaluate variants of the timer-suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way of synthesizing worst-case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average-case analyses. We hope our method will serve as a model for applying systematic scenario generation to other multicast protocols.
    Comment: 24 pages, 10 figures, IEEE/ACM Transactions on Networking (ToN) [to appear]
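
    A minimal sketch of the timer-suppression worst case the method targets may help fix ideas; all names and parameters below are illustrative assumptions, and plain exhaustive search stands in for the paper's backward branch-and-bound algorithm:

        # Illustrative sketch only (hypothetical names/parameters, not FOTG):
        # search discrete timer assignments on a virtual LAN with a fixed delay
        # for the assignment that defeats suppression and maximizes overhead.
        from itertools import product

        LAN_DELAY = 1                # assumed one-way delay on the virtual LAN
        TIMER_CHOICES = range(0, 5)  # discrete timer values a receiver may draw

        def responses_sent(timers):
            # A receiver's response is suppressed if some other response was
            # multicast early enough to arrive before its own timer fires.
            return sum(
                1 for i, t in enumerate(timers)
                if not any(u + LAN_DELAY <= t for j, u in enumerate(timers) if j != i)
            )

        def worst_case(n_receivers):
            # Exhaustive enumeration standing in for the paper's implicit
            # backward branch-and-bound search from the target event.
            return max(product(TIMER_CHOICES, repeat=n_receivers), key=responses_sent)

        timers = worst_case(4)
        print(timers, responses_sent(timers))  # timers packed within LAN_DELAY: no suppression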

    The cost of conservative synchronization in parallel discrete event simulations

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that, as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event-list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two-node Intel iPSC/2 distributed-memory multiprocessor.
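
    To make the synchronization scheme concrete, here is a minimal single-threaded sketch of window-based conservative execution (assumed names and a made-up LOOKAHEAD constant, not the paper's implementation): each round, the smallest next-event time plus the lookahead bounds a window of events that no future message can affect, so all logical processes may execute their windows in parallel:

        import heapq
        from itertools import count

        LOOKAHEAD = 0.5   # assumed minimum delay before an event can affect a neighbor
        _tie = count()    # tie-breaker so equal-time heap entries stay comparable

        class LP:
            # One logical process covering a piece of the physical domain.
            def __init__(self, name):
                self.name, self.events = name, []   # local future-event list (heap)
            def schedule(self, time, action):
                heapq.heappush(self.events, (time, next(_tie), action))
            def next_time(self):
                return self.events[0][0] if self.events else float("inf")

        def run_windows(lps, end_time):
            while True:
                horizon = min(lp.next_time() for lp in lps)
                if horizon >= end_time:
                    return
                window_end = horizon + LOOKAHEAD
                for lp in lps:              # in a real run, this loop is parallel
                    while lp.events and lp.events[0][0] < window_end:
                        time, _, action = heapq.heappop(lp.events)
                        action(time, lp)    # may schedule events >= window_end elsewhere

        a, b = LP("A"), LP("B")
        a.schedule(0.1, lambda t, lp: b.schedule(t + LOOKAHEAD,
                   lambda t2, lp2: print("B fires at", t2)))
        run_windows([a, b], end_time=10.0)

    Because any message sent while executing a window arrives no earlier than the window's end, no event inside the window can be invalidated; that is the lookahead guarantee the analysis builds on.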

    On Consistency and Network Latency in Distributed Interactive Applications: A Survey—Part I

    This paper is the first part of a two-part survey of the research carried out on consistency and latency in distributed interactive applications (DIAs) in recent decades. Part I reviews the terminology associated with DIAs and offers definitions of consistency and latency. Related issues such as jitter and fidelity are also discussed. Furthermore, the various consistency maintenance mechanisms that researchers have used to improve consistency and reduce latency effects are considered. These mechanisms are grouped into three categories, namely time management, information management and system architectural management. This paper presents the techniques associated with the time management category. Examples of such mechanisms include time warp, lock-step synchronisation and predictive time management. The remaining two categories are presented in Part II of the survey.
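
    As a concrete taste of the time-management category, the fragment below sketches lock-step synchronisation under assumed names (it is not drawn from the survey itself): every host buffers peer inputs per tick and advances only once the current tick has input from all peers, applying commands in the same deterministic order everywhere:

        class LockstepSession:
            # Minimal lock-step sketch: the simulation advances to tick t+1
            # only once an input from every peer has arrived for tick t,
            # trading responsiveness under latency for perfect consistency.
            def __init__(self, peers):
                self.peers = set(peers)
                self.tick = 0
                self.pending = {}                 # tick -> {peer: command}

            def receive(self, peer, tick, command):
                self.pending.setdefault(tick, {})[peer] = command

            def try_advance(self, apply_fn):
                # Advance while the current tick has a command from every peer.
                while set(self.pending.get(self.tick, {})) == self.peers:
                    for peer, cmd in sorted(self.pending.pop(self.tick).items()):
                        apply_fn(self.tick, peer, cmd)   # identical order on every host
                    self.tick += 1

        session = LockstepSession({"alice", "bob"})
        session.receive("alice", 0, "move north")
        session.try_advance(print)                # nothing happens: bob's input is missing
        session.receive("bob", 0, "fire")
        session.try_advance(print)                # tick 0 applied for both peers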

    A Systematic Approach to Constructing Families of Incremental Topology Control Algorithms Using Graph Transformation

    In the communication systems domain, constructing and maintaining network topologies via topology control (TC) algorithms is an important cross-cutting research area. Network topologies are usually modeled using attributed graphs whose nodes and edges represent the network nodes and their interconnecting links. A key requirement of TC algorithms is to fulfill certain consistency and optimization properties to ensure a high quality of service. Still, few attempts have been made to constructively integrate these properties into the development process of TC algorithms. Furthermore, even though many TC algorithms share substantial parts (such as structural patterns or tie-breaking strategies), few works systematically leverage the commonalities and differences of TC algorithms. In previous work, we addressed the constructive integration of consistency properties into the development process. We outlined a constructive, model-driven methodology for designing individual TC algorithms. Valid and high-quality topologies are characterized using declarative graph constraints; TC algorithms are specified using programmed graph transformation. We applied a well-known static analysis technique to refine a given TC algorithm so that the resulting algorithm preserves the specified graph constraints. In this paper, we extend our constructive methodology by generalizing it to support the specification of families of TC algorithms. To show the feasibility of our approach, we reengineer six existing TC algorithms and develop e-kTC, a novel energy-efficient variant of the TC algorithm kTC. Finally, we evaluate a subset of the specified TC algorithms using a new tool integration of the graph transformation tool eMoflon and the Simonstrator network simulation framework.
    Comment: Corresponds to the accepted manuscript
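
    For readers unfamiliar with kTC, the following sketch states the underlying edge predicate in plain code (an assumed formulation with hypothetical weights, not the paper's graph-transformation encoding): an edge is inactivated if it is the longest edge of some triangle and at least k times as long as that triangle's shortest edge:

        def ktc_inactive_edges(weights, k=1.41):
            # weights: dict mapping frozenset({u, v}) -> link weight.
            nodes = {n for e in weights for n in e}
            inactive = set()
            for e, w in weights.items():
                u, v = tuple(e)
                for z in nodes - {u, v}:
                    e1, e2 = frozenset({u, z}), frozenset({v, z})
                    if e1 in weights and e2 in weights:   # (u, v, z) forms a triangle
                        w1, w2 = weights[e1], weights[e2]
                        if w >= max(w1, w2) and w >= k * min(w1, w2):
                            inactive.add(e)               # e is the triangle's longest edge
                            break
            return inactive

        links = {frozenset("ab"): 5.0, frozenset("bc"): 3.0, frozenset("ac"): 3.2}
        print(ktc_inactive_edges(links))   # drops the 5.0 edge, keeps the short sides

    The declarative style above is what the graph-constraint characterization captures; the paper's contribution is deriving, by static analysis, TC algorithms that preserve such constraints incrementally rather than re-checking them globally.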

    The Geopolitics of Numerical Space and the Rule of Algorithms

    Numerical media can simulate all the details of other media by accumulating the functions of all previous classical media (television, typewriter, etc.), and in doing so they have captured hitherto unattainable spaces of representation and expression. Through this capacity for digitally programming, via modular structures, all the previous functions of the classical mass media, numerical media succeed, by way of network reconfiguration and cultural transcoding, in presenting a retrospective picture of the world and of culture across the history of mankind. The inter-connectivity between numerical media and internet networks constitutes a planetary virtual network that some compare to “the world’s collective cortex”. However, given their increasing density and complexity, numerical media have become ever more hermetic and more complex in their deep functioning. Their gradual autonomy and emancipation from their creators and operators opens a process in which a mysterious artificial intelligence emerges, as an introduction to a new rule of algorithms. This is an introduction to a new virtual geopolitics of cyberspace in which strategies of conquest and of monopoly over information have become a rival arena of power play between official state actors and other asymmetric actors.

    Robust Architectures for Embedded Wireless Network Control and Actuation

    Networked Cyber-Physical Systems are fundamentally constrained by the tight coupling and closed-loop control of physical processes. To address actuation in such closed-loop wireless control systems, there is a strong need to rethink the communication architectures and protocols for reliability, coordination and control. We introduce the Embedded Virtual Machine (EVM), a programming abstraction in which controller tasks, with their control and timing properties, are maintained across physical node boundaries, and functionality is capable of migrating to the most competent set of physical controllers. In the context of process and discrete control, an EVM is the distributed runtime system that dynamically selects primary-backup sets of controllers given the spatial and temporal constraints of the underlying wireless network. EVM-based algorithms allow network control algorithms to operate seamlessly over less reliable wireless networks with topological changes. They introduce new capabilities such as predictable outcomes during sensor/actuator failure, adaptation to mode changes, and runtime optimization of resource consumption. An automated design flow from Simulink to platform-independent domain-specific languages and, subsequently, to platform-dependent code generation is presented. Through case studies in discrete and process control, we demonstrate the capabilities of EVM-based wireless network control systems.
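
    The primary-backup selection can be pictured with a small sketch; every name, field and threshold below is a hypothetical stand-in, not the EVM API: rank the controllers whose current link quality and spare capacity satisfy a task's constraints, then take the best as primary and the next few as backups:

        # Hypothetical sketch of primary-backup controller selection
        # (illustrative names only; not the EVM runtime's actual interface).
        def select_controllers(nodes, task, n_backups=1):
            # nodes: list of dicts with 'id', 'link_quality' (0..1), 'spare_cpu'.
            eligible = [n for n in nodes
                        if n["link_quality"] >= task["min_link_quality"]
                        and n["spare_cpu"] >= task["cpu_demand"]]
            ranked = sorted(eligible, key=lambda n: n["link_quality"], reverse=True)
            if len(ranked) < 1 + n_backups:
                raise RuntimeError("not enough competent controllers for this task")
            return ranked[0], ranked[1:1 + n_backups]    # primary, backups

        nodes = [{"id": "c1", "link_quality": 0.9, "spare_cpu": 0.5},
                 {"id": "c2", "link_quality": 0.7, "spare_cpu": 0.6},
                 {"id": "c3", "link_quality": 0.4, "spare_cpu": 0.9}]
        task = {"min_link_quality": 0.6, "cpu_demand": 0.3}
        print(select_controllers(nodes, task))   # c1 as primary, c2 as backup

    Re-running such a selection as link qualities change is one way to picture how controller functionality can migrate across node boundaries when the network degrades.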