6,188 research outputs found

    Incorporating TSN/BLS in AFDX for Mixed-Criticality Avionics Applications: Specification and Analysis

    In this paper, we propose an extension of the AFDX standard that incorporates a TSN/BLS shaper to homogenize the avionics communication architecture and enable the interconnection of avionics domains with mixed criticality levels, e.g., legacy AFDX traffic, Flight Control, and In-Flight Entertainment. First, we present the main specifications of the proposed solution. Then, we detail the corresponding worst-case timing analysis, using the Network Calculus framework, to infer real-time guarantees. Finally, we conduct a performance analysis of the proposal on a realistic AFDX configuration. Results show that the extended AFDX standard noticeably improves the medium-priority delay bounds while respecting the higher-priority constraints, in comparison with the legacy AFDX standard.
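    The worst-case delay bounds mentioned above come from Network Calculus, where a classic result states that a token-bucket flow with burst b and rate r, served by a rate-latency server with rate R >= r and latency T, has delay bounded by T + b/R. The sketch below illustrates that textbook bound only; the parameter values are made up and the paper's actual analysis for the extended AFDX shaper is far more involved.

```python
def nc_delay_bound(burst, rate, service_rate, latency):
    """Worst-case delay for a token-bucket flow alpha(t) = burst + rate*t
    served by a rate-latency server beta(t) = service_rate * max(t - latency, 0).
    Valid only under the stability condition rate <= service_rate."""
    if rate > service_rate:
        raise ValueError("unstable: arrival rate exceeds service rate")
    # The bound is the maximal horizontal deviation between alpha and beta:
    return latency + burst / service_rate

# Illustrative numbers: a 4 kbit burst over a 100 Mbit/s link
# with 40 microseconds of scheduling latency.
print(nc_delay_bound(burst=4000, rate=1e6, service_rate=1e8, latency=40e-6))
```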

    From Local to Global Stability in Stochastic Processing Networks through Quadratic Lyapunov Functions

    We construct a generic, simple, and efficient scheduling policy for stochastic processing networks, and provide a general framework to establish its stability. Our policy is randomized and prioritized: with high probability it prioritizes jobs that have been least routed through the network. We show that the network is globally stable under this policy if there exists an appropriate quadratic local Lyapunov function that provides a negative drift with respect to nominal loads at servers. Applying this generic framework, we obtain stability results for our policy in many important examples of stochastic processing networks: open multiclass queueing networks, parallel server networks, networks of input-queued switches, and a variety of wireless network models with interference constraints. Our main novelty is the construction of an appropriate global Lyapunov function from quadratic local Lyapunov functions, which we believe to be of broader interest. Comment: 39 pages, 4 figures
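    The "randomized and prioritized" rule described above can be sketched as follows: with high probability, serve a job that has passed through the fewest service stations so far, breaking ties at random. The function names, the (job, hop-count) representation, and the exploration parameter are illustrative stand-ins, not the paper's formal policy.

```python
import random

def least_routed_priority(jobs, epsilon=0.1, rng=None):
    """Pick the next job to serve. `jobs` is a list of (job_id, hops_so_far)
    pairs. With probability 1 - epsilon, serve a job that has been routed
    through the fewest stations; otherwise pick uniformly at random."""
    rng = rng or random.Random()
    if not jobs:
        return None
    if rng.random() < epsilon:
        return rng.choice(jobs)[0]          # occasional random choice
    min_hops = min(h for _, h in jobs)      # least-routed jobs first
    candidates = [j for j, h in jobs if h == min_hops]
    return rng.choice(candidates)[0]
```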

    Smart Jammer and LTE Network Strategies in An Infinite-Horizon Zero-Sum Repeated Game with Asymmetric and Incomplete Information

    LTE/LTE-Advanced networks are known to be vulnerable to denial-of-service and loss-of-service attacks from smart jammers. In this article, the interaction between a smart jammer and an LTE network is modeled as an infinite-horizon, zero-sum, asymmetric repeated game. The smart jammer and the eNodeB are modeled as the informed and the uninformed player, respectively. The main purpose of this article is to construct efficient suboptimal strategies for both players that can be used to solve the above-mentioned infinite-horizon repeated game with asymmetric and incomplete information. It has been shown in the game-theoretic literature that security strategies provide an optimal solution in zero-sum games. It is also shown that both players' security strategies in an infinite-horizon asymmetric game depend only on the history of the informed player's actions. However, fixed-size sufficient statistics are needed for both players to solve the above-mentioned game efficiently. The smart jammer uses its evolving belief state as the fixed-size sufficient statistic for the repeated game, whereas the LTE network (the uninformed player) uses the worst-case regret of its security strategy and its anti-discounted update. Although fixed-size sufficient statistics are employed by both players, optimal security strategy computation in λ-discounted asymmetric games is still hard to perform because of non-convexity. Hence, the problem is convexified in this article by devising 'approximated' security strategies for both players, based on an approximated optimal game value. However, 'approximated' strategies require full monitoring. Therefore, a simple yet effective 'expected' strategy is also constructed for the LTE network that does not require full monitoring. The simulation results show that the smart jammer plays non-revealing and misleading strategies.
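    The belief state used as a sufficient statistic above evolves by a standard Bayesian update in repeated games with incomplete information: after observing the informed player's action, the posterior over hidden states is the prior reweighted by each state's probability of producing that action. A minimal sketch of that one update step, with illustrative state and action names (not the paper's model):

```python
def update_belief(belief, strategy, action):
    """One Bayesian update of a belief state.
    belief: dict mapping hidden state -> prior probability.
    strategy: dict mapping state -> {action: probability}, the informed
    player's (jammer's) mixed strategy in each state.
    Returns the posterior after observing `action`."""
    unnorm = {k: p * strategy[k].get(action, 0.0) for k, p in belief.items()}
    total = sum(unnorm.values())
    if total == 0.0:
        return dict(belief)  # action impossible under all states: keep prior
    return {k: v / total for k, v in unnorm.items()}
```

    Note that a fully non-revealing strategy (the same mixed action in every state) leaves the belief unchanged, which is consistent with the abstract's finding that the smart jammer plays non-revealing strategies.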

    FATAL+: A Self-Stabilizing Byzantine Fault-tolerant Clocking Scheme for SoCs

    We present the concept and implementation of a self-stabilizing Byzantine fault-tolerant distributed clock generation scheme for multi-synchronous GALS architectures in critical applications. It combines a variant of a recently introduced self-stabilizing algorithm for generating low-frequency, low-accuracy synchronized pulses with a simple non-stabilizing high-frequency, high-accuracy clock synchronization algorithm. We provide thorough correctness proofs and a performance analysis, which use methods from fault-tolerant distributed computing research but also address hardware-related issues like metastability. The algorithm, which consists of several concurrent communicating asynchronous state machines, has been implemented in VHDL using Petrify in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. An experimental validation of this prototype has been carried out to confirm the skew and clock frequency bounds predicted by the theoretical analysis, as well as the very short stabilization times (required for recovering after excessively many transient failures) achievable in practice. Comment: arXiv admin note: significant text overlap with arXiv:1105.478

    Performance Guarantees of Distributed Algorithms for QoS in Wireless Ad Hoc Networks

    Consider a wireless network where each communication link has a minimum-bandwidth quality-of-service requirement. Certain pairs of wireless links interfere with each other due to being in the same vicinity, and this interference is modeled by a conflict graph. Given the conflict graph and link bandwidth requirements, the objective is to determine, using only localized information, whether the demands of all the links can be satisfied. At one extreme, each node knows the demands of only its neighbors; at the other extreme, there exists an optimal, centralized scheduler that has global information. The present work interpolates between these two extremes by quantifying the tradeoff between the degree of decentralization and the performance of the distributed algorithm. This open problem is resolved for the primary interference model, and the following general result is obtained: if each node knows the demands of all links in a ball of radius d centered at the node, then there is a distributed algorithm whose performance is away from that of an optimal, centralized algorithm by a factor of at most (2d+3)/(2d+2). The tradeoff between performance and complexity of the distributed algorithm is also analyzed. It is shown that for line networks under the protocol interference model, the row constraints are a factor of at most 3 away from optimal. Both bounds are best possible.
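    The stated bound (2d+3)/(2d+2) quantifies how fast the distributed algorithm approaches the centralized optimum as the information radius grows: 3/2 with only neighbor knowledge (d = 0), 5/4 at d = 1, and tending to 1 as d increases. A trivial computation of that ratio:

```python
from fractions import Fraction

def locality_performance_ratio(d):
    """Worst-case gap between an optimal centralized schedule and the
    distributed algorithm when each node sees demands within radius d,
    per the (2d+3)/(2d+2) bound for the primary interference model."""
    if d < 0:
        raise ValueError("radius must be non-negative")
    return Fraction(2 * d + 3, 2 * d + 2)

# The ratio shrinks toward 1 as nodes see farther:
print([float(locality_performance_ratio(d)) for d in range(4)])
```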

    SDN Flow Entry Management Using Reinforcement Learning

    Modern information technology services largely depend on cloud infrastructures. These cloud infrastructures are built on top of datacenter networks (DCNs) constructed with high-speed links, fast switching gear, and redundancy to offer better flexibility and resiliency. In this environment, network traffic includes long-lived (elephant) and short-lived (mice) flows with partitioned and aggregated traffic patterns. Although SDN-based approaches can efficiently allocate networking resources for such flows, the overhead due to network reconfiguration can be significant. With the limited capacity of the Ternary Content-Addressable Memory (TCAM) deployed in an OpenFlow-enabled switch, it is crucial to determine which forwarding rules should remain in the flow table and which should be processed by the SDN controller in case of a table miss on the SDN switch, so as to reduce the long-term control-plane overhead between the controller and the switches. To achieve this goal, we propose a machine learning technique that utilizes two variations of reinforcement learning (RL): the first is based on a traditional RL algorithm, while the other is based on deep RL. Emulation results using the RL algorithm show around a 60% improvement in reducing the long-term control-plane overhead, and around a 14% improvement in the table-hit ratio compared to the Multiple Bloom Filters (MBF) method, given a fixed flow-table size of 4 KB. Comment: 19 pages, 11 figures, published in ACM Transactions on Autonomous and Adaptive Systems (TAAS) 201
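    The traditional-RL variant described above can be pictured as a tabular Q-learning agent that, on each table-miss, decides whether to install a rule in the limited TCAM or keep letting the controller handle the flow. The states, actions, and reward signal below are illustrative stand-ins, not the paper's actual design.

```python
import random
from collections import defaultdict

class FlowTableAgent:
    """Minimal epsilon-greedy tabular Q-learning sketch for flow-entry
    management: choose between installing a rule in the flow table or
    punting the flow to the SDN controller."""
    ACTIONS = ("install", "controller")

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1, rng=None):
        self.q = defaultdict(float)          # (state, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = rng or random.Random()

    def act(self, state):
        """Epsilon-greedy action selection."""
        if self.rng.random() < self.eps:
            return self.rng.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[(next_state, a)] for a in self.ACTIONS)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```

    In a real deployment the reward would encode the control-plane cost saved (table hits) against TCAM occupancy; here it is left abstract.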

    Information and Memory in Dynamic Resource Allocation

    We propose a general framework, dubbed Stochastic Processing under Imperfect Information (SPII), to study the impact of information constraints and memory on dynamic resource allocation. The framework involves a Stochastic Processing Network (SPN) scheduling problem in which the scheduler may access the system state only through a noisy channel, and resource allocation decisions must be carried out through the interaction between an encoding policy (which observes the state) and an allocation policy (which chooses the allocation). Applications in the management of large-scale data centers and human-in-the-loop service systems are among our chief motivations. We quantify the degree to which information constraints reduce the size of the capacity region in general SPNs, and how such reduction depends on the amount of memory available to the encoding and allocation policies. Using a novel metric, the capacity factor, our main theorem characterizes the reduction in capacity region (under "optimal" policies) for all non-degenerate channels, and across almost all combinations of memory sizes. Notably, the theorem demonstrates, in substantial generality, that (1) the presence of a noisy channel always reduces capacity, (2) more memory for the allocation policy always improves capacity, and (3) more memory for the encoding policy has little to no effect on capacity. Finally, all of our positive (achievability) results are established through constructive, implementable policies. Comment: 48 pages, 5 figures, 1 table

    Links as a Service (LaaS): Feeling Alone in the Shared Cloud

    The most demanding tenants of shared clouds require complete isolation from their neighbors, in order to guarantee that their application performance is not affected by other tenants. Unfortunately, while shared clouds can offer an option whereby tenants obtain dedicated servers, they do not offer any network provisioning service, which would shield these tenants from network interference. In this paper, we introduce Links as a Service, a new abstraction for cloud service that provides physical isolation of network links. Each tenant gets an exclusive set of links forming a virtual fat tree, and is guaranteed to receive the exact same bandwidth and delay as if it were alone in the shared cloud. Under simple assumptions, we derive theoretical conditions for enabling LaaS without capacity over-provisioning in fat-trees. New tenants are only admitted in the network when they can be allocated hosts and links that maintain these conditions. Using experiments on real clusters as well as simulations with real-life tenant sizes, we show that LaaS completely avoids the performance degradation caused by traffic from concurrent tenants on shared links. Compared to mere host isolation, LaaS can improve the application performance by up to 200%, at the cost of a 10% reduction in the cloud utilization. Comment: CCIT Report 888, September 2015, EE Pub No. 1845, Technion, Israel
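    The core invariant behind the abstraction above is that tenants' link sets are mutually exclusive: no physical link is ever shared. A minimal sketch of that admission-time check, with an illustrative link representation (unordered endpoint pairs) rather than the paper's fat-tree allocation algorithm:

```python
def links_are_isolated(allocations):
    """Check the LaaS isolation invariant: no physical link appears in
    more than one tenant's allocated set. `allocations` maps a tenant
    name to a set of (node_a, node_b) link-endpoint pairs."""
    seen = {}
    for tenant, links in allocations.items():
        for link in links:
            edge = tuple(sorted(link))       # links are undirected
            if edge in seen and seen[edge] != tenant:
                return False                 # link shared by two tenants
            seen[edge] = tenant
    return True
```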

    Communication Complexity of Byzantine Agreement, Revisited

    As Byzantine Agreement (BA) protocols find application in large-scale decentralized cryptocurrencies, an increasingly important problem is to design BA protocols with improved communication complexity. A few existing works have shown how to achieve subquadratic BA under an adaptive adversary. Intriguingly, they all make a common relaxation about the adaptivity of the attacker: if an honest node sends a message and then gets corrupted in some round, the adversary cannot erase the message that was already sent; henceforth we say that such an adversary cannot perform "after-the-fact removal". By contrast, many (super-)quadratic BA protocols in the literature can tolerate after-the-fact removal. In this paper, we first prove that disallowing after-the-fact removal is necessary for achieving subquadratic-communication BA. Next, we show new subquadratic binary BA constructions (of course, assuming no after-the-fact removal) that achieve near-optimal resilience and expected constant rounds under standard cryptographic assumptions and a public-key infrastructure (PKI), in both synchronous and partially synchronous settings. In comparison, all known subquadratic protocols make additional strong assumptions such as random oracles or the ability of honest nodes to erase secrets from memory, and even with these strong assumptions, no prior work can achieve the above properties. Lastly, we show that some setup assumption is necessary for achieving subquadratic multicast-based BA. Comment: The conference version of this paper appeared in PODC 201
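    The standard route to subquadratic communication in this setting is to have only a small sampled committee speak each round, instead of all-to-all exchange among n nodes. A back-of-the-envelope message count shows why: a committee of polylogarithmic size brings the per-round cost from Theta(n^2) down to o(n^2). The committee-size formula below is purely illustrative, not the one from this paper.

```python
import math

def messages_all_to_all(n, rounds=1):
    """Messages per round when every node sends to every other node."""
    return rounds * n * (n - 1)

def messages_committee(n, committee_size, rounds=1):
    """Messages when only a sampled committee of the given size
    broadcasts to all n nodes each round."""
    return rounds * committee_size * (n - 1)

n = 10_000
c = math.ceil(math.log2(n)) ** 2  # illustrative polylog(n) committee size
print(messages_committee(n, c), "vs", messages_all_to_all(n))
```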

    Reliable Broadcast in Practical Networks: Algorithm and Evaluation

    Reliable broadcast is an important primitive to ensure that a source node can reliably disseminate a message to all the non-faulty nodes in an asynchronous and failure-prone networked system. Byzantine Reliable Broadcast protocols were first proposed by Bracha in 1987 and have been widely used in fault-tolerant systems and protocols. Several recent protocols have improved the round and bit complexity of these algorithms. Motivated by the constraints in practical networks, we revisit the problem. In particular, we use cryptographic hash functions and erasure coding to reduce communication and computation complexity and to simplify the protocol design. We also identify the fundamental trade-offs of Byzantine Reliable Broadcast protocols with respect to resilience (number of nodes), local computation, round complexity, and bit complexity. Finally, we design and implement a general testing framework for similar communication protocols, and evaluate our protocols using it. The results demonstrate that our protocols have superior performance in practical networks.
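    The hash-plus-erasure-coding idea above lets relays forward small verifiable fragments instead of echoing the full message. A minimal sketch of the fragment/verify mechanics: a plain split stands in for a real k-of-n erasure code (e.g. Reed-Solomon), and per-fragment SHA-256 digests stand in for the protocol's actual commitment scheme.

```python
import hashlib

def make_fragments(message: bytes, n: int):
    """Source side: split the message into n fragments and attach a
    per-fragment digest so relays can check fragments independently."""
    size = -(-len(message) // n)  # ceiling division
    frags = [message[i * size:(i + 1) * size] for i in range(n)]
    digests = [hashlib.sha256(f).hexdigest() for f in frags]
    return frags, digests

def verify_and_reassemble(frags, digests):
    """Receiver side: accept only fragments whose digest matches,
    then reassemble the original message."""
    for f, d in zip(frags, digests):
        if hashlib.sha256(f).hexdigest() != d:
            raise ValueError("corrupted fragment")
    return b"".join(frags)
```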