
    Misbehaving TCP Receivers Can Cause Internet-Wide Congestion Collapse

    An "optimistic" acknowledgment (OptAck) is an acknowledgment sent by a misbehaving client for a data segment that it has not received. Whereas previous work has focused on OptAck as a means to greedily improve end-to-end performance, we study OptAck exclusively as a denial-of-service attack. Specifically, an attacker sends optimistic acknowledgments to many victims in parallel, thereby amplifying its effective bandwidth by a factor of 30 million (worst case). Thus, even a relatively modest attacker can totally saturate the paths from many victims back to the attacker. Worse, a distributed network of compromised machines ("zombies") can exploit this attack in parallel to bring about widespread, sustained congestion collapse. We implement this attack both in simulation and in a wide-area network, and show its severity both in terms of the number of packets and the total traffic generated. We engineer and implement a novel solution that requires no client or network modifications, allowing for practical deployment. Additionally, we demonstrate the solution's efficiency on a real network.
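    The amplification described above can be modeled with a back-of-the-envelope calculation: each forged ACK is a bare TCP/IP header (about 40 bytes), yet it can elicit many full-size data segments from the victim. The parameter values in this sketch are illustrative assumptions, not the paper's measurements.

    ```python
    def amplification(segments_elicited: int, mss: int = 1460, ack_size: int = 40) -> float:
        """Victim bytes transmitted per attacker byte sent: a small forged ACK
        (ack_size bytes) can elicit segments_elicited full-size segments of mss bytes."""
        return segments_elicited * mss / ack_size

    # Illustrative case: one forged ACK opens the sender's window by 100 MSS-sized
    # segments, yielding a 3650x amplification for that single ACK.
    per_ack = amplification(100)
    ```

    Because the attacker can run this against many victims in parallel and keep inflating each victim's congestion window, the aggregate factor grows far beyond this per-ACK figure.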

    A functional description of the advanced receiver

    The breadboard Advanced Receiver 2 (ARX 2) that is currently being built for future use in NASA's Deep Space Network (DSN) is described. The hybrid analog/digital receiver performs multiple functions, including carrier, subcarrier, and symbol synchronization. Tracking can be achieved for residual, suppressed, or hybrid carriers and for both sinusoidal and square-wave subcarriers. Other functions, such as time-tagged Doppler extraction and monitor/control, are also discussed, including acquisition algorithms and lock-detection schemes. System requirements are specified and a functional description of the ARX 2 is presented. The various digital signal-processing algorithms used are also discussed and illustrated with block diagrams.
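    As a generic illustration of the lock-detection idea mentioned above (a common Costas-style scheme, not necessarily the ARX 2's actual implementation), one can compare the power in the in-phase and quadrature arms of the tracking loop; their normalized difference approaches 1 when the carrier loop is locked and 0 when the phase error is random:

    ```python
    def lock_metric(i_samples, q_samples):
        """(I-power - Q-power) / total power: near 1.0 when the loop is locked
        (energy concentrated in the in-phase arm), near 0.0 when unlocked."""
        p_i = sum(x * x for x in i_samples)
        p_q = sum(x * x for x in q_samples)
        return (p_i - p_q) / (p_i + p_q)

    def is_locked(i_samples, q_samples, threshold=0.8):
        """Declare lock when the metric exceeds a chosen threshold (assumed value)."""
        return lock_metric(i_samples, q_samples) > threshold
    ```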

    Paxos based directory updates for geo-replicated cloud storage

    Modern cloud data stores (e.g., Spanner, Cassandra) replicate data across geographically distributed data centers for availability, redundancy, and optimized latencies. An important class of cloud data stores uses directories to track the location of individual data objects. Directory-based datastores allow flexible data placement and the ability to adapt placement in response to changing workload dynamics. However, a key challenge is maintaining and updating the directory state when replica placement changes. In this thesis, we present the design and implementation of a system that addresses the problem of correctly updating these directories. Our system is built around JPaxos, an open-source implementation of the Paxos consensus protocol. Using a Paxos cluster makes our system tolerant to failures that may occur during the update process, in contrast to approaches that rely on a single centralized coordinator. We instrument and evaluate our implementation on PRObE, a large-scale research testbed, using DummyNet to emulate wide-area network latencies. Our results show that directory-update latencies with our system are acceptable in WAN environments. Our contributions include (i) the design, implementation, and evaluation of a system for updating directories of geo-replicated cloud datastores; (ii) implementation experience with JPaxos; and (iii) experience with the PRObE testbed.
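    The core of the approach can be pictured as a replicated state machine: the consensus layer (JPaxos in the thesis) fixes a single order of directory updates, and every replica applies that chosen log deterministically, so all replicas converge on the same directory state. The sketch below is a minimal illustration of that idea with hypothetical names; the actual JPaxos integration is omitted.

    ```python
    class DirectoryReplica:
        """Directory state machine; entries must be applied in the consensus-chosen order."""

        def __init__(self):
            self.directory = {}  # object id -> list of data-center locations

        def apply(self, entry):
            # In the real system the ordering would come from the Paxos cluster;
            # here we simply apply entries deterministically in log order.
            obj_id, locations = entry
            self.directory[obj_id] = list(locations)

    # Two replicas applying the same chosen log converge on the same directory,
    # even if a placement change ("photo-42" moving to eu-west only) occurs mid-stream.
    chosen_log = [("photo-42", ["us-east", "eu-west"]), ("photo-42", ["eu-west"])]
    r1, r2 = DirectoryReplica(), DirectoryReplica()
    for entry in chosen_log:
        r1.apply(entry)
        r2.apply(entry)
    ```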

    Stream Processing Systems Benchmark: StreamBench

    Batch processing technologies (such as MapReduce, Hive, and Pig) have matured and are widely used in industry. These systems successfully handle the processing of large volumes of data. However, the data must first be collected and stored in a database or file system, which is time-consuming, and the batch analysis jobs must then run to completion before any results are available. Yet many use cases require analysis results from an unbounded sequence of data within seconds or sub-seconds. To satisfy the increasing demand for processing such streaming data, several stream processing systems have been implemented and widely adopted, such as Apache Storm, Apache Spark, IBM InfoSphere Streams, and Apache Flink. They all support online stream processing, high scalability, and task monitoring. However, how to evaluate stream processing systems before choosing one for production development remains an open question. In this thesis, we introduce StreamBench, a benchmark framework to facilitate performance comparisons of stream processing systems. A common API component and a core set of workloads are defined. We implement the common API and run benchmarks for three widely used open-source stream processing systems: Apache Storm, Flink, and Spark Streaming. A key feature of the StreamBench framework is that it is extensible -- it supports easy definition of new workloads, in addition to making it easy to benchmark new stream processing systems.
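    The "common API plus pluggable workloads" design can be sketched as an abstract workload interface that each benchmarked engine implements against its own runtime. The names below are hypothetical illustrations; StreamBench's actual API may differ.

    ```python
    from abc import ABC, abstractmethod

    class Workload(ABC):
        """Hypothetical common workload interface: each record from the stream is
        pushed through process(), regardless of the underlying engine."""

        @abstractmethod
        def process(self, record: str):
            ...

    class WordCount(Workload):
        """A classic streaming workload: maintain running word counts over records."""

        def __init__(self):
            self.counts = {}

        def process(self, record: str):
            for word in record.split():
                self.counts[word] = self.counts.get(word, 0) + 1
            return self.counts

    wc = WordCount()
    wc.process("storm flink spark")
    wc.process("flink")
    ```

    Defining a new workload then only requires subclassing the interface, which is the extensibility property the abstract highlights.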

    Scalable and adaptable tracking of humans in multiple camera systems

    The aim of this thesis is to track objects on a network of cameras, both within (intra) and across (inter) cameras. The algorithms must be adaptable to change and are learnt in a scalable approach. Uncalibrated, spatially separated cameras are used, and therefore tracking must be able to cope with object occlusions, illumination changes, and gaps between cameras.