
    Dimming the Internet: Detecting Throttling as a Mechanism of Censorship in Iran

    In the days immediately following the contested June 2009 presidential election, Iranians attempting to reach news content and social media platforms were subject to unprecedented levels of degradation, blocking, and jamming of communications channels. Rather than shut down networks, which would draw attention and controversy, the government was rumored to have slowed connection speeds to rates that rendered the Internet nearly unusable, especially for the consumption and distribution of multimedia content. Since then, political upheavals elsewhere have been associated with headlines such as "High usage slows down Internet in Bahrain" and "Syrian Internet slows during Friday protests once again," with further rumors linking poor connectivity to political instability in Myanmar and Tibet. For governments threatened by public expression, throttling Internet connectivity appears to be an increasingly preferred and less detectable method of stifling the free flow of information. To assess this perceived trend and begin to create systems of accountability and transparency around such practices, we outline an initial strategy for using a ubiquitous set of network measurements as a monitoring service, then apply that methodology to shed light on the recent history of censorship in Iran. Comment: Working Draft
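
    The measurement strategy sketched in the abstract can be illustrated with a toy detector: compare throughput samples against a historical baseline and flag only sustained degradation, so transient dips are not mistaken for throttling. This is a minimal sketch; the function name, drop ratio, and run length are illustrative assumptions, not the paper's actual method.

```python
def detect_throttling(samples, baseline, drop_ratio=0.5, min_run=3):
    """Flag sample indices inside sustained periods where measured
    throughput falls below `drop_ratio` of the historical baseline.
    `samples` is a time-ordered list of throughput measurements; the
    thresholds here are illustrative, not from the paper."""
    flagged, run = [], 0
    for i, s in enumerate(samples):
        if s < baseline * drop_ratio:
            run += 1
            if run >= min_run:      # sustained degradation, not a blip
                flagged.append(i)
        else:
            run = 0                 # throughput recovered
    return flagged
```

    A real monitoring service would aggregate such signals across many vantage points and correlate them with external events before attributing a slowdown to deliberate throttling.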

    Low Latency Datacenter Networking: A Short Survey

    Datacenters are the cornerstone of the big data infrastructure supporting numerous online services. The demand for interactivity, which significantly impacts user experience and provider revenue, translates into stringent timing requirements for flows in datacenter networks. Low latency networking is therefore becoming a major concern of both industry and academia. We provide a short survey of recent progress made by the networking community on low latency datacenter networks. We propose a taxonomy that categorizes existing work by four main techniques: reducing queue length, accelerating retransmissions, prioritizing mice flows, and exploiting multi-path. We then review selected papers, highlight their principal ideas, and discuss their pros and cons. We also present our perspective on the research challenges and opportunities, hoping to inspire more future work in this space. Comment: 6 pages

    Achieving both High Energy Efficiency and High Performance in On-Chip Communication using Hierarchical Rings with Deflection Routing

    Hierarchical ring networks, which hierarchically connect multiple levels of rings, have been proposed to improve the scalability of ring interconnects, but past hierarchical ring designs sacrifice some of the key benefits of rings by introducing more complex in-ring buffering and buffered flow control. Our goal in this paper is to design a new hierarchical ring interconnect that maintains most of the simplicity of traditional ring designs (no in-ring buffering or buffered flow control) while achieving scalability as high as that of more complex buffered hierarchical ring designs. Our design, called HiRD (Hierarchical Rings with Deflection), mostly maintains the simplicity of traditional ring topologies while providing higher energy efficiency and scalability. First, HiRD has no buffering or buffered flow control within individual rings, and requires only a small amount of buffering between the ring hierarchy levels. When inter-ring buffers are full, our design simply deflects flits so that they circle the ring and try again, which eliminates the need for in-ring buffering. Second, we introduce two simple mechanisms that provide an end-to-end delivery guarantee within the entire network without impacting the critical path or latency of the vast majority of network traffic. HiRD attains equal or better performance at better energy efficiency than multiple versions of both a previous hierarchical ring design and a traditional single ring design. We also analyze our design's characteristics and its injection and delivery guarantees. We conclude that HiRD is a compelling design point that allows higher energy efficiency and scalability while retaining the simplicity and appeal of conventional ring-based designs.
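
    The deflection idea at the heart of the design can be sketched in a few lines: a flit crossing between hierarchy levels either enters the small inter-ring buffer or, if that buffer is full, is deflected to circle its current ring and retry on the next revolution. This is an illustrative sketch with hypothetical names, not the authors' implementation.

```python
def try_transfer(flit, inter_ring_buffer, capacity):
    """At a bridge router, a flit leaving its local ring tries to enter
    the small inter-ring buffer. If the buffer is full, the flit is
    deflected: it stays on its current ring and retries later, so no
    in-ring buffering is needed. Names here are hypothetical."""
    if len(inter_ring_buffer) < capacity:
        inter_ring_buffer.append(flit)
        return True               # transferred to the next hierarchy level
    flit["deflections"] += 1      # circle the ring and try again
    return False
```

    Because a deflected flit simply keeps moving, the routers themselves stay bufferless; only the bridges between rings hold a handful of flits.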

    An ISP Level Solution to Combat DDoS Attacks using Combined Statistical Based Approach

    Disruption of service caused by DDoS attacks is an immense threat to the Internet today. These attacks can disrupt the availability of Internet services completely, either by exhausting computational or communication resources through the sheer volume of packets sent from distributed locations in a coordinated manner, or by gracefully degrading network performance through attack traffic sent at a low rate. In this paper, we describe a novel framework that detects a variety of DDoS attacks by monitoring the propagation of abrupt traffic changes inside an ISP domain and then characterizes the flows that carry attack traffic. Two statistical metrics, Volume and Flow, are used as parameters to detect DDoS attacks. The effectiveness of an anomaly-based detection and characterization system depends strongly on the accuracy of its threshold settings: inaccurate threshold values cause a large number of false positives and false negatives. Therefore, in our scheme, Six-Sigma and varying-tolerance-factor methods are used to identify threshold values accurately and dynamically for the various statistical metrics. The NS-2 network simulator on Linux is used as the simulation testbed to validate the effectiveness of the proposed approach. Different attack scenarios are implemented by varying the total number of zombie machines and the attack strength. Comparison with a volume-based approach clearly indicates the superiority of our proposed system.
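
    The threshold-setting idea can be sketched as follows: derive an anomaly threshold from a window of recent history as the mean plus a tolerance factor times the standard deviation, in the spirit of the Six-Sigma method mentioned above. Function names and the example tolerance value are assumptions for illustration, not the paper's exact formulation.

```python
from statistics import mean, stdev

def dynamic_threshold(history, tolerance=6.0):
    """Six-Sigma style threshold: the metric (e.g. traffic volume or
    flow count) is anomalous when it exceeds the historical mean by
    `tolerance` standard deviations. The tolerance factor can be varied
    per metric to trade false positives against missed attacks."""
    mu, sigma = mean(history), stdev(history)
    return mu + tolerance * sigma

def is_attack(value, history, tolerance=6.0):
    """Compare the current interval's metric against the threshold."""
    return value > dynamic_threshold(history, tolerance)
```

    Recomputing the threshold over a sliding window lets it track normal diurnal traffic patterns instead of relying on a fixed, hand-tuned value.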

    Apache Spark Streaming, Kafka and HarmonicIO: A Performance Benchmark and Architecture Comparison for Enterprise and Scientific Computing

    This paper presents a benchmark of stream processing throughput comparing Apache Spark Streaming (under file-, TCP socket- and Kafka-based stream integration) with a prototype P2P stream processing framework, HarmonicIO. Maximum throughput is measured for a spectrum of stream processing loads, specifically those with large message sizes (up to 10 MB) and heavy CPU loads, which are more typical of scientific computing use cases (such as microscopy) than of enterprise contexts. A detailed exploration of the performance characteristics of these streaming sources under varying loads reveals an interplay of performance trade-offs, uncovering the boundaries of good performance for each framework and streaming source integration. We compare with theoretical bounds in each case. Based on these results, we suggest which frameworks and streaming sources are likely to offer good performance for a given load. Broadly, the advantages of Spark's rich feature set come at the cost of sensitivity to message size in particular: common stream source integrations can perform poorly in the 1 MB-10 MB range. The simplicity of HarmonicIO offers more robust performance in this region, especially for raw CPU utilization.

    The persistent congestion problem of FAST-TCP: analysis and solutions

    FAST-TCP achieves better performance than traditional TCP-Reno schemes, but it is inherently unfair to older connections due to incorrect estimates of the round-trip propagation delay. This paper presents a model for this anomalous behavior of FAST flows, known as the persistent congestion problem. We first develop an elementary analysis for a scenario with just two flows, and then build up the general case with an arbitrary number of flows. The model correctly quantifies how much unfairness arises among the different connections, confirming experimental observations made in several previous studies. We build on this model to develop an algorithm that obtains a good estimate of the propagation delay for FAST-TCP, enabling fairness between aged and new connections while preserving the high throughput and low buffer occupancy of the original protocol. Furthermore, our proposal only requires a modification of the sender host, avoiding any need to upgrade intermediate routers.
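
    The root cause described above is easy to see in the standard propagation-delay estimator, which takes the minimum RTT a flow has observed. The sketch below uses illustrative names: a flow that starts while older flows maintain a standing queue never samples the true propagation delay, so its minimum is biased upward.

```python
def base_rtt_estimate(rtt_samples):
    """FAST-style propagation-delay estimate: the minimum RTT observed
    so far by this flow. If every sample already includes queueing
    delay caused by older flows, the estimate is biased upward -- the
    source of the persistent congestion problem."""
    return min(rtt_samples)
```

    For example, with a true propagation delay of 40 ms, a flow joining behind an 8 ms standing queue only ever sees RTTs of 48 ms or more, so it overestimates the propagation delay and in turn keeps extra packets queued, inflating the estimate of even later arrivals.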

    IOArbiter: Dynamic Provisioning of Backend Block Storage in the Cloud

    With the advent of virtualization technology, cloud computing realizes on-demand computing. The capability of dynamic resource provisioning is a fundamental driving factor for users to adopt cloud technology, and it is equally important for cloud service providers seeking to optimize the cost of running their infrastructure. Despite many technological advances in related areas, however, infrastructure providers must still decide on a hardware configuration before deploying a cloud infrastructure, especially from the storage perspective. This static provisioning practice causes problems in meeting tenant requirements, which often come into the picture later. In this paper, we propose a system called IOArbiter that enables the dynamic creation of the underlying storage implementation in the cloud. IOArbiter defers storage provisioning to the time at which a tenant actually requests storage space. As a result, an underlying storage implementation, e.g., RAID-5, RAID-6, or a Ceph storage pool with 6+3 erasure coding, is materialized at volume creation time. Using our prototype implementation with OpenStack Cinder, we show that IOArbiter can simultaneously satisfy a number of different tenant demands, which may not be possible with a static configuration strategy. Additionally, QoS mechanisms such as admission control and dynamic throttling help the system significantly mitigate the noisy neighbor problem. Comment: 7 pages, 3 figures
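
    Deferred provisioning can be pictured as a policy lookup at volume-creation time: the tenant's stated requirements are mapped to a backend implementation only when the volume is actually requested. The policy table below is entirely hypothetical and exists purely to illustrate the late-binding idea; it is not IOArbiter's actual logic.

```python
def choose_backend(required_iops, durability_nines):
    """Hypothetical policy: pick a storage implementation at volume
    creation time from the tenant's IOPS and durability requirements,
    instead of fixing the backend at deployment time. Backend names
    and thresholds are made up for illustration."""
    if durability_nines >= 11:
        return "ceph-pool-6+3-erasure"   # high durability, erasure-coded
    if required_iops >= 10000:
        return "raid-5-ssd"              # performance-oriented
    return "raid-6-hdd"                  # capacity-oriented default
```

    The point is that two tenants with very different demands can be satisfied by the same cloud without the operator having committed to one backend up front.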

    IOTune: A G-states Driver for Elastic Performance of Block Storage

    Imagine a disk that provides baseline performance at a relatively low price during low-load periods, but whose performance is automatically promoted in situ and in real time when workloads demand more resources. In the hardware era this was hardly achievable, but such a disk is becoming reality thanks to technical advances in software-defined storage, which allow volume performance to be adjusted on the fly. We propose IOTune, a resource management middleware that employs software-defined storage primitives to implement G-states for virtual block devices. G-states enable virtual block devices to serve at multiple performance gears, resolving the conflict between immutable resource reservation and dynamic resource demands, and always achieving right-provisioning of resources for workloads. Alongside G-states, we also propose a new block storage pricing policy for cloud providers. Our case study applying G-states to cloud block storage verifies the effectiveness of the IOTune framework. Trace-replay evaluations demonstrate that storage volumes with G-states adapt to workload fluctuations. For tenants, G-states enable volumes to provide much better QoS at the same cost of ownership compared with static IOPS provisioning and the I/O credit mechanism. G-states also reduce I/O tail latencies by one to two orders of magnitude. From the standpoint of cloud providers, G-states improve storage utilization, creating value and enhancing competitiveness. G-states supported by IOTune provide a new paradigm for storage resource management and pricing in multi-tenant clouds. Comment: 15 pages, 10 figures
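
    The gear-shifting idea behind G-states can be sketched as choosing the lowest performance gear whose IOPS cap still leaves headroom over observed demand: shift up under load, shift back down when demand subsides. Gear names, caps, and the headroom factor below are illustrative assumptions, not IOTune's actual policy.

```python
def select_gear(observed_iops, gears, headroom=0.8):
    """Pick the lowest gear whose IOPS cap covers current demand with
    headroom. `gears` is an ascending list of (name, iops_cap) pairs;
    values are hypothetical, for illustration only."""
    for name, cap in gears:
        if observed_iops <= cap * headroom:
            return name
    return gears[-1][0]   # demand exceeds every cap: highest gear
```

    Run periodically against a volume's measured IOPS, a loop like this keeps the volume in the cheapest gear that still right-provisions the workload.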

    Memory DoS Attacks in Multi-tenant Clouds: Severity and Mitigation

    In cloud computing, network Denial of Service (DoS) attacks are well studied and defenses have been implemented, but severe DoS attacks on a victim's working memory by a single hostile VM are not well understood. Memory DoS attacks are Denial of Service (or Degradation of Service) attacks caused by contention for hardware memory resources on a cloud server. Despite the strong memory isolation that the software virtualization layer enforces between virtual machines (VMs) in cloud servers, the underlying hardware memory layers are still shared by the VMs and can be exploited by a clever attacker in a hostile VM co-located on the same server as the victim VM, denying the victim the working memory it needs. We first show quantitatively the severity of contention on different memory resources. We then show that a malicious cloud customer can mount low-cost attacks that cause severe performance degradation for a Hadoop distributed application and a 38X increase in response time for an e-commerce website in the Amazon EC2 cloud. We then design an effective new defense against these memory DoS attacks, using a statistical metric to detect their existence and execution throttling to mitigate the attack damage. We achieve this by a novel re-purposing of existing hardware performance counters and duty-cycle modulation for security, rather than for improving performance or power consumption. We implement a full prototype on the OpenStack cloud system. Our evaluations show that this defense system can effectively defeat memory DoS attacks with negligible performance overhead. Comment: 18 pages
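
    The detection side can be caricatured with a simple statistical test: periodically time memory probes and flag contention when most of them run far slower than the victim's uncontended baseline. This is an illustrative stand-in for the paper's performance-counter-based metric; the function name, factor, and quorum are assumptions.

```python
def memory_dos_suspected(probe_latencies, baseline_latency, factor=2.0,
                         quorum=0.7):
    """Flag possible memory DoS when at least `quorum` of the latency
    probes (e.g. timed memory accesses) run `factor` times slower than
    the uncontended baseline. Thresholds are illustrative, not the
    paper's calibrated values."""
    slow = sum(1 for lat in probe_latencies
               if lat > factor * baseline_latency)
    return slow / len(probe_latencies) >= quorum
```

    On detection, the mitigation described above would throttle the suspected VM's execution (e.g. via duty-cycle modulation) rather than migrate or kill it.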

    Umbrella: Enabling ISPs to Offer Readily Deployable and Privacy-Preserving DDoS Prevention Services

    Defending against distributed denial of service (DDoS) attacks in the Internet is a fundamental problem. However, recent industrial interviews with over 100 security experts from more than ten industry segments indicate that DDoS problems have not been fully addressed. The reasons are twofold. On one hand, many academic proposals that are provably secure see little real-world deployment. On the other hand, the operating model of existing DDoS-prevention service providers (e.g., Cloudflare, Akamai) is privacy-invasive for large organizations (e.g., governments). In this paper, we present Umbrella, a new DDoS defense mechanism that enables Internet Service Providers (ISPs) to offer readily deployable and privacy-preserving DDoS prevention services to their customers. At its core, Umbrella develops a multi-layered defense architecture to defend against a wide spectrum of DDoS attacks. In particular, the flood throttling layer stops amplification-based DDoS attacks; the congestion resolving layer, aimed at sophisticated attacks that cannot be easily filtered, enforces congestion accountability to ensure that legitimate flows receive their fair shares regardless of attackers' strategies; and finally, the user-specific layer allows DDoS victims to enforce the traffic control policies that best satisfy their business requirements. Based on a Linux implementation, we demonstrate that Umbrella is capable of dealing with large-scale attacks involving millions of attack flows while imposing negligible packet processing overhead. Further, our physical testbed experiments and large-scale simulations show that Umbrella is effective in mitigating various DDoS attacks.
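
    The job of a flood-throttling layer, policing per-flow rates at the ISP edge, can be sketched with a classic token bucket. This is a generic illustration of rate policing, not Umbrella's actual mechanism; the rate and burst values are arbitrary.

```python
class TokenBucket:
    """Per-flow token bucket: packets are forwarded only while tokens
    remain; tokens refill at `rate` per second up to `burst`. A minimal
    stand-in for edge traffic policing, with hypothetical parameters."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size=1):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True           # forward the packet
        return False              # drop: flow exceeded its allowance
```

    An amplification flood exhausts its bucket almost immediately and is dropped at the edge, while a well-behaved flow under its allocated rate is never touched.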