    Robust control tools for traffic monitoring in TCP/AQM networks

    Several studies have considered control theory tools for traffic control in communication networks, such as the congestion control issue in IP (Internet Protocol) routers. In this paper, we propose to design a linear observer for time-delay systems to address the traffic monitoring issue in TCP/AQM (Transmission Control Protocol/Active Queue Management) networks. Because of several propagation delays and the queueing delay, the TCP/AQM system is modeled as a multiple-delay system of a particular form. Hence, appropriate robust control tools, such as quadratic separation, are adopted to construct a delay-dependent observer for TCP flow estimation. Note that the developed mechanism also enables anomaly detection for a class of DoS (Denial of Service) attacks. Finally, simulations via the network simulator NS-2 and an emulation experiment validate the proposed methodology.
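
    As background for the delayed dynamics the abstract alludes to: TCP/AQM control papers in this line of work commonly start from the fluid-flow model of Misra, Gong and Towsley. A minimal sketch of that standard model (not necessarily the exact form used in this paper) is

        \begin{aligned}
        \dot{W}(t) &= \frac{1}{R(t)} - \frac{W(t)\,W(t-R(t))}{2\,R(t-R(t))}\,p\big(t-R(t)\big),\\
        \dot{q}(t) &= \frac{N}{R(t)}\,W(t) - C, \qquad R(t) = \frac{q(t)}{C} + T_p,
        \end{aligned}

    where W is the TCP congestion window, q the router queue length, p the AQM marking/drop probability, N the number of flows, C the link capacity and T_p the propagation delay. The state-dependent delay R(t) is what turns flow estimation into a time-delay observer problem.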

    Flooding attacks to internet threat monitors (ITM): Modeling and counter measures using botnet and honeypots

    Internet Threat Monitoring (ITM) is a globally scoped Internet monitoring system whose goal is to measure, detect, characterize, and track threats such as distributed denial of service (DDoS) attacks and worms. To block such monitoring, attackers target the ITM system itself. In this paper we address flooding attacks against an ITM system, in which the attacker attempts to exhaust the network's and the ITM's resources, such as network bandwidth, computing power, or operating system data structures, by sending malicious traffic. We propose an information-theoretic framework that models botnet-driven flooding attacks on the ITM. Based on this model, we generalize the flooding attacks and propose an effective attack detection scheme using honeypots.
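
    The information-theoretic angle can be illustrated with a toy entropy detector. The sketch below shows the generic idea only; the window representation, threshold and function names are illustrative assumptions, not the paper's framework:

        import math
        from collections import Counter

        def source_entropy(src_addrs):
            # Shannon entropy (bits) of the source-address distribution
            # observed in one time window of traffic.
            if not src_addrs:
                return 0.0
            counts = Counter(src_addrs)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total)
                        for c in counts.values())

        def looks_like_flood(window, baseline_entropy, threshold_bits=1.0):
            # A flood from many spoofed sources pushes entropy up, while a
            # flood from a handful of bots pushes it down, so a large
            # deviation from the learned baseline in either direction is
            # flagged. `threshold_bits` is a tuning parameter.
            return abs(source_entropy(window) - baseline_entropy) > threshold_bits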

    The Motivation, Architecture and Demonstration of Ultralight Network Testbed

    In this paper we describe progress in the NSF-funded Ultralight project and a recent demonstration of Ultralight technologies at SuperComputing 2005 (SC|05). The goal of the Ultralight project is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. Ultralight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed from end to end. Thus we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we present the motivation for, and an overview of, the Ultralight project. We then cover early results in the various working areas of the project. The remainder of the paper describes our experiences of the Ultralight network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this Challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many sites interconnected by the Ultralight backbone network. The exercise highlighted the benefits of Ultralight's research and development efforts that are enabling new and advanced methods of distributed scientific data analysis.
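
    One concrete piece of the end-host side of such record transfers: on long fat networks, socket buffers must cover the bandwidth-delay product (BDP), or a single TCP flow cannot fill the pipe. A small illustrative computation in Python (the path figures are hypothetical, not the SC|05 configuration):

        def bdp_bytes(bandwidth_bps, rtt_s):
            # Bandwidth-delay product: the bytes in flight needed to keep
            # one TCP connection's pipe full.
            return int(bandwidth_bps * rtt_s / 8)

        # A hypothetical 10 Gbit/s path with 100 ms round-trip time:
        print(f"{bdp_bytes(10e9, 0.100) / 2**20:.0f} MiB")  # ~119 MiB

        # Stock Linux buffer caps are far smaller than this, which is why
        # high-throughput testbeds raise net.core.rmem_max/wmem_max and
        # net.ipv4.tcp_rmem/tcp_wmem so TCP autotuning can reach the BDP.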

    The Design and Demonstration of the Ultralight Testbed

    In this paper we present the motivation, the design, and a recent demonstration of the UltraLight testbed at SC|05. The goal of the UltraLight testbed is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. UltraLight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed from end to end. To achieve its goal we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we will first present early results in the various working areas of the project. We then describe our experiences of the network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this Challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many Grid computing sites.
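
    For a rough sense of scale (assuming decimal units and a fully sustained rate, which real transfers only approximate):

        150 Gb/s = 18.75 GB/s ≈ 67.5 TB/hour,

    so moving a petabyte-scale physics dataset at the quoted aggregate rate would take on the order of 15 hours.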

    The Ultralight project: the network as an integrated and managed resource for data-intensive science

    Looks at the UltraLight project, which treats the network interconnecting globally distributed data sets as a dynamic, configurable, and closely monitored resource in order to construct a next-generation system that can meet the high-energy physics community's data-processing, distribution, access, and analysis needs.

    Estimating packet loss rate in the access through application-level measurements

    End-user monitoring of quality of experience is one of the necessary steps to achieve effective control over network neutrality. Involving the end user, however, requires the development of light and user-friendly tools that can easily be run at the application level with limited effort and limited use of network resources. In this paper, we propose a simple model to estimate the packet loss rate perceived by a connection from round-trip time and TCP goodput samples collected at the application level. The model is derived from the well-known Mathis equation, which predicts the bandwidth of a steady-state TCP connection under random losses and delayed ACKs; the model is evaluated in a testbed environment under a wide range of conditions. Experiments are also run on real access networks. We plan to use the model to analyze the results collected by the "network neutrality bot" (Neubot), a research tool that performs application-level network-performance measurements. The methodology, however, is easily portable and can be of interest to virtually any user application that performs large downloads or uploads and needs to estimate access network quality and its variation.
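
    The Mathis relation the abstract builds on can be inverted in closed form. A minimal sketch of that inversion (the default MSS and the b = 2 delayed-ACK factor are illustrative assumptions, not the paper's calibration):

        import math

        def estimate_loss_rate(goodput_bps, rtt_s, mss_bytes=1460, b=2):
            # Mathis et al.: B = (MSS / RTT) * sqrt(3 / (2*b)) / sqrt(p),
            # where b is the number of packets covered by one ACK
            # (b = 2 models delayed ACKs). Solving for the loss rate p:
            #   p = (MSS * sqrt(3 / (2*b)) / (RTT * B)) ** 2
            k = math.sqrt(3.0 / (2.0 * b))
            mss_bits = mss_bytes * 8
            return (mss_bits * k / (rtt_s * goodput_bps)) ** 2

        # e.g. 10 Mbit/s of goodput measured at 50 ms RTT:
        print(estimate_loss_rate(10e6, 0.050))  # ~4e-4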

    Internet Censorship: An Integrative Review of Technologies Employed to Limit Access to the Internet, Monitor User Actions, and their Effects on Culture

    This paper presents an integrative review of the current state of Internet censorship in China, Iran, and Russia, highlights common circumvention technologies (CTs), and analyzes the effects Internet censorship has on cultures. The author spends the majority of the paper delineating China's Internet infrastructure and prevalent Internet Censorship Technologies/Techniques (ICTs), paying particular attention to how the ICTs function at a technical level. The author further analyzes the state of Internet censorship in both Iran and Russia from a broader perspective to give a better understanding of Internet censorship around the globe. The author also highlights specific CTs, explaining how they function at a technical level. Findings indicate that among all three nation-states, state control of Internet Service Providers is the backbone of Internet censorship. Specifically, within China, it is found that the infrastructure functions as an intranet, thereby creating a closed system. Further, BGP hijacking, DNS poisoning, and TCP RST attacks are analyzed to understand their use case within China. It is found that Iran functions much like a weaker version of China with regard to ICTs, with the state seemingly applying bandwidth throttling rather consistently. Russia's approach to Internet censorship, in stark contrast to Iran and China, is found to rely mostly on the legislative system and on fear to implement censorship, though its technical level of ICT implementation grows daily. Tor, VPNs, and proxy servers are all analyzed and found to be robust CTs. Drawing primarily from the examples given throughout the paper, the author highlights the various effects of Internet censorship on culture, noting that at its core, Internet censorship destroys democracy.
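
    Of the ICTs surveyed, DNS poisoning is the one most easily probed from the network edge. A minimal sketch of the usual measurement idea, assuming the third-party dnspython package (the resolver addresses and the cross-check logic are illustrative, not drawn from the paper):

        import dns.resolver

        def a_records(nameserver, qname):
            # Resolve `qname` through one specific nameserver.
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [nameserver]
            return {rr.address for rr in r.resolve(qname, "A")}

        def looks_poisoned(qname, local_ns, reference_ns="8.8.8.8"):
            # Disjoint answer sets are only a hint: CDNs legitimately
            # return different addresses per vantage point, so a real
            # tool would also compare TLS certificates or origin ASNs
            # before concluding anything.
            return a_records(local_ns, qname).isdisjoint(
                a_records(reference_ns, qname))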