Method and system for detecting common attributes of network upgrades
A system and method identify a set of rules for determining the commonality of attributes across different behavior changes in a network. The system performs the method by receiving a set of data correlating network triggers to performance changes of one or more network devices. The set of data further includes an indication of the sign of the performance change for each of the network devices based on the triggers. The method further includes extracting a set of rules describing relationships between the triggers and the performance changes. The rules identify a commonality of the performance changes across multiple network devices based on the triggers.
Board of Regents, University of Texas System
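The rule extraction described above can be illustrated with a small association-rule sketch. This is a hypothetical reconstruction, not the patented method: all names, thresholds, and data are illustrative, and the "rules" here are simply (trigger, change-sign) pairs kept when their support and confidence clear minimum thresholds.

```python
from collections import Counter

def mine_rules(observations, min_support, min_confidence):
    """observations: list of (trigger, change_sign) pairs, e.g.
    ('os_upgrade', '+') meaning performance improved after that trigger.
    Returns rules common enough (support) and reliable enough (confidence)."""
    total = len(observations)
    trigger_counts = Counter(t for t, _ in observations)
    pair_counts = Counter(observations)
    rules = []
    for (trigger, sign), n in pair_counts.items():
        support = n / total          # how often the pair occurs overall
        confidence = n / trigger_counts[trigger]  # given the trigger, how often this sign
        if support >= min_support and confidence >= min_confidence:
            rules.append((trigger, sign, round(confidence, 2)))
    return sorted(rules)

# Illustrative observations across many devices.
obs = [("os_upgrade", "+")] * 8 + [("os_upgrade", "-")] * 2 + \
      [("config_change", "-")] * 5 + [("config_change", "+")] * 1
print(mine_rules(obs, min_support=0.2, min_confidence=0.7))
```

With these inputs, "os_upgrade tends to improve performance" and "config_change tends to degrade it" both survive the thresholds, while the rarer opposite-sign pairs are filtered out.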
Method and apparatus for managing quality of service
A system that incorporates teachings of the present disclosure may include, for example, obtaining regression coefficients that quantify a relationship between premises feedback and first network and premises performance indicators, obtaining second network performance indicators for the network elements, obtaining second premises performance indicators for the customer premises equipment, and predicting customer complaints by applying the obtained regression coefficients to at least the second network performance indicators and the second premises performance indicators. Other embodiments are disclosed.
Board of Regents, University of Texas System
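The prediction step above amounts to applying a fitted linear model to fresh indicators. A minimal sketch, with hypothetical indicator names and purely illustrative coefficient values (none of these come from the disclosure):

```python
def predict_complaints(coefficients, intercept, indicators):
    """Linear model: complaints ~ intercept + sum(coef * indicator)."""
    return intercept + sum(coefficients[k] * indicators[k] for k in coefficients)

# Coefficients previously fitted against premises feedback (illustrative).
coefs = {"packet_loss_pct": 4.0, "rtt_ms": 0.05, "stb_resets_per_day": 1.5}

# "Second" (newly measured) network and premises performance indicators.
kpis = {"packet_loss_pct": 0.8, "rtt_ms": 40.0, "stb_resets_per_day": 2.0}

predicted = predict_complaints(coefs, intercept=1.0, indicators=kpis)
print(round(predicted, 2))  # 4.0*0.8 + 0.05*40.0 + 1.5*2.0 + 1.0
```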
Performance diagnosis in large operational networks
IP networks have become the unified platform that supports a rich and extremely diverse set of applications and services, including traditional IP data service, Voice over IP (VoIP), smart mobile devices (e.g., iPhone), Internet television (IPTV) and online gaming. Network performance and reliability are critical issues in today's operational networks because many applications impose increasingly stringent reliability and performance requirements. Even the smallest network performance degradation can cause significant customer distress. In addition, new network and service features (e.g., MPLS fast re-route capabilities) are continually rolled out across the network to support new applications, improve network performance, and reduce operational cost. Network operators are challenged with ensuring that network reliability and performance improve over time even in the face of constant change: network and service upgrades and recurring faulty behaviors. It is critical to detect, troubleshoot and repair performance degradations in a timely and accurate fashion. This is extremely challenging in large IP networks due to their massive scale, complicated topology, high protocol complexity, and continuously evolving nature, whether through software or hardware upgrades, configuration changes or traffic engineering.
In this dissertation, we first propose a novel infrastructure, NICE (Network-wide Information Correlation and Exploration), that enables detection and troubleshooting of chronic network conditions by analyzing statistical correlations across multiple data sources. NICE uses a novel circular permutation test to determine the statistical significance of correlation. It also allows flexible analysis at various spatial granularities (e.g., link, router, or network level). We validate NICE using real measurement data collected at a tier-1 ISP network, with quite positive results. We then apply NICE to troubleshoot real network issues in the tier-1 ISP network. In all three case studies, NICE successfully uncovers previously unknown chronic network conditions, resulting in improved network operations.
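The circular permutation idea mentioned above can be sketched in a few lines: rotating one event series by random offsets preserves its autocorrelation structure while destroying any cross-series alignment, so the observed correlation can be compared against the rotated null distribution. This is a simplified illustration with made-up data, not the NICE implementation:

```python
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (dx * dy)

def circular_permutation_pvalue(x, y, trials=999, seed=0):
    """Compare observed |correlation| against circular rotations of y."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    n = len(y)
    hits = 0
    for _ in range(trials):
        k = rng.randrange(1, n)          # random rotation offset
        rotated = y[k:] + y[:k]          # circular shift keeps autocorrelation
        if abs(pearson(x, rotated)) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)     # add-one smoothing on the p-value

# Two perfectly aligned binary event series (illustrative).
x = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0]
y = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0]
p = circular_permutation_pvalue(x, y, trials=199)
print(p)  # small p-value: the alignment is unlikely under circular shifts
```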
Second, we extend NICE to detect and troubleshoot performance problems in IPTV networks. Compared to traditional ISP networks, an IPTV distribution network typically adopts a different structure (tree-like multicast as opposed to mesh), imposes more restrictive service constraints (in both reliability and performance), and often faces a much larger scalability issue (managing millions of residential gateways versus thousands of provider-edge routers). Tailored to the scale and structure of IPTV networks, we propose a novel multi-resolution data analysis approach, Giza, that enables fast detection and localization of regions in the multicast tree hierarchy where a problem becomes significant. Furthermore, we develop several statistical data mining techniques to troubleshoot the identified problems and diagnose their root causes. Validation against operational experience demonstrates the effectiveness of our approach in detecting important performance issues and identifying interesting dependencies.
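The multi-resolution drill-down described above can be illustrated with a toy hierarchy: descend into a subtree only while the symptom rate there remains significant, and report the deepest significant nodes. Node names, counts, and the threshold rule are all hypothetical, not Giza's actual statistics:

```python
def localize(node, threshold):
    """Return the deepest nodes whose event rate exceeds `threshold`
    and whose children do not concentrate the problem further."""
    if node["events"] / node["customers"] < threshold:
        return []
    significant = [c for c in node.get("children", [])
                   if c["events"] / c["customers"] >= threshold]
    if not significant:
        return [node["name"]]          # problem is diffuse below this node
    found = []
    for child in significant:
        found.extend(localize(child, threshold))
    return found

# Toy IPTV-style hierarchy: video hub office -> central offices -> DSLAMs.
tree = {
    "name": "VHO", "events": 120, "customers": 10000, "children": [
        {"name": "CO-1", "events": 100, "customers": 2000, "children": [
            {"name": "DSLAM-1a", "events": 95, "customers": 400},
            {"name": "DSLAM-1b", "events": 5, "customers": 1600},
        ]},
        {"name": "CO-2", "events": 20, "customers": 8000},
    ],
}
print(localize(tree, threshold=0.01))  # ['DSLAM-1a']
```

The point of the multi-resolution pass is efficiency: healthy subtrees (CO-2, DSLAM-1b) are pruned without inspecting their leaves.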
Finally, we design and implement a novel infrastructure, MERCURY, for detecting the impact of network upgrades on performance. It is crucial to monitor the network when upgrades are made because they can have a significant impact on network performance and, if not monitored, may lead to unexpected consequences in operational networks. This can be done manually for a small number of devices, but it does not scale to large networks with hundreds or thousands of routers and an extremely large number of different upgrades made on a regular basis. MERCURY extracts interesting triggers from a large number of network maintenance activities. It then identifies behavior changes in network performance induced by the triggers. It uses statistical rule mining and network configuration to identify commonality across the behavior changes. We systematically evaluate MERCURY using data collected at a large tier-1 ISP network. By comparing against operational practice, we show that MERCURY is able to capture the interesting triggers and the behavior changes they induce. In some cases, MERCURY also discovers previously unknown network behaviors, demonstrating its effectiveness in identifying network conditions that fly under the radar.
Computer Science
Modeling In-Network Processing and Aggregation in Sensor Networks
The rapid advances in processor, memory and radio technology have enabled the development of distributed networks of sensor nodes capable of sensing and communicating over wireless media. The basic operation in sensor networks is the systematic gathering and transmission of sensed data to the end user. The severe energy constraints and limited computing capabilities of the sensors present major challenges to their design. In this paper, I propose two new protocols, DEEPADS (Distributed Energy-efficient Protocol for Aggregation of Data in Sensor Networks) and C-DEEPADS (Clustered DEEPADS), that maximize the lifetime of the sensor network. Simulation results show that the protocols perform better than the existing approaches: Directed Diffusion, LEACH, PEDAP and PEDAP-PA. The two-tier clustering approach C-DEEPADS is optimal in terms of maximizing system lifetime as well as reducing end-to-end latency.
Game-based analysis of denial-of-service prevention protocols
Availability is a critical issue in modern distributed systems. While many techniques and protocols for preventing denial of service (DoS) attacks have been proposed and deployed in recent years, formal methods for analyzing and proving them correct have not kept up with the state of the art in DoS prevention. This paper proposes a new protocol for preventing malicious bandwidth consumption, and demonstrates how game-based formal methods can be successfully used to verify availability-related security properties of network protocols. We describe two classes of DoS attacks aimed at bandwidth consumption and resource exhaustion, respectively. We then propose our own protocol, based on a variant of client puzzles, to defend against bandwidth consumption, and use the JFKr key exchange protocol as an example of a protocol that defends against resource exhaustion attacks. We specify both protocols as alternating transition systems (ATS), state their security properties in alternating-time temporal logic (ATL) and verify them using MOCHA, a model checker that has been previously used to analyze fair exchange protocols.
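Client puzzles, which the abstract builds on, force a client to spend computation before the server commits resources: solving is expensive, verifying is one hash. A generic hash-based sketch (the paper's actual variant may differ; the parameter names here are illustrative):

```python
import hashlib
import os

def has_leading_zero_bits(digest, k):
    """True if the digest starts with k zero bits."""
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return bits.startswith("0" * k)

def solve_puzzle(nonce, difficulty):
    """Client side: brute-force a solution; expected cost ~2**difficulty hashes."""
    s = 0
    while True:
        candidate = nonce + s.to_bytes(8, "big")
        if has_leading_zero_bits(hashlib.sha256(candidate).digest(), difficulty):
            return s
        s += 1

def verify(nonce, s, difficulty):
    """Server side: a single hash, so verification stays cheap under load."""
    digest = hashlib.sha256(nonce + s.to_bytes(8, "big")).digest()
    return has_leading_zero_bits(digest, difficulty)

nonce = os.urandom(16)              # fresh per-connection challenge
solution = solve_puzzle(nonce, difficulty=10)
print(verify(nonce, solution, difficulty=10))  # True
```

The asymmetry (client does ~2**difficulty work, server does one hash) is what throttles a bandwidth-consuming attacker without much cost to the defender.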
dFence: Transparent Network-based Denial of Service Mitigation
Denial of service (DoS) attacks are a growing threat to the availability of Internet services. We present dFence, a novel network-based defense system for mitigating DoS attacks. The main thesis of dFence is complete transparency to the existing Internet infrastructure, with no software modifications at either the routers or the end hosts. dFence dynamically introduces special-purpose middlebox devices into the data paths of the hosts under attack. By intercepting both directions of IP traffic (to and from attacked hosts) and applying stateful defense policies, dFence middleboxes effectively mitigate a broad range of spoofed and unspoofed attacks. We describe the architecture of the dFence middlebox, mechanisms for on-demand introduction and removal, and DoS mitigation policies, including defenses against DoS attacks on the middlebox itself. We evaluate our prototype implementation based on Intel IXP network processors.
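One kind of stateful policy such a middlebox can apply is handshake tracking: because the middlebox sees both directions of traffic, it can require a completed TCP-style handshake before forwarding data, which spoofed sources cannot produce. This is a hypothetical illustration of the idea, not dFence's actual policy engine:

```python
class StatefulFilter:
    """Toy in-path filter: forward data only for flows that completed
    a handshake observed through the middlebox."""

    def __init__(self):
        self.half_open = set()     # flows that have sent a SYN
        self.established = set()   # flows that completed the handshake

    def process(self, flow, kind):
        """Return True if the packet is forwarded to the protected host."""
        if kind == "SYN":
            self.half_open.add(flow)
            return True
        if kind == "ACK" and flow in self.half_open:
            self.half_open.discard(flow)
            self.established.add(flow)
            return True
        # Spoofed sources never see replies, so they never establish.
        return kind == "DATA" and flow in self.established

f = StatefulFilter()
f.process(("1.2.3.4", 443), "SYN")
f.process(("1.2.3.4", 443), "ACK")
print(f.process(("1.2.3.4", 443), "DATA"))   # True: established flow
print(f.process(("6.6.6.6", 443), "DATA"))   # False: no handshake seen
```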
Processor scheduler for multi-service routers
In this paper, we describe the design and evaluation of a scheduler (referred to as Everest) for allocating processors to services in high-performance, multi-service routers. A scheduler for such routers is required to maximize the number of packets processed within a given delay tolerance, while isolating the performance of services from each other. The design of such a scheduler is novel and challenging because of three domain-specific characteristics: (1) difficult-to-predict and high packet arrival rates, (2) small delay tolerances of packets, and (3) significant overheads for switching allocation of processors from one service to another. These characteristics require that the scheduler be agile and wary simultaneously. Whereas agility enables the scheduler to react quickly to fluctuations in packet arrival rates, wariness prevents the scheduler from wasting computational resources in unnecessary context switches. We demonstrate that by balancing agility and wariness, Everest, as compared to conventional schedulers, reduces by more than an order of magnitude the average delay and the percentage of packets that experience delays greater than their tolerance. We describe a prototype implementation of Everest on Intel's IXP2400 network processor.
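The agility/wariness trade-off above can be sketched as a hysteresis rule: react to backlog imbalance (agility), but only move a processor when the imbalance is large enough to amortize the switching overhead (wariness). This is a simplified illustration, not Everest's actual policy; all names, numbers, and the threshold rule are made up:

```python
def rebalance(allocation, backlog, switch_cost):
    """allocation: processors per service; backlog: queued packets per service.
    Move one processor from the least loaded service to the most loaded one,
    but only when the per-processor load gap exceeds the switch cost."""
    per_proc = {s: backlog[s] / allocation[s] for s in allocation}
    hot = max(per_proc, key=per_proc.get)
    cold = min(per_proc, key=per_proc.get)
    # Wariness: small imbalances are not worth a context switch.
    if allocation[cold] > 1 and per_proc[hot] - per_proc[cold] > switch_cost:
        allocation[cold] -= 1
        allocation[hot] += 1
    return allocation

alloc = {"ipsec": 4, "nat": 4}
queues = {"ipsec": 400, "nat": 40}       # ipsec is heavily backlogged
print(rebalance(alloc, queues, switch_cost=20))  # {'ipsec': 5, 'nat': 3}
```

Raising `switch_cost` makes the scheduler warier (fewer, larger reallocations); lowering it makes it more agile.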