
    Accurate and Resource-Efficient Monitoring for Future Networks

    Monitoring functionality is a key component of any network management system. It is essential for profiling network resource usage, detecting attacks, and capturing the performance of a multitude of services using the network. Traditional monitoring solutions operate on long timescales producing periodic reports, which are mostly used for manual and infrequent network management tasks. However, these practices have been recently questioned by the advent of Software Defined Networking (SDN). By empowering management applications with the right tools to perform automatic, frequent, and fine-grained network reconfigurations, SDN has made these applications more dependent than before on the accuracy and timeliness of monitoring reports. As a result, monitoring systems are required to collect considerable amounts of heterogeneous measurement data, process them in real-time, and expose the resulting knowledge in short timescales to network decision-making processes. Satisfying these requirements is extremely challenging given today’s larger network scales, massive and dynamic traffic volumes, and the stringent constraints on time availability and hardware resources. This PhD thesis tackles this important challenge by investigating how an accurate and resource-efficient monitoring function can be realised in the context of future, software-defined networks. Novel monitoring methodologies, designs, and frameworks are provided in this thesis, which scale with increasing network sizes and automatically adjust to changes in the operating conditions. These achieve the goal of efficient measurement collection and reporting, lightweight measurement-data processing, and timely monitoring knowledge delivery.

    Computing in the RAIN: a reliable array of independent nodes

    The RAIN project is a research collaboration between Caltech and NASA-JPL on distributed computing and data-storage systems for future spaceborne missions. The goal of the project is to identify and develop key building blocks for reliable distributed systems built with inexpensive off-the-shelf components. The RAIN platform consists of a heterogeneous cluster of computing and/or storage nodes connected via multiple interfaces to networks configured in fault-tolerant topologies. The RAIN software components run in conjunction with operating system services and standard network protocols. Through software-implemented fault tolerance, the system tolerates multiple node, link, and switch failures, with no single point of failure. The RAIN technology has been transferred to Rainfinity, a start-up company focusing on creating clustered solutions for improving the performance and availability of Internet data centers. In this paper, we describe the following contributions: 1) fault-tolerant interconnect topologies and communication protocols providing consistent error reporting of link failures, 2) fault management techniques based on group membership, and 3) data storage schemes based on computationally efficient error-control codes. We present several proof-of-concept applications: a highly-available video server, a highly-available Web server, and a distributed checkpointing system. Also, we describe a commercial product, Rainwall, built with the RAIN technology.
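    The abstract does not specify which error-control codes RAIN uses, but the idea behind computationally efficient storage codes can be sketched with the simplest example, single-parity XOR erasure coding: store n data blocks plus one XOR parity block, and any one lost block can be rebuilt from the survivors. This is a minimal illustration, not the RAIN scheme itself.

    ```python
    # Minimal sketch of single-parity erasure coding: n data blocks plus one
    # XOR parity block; any single lost block is recoverable from the rest.

    def xor_blocks(blocks):
        """XOR equal-length byte strings together."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    def encode(data_blocks):
        """Return the data blocks followed by their parity block."""
        return list(data_blocks) + [xor_blocks(data_blocks)]

    def recover(stored, lost_index):
        """Rebuild the block at lost_index from the surviving blocks."""
        survivors = [b for i, b in enumerate(stored) if i != lost_index]
        return xor_blocks(survivors)

    blocks = [b"node0data", b"node1data", b"node2data"]
    stored = encode(blocks)               # 3 data blocks + 1 parity block
    print(recover(stored, 1))             # → b'node1data'
    ```

    Tolerating multiple simultaneous failures, as the paper claims, requires stronger codes (e.g. Reed-Solomon-style schemes), but the encode/recover structure is the same.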

    Improved Performance of Network Attack Detection using Combination Data Mining Techniques

    Network attack detection is a very important mechanism for detecting attacks in computer networks, and data mining techniques play a central role in detecting such intrusions. Intrusions are activities that violate the security policy of a system; they can damage data and compromise its integrity, confidentiality, and availability. Intrusion detection is the process used to identify network attacks. Network security has become a major issue in recent years, since computer networks keep expanding every day. A Network Attack Detection System (NADS) is a system for detecting intrusions and reporting them to the authority or to the network administration. Data mining techniques have been applied in many fields, such as network management, education, science, business, manufacturing, process control, and fraud detection. Data mining algorithms such as J48, Random Forest, Random Tree, Hoeffding Tree, and REP Tree are used to build intrusion detection models on the KDD CUP 1999 dataset. The performance of the network attack detection models is evaluated on the KDD CUP 1999 test dataset through a series of experiments, measured by correct classification and detection of attacks. Combining data mining algorithms improves the performance of network attack detection, i.e. it reduces false positives and false negatives and improves detection of novel or unknown attacks.
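    The combination scheme the abstract describes can be sketched as majority voting over several classifiers. The tiny rule-based "models" and the connection-record fields below are hypothetical stand-ins for trained J48 / Random Forest / REP Tree models on KDD CUP 1999 features:

    ```python
    # Hedged sketch: combine classifiers by majority vote. The lambda "models"
    # and feature names are illustrative stand-ins, not trained KDD models.
    from collections import Counter

    def majority_vote(models, record):
        """Label a record with the class most models agree on."""
        votes = [model(record) for model in models]
        return Counter(votes).most_common(1)[0][0]

    # Toy stand-in models: each inspects one aspect of a connection record.
    model_a = lambda r: "attack" if r["failed_logins"] > 3 else "normal"
    model_b = lambda r: "attack" if r["bytes_sent"] > 1_000_000 else "normal"
    model_c = lambda r: "attack" if r["duration"] < 1 and r["count"] > 100 else "normal"

    record = {"failed_logins": 5, "bytes_sent": 200, "duration": 0.2, "count": 150}
    print(majority_vote([model_a, model_b, model_c], record))  # → attack
    ```

    Voting lets the ensemble outvote an individual model's mistake, which is one way combining algorithms can reduce false positives and false negatives.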

    Characterizing Service Level Objectives for Cloud Services: Motivation of Short-Term Cache Allocation Performance Modeling

    Service level objectives (SLOs) stipulate performance goals for cloud applications, microservices, and infrastructure. SLOs are widely used, in part, because system managers can tailor goals to their products, companies, and workloads. Systems research intended to support strong SLOs should target realistic performance goals used by system managers in the field. Evaluations conducted with uncommon SLO goals may not translate to real systems. Some textbooks discuss the structure of SLOs but (1) they only sketch SLO goals and (2) they use outdated examples. We mined real SLOs published on the web, extracted their goals, and characterized them. Many web documents discuss SLOs loosely, but few provide details and reflect real settings. Systematic literature review (SLR) prunes results and reduces bias by (1) modeling expected SLO structure and (2) detecting and removing outliers. We collected 75 SLOs where response time, query percentile, and reporting period were specified. We used these SLOs to confirm and refute common perceptions. For example, we found few SLOs with response time guarantees below 10 ms for 90% or more queries. This reality bolsters perceptions that single-digit-millisecond SLOs face fundamental research challenges. This work was funded by NSF Grants 1749501 and 1350941.
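    An SLO of the shape the authors mined combines a response-time goal, a query percentile, and a reporting period. A minimal sketch of checking such a goal (function names and the latency samples are illustrative, not from the paper):

    ```python
    # Illustrative sketch: check a goal like "90% of queries complete within
    # 10 ms over the reporting period" against measured latencies.

    def percentile(samples, p):
        """Nearest-rank p-th percentile of a list of samples."""
        ordered = sorted(samples)
        rank = max(1, -(-p * len(ordered) // 100))  # ceil(p/100 * n)
        return ordered[rank - 1]

    def meets_slo(latencies_ms, goal_ms, pct):
        """True if the pct-th percentile latency is within goal_ms."""
        return percentile(latencies_ms, pct) <= goal_ms

    latencies = [2, 3, 4, 5, 6, 7, 8, 9, 12, 40]  # ms, one reporting period
    print(meets_slo(latencies, goal_ms=10, pct=90))  # → False (p90 is 12 ms)
    ```

    The example also hints at why sub-10 ms goals at the 90th percentile are rare: a single slow tail query per period is enough to violate them.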

    IT service management: towards a contingency theory of performance measurement

    Information Technology Service Management (ITSM) focuses on IT service creation, design, delivery and maintenance. Measurement is one of the basic underlying elements of service science, and this paper contributes to service science by focussing on the selection of performance metrics for ITSM. Contingency theory is used to provide a theoretical foundation for the study. Content analysis of interviews of ITSM managers at six organisations revealed that selection of metrics is influenced by a discrete set of factors. Three categories of factors were identified: external environment, parent organisation, and IS organisation. For individual cases, selection of metrics was contingent on factors such as organisation culture, management philosophy and perspectives, legislation, industry sector, and customers, although a common set of four factors influenced selection of metrics across all organisations. A strong link was identified between the use of a corporate performance framework and clearly articulated ITSM metrics.