
    STATMOND: A Peer-To-Peer Status And Performance Monitor For Dynamic Resource Allocation On Parallel Computers

    This thesis presents STATMOND, a decentralized tool for monitoring the status of a peer-to-peer network. STATMOND provides an accurate measurement scheme for parameters such as CPU load and memory utilization on Linux clusters. Its services are ubiquitous in that each computer measures and forwards its data over the network while also maintaining the data of other nodes in memory. The data are updated periodically, and users on any node can 'see' the status and performance of the network based on these parameters. The thesis describes the problems confronting cluster computing, the necessity of monitoring tools, and how STATMOND can be a step towards better allocation of resources for dynamic computing.
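    The thesis does not publish its code; the following is a minimal, hypothetical sketch of the kind of per-node measurement such a monitor performs on Linux, parsing CPU load and memory utilisation from the `/proc` filesystem before forwarding the values to peers.

```python
def parse_loadavg(text):
    """Extract the 1-minute load average from /proc/loadavg content."""
    return float(text.split()[0])

def parse_meminfo(text):
    """Compute memory utilisation (0..1) from /proc/meminfo content."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields[key] = int(rest.split()[0])  # /proc/meminfo values are in kB
    total = fields["MemTotal"]
    free = fields["MemFree"]
    return (total - free) / total

# Illustrative /proc snapshots (made-up values, not real system output)
load = parse_loadavg("0.42 0.37 0.30 1/123 4567")
mem_used = parse_meminfo("MemTotal: 8000 kB\nMemFree: 2000 kB")
```

    In a real deployment each node would read the live `/proc/loadavg` and `/proc/meminfo` files on a timer and gossip the parsed values to its peers; the parsing shown here is the portable core of that loop.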

    Adaptive Response System for Distributed Denial-of-Service Attacks

    No full text
    The continued prevalence and severely damaging effects of Distributed Denial-of-Service (DDoS) attacks in today's Internet raise growing security concerns and call for better solutions to tackle them. Current DDoS prevention mechanisms are usually inflexible, and determined attackers with knowledge of these mechanisms can work around them. Most existing detection and response mechanisms are standalone systems that do not rely on adaptive updates to mitigate attacks. As different responses vary in their "leniency" in treating detected attack traffic, there is a need for an adaptive response system. We designed and implemented the DDoS Adaptive ResponsE (DARE) system, a distributed DDoS mitigation system capable of executing appropriate detection and mitigation responses automatically and adaptively according to the attacks. It supports easy integration of both signature-based and anomaly-based detection modules. Additionally, the design of DARE's individual components takes into consideration the strengths and weaknesses of existing defence mechanisms, as well as the characteristics and possible future mutations of DDoS attacks. These components consist of an Enhanced TCP SYN Attack Detector and Bloom-based Filter, a DDoS Flooding Attack Detector and Flow Identifier, and a Non-Intrusive IP Traceback mechanism. The components work together interactively to adapt the detections and responses in accordance with the attack types. Experiments conducted on DARE show that attack detection and mitigation complete successfully within seconds, with about 60% to 86% of the attack traffic being dropped, while availability is maintained for existing and new legitimate requests. DARE is able to detect attacks and trigger appropriate responses with high accuracy, effectiveness and efficiency.
We also designed and implemented a Traffic Redirection Attack Protection System (TRAPS), a stand-alone DDoS attack detection and mitigation system for IPv6 networks. In TRAPS, the victim under attack verifies the authenticity of the source by performing virtual relocations to differentiate legitimate traffic from attack traffic. TRAPS requires minimal deployment effort and no modifications to the Internet infrastructure, since it builds on the Mobile IPv6 protocol. Experiments to test the feasibility of TRAPS were carried out in a testbed environment to verify that it works with the existing Mobile IPv6 implementation. Each module was observed to function correctly, and TRAPS successfully mitigated an attack launched with spoofed source IP addresses.
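    The Bloom-based filter mentioned above is a standard space-efficient membership structure; the sketch below shows the general technique (the thesis's actual filter sizes and hash choices are not given here, so the parameters are illustrative only).

```python
import hashlib

class BloomFilter:
    """Space-efficient set membership with possible false positives but no
    false negatives -- the general structure behind a Bloom-based filter
    for tracking source addresses. Parameters are illustrative."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item):
        # Derive k bit positions from k salted hashes of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```

    Membership testing costs only a few hash computations and a fixed bit array, which is why such filters suit per-packet source checks under flooding conditions.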

    An Integrated Methodology for Creating Composed Web/Grid Services

    This thesis presents an approach to design, specify, validate, verify, implement, and evaluate composed web/grid services. Web and grid services can be composed to create new services with complex behaviours. The BPEL (Business Process Execution Language) standard was created to enable the orchestration of web services, but its use for grid services has also been investigated. BPEL specifies the implementation of service composition but has no formal semantics; in practice, implementations are checked by testing. Formal methods, in contrast, define an abstract model of system behaviour that allows simulation and reasoning about properties; such an approach can detect and reduce potentially costly errors at design time. CRESS (Communication Representation Employing Systematic Specification) is a domain-independent, graphical, abstract notation and integrated toolset for developing composite web services. The original version of CRESS had automated support for formal specification in LOTOS (Language Of Temporal Ordering Specification), formal validation with MUSTARD (Multiple-Use Scenario Testing and Refusal Description), and implementation in BPEL4WS, an early version of the BPEL standard. This thesis extends CRESS and its integrated tools to design, specify, validate, verify, implement, and evaluate composed web/grid services. The work extends the CRESS notation to support a wider range of service compositions and applies it to grid services as a new domain. The thesis presents two new tools, CLOVE (CRESS Language-Oriented Verification Environment) and MINT (MUSTARD Interpreter), to support formal verification and implementation testing respectively. The work also extends CRESS to automate the implementation of composed services using the more recent BPEL standard, WS-BPEL 2.0.

    Big-Data Solutions for Manufacturing Health Monitoring and Log Analytics

    Modern semiconductor manufacturing is a complex process with a multitude of software applications. This application landscape has to be monitored constantly, since the communication and access patterns provide important insights. Because of the high event rates of the equipment log-data stream in modern factories, big-data tools are required for scalable state and history analytics. The choice of suitable big-data solutions and their technical realization remains a challenging task. This thesis compares big-data architectures and examines solutions for log-data ingest, enrichment, analytics and visualization. Based on the use cases and requirements of developers working in this field, a custom-assembled stack is compared with a complete solution. Since a complete solution is preferable, Datadog, Grafana Loki and the Elastic 8 Stack are selected for a more detailed study. These three systems are implemented and compared against the requirements. All three are well suited for big-data logging and fulfil most of the requirements, but show different capabilities when implemented and used.
    Contents:
    1 Introduction: Motivation; Structure
    2 Fundamentals and Prerequisites: Logging (Log level; CSFW log; SECS log); Existing system and data (Production process; Log data in numbers); Requirements (Functional; System; Quality); Use Cases (Finding a specific communication sequence; Watching system changes; Comparison with expected production path; Enrichment with metadata; Decoupled log analysis)
    3 State of the Art and Potential Software Stacks: State-of-the-art software stacks (IoT flow monitoring system; Big-Data IoT monitoring system; IoT Cloud Computing Stack; Big-Data Logging Architecture; IoT Energy Conservation System; Similarities of the architectures); Selection of software stack (Components for one layer; Software solutions for the stack)
    4 Analysis and Implementation: Full stack vs. a custom-assembled stack (Drawbacks of a custom-assembled stack; Advantages of a complete solution; Exclusion of a custom-assembled stack); Selection of full-stack solutions (Elastic vs. Amazon; Comparison of cloud-only solutions; Comparison of on-premise solutions); Implementation of selected solutions (Datadog; Grafana Loki Stack; Elastic 8 Stack)
    5 Comparison: Comparison of components (Collection; Analysis; Visualization); Comparison of requirements (Functional; System; Quality); Results
    6 Conclusion and Future Work
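    The ingest-and-enrich step that the thesis compares across stacks can be illustrated with a small sketch. The log format, field names, and metadata table below are assumptions for illustration, not the factory's actual CSFW or SECS formats.

```python
import json

# Assumed metadata lookup table; real deployments would load this from a
# configuration store or asset database.
EQUIPMENT_METADATA = {
    "etcher-07": {"area": "etch", "line": 2},
}

def parse_log_line(line):
    """Split an assumed 'timestamp level equipment message' line into a dict."""
    timestamp, level, equipment, message = line.split(" ", 3)
    return {"ts": timestamp, "level": level,
            "equipment": equipment, "message": message}

def enrich(event, metadata=EQUIPMENT_METADATA):
    """Merge per-equipment metadata into the parsed event."""
    return {**event, **metadata.get(event["equipment"], {})}

event = enrich(parse_log_line(
    "2023-05-01T12:00:00Z INFO etcher-07 recipe step completed"))
print(json.dumps(event))
```

    The enriched JSON event is the shape that collectors such as those compared in the thesis would forward to the analytics and visualization layers.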

    Locating Network Domain Entry and Exit point/path for DDoS Attack Traffic

    No full text
    A method to determine the entry and exit points or paths of DDoS attack traffic flows into and out of network domains is proposed. We observe the valid source addresses seen by routers from sampled traffic under non-attack conditions. Under attack conditions, we detect route anomalies by determining which routers have been used for unknown source addresses, and use them to construct the attack paths. We consider deployment issues and show results from simulations to demonstrate the feasibility of our scheme. We then implement our traceback mechanism in C++ and conduct more realistic experiments. The experiments show that accurate results are achieved with a traceback time of only a few seconds. Compared to existing techniques, our approach is non-intrusive, requiring no changes to Internet routers or data packets. Precise information regarding the attack is not required, allowing a wide variety of DDoS attack detection techniques to be used. The victim is also relieved of the traceback task during an attack. The scheme is simple and efficient, allowing for a fast traceback, and scalable due to the distribution of the processing workload. © 2009 IEEE.
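    The core idea, learning valid sources per router in peacetime and flagging routers that forward unknown sources during an attack, can be sketched as follows. This is an illustrative reconstruction of the stated idea, not the thesis's C++ implementation; router names and prefixes are made up.

```python
def learn_valid_sources(samples):
    """samples: iterable of (router, source_prefix) pairs observed from
    sampled traffic under non-attack conditions.
    Returns a mapping: router -> set of valid source prefixes."""
    valid = {}
    for router, prefix in samples:
        valid.setdefault(router, set()).add(prefix)
    return valid

def attack_entry_routers(valid, attack_samples):
    """Routers observed forwarding source prefixes they never carried
    under normal conditions -- candidate attack entry points whose
    sequence outlines the attack path."""
    flagged = set()
    for router, prefix in attack_samples:
        if prefix not in valid.get(router, set()):
            flagged.add(router)
    return flagged
```

    For example, if router r2 normally carries only 10.1.0.0/16 but starts forwarding traffic claiming 203.0.113.0/24 during an attack, r2 is flagged as an entry point, without inspecting packet contents or modifying any router.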

    NFV service dynamicity with a DevOps approach: Insights from a use-case realization

    This experience paper describes the process of leveraging the NFV orchestration platform built in the EU FP7 project UNIFY to deploy a dynamic network service exemplified by an elastic router. Elasticity is realized by scaling dataplane resources as a function of traffic load. To achieve this, the service includes custom scaling logic and monitoring capabilities. An automated monitoring framework not only triggers elastic scaling but also a troubleshooting process which detects and analyzes anomalies, pro-actively aiding both dev and ops personnel. Such a DevOps-inspired approach enables a shorter update cycle for the running service. We highlight several lessons learned throughout the prototype realization, focusing on the functional areas of service decomposition and scaling, programmable monitoring, and automated troubleshooting. Such practical insights will contribute to solving challenges such as agile deployment and efficient resource usage in future NFV platforms.
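    The kind of scaling decision such custom logic makes can be sketched minimally: scale dataplane instances as a function of observed load, with separate scale-out and scale-in thresholds so the service does not flap around a single boundary. The thresholds and instance bounds here are assumptions, not the paper's actual values.

```python
def desired_instances(current, load_per_instance,
                      scale_out_at=0.8, scale_in_at=0.3,
                      min_instances=1, max_instances=8):
    """Return the new dataplane instance count for an observed
    per-instance load in [0, 1]. The gap between scale_in_at and
    scale_out_at provides hysteresis against oscillation."""
    if load_per_instance > scale_out_at and current < max_instances:
        return current + 1
    if load_per_instance < scale_in_at and current > min_instances:
        return current - 1
    return current
```

    A monitoring framework would feed this function with periodic load measurements and hand the resulting count to the orchestrator, which performs the actual scale-out or scale-in.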

    Self-adaptive Grid Resource Monitoring and Discovery

    The Grid provides a novel platform where the scientific and engineering communities can share data and computation across multiple administrative domains. There are several key services that must be offered by Grid middleware, one of them being the Grid Information Service (GIS). A GIS is a Grid middleware component which maintains information about the hardware, software, services and people participating in a virtual organisation (VO). There is an inherent need in these systems for the delivery of reliable performance. This thesis describes a number of approaches detailing the development and application of a suite of benchmarks for predicting the performance of resource discovery and monitoring on the Grid. A series of experimental studies characterising performance through benchmarking is carried out. Several novel predictive algorithms are presented and evaluated in terms of their predictive error. Furthermore, predictive methods are developed which describe the behaviour of MDS2 for a variable number of user requests. The MDS is also extended to include job information from a local scheduler; this information is queried using requests of greatly varying complexity, and the response of the MDS to these queries is assessed in terms of several performance metrics. The dynamic nature of information within MDS3, which is based on the Open Grid Services Architecture (OGSA) and is the successor to MDS2, is also benchmarked, and the performance of both the pull and push query mechanisms is analysed. Building on the Globus MDS3 benchmarking, a new system, GridAdapt (Self-adaptive Grid Resource Monitoring), is proposed. It offers self-adaptation, autonomy and admission control at the Index Service, whilst ensuring that the MDS is not overloaded and can meet its quality of service, for example in terms of its average response time for servicing synchronous queries and the total number of queries returned per unit time.
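    One simple way to realise admission control against a response-time bound, sketched here as an illustration rather than GridAdapt's actual algorithm, is to maintain an exponentially weighted moving average of observed query response times and admit new synchronous queries only while the predicted average stays under the quality-of-service bound. The smoothing factor and bound are illustrative values.

```python
class AdmissionController:
    """Admit queries only while a predicted average response time stays
    under a quality-of-service bound (illustrative sketch)."""

    def __init__(self, qos_bound_ms=500.0, alpha=0.2):
        self.qos_bound_ms = qos_bound_ms
        self.alpha = alpha      # EWMA smoothing factor
        self.avg_ms = 0.0       # current response-time prediction

    def record(self, response_ms):
        """Fold an observed response time into the EWMA prediction."""
        self.avg_ms = self.alpha * response_ms + (1 - self.alpha) * self.avg_ms

    def admit(self):
        """True while the predicted average respects the QoS bound."""
        return self.avg_ms < self.qos_bound_ms
```

    Rejected or deferred queries keep the index service from being overloaded, which is exactly the failure mode the self-adaptive design aims to avoid.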