
    From simulation to statistical analysis: timeliness assessment of ethernet/IP-based distributed systems

    A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, to mention just a few, are characteristics upon which that eagerness builds. But will Ethernet technologies really manage to replace traditional Fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. The considerable amount of work devoted in the past few years to the timing analysis of Ethernet-based technologies is particularly significant. However, the majority of those works are restricted to the analysis of subsets of the overall computing and communication system, and thus do not address timeliness at a holistic level. To this end, we are addressing a few inter-linked research topics with the purpose of setting up a framework for the development of tools suitable for extracting temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we reason about the modelling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for the timeliness evaluation of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
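The caution about traditional statistical analysis stems from the fact that simulation outputs are usually autocorrelated, so naive i.i.d. confidence intervals are too optimistic. A minimal sketch of one standard remedy, the batch-means method, under an assumed toy AR(1) delay trace standing in for the simulator output (all names and parameters here are illustrative, not from the paper):

```python
import math
import random
import statistics

def simulate_response_times(n=4000, seed=1):
    """Toy autocorrelated delay trace (ms) standing in for the output
    of a discrete event simulation of an Ethernet/IP system."""
    random.seed(seed)
    x, trace = 10.0, []
    for _ in range(n):
        # successive response times are correlated, so the samples are
        # not i.i.d. -- the pitfall the abstract warns about
        x = 0.9 * x + 0.1 * random.gauss(10.0, 2.0)
        trace.append(x)
    return trace

def batch_means_ci(data, n_batches=20, z=1.96):
    """Confidence interval via the batch-means method: averages over
    large batches are closer to independent than the raw samples."""
    size = len(data) // n_batches
    means = [statistics.mean(data[i * size:(i + 1) * size])
             for i in range(n_batches)]
    centre = statistics.mean(means)
    half = z * statistics.stdev(means) / math.sqrt(n_batches)
    return centre - half, centre + half
```

A naive interval computed from the raw samples would use `n = 4000` in the standard-error denominator and understate the uncertainty; batching trades sample count for (approximate) independence.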

    Towards Distributed and Adaptive Detection and Localisation of Network Faults

    We present a statistical probing approach to distributed fault detection in networked systems, based on autonomous configuration of algorithm parameters. Statistical modelling is used for detection and localisation of network faults. A detected fault is isolated to a node or link by collaborative fault localisation. From local measurements obtained through probing between nodes, probe response delay and packet drop are modelled via parameter estimation for each link. Estimated model parameters are used for autonomous configuration of algorithm parameters related to probe intervals and detection mechanisms. Expected fault-detection performance is formulated as a cost instead of specific parameter values, significantly reducing configuration efforts in a distributed system. The benefit offered by our algorithm is fault detection with increased certainty based on local measurements, compared to other methods that do not take observed network conditions into account. We investigate the algorithm's performance for varying user parameters and failure conditions. The simulation results indicate that more than 95% of the generated faults can be detected with few false alarms. At least 80% of the link faults and 65% of the node faults are correctly localised. The performance can be improved by parameter adjustments and by using alternative paths for communication of algorithm control messages.
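The per-link modelling described above can be sketched as follows: estimate delay and drop parameters from probe responses, then flag a fault when observations deviate from the fitted model. This is an illustrative reconstruction, assuming Gaussian delays and fixed thresholds; the function names and parameter values are not taken from the paper:

```python
import random
import statistics

def probe_link(true_mean=20.0, true_sd=2.0, drop_prob=0.01):
    """Simulate one probe on a link: delay in ms, or None if dropped."""
    if random.random() < drop_prob:
        return None
    return random.gauss(true_mean, true_sd)

def estimate_link(n_probes=200, **kw):
    """Estimate delay mean/sd and drop rate from probe responses."""
    responses = [probe_link(**kw) for _ in range(n_probes)]
    answered = [d for d in responses if d is not None]
    drop_rate = 1.0 - len(answered) / n_probes
    return statistics.mean(answered), statistics.stdev(answered), drop_rate

def detect_fault(sample_delay, mean, sd, drop_rate, k=3.0, drop_thresh=0.1):
    """Flag a fault if the observed drop rate exceeds a threshold or a
    probe delay deviates more than k standard deviations from the model."""
    if drop_rate > drop_thresh:
        return True
    return abs(sample_delay - mean) > k * sd
```

In the paper's scheme the thresholds themselves are derived autonomously from the estimated parameters and a user-supplied cost, rather than hard-coded as here.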

    Long-term adaptation and distributed detection of local network changes

    We present a statistical approach to distributed detection of local latency shifts in networked systems. For this purpose, response delay measurements are performed between neighbouring nodes via probing. The expected probe response delay on each connection is statistically modelled via parameter estimation. Adaptation to drifting delays is accounted for by the use of overlapping models, such that previous models are partially used as input to future models. Based on the symmetric Kullback-Leibler divergence metric, latency shifts can be detected by comparing the estimated parameters of the current and previous models. In order to reduce the number of detection alarms, thresholds for divergence and convergence are used. The method that we propose can be applied to many types of statistical distributions, and requires only constant memory, in contrast to, e.g., sliding-window techniques and decay functions. The method is therefore applicable in various kinds of network equipment with limited capacity, such as sensor networks, mobile ad hoc networks, etc. We have investigated the behaviour of the method for different model parameters. Further, we have tested the detection performance in network simulations, for both gradual and abrupt shifts in the probe response delay. The results indicate that over 90% of the shifts can be detected. Undetected shifts are mainly the effects of long convergence processes triggered by previous shifts. The overall performance depends on the characteristics of the shifts and the configuration of the model parameters.
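For Gaussian delay models the symmetric KL divergence has a closed form, and the divergence/convergence thresholds act as hysteresis on the alarm state. A minimal sketch, assuming normal distributions and illustrative threshold values (the class and parameter names are not from the paper):

```python
import math

def sym_kl_gauss(m1, s1, m2, s2):
    """Symmetric KL divergence between N(m1, s1^2) and N(m2, s2^2)."""
    kl12 = math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5
    kl21 = math.log(s1 / s2) + (s2**2 + (m2 - m1)**2) / (2 * s1**2) - 0.5
    return kl12 + kl21

class ShiftDetector:
    """Compare current vs previous model parameters; two thresholds
    (divergence and convergence) suppress repeated alarms."""
    def __init__(self, diverge=1.0, converge=0.1):
        self.diverge, self.converge = diverge, converge
        self.alarmed = False

    def update(self, prev, curr):
        """prev, curr: (mean, sd) tuples. Returns True on a new alarm."""
        d = sym_kl_gauss(*prev, *curr)
        if not self.alarmed and d > self.diverge:
            self.alarmed = True       # raise alarm once on divergence
            return True
        if self.alarmed and d < self.converge:
            self.alarmed = False      # models have re-converged
        return False
```

The constant-memory property follows because only the current and previous parameter tuples are kept, never the raw delay samples.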

    Architecture and Implementation of a Trust Model for Pervasive Applications

    Collaborative effort to share resources is a significant feature of pervasive computing environments. To achieve secure service discovery and sharing, and to distinguish between malevolent and benevolent entities, trust models must be defined. It is critical to estimate a device's initial trust value because of the transient nature of pervasive smart space; however, most of the prior research work on trust models for pervasive applications used the notion of constant initial trust assignment. In this paper, we design and implement a trust model called DIRT. We categorize services in different security levels and, depending on the service requester's context information, we calculate the initial trust value. Our trust value is assigned for each device and for each service. Our overall trust estimation for a service depends on the recommendations of the neighbouring devices, inference from other service-trust values for that device, and direct trust experience. We provide an extensive survey of related work, and we demonstrate the distinguishing features of our proposed model with respect to the existing models. We implement a healthcare-monitoring application and a location-based service prototype over DIRT. We also provide a performance analysis of the model with respect to some of its important characteristics tested in various scenarios.
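The two ideas above, a context-dependent initial trust per service security level and an overall trust blended from direct experience, recommendations and inference, can be illustrated as follows. The security levels, weights and formulas are assumptions for illustration only, not DIRT's actual coefficients:

```python
# Assumed mapping from service security level to a baseline trust;
# these values are illustrative, not taken from the DIRT paper.
SECURITY_LEVELS = {"public": 0.8, "protected": 0.5, "private": 0.2}

def initial_trust(service_level, context_score):
    """Context-dependent initial trust in [0, 1], instead of a constant:
    more sensitive services start lower, scaled by how well the
    requester's context information matches expectations."""
    return SECURITY_LEVELS[service_level] * context_score

def combined_trust(direct, recommendations, inferred,
                   w_direct=0.5, w_rec=0.3, w_inf=0.2):
    """Blend direct experience, neighbour recommendations and trust
    inferred from the device's other service-trust values.
    The weights are hypothetical and sum to 1."""
    rec = (sum(recommendations) / len(recommendations)
           if recommendations else 0.0)
    return w_direct * direct + w_rec * rec + w_inf * inferred
```

A newly arrived device requesting a "private" service in an unfamiliar context would thus start near zero trust, while the same device requesting a "public" service starts appreciably higher.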

    An initial approach to distributed adaptive fault-handling in networked systems

    We present a distributed adaptive fault-handling algorithm for networked systems. The probabilistic approach that we use makes the proposed method capable of adaptively detecting and localizing network faults through simple end-to-end test transactions. Our method operates in a fully distributed manner, such that each network element detects faults using locally extracted information as input. This allows for fast autonomous adaptation to local network conditions in real time, with a significantly reduced need for manual configuration of algorithm parameters. Initial results from a small synthetically generated network indicate that satisfactory algorithm performance can be achieved with respect to the number of detected and localized faults, detection time and false alarm rate.

    A simulation model for evaluating national patient record networks in South Africa.

    This study has shown that modelling and simulation is a feasible approach for evaluating national patient record (NPR) solutions in the developing context. The model can represent different network models, patient types and performance metrics to aid in the evaluation of NPR solutions. Using the current model, more case studies can be investigated for various public health issues, such as the impact of disease or regional services planning.

    Content source selection in Bluetooth networks

    Large-scale market penetration of electronic devices equipped with Bluetooth technology now gives members of the public the ability to share content (such as music or video clips) in a decentralised manner. Sharing is achieved using opportunistic connections, formed when devices are colocated, in environments where Internet connectivity is expensive or unreliable, such as urban buses, train rides and coffee shops. Most people have a high degree of regularity in their movements (such as a daily commute), including repeated contacts with others possessing similar seasonal movement patterns. We argue that this behaviour can be exploited in connection selection, and outline a system for the identification of long-term companions and of sources that have previously provided quality content, in order to maximise the successful receipt of content files. We use actual traces and existing mobility models to validate our approach, and show how considering colocation history and the quality of previous data transfers leads to more successful sharing of content in realistic scenarios.
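The selection policy sketched above, preferring long-term companions with a good transfer record, amounts to ranking candidate sources by a score over colocation history and past transfer quality. A minimal sketch under assumed scoring functions and weights (none of these names or formulas are from the paper):

```python
def source_score(colocation_hours, transfers_ok, transfers_total, w_hist=0.6):
    """Score a candidate content source: a saturating function of total
    colocation time (long-term familiarity) blended with the fraction
    of past transfers that completed successfully."""
    familiarity = colocation_hours / (colocation_hours + 1.0)
    quality = transfers_ok / transfers_total if transfers_total else 0.0
    return w_hist * familiarity + (1.0 - w_hist) * quality

def pick_source(candidates):
    """candidates: (name, colocation_hours, transfers_ok, transfers_total)
    tuples; return the name of the highest-scoring source."""
    return max(candidates, key=lambda c: source_score(c[1], c[2], c[3]))[0]
```

The saturating familiarity term means a daily commute companion is preferred over a briefly colocated stranger even if the stranger's single past transfer succeeded.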

    Multi-level network resilience: traffic analysis, anomaly detection and simulation

    Traffic analysis and anomaly detection have been extensively used to characterize network utilization as well as to identify abnormal network traffic such as malicious attacks. So far, however, techniques for traffic analysis and anomaly detection have been applied independently, relying on mechanisms and algorithms in either edge or core networks alone. In this paper we propose the notion of multi-level network resilience, in order to provide a more robust traffic analysis and anomaly detection architecture, combining mechanisms and algorithms operating in a coordinated fashion in both the edge and the core networks. This work is motivated by the potential complementarities between the research being developed at IIT Madras and Lancaster University. In this paper we describe the current work at IIT Madras and Lancaster on traffic analysis and anomaly detection, and outline the principles of a multi-level resilience architecture.

    New information model that allows logical distribution of the control plane for software-defined networking : the distributed active information model (DAIM) can enable an effective distributed control plane for SDN with OpenFlow as the standard protocol

    University of Technology Sydney. Faculty of Engineering and Information Technology. In recent years, technological innovations in communication networks, computing applications and information modelling have been increasing significantly in complexity and functionality, driven by the needs of the modern world. As large-scale networks become more complex and difficult to manage, traditional network management paradigms struggle to cope with the traffic bottlenecks of traditional switch- and routing-based networking deployments. Recently, there has been a growing movement, led by both industry and academia, aiming to develop mechanisms for a management paradigm that separates the control plane from the data plane. The emerging network management paradigm called Software-Defined Networking (SDN) is an attempt to overcome the bottlenecks of traditional data networks. SDN offers great potential to ease network management, and the OpenFlow protocol in particular is often referred to as a radical new idea in networking. SDN adopts the concept of programmable networks, which separates control decisions from the forwarding hardware and thus enables the creation of a standardised programming interface. Flow computation is managed by a centralised controller, with the switches only performing simple forwarding functions. This allows researchers to implement their protocols and algorithms to control data packets without impacting the production network. The emerging OpenFlow technology therefore provides more flexible control of the network infrastructure through cost-effective, open and programmable components of the network architecture. SDN is very efficient at moving the computational load away from the forwarding plane and into a centralised controller, but a physically centralised controller can represent a single point of failure for the entire network.
This centralisation approach brings optimality; however, it creates additional problems of its own, including single-domain restriction, scalability, robustness and the ability of switches to adapt to changes in their local environments. This research aims to develop a new distributed active information model (DAIM) to allow programmability of network elements and local decision-making processes that will essentially contribute to complex distributed networks. DAIM offers adaptation algorithms, embedded with intelligent information objects, to be applied to such complex systems. By applying the DAIM model and these adaptation algorithms, managing complex systems in any distributed network environment can become scalable, adaptable and robust. The DAIM model is integrated into the SDN architecture at the level of the switches to provide a logically distributed control plane that can manage flow setups. The proposal moves the computational load to the switches, which allows them to adapt dynamically according to real-time demands and needs. The DAIM model can enable information objects and network devices to make their local decisions through its active performance, and thus significantly reduce the workload of a centralised SDN/OpenFlow controller. In addition to the introduction (Chapter 1) and the comprehensive literature review (Chapter 2), the first part of this dissertation (Chapter 3) presents the theoretical foundation for the rest of the dissertation. This foundation comprises the logically distributed control plane for SDN networks, an efficient DAIM model framework inspired by the O:MIB and hybrid O:XML semantics, and the architecture necessary to aggregate the distribution of network information. The details of the DAIM model, including its design, structure and packet forwarding process, are also described. The DAIM software specification and its implementation are demonstrated in the second part of the thesis (Chapter 4).
The DAIM model is developed in the C++ programming language using the free and open-source NetBeans IDE. In more detail, the three core modules that make up the DAIM ecosystem are discussed, with sample code reviews and flowchart diagrams of the implemented algorithms. To show DAIM's feasibility, a small OpenFlow lab based on Raspberry Pis has been set up physically to check the compliance of the system with its purpose and functions. Various tasks and scenarios are demonstrated to verify the functionality of DAIM, such as executing a ping command, streaming media and transferring files between hosts. These scenarios are also created on Open vSwitch in a virtualised network using Mininet. The third part (Chapter 5) presents the performance evaluation of the DAIM model in terms of four characteristics: round-trip time, throughput, latency and bandwidth. The ping command is used to measure the mean RTT between two IP hosts. The flow setup throughput and latency of the DAIM controller are measured using Cbench, and Iperf is used to measure the available bandwidth of the network. The performance of the distributed DAIM model has been tested, and good results are reported in comparison with current OpenFlow controllers, including NOX, POX and NOX-MT. The comparisons reveal that DAIM can outperform both the NOX and POX controllers. DAIM's performance in a physical OpenFlow test lab, and other parameters that can affect the performance evaluation, are also discussed. Because decentralisation is an essential element of autonomic systems, building a distributed computing environment with DAIM can consequently enable the development of autonomic management strategies. The experimental results show that the DAIM model can be one architectural approach to creating autonomic service management for SDN. The DAIM model can be used to investigate the functionalities required by autonomic networking within the ACNs community.
This efficient DAIM model can be further applied to bring adaptability and autonomy to other distributed networks, such as WSNs, P2P networks and ad-hoc sensor networks.