
    Intrusion Detection System using Bayesian Network Modeling

    Computer network security has become a critical issue due to ever-increasing cyber-crime, which spans from simple piracy to information theft in international terrorism. Defence agencies and other military-related organisations are highly concerned about the confidentiality and access control of their stored data. It is therefore important to investigate Intrusion Detection Systems (IDS) to detect and prevent cybercrime and protect these systems. This research proposes a novel distributed IDS to detect and prevent attacks such as denial of service, probes, user-to-root and remote-to-user attacks. In this work, we propose an IDS based on the Bayesian network classification modelling technique. Bayesian networks are popular for adaptive learning and for modelling diverse network traffic data into meaningful classification details. The proposed model is an anomaly-based IDS with an adaptive learning process; Bayesian networks are therefore applied to build a robust and accurate IDS. The proposed IDS has been evaluated against the DARPA KDD dataset, which was designed for network IDS evaluation. The research methodology consists of four Bayesian networks as classification models, interconnected and communicating to predict on incoming network traffic data. Each Bayesian network model is capable of detecting a major category of attack, such as denial of service (DoS), and all four work together, passing classification information to calibrate the IDS. The proposed IDS shows the ability to detect novel attacks by continuing to learn with different datasets. The testing dataset was constructed by sampling the original KDD dataset to contain a balanced number of attacks and normal connections.
The experiments show that the proposed system is effective in detecting attacks in the test dataset and is highly accurate in detecting all major attacks recorded in the DARPA dataset. The proposed IDS constitutes a promising approach to anomaly-based intrusion detection in distributed systems. Furthermore, a practical implementation of the proposed IDS can be used to train on and detect attacks in live network traffic.
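As a toy illustration of the classification step, the sketch below trains a minimal categorical naive Bayes model, which is a simple special case of a Bayesian network; the paper's actual four-network design is richer, and all feature values and labels here are invented.

```python
from collections import defaultdict
import math

class NaiveBayesIDS:
    """Minimal categorical naive Bayes, standing in for one of the
    attack-category classifiers. A sketch, not the paper's model."""
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(lambda: defaultdict(int))

    def fit(self, records, labels):
        for rec, lab in zip(records, labels):
            self.class_counts[lab] += 1
            for i, v in enumerate(rec):
                self.feat_counts[lab][(i, v)] += 1

    def predict(self, rec):
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for lab, n in self.class_counts.items():
            lp = math.log(n / total)  # log prior
            for i, v in enumerate(rec):
                # Laplace smoothing so unseen values don't zero out a class
                lp += math.log((self.feat_counts[lab][(i, v)] + 1) / (n + 2))
            if lp > best_lp:
                best, best_lp = lab, lp
        return best

# toy connection records: (protocol, service)
train = [("tcp", "http"), ("tcp", "http"), ("icmp", "ecr_i"), ("icmp", "ecr_i")]
labels = ["normal", "normal", "dos", "dos"]
clf = NaiveBayesIDS()
clf.fit(train, labels)
print(clf.predict(("icmp", "ecr_i")))  # → dos
```

A full Bayesian network would additionally model dependencies between features; naive Bayes assumes them independent given the class.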

    A Comparison of a Single Receiver and a Multi-Receiver Techniques to Mitigate Partial Band Interference

    Many acoustic channels suffer from interference which is neither narrowband nor impulsive. This relatively long-duration partial band interference can be particularly detrimental to system performance. We survey recent work in interference mitigation as background motivation to develop a spatial diversity receiver for use in underwater networks, and we compare this novel multi-receiver interference mitigation strategy with a recently developed single-receiver interference mitigation algorithm, using experimental data collected from the underwater acoustic network at the Atlantic Undersea Test and Evaluation Center. The network consists of multiple distributed cabled hydrophones that receive data transmitted over a time-varying multipath channel in the presence of partial band interference produced by interfering active sonar signals. In operational networks, many messages are lost due to partial band interference, which corrupts different portions of the received signal depending on the relative positions of the interferers, information source and receivers, owing to the slow speed of propagation.
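One simple form of spatial diversity reception is per-symbol selection combining: at each instant, keep the output of the receiver least affected by the interference. The sketch below illustrates the idea with invented symbols and signal-to-interference ratio (SIR) estimates; it is not the algorithm evaluated in the paper.

```python
# Selection-combining sketch: per symbol, pick the receiver with the best
# estimated SIR. Data and SIR values are hypothetical.

def select_combine(streams, sir_estimates):
    """streams: per-receiver symbol lists; sir_estimates: matching
    per-receiver, per-symbol SIR estimates (dB). Returns combined symbols."""
    n_symbols = len(streams[0])
    combined = []
    for k in range(n_symbols):
        best_rx = max(range(len(streams)), key=lambda r: sir_estimates[r][k])
        combined.append(streams[best_rx][k])
    return combined

# receiver 0 is clean early, receiver 1 clean late (interferer moves)
rx0 = [1, 1, 0, 0]; rx1 = [0, 0, 1, 1]
sir0 = [20, 20, -5, -5]; sir1 = [-5, -5, 20, 20]
print(select_combine([rx0, rx1], [sir0, sir1]))  # → [1, 1, 1, 1]
```

Because the interference corrupts different receivers at different times, switching between them recovers symbols that any single receiver would lose.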

    New information model that allows logical distribution of the control plane for software-defined networking : the distributed active information model (DAIM) can enable an effective distributed control plane for SDN with OpenFlow as the standard protocol

    University of Technology Sydney, Faculty of Engineering and Information Technology.
In recent years, technological innovations in communication networks, computing applications and information modelling have been increasing significantly in complexity and functionality, driven by the needs of the modern world. As large-scale networks become more complex and difficult to manage, traditional network management paradigms struggle to cope with the traffic bottlenecks of traditional switch- and router-based networking deployments. Recently, there has been a growing movement, led by both industry and academia, to develop mechanisms for a management paradigm that separates the control plane from the data plane. This emerging paradigm, called Software-Defined Networking (SDN), is an attempt to overcome the bottlenecks of traditional data networks. SDN offers great potential to ease network management, and the OpenFlow protocol in particular is often referred to as a radical new idea in networking. SDN adopts the concept of programmable networks, which separates control decisions from the forwarding hardware and thus enables the creation of a standardised programming interface. Flow computation is managed by a centralised controller, with the switches performing only simple forwarding functions. This allows researchers to implement their protocols and algorithms to control data packets without impacting the production network. The emerging OpenFlow technology therefore provides more flexible control of the network infrastructure through cost-effective, open and programmable components of the network architecture. SDN is very efficient at moving the computational load away from the forwarding plane and into a centralised controller, but a physically centralised controller can represent a single point of failure for the entire network.
This centralised approach brings optimality; however, it also creates problems of its own, including single-domain restriction, scalability, robustness and the ability of switches to adapt to changes in their local environments. This research aims to develop a new distributed active information model (DAIM) to allow programmability of network elements and local decision-making processes that will contribute to complex distributed networks. DAIM offers adaptation algorithms, embedded with intelligent information objects, to be applied to such complex systems. By applying the DAIM model and these adaptation algorithms, managing complex systems in any distributed network environment can become scalable, adaptable and robust. The DAIM model is integrated into the SDN architecture at the level of switches to provide a logically distributed control plane that can manage flow setups. The proposal moves the computational load to the switches, which allows them to adapt dynamically according to real-time demands and needs. The DAIM model can enable information objects and network devices to make local decisions through its active performance, and thus significantly reduce the workload of a centralised SDN/OpenFlow controller. In addition to the introduction (Chapter 1) and the comprehensive literature review (Chapter 2), the first part of this dissertation (Chapter 3) presents the theoretical foundation for the rest of the dissertation. This foundation comprises the logically distributed control plane for SDN networks, an efficient DAIM model framework inspired by the O:MIB and hybrid O:XML semantics, and the architecture necessary to aggregate the distribution of network information. The details of the DAIM model, including its design, structure and packet forwarding process, are also described. The DAIM software specification and its implementation are demonstrated in the second part of the thesis (Chapter 4).
The DAIM model is developed in the C++ programming language using the free and open-source NetBeans IDE. In more detail, the three core modules that construct the DAIM ecosystem are discussed, with some sample code reviews and flowchart diagrams of the implemented algorithms. To show DAIM's feasibility, a small OpenFlow lab based on Raspberry Pis has been set up physically to check the compliance of the system with its purpose and functions. Various tasks and scenarios are demonstrated to verify the functionalities of DAIM, such as executing a ping command, streaming media and transferring files between hosts. These scenarios are created based on Open vSwitch in a virtualised network using Mininet. The third part (Chapter 5) presents the performance evaluation of the DAIM model, which is defined by four characteristics: round-trip time (RTT), throughput, latency and bandwidth. The ping command is used to measure the mean RTT between two IP hosts. The flow setup throughput and latency of the DAIM controller are measured using Cbench, and Iperf is used to measure the available bandwidth of the network. The performance of the distributed DAIM model has been tested, and good results are reported when compared with current OpenFlow controllers, including NOX, POX and NOX-MT. The comparisons reveal that DAIM can outperform both the NOX and POX controllers. DAIM's performance in a physical OpenFlow test lab, and other parameters that can affect the performance evaluation, are also discussed. Because decentralisation is an essential element of autonomic systems, building a distributed computing environment with DAIM can consequently enable the development of autonomic management strategies. The experimental results show that the DAIM model can be one of the architectural approaches to creating autonomic service management for SDN. The DAIM model can be utilised to investigate the functionalities required by autonomic networking within the ACNs community.
This efficient DAIM model can be further applied to enable adaptability and autonomy in other distributed networks such as WSNs, P2P and ad-hoc sensor networks.
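The core idea of moving flow setup into the switch can be caricatured in a few lines: on a flow-table miss, a local agent computes and installs the entry itself instead of querying a central controller. The names and fields below are illustrative only, not DAIM's actual C++ interfaces or OpenFlow-exact match structures.

```python
# Sketch of switch-local flow setup in the spirit of DAIM: on a table miss
# the embedded agent installs a rule itself rather than deferring to a
# central controller. Field names are hypothetical.

class LocalAgentSwitch:
    def __init__(self, port_of):          # dst address -> output port
        self.port_of = port_of
        self.flow_table = {}              # (src, dst) -> out_port

    def handle_packet(self, src, dst):
        match = (src, dst)
        if match in self.flow_table:      # fast path: existing flow entry
            return self.flow_table[match]
        # table miss: a local decision replaces the controller round trip
        out = self.port_of.get(dst)
        if out is not None:
            self.flow_table[match] = out  # install entry for later packets
        return out

sw = LocalAgentSwitch({"10.0.0.2": 3})
print(sw.handle_packet("10.0.0.1", "10.0.0.2"))  # → 3  (miss, rule installed)
print(sw.handle_packet("10.0.0.1", "10.0.0.2"))  # → 3  (table hit)
```

Eliminating the controller round trip on the miss path is what the thesis's Cbench flow-setup latency comparison against NOX and POX measures.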

    Complex queries over decentralised systems for geodata retrieval

    Decentralised systems have proved quite effective for trusted and accountable data sharing, without the need to resort to a centralised party that collects all the information. While complete decentralisation provides important advantages in terms of data sovereignty, absence of bottlenecks and reliability, it also raises issues concerning efficient data lookup and the possibility of implementing complex queries without reintroducing centralised components. In this paper, we describe a system that copes with these issues thanks to a multi-layer lookup scheme, based on Distributed Hash Tables, that allows for multiple keyword-based searches. The service of the peer nodes participating in this discovery service is controlled, and the nodes are rewarded for their contribution. Moreover, the governance of this process is completely automated through the use of smart contracts, thus building a Decentralised Autonomous Organization (DAO). Finally, we present a use case in which road hazards are collected in order to test the fitness of our system for geodata retrieval. We then show results from a performance evaluation that confirm the viability of the proposal. © 2022 The Authors. IET Networks published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
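The essence of keyword-based search over a DHT can be sketched briefly: each keyword hashes to a responsible node, and a multi-keyword query intersects the object sets returned for each keyword. The layering, incentive and smart-contract machinery of the paper are omitted, and all identifiers below are invented.

```python
import hashlib

# Toy multi-keyword lookup over a DHT-style key space. Each keyword hashes
# to one of NUM_NODES simulated peers; a query intersects per-keyword sets.

NUM_NODES = 8
nodes = [dict() for _ in range(NUM_NODES)]   # keyword -> set of object ids

def node_for(keyword):
    h = int(hashlib.sha1(keyword.encode()).hexdigest(), 16)
    return h % NUM_NODES                      # consistent keyword placement

def publish(obj_id, keywords):
    for kw in keywords:
        nodes[node_for(kw)].setdefault(kw, set()).add(obj_id)

def query(keywords):
    sets = [nodes[node_for(kw)].get(kw, set()) for kw in keywords]
    return set.intersection(*sets) if sets else set()

publish("hazard-42", ["pothole", "bologna"])
publish("hazard-77", ["ice", "bologna"])
print(sorted(query(["pothole", "bologna"])))  # → ['hazard-42']
```

A real deployment would route each `node_for` lookup through the DHT overlay (e.g. Kademlia-style hops) rather than indexing a local list.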

    ARTMAP Neural Networks for Information Fusion and Data Mining: Map Production and Target Recognition Methodologies

    The Sensor Exploitation Group of MIT Lincoln Laboratory incorporated an early version of the ARTMAP neural network as the recognition engine of a hierarchical system for fusion and data mining of registered geospatial images. The Lincoln Lab system has been successfully fielded, but it is limited to target / non-target identifications and does not produce whole maps. Procedures defined here extend these capabilities by means of a mapping method that learns to identify and distribute arbitrarily many target classes. This new spatial data mining system is designed particularly to cope with the highly skewed class distributions of typical mapping problems. Specification of canonical algorithms and a benchmark testbed has enabled the evaluation of candidate recognition networks as well as pre- and post-processing and feature selection options. The resulting mapping methodology sets a standard for a variety of spatial data mining tasks. In particular, training pixels are drawn from a region that is spatially distinct from the mapped region, which may feature an output class mix substantially different from that of the training set. The system's recognition component, default ARTMAP, with its fully specified set of canonical parameter values, has become the a priori system of choice among this family of neural networks for a wide variety of applications. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); Office of Naval Research (N00014-01-1-0624)
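One simple way to cope with highly skewed class distributions is to balance the training set by sampling an equal number of pixels per class; the sketch below illustrates that idea with invented data and is not Lincoln Laboratory's actual procedure.

```python
import random

# Per-class balanced sampling of training pixels, a simple counter to the
# skewed class mixes typical of mapping problems. Data are hypothetical.

def balanced_sample(pixels, labels, per_class, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for p, l in zip(pixels, labels):
        by_class.setdefault(l, []).append(p)
    sample = []
    for l, ps in sorted(by_class.items()):
        k = min(per_class, len(ps))            # rare classes keep all pixels
        sample.extend((p, l) for p in rng.sample(ps, k))
    return sample

pixels = list(range(100))
labels = ["background"] * 95 + ["target"] * 5   # 95:5 class skew
s = balanced_sample(pixels, labels, per_class=5)
counts = {l: sum(1 for _, c in s if c == l) for l in set(labels)}
print(sorted(counts.items()))  # → [('background', 5), ('target', 5)]
```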

    Design & Evaluation of Path-based Reputation System for MANET Routing

    Most existing reputation systems in mobile ad hoc networks (MANETs) consider only node reputations when selecting routes. Reputation and trust are therefore generally ensured only within a one-hop distance when routing decisions are made, which often fails to yield the most reliable, trusted route. In this report, we first summarize background studies on the security of MANETs. We then propose a system based on path reputation, which is computed from the reputation and trust values of each and every node in a route. The use of path reputation greatly enhances the reliability of the resulting routes. The detailed system architecture and component design of the proposed mechanism are carefully described on top of the AODV (Ad hoc On-demand Distance Vector) routing protocol. We also evaluate the performance of the proposed system by simulating it on top of AODV. Simulation experiments show that the proposed scheme greatly improves network throughput in the presence of misbehaving nodes while requiring very limited message overhead. To our knowledge, this is the first path-based reputation system that may be implemented on top of a non-source-based routing scheme such as AODV.
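A minimal sketch of the path-reputation idea: score a route by combining the trust values of all its nodes (here a multiplicative model, one common choice; the report's exact formula may differ) and select the best-scoring route rather than judging each hop in isolation.

```python
from math import prod

# Toy path-reputation scoring: a route's reputation is the product of its
# nodes' trust values in [0, 1]. Trust values below are invented.

def path_reputation(path, trust):
    return prod(trust[n] for n in path)

def best_route(routes, trust):
    return max(routes, key=lambda p: path_reputation(p, trust))

trust = {"A": 0.9, "B": 0.95, "C": 0.4, "D": 0.9}
routes = [["A", "C"],        # shorter, but through a low-trust node
          ["A", "B", "D"]]   # longer, but every hop is trusted
print(best_route(routes, trust))  # → ['A', 'B', 'D']
```

Under this model a single misbehaving node drags down the whole route's score, which is exactly what per-hop, one-hop reputation checks can miss.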

    Quality assessment technique for ubiquitous software and middleware

    Ubiquitous computing systems are the new paradigm of computing and information systems. The technology-oriented issues of ubiquitous computing systems have led researchers to pay much attention to feasibility studies of the technologies rather than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics and metrics. However, it is unlikely that this scheme will be directly applicable to ubiquitous computing environments, which differ considerably from conventional software; considerable attention is therefore being given to reformulating existing methods and, especially, to elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, in some cases with measures. In addition, this quality model has been applied in an industrial ubiquitous computing setting. This has revealed that, while the approach is sound, some parts need further development in the future.
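The three-level structure (metrics rolled up into sub-characteristics, then into characteristics) can be sketched as a weighted aggregation; all names, scores and weights below are illustrative, not values from the paper or from ISO/IEC 9126 itself.

```python
# Sketch of rolling metric scores up a three-level quality hierarchy
# (metrics -> sub-characteristics -> characteristics) via weighted means.
# Names, scores and weights are hypothetical.

def weighted_mean(scores, weights):
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

model = {
    "reliability": {
        "fault tolerance": ([0.8, 0.6], [2, 1]),   # (metric scores, weights)
        "recoverability":  ([0.9], [1]),
    },
}

def characteristic_score(name):
    subs = model[name]
    # each sub-characteristic aggregates its metrics first
    sub_scores = [weighted_mean(s, w) for s, w in subs.values()]
    # then sub-characteristics aggregate (equal weights here) into the top level
    return weighted_mean(sub_scores, [1] * len(sub_scores))

print(round(characteristic_score("reliability"), 3))  # → 0.817
```

The evaluation questions proposed in the paper would feed the metric scores at the bottom of such a hierarchy.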