
    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance, grounded in high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
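    The closed measurement-inference-control loop the paper advocates can be illustrated with a short sketch. Everything below is hypothetical: the telemetry source (measure_rtt_ms), the control action (reroute_traffic), and the anomaly rule (a simple 3-sigma deviation standing in for a learned model) are placeholders, not mechanisms from the paper.

```python
import random
import statistics
from collections import deque

def measure_rtt_ms() -> float:
    """Placeholder telemetry source; a real deployment would poll probes."""
    return random.gauss(50.0, 10.0)

def reroute_traffic() -> None:
    """Placeholder control action, e.g. shifting flows to a backup path."""
    print("control action: rerouting traffic")

def control_loop(steps: int = 100, window: int = 20) -> None:
    """Measure -> infer -> act, repeated in real time."""
    history = deque(maxlen=window)
    for _ in range(steps):
        rtt = measure_rtt_ms()
        history.append(rtt)
        if len(history) < window:
            continue  # wait until the sliding window is full
        # Stand-in for a learned model: flag the newest sample as anomalous
        # when it deviates from the recent mean by more than 3 sigma.
        mu = statistics.mean(history)
        sigma = statistics.stdev(history)
        if sigma > 0 and abs(rtt - mu) > 3 * sigma:
            reroute_traffic()

if __name__ == "__main__":
    control_loop()
```

    The point of the sketch is the shape of the loop, not the rule inside it: inference runs on live measurements and its output feeds directly back into a control action, rather than into an offline report.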

    Threshold Verification Technique for Network Intrusion Detection System

    The Internet plays a vital role in the modern world, and the possibilities and opportunities it offers are limitless. Despite all the hype, Internet services are vulnerable to intrusion attacks that can compromise the confidentiality and integrity of important information. An attack starts with reconnaissance on the target, and this information-gathering activity can be carried out as either a fast attack or a slow attack. A defensive measure network administrators can take against this threat is to deploy Intrusion Detection Systems (IDSs) in their networks. An IDS can analyze network traffic and recognize incoming and ongoing intrusions. Unfortunately, combining both detection modules on real-time network traffic slows down the detection process. In a real-time network, early detection of a fast attack can prevent further attacks and reduce unauthorized access to the targeted machine. A suitable set of selected features and a correct threshold value give an IDS an extra advantage in detecting anomalies in the network. This paper therefore presents a new technique for selecting a static threshold value from a minimal set of standard features for detecting fast attacks from the victim's perspective. To increase confidence in the threshold value, the result is verified using Statistical Process Control (SPC). The implementation of this approach shows that the selected threshold is suitable for identifying fast attacks in real time.
    Comment: 8 pages, International Journal of Computer Science and Information Security
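    As a rough illustration of the verification step, the sketch below checks a candidate static threshold against Shewhart-style control limits from Statistical Process Control. The feature (distinct destination ports contacted per second) and the sample data are hypothetical stand-ins, not the paper's actual feature set or measurements.

```python
import statistics

# Per-second counts of distinct destination ports contacted by one source
# host: an illustrative "minimum feature" for fast-attack detection.
# Hypothetical sample data; the spike of 48 mimics a port scan.
ports_per_second = [3, 2, 4, 3, 2, 5, 3, 4, 2, 3, 48, 3, 2, 4, 3]

candidate_threshold = 20  # static threshold chosen from the feature analysis

# Shewhart-style control limits: observations beyond mean + 3 sigma are
# treated as out of control. The baseline excludes values the candidate
# threshold already flags, so the limits describe normal behavior.
baseline = [x for x in ports_per_second if x <= candidate_threshold]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper_control_limit = mu + 3 * sigma

# The static threshold is considered verified when it agrees with the
# SPC limit, i.e. both flag the same observations as anomalous.
spc_flags = [x > upper_control_limit for x in ports_per_second]
threshold_flags = [x > candidate_threshold for x in ports_per_second]
print(f"UCL = {upper_control_limit:.2f}")
print("threshold verified:", spc_flags == threshold_flags)
```

    On this toy data both rules flag only the scan-like spike, so the candidate threshold would pass the SPC check; on real traffic the agreement test is what lends the static value its confidence.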

    Sharing Computer Network Logs for Security and Privacy: A Motivation for New Methodologies of Anonymization

    Logs are one of the most fundamental resources available to any security professional. It is widely recognized by government and industry that sharing logs for the purpose of security research is both beneficial and desirable. However, this sharing is not happening, or not at the degree or magnitude that is desired. Organizations are reluctant to share logs because of the risk of exposing sensitive information to potential attackers. We believe this reluctance remains high because current anonymization techniques are weak and one-size-fits-all (or, better put, one size tries to fit all). We must develop standards and make anonymization available at varying levels, striking a balance between privacy and utility. Organizations have different needs and trust other organizations to different degrees. They must be able to map multiple anonymization levels, with defined risks, to the trust levels they share with (would-be) receivers. Not until there are industry standards for multiple levels of anonymization will we be able to move forward and achieve the goal of widespread log sharing for security researchers.
    Comment: 17 pages, 1 figure
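    A minimal sketch of what level-based anonymization might look like for a single log field, an IPv4 address: higher trust levels preserve more structure, lower ones suppress it. The level names, policies, and anonymize_ip helper are illustrative assumptions, not a proposed standard.

```python
import hashlib
import ipaddress

def anonymize_ip(ip: str, level: str, secret: bytes = b"site-key") -> str:
    """Anonymize an IPv4 address at an illustrative trust level."""
    addr = ipaddress.ip_address(ip)
    if level == "high-trust":
        # Keep the /24 network, hide only the host part.
        net = ipaddress.ip_network(f"{ip}/24", strict=False)
        return f"{net.network_address}/24"
    if level == "medium-trust":
        # Keyed hash: the same address always maps to the same pseudonym,
        # preserving cross-record linkage but hiding the real prefix.
        digest = hashlib.blake2b(secret + addr.packed, digest_size=4).digest()
        return str(ipaddress.IPv4Address(digest))
    # Low trust: suppress the field entirely.
    return "0.0.0.0"

for level in ("high-trust", "medium-trust", "low-trust"):
    print(level, "->", anonymize_ip("192.0.2.45", level))
```

    A scheme like this makes the privacy-utility trade-off explicit: the receiver's trust level selects how much analytic value (subnet structure, record linkage) survives anonymization.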