260 research outputs found

    Detection of Denial of Service (DoS) Attacks in Local Area Networks Based on Outgoing Packets

    Denial of Service (DoS) is a security threat that compromises the confidentiality of information stored in Local Area Networks (LANs) through unauthorized access by spoofed IP addresses. DoS attacks are harmful to LANs because the flood of packets may delay other users from accessing the server; in severe cases the server may need to be shut down, wasting valuable resources, especially in critical real-time services such as e-commerce and the medical field. The objective of this project is to propose a new DoS detection system to protect organizations from unauthenticated access to important information that may jeopardize the confidentiality, privacy and integrity of information in Local Area Networks. The new DoS detection system monitors the traffic flow of packets and filters the packets based on their IP addresses to determine whether they are genuine requests for network services or DoS attacks. The results obtained demonstrate that the detection accuracy of the new DoS detection system was in good agreement with the detection accuracy of the network protocol analyzer Wireshark. For high-rate DoS attacks the accuracy was 100%, whereas for low-rate DoS attacks the accuracy was 67%.
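    The system described above watches traffic flow and filters packets by their source IP addresses. As a rough illustration of that idea (not the paper's actual algorithm), the Python sketch below flags a source as a possible flood once its packet count inside a short sliding window passes a threshold; the window length and threshold are assumed values chosen only for the example.

        # Illustrative sketch only: per-source-IP rate thresholding for flagging
        # potential DoS flooding. Window size and threshold are assumptions, not
        # parameters taken from the paper.
        import time
        from collections import defaultdict, deque

        WINDOW_SECONDS = 1.0      # assumed sliding-window length
        PACKET_THRESHOLD = 500    # assumed packets per window before an IP is flagged

        class RateDetector:
            def __init__(self, window=WINDOW_SECONDS, threshold=PACKET_THRESHOLD):
                self.window = window
                self.threshold = threshold
                self.history = defaultdict(deque)   # src_ip -> timestamps of recent packets

            def observe(self, src_ip, now=None):
                """Record one packet from src_ip; return True if the source looks like a flood."""
                now = time.time() if now is None else now
                q = self.history[src_ip]
                q.append(now)
                # Drop timestamps that have fallen out of the sliding window.
                while q and now - q[0] > self.window:
                    q.popleft()
                return len(q) > self.threshold

        detector = RateDetector()
        if detector.observe("192.0.2.10"):
            print("possible DoS source: 192.0.2.10")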

    The Cracker Patch Choice: An Analysis of Post Hoc Security Techniques

    It has long been known that security is easiest to achieve when it is designed in from the start. Unfortunately, it has also become evident that systems built with security as a priority are rarely selected for widespread deployment, because most consumers choose features, convenience, and performance over security. Thus security officers are often denied the option of choosing a truly secure solution, and instead must choose among a variety of post hoc security adaptations. We classify security-enhancing methods, and compare and contrast these methods in terms of their effectiveness versus cost of deployment. Our analysis provides practitioners with a guide for when to develop and deploy various kinds of post hoc security adaptations.

    Study of Botnets and Their Threats to Internet Security

    Among all media of communication, the Internet is the most vulnerable to attacks owing to its public nature and virtual lack of centralized control. With the growth of financial dealings and the dependence of businesses on the Internet, these attacks have increased even further. Whereas hackers previously satisfied themselves by breaking into someone's system, today they work under organized crime plans to obtain illicit financial gains. Various attacks, including spamming, phishing, click fraud, distributed denial of service, hosting of illegal material and key logging, are carried out by hackers using botnets. In this paper a detailed study of botnets is presented, covering their creation, propagation, command and control techniques, communication protocols and relay mechanisms. The aim of this paper is to give an insight into the security threats that Internet users face from hackers through the use of malicious botnets.

    Controlling High Bandwidth Aggregates in the Network

    The current Internet infrastructure has very few built-in protection mechanisms and is therefore vulnerable to attacks and failures. In particular, recent events have illustrated the Internet's vulnerability both to denial of service (DoS) attacks and to flash crowds, in which one or more links in the network (or servers at the edge of the network) become severely congested. In both DoS attacks and flash crowds the congestion is due neither to a single flow nor to a general increase in traffic, but to a well-defined subset of the traffic: an aggregate. This paper proposes mechanisms for detecting and controlling such high-bandwidth aggregates. Our design involves both a local mechanism for detecting and controlling an aggregate at a single router, and a cooperative pushback mechanism in which a router can ask upstream routers to control an aggregate. While certainly not a panacea, these mechanisms could provide some needed relief from flash crowds and flooding-style DoS attacks. The presentation in this paper is a first step towards a more rigorous evaluation of these mechanisms.
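    To make the aggregate idea concrete, the hypothetical sketch below groups packets into aggregates by destination /24 prefix, flags any aggregate consuming more than an assumed share of link capacity, and formats a stand-in pushback request for upstream routers. The grouping key, thresholds and message format are illustrative assumptions, not the paper's mechanism.

        # Hypothetical sketch of aggregate detection plus a pushback-style request.
        from collections import Counter

        def prefix24(ip):
            # Group traffic into aggregates keyed by destination /24 prefix (an assumption).
            return ".".join(ip.split(".")[:3]) + ".0/24"

        def find_heavy_aggregates(packets, link_capacity_pps, share=0.5):
            """Return aggregates whose packet count exceeds `share` of link capacity."""
            counts = Counter(prefix24(dst) for dst, _size in packets)
            return {agg: n for agg, n in counts.items() if n > share * link_capacity_pps}

        def pushback_request(aggregate, limit_pps):
            # In the cooperative mode a router asks upstream routers to rate-limit
            # the aggregate; here we only format a hypothetical request message.
            return {"type": "pushback", "aggregate": aggregate, "limit_pps": limit_pps}

        packets = [("203.0.113.7", 1500)] * 800 + [("198.51.100.2", 1500)] * 50
        for agg in find_heavy_aggregates(packets, link_capacity_pps=1000):
            print(pushback_request(agg, limit_pps=100))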

    Protecting web servers from distributed denial of service attack

    This thesis developed a novel architecture and adaptive methods to detect and block Distributed Denial of Service attacks with minimal punishment of legitimate users. A real-time scoring algorithm differentiated attackers from legitimate users. The architecture also reduces the power consumption of a web server farm, thus reducing its carbon footprint.
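    As a hedged illustration of how a real-time score might separate attackers from legitimate users, the sketch below combines an assumed request rate, error ratio and session history into a single score and blocks only above a high cutoff, so legitimate users are rarely punished. The features, weights and cutoff are invented for the example and are not the thesis's algorithm.

        # Hypothetical client-scoring sketch; all features and weights are assumptions.
        def client_score(req_rate, err_ratio, has_session_history):
            score = 0.0
            score += min(req_rate / 100.0, 1.0) * 0.6       # weight sustained request rate
            score += min(err_ratio, 1.0) * 0.3              # weight malformed/failed requests
            score -= 0.2 if has_session_history else 0.0    # credit known legitimate behaviour
            return score

        def should_block(score, cutoff=0.7):
            # Blocking only above a high cutoff keeps punishment of legitimate users low.
            return score >= cutoff

        print(should_block(client_score(req_rate=400, err_ratio=0.8, has_session_history=False)))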

    Improvement of DDoS attack detection and web access anonymity

    The thesis covers a range of algorithms that help to improve the security of web services. The research focused on the problems of DDoS attacks and traffic analysis attacks, which threaten service availability and information privacy respectively. Overall, this research significantly advanced DDoS attack detection and web access anonymity.

    Application Adaptive Bandwidth Management Using Real-Time Network Monitoring.

    Application adaptive bandwidth management is a strategy for ensuring secure and reliable network operation in the presence of undesirable applications competing for a network’s crucial bandwidth, covert channels of communication via non-standard traffic on well-known ports, and coordinated Denial of Service attacks. The study undertaken here explored the classification, analysis and management of network traffic on the basis of the ports and protocols used, the type of application, traffic direction and flow rates on East Tennessee State University’s campus-wide network. Bandwidth measurements over a nine-month period indicated bandwidth abuse of less than 0.0001% of total network bandwidth. The conclusion suggests using a defense-in-depth approach in conjunction with the KHYATI (Knowledge, Host hardening, Yauld monitoring, Analysis, Tools and Implementation) paradigm to ensure effective information assurance.
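    The sketch below shows, purely as an illustration, the kind of port/protocol/direction/rate bucketing such a classification might perform; the port map, rate threshold and field names are assumptions, not values from the study.

        # Illustrative flow classification by well-known port, protocol, direction and rate.
        WELL_KNOWN = {80: "http", 443: "https", 53: "dns", 25: "smtp"}   # assumed port map

        def classify_flow(dst_port, protocol, bytes_per_sec, campus_src=True):
            app = WELL_KNOWN.get(dst_port, "unregistered")   # unmapped ports need closer inspection
            direction = "outbound" if campus_src else "inbound"
            heavy = bytes_per_sec > 1_000_000                # assumed per-flow rate of interest
            return {"app": app, "proto": protocol, "direction": direction, "heavy": heavy}

        print(classify_flow(dst_port=443, protocol="tcp", bytes_per_sec=2_500_000))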

    Performance Metrics for Network Intrusion Systems

    Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved, they are often the result of assumptions that are difficult to justify, and comparing performance between different research groups is difficult. The thesis develops a new approach to defining performance, focussed on comparing intrusion systems and technologies. A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates are used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined and five new intrusion levels are proposed by analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint, with current challenges also shown to stimulate further research. Integrity in the presence of mixed-trust data streams at the highest intrusion level is identified as particularly challenging. Two metrics new to intrusion systems are defined to quantify performance and further aid comparison. Sensitivity is introduced to define the basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection of the attack types in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
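    The abstract does not give the formula behind the decibel-scaled sensitivity, so the sketch below is only one assumed reading: it collapses the usual four detection counts (true/false positives and negatives) into a single dB ratio of detection rate to false-alarm rate. Treat it as an illustration of a single-parameter metric, not the thesis's definition.

        # Assumed single-parameter sensitivity: detection-to-false-alarm ratio in dB.
        import math

        def sensitivity_db(tp, fp, tn, fn):
            detection_rate = tp / (tp + fn) if (tp + fn) else 0.0
            false_alarm_rate = fp / (fp + tn) if (fp + tn) else 0.0
            if detection_rate == 0.0:
                return float("-inf")     # nothing detected
            if false_alarm_rate == 0.0:
                return float("inf")      # no false alarms at all
            return 10.0 * math.log10(detection_rate / false_alarm_rate)

        # Example: 90% detection with a 0.5% false-alarm rate gives roughly 22.6 dB.
        print(round(sensitivity_db(tp=90, fn=10, fp=5, tn=995), 1))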

    Improving end-to-end availability using overlay networks

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (p. 139-150). The end-to-end availability of Internet services is between two and three orders of magnitude worse than other important engineered systems, including the US airline system, the 911 emergency response system, and the US public telephone system. This dissertation explores three systems designed to mask Internet failures, and, through a study of three years of data collected on a 31-site testbed, why these failures happen and how effectively they can be masked. A core aspect of many of the failures that interrupt end-to-end communication is that they fall outside the expected domain of well-behaved network failures. Many traditional techniques cope with link and router failures; as a result, the remaining failures are those caused by software and hardware bugs, misconfiguration, malice, or the inability of current routing systems to cope with persistent congestion. The effects of these failures are exacerbated because Internet services depend upon the proper functioning of many components (wide-area routing, access links, the domain name system, and the servers themselves), and a failure in any of them can prove disastrous to the proper functioning of the service. This dissertation describes three complementary systems to increase Internet availability in the face of such failures. Each system builds upon the idea of an overlay network, a network created dynamically between a group of cooperating Internet hosts. The first two systems, Resilient Overlay Networks (RON) and Multi-homed Overlay Networks (MONET), determine whether the Internet path between two hosts is working on an end-to-end basis. Both systems exploit the considerable redundancy available in the underlying Internet to find failure-disjoint paths between nodes, and forward traffic along a working path. RON is able to avoid 50% of the Internet outages that interrupt communication between a small group of communicating nodes. MONET is more aggressive, combining an overlay network of Web proxies with explicitly engineered redundant links to the Internet to also mask client access link failures. Eighteen months of measurements from a six-site deployment of MONET show that it increases a client's ability to access working Web sites by nearly an order of magnitude. Where RON and MONET combat accidental failures, the Mayday system guards against denial-of-service attacks by surrounding a vulnerable Internet server with a ring of filtering routers. Mayday then uses a set of overlay nodes to act as mediators between the service and its clients, permitting only properly authenticated traffic to reach the server. by David Godbe Andersen. Ph.D.
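    A minimal sketch of the overlay idea behind RON and MONET follows, under the assumption that a simple TCP connect can stand in for an end-to-end probe: try the direct path first, then fall back to a failure-disjoint path through a cooperating overlay peer. The peer names and the probe are placeholders, not the systems' real probing protocol.

        # Sketch only: end-to-end probing with failover through overlay peers.
        import socket

        OVERLAY_HOSTS = ["peer1.example.net", "peer2.example.net"]   # hypothetical peers

        def probe(host, port=80, timeout=1.0):
            """Return True if a TCP connection to host:port succeeds within timeout."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        def choose_path(destination):
            if probe(destination):               # direct Internet path still works end-to-end
                return [destination]
            for relay in OVERLAY_HOSTS:          # otherwise try a failure-disjoint path
                if probe(relay):                 # through a cooperating overlay peer
                    return [relay, destination]
            return None                          # no working path found

        print(choose_path("www.example.com"))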