35 research outputs found

    Mitigating Botnet Attack Using Encapsulated Detection Mechanism (EDM)

    Full text link
    Botnets, as they are popularly called, have become prominent in recent times owing to the force they embed within network servers. Botnets grow at an exponential rate of about 170,000 new bots per day within network server and client infrastructures, and the networking environment battles over 5 million bots on a monthly basis. Nigeria as a country loses above one hundred and twenty-five billion naira (₦125 billion) to network fraud annually, and end users such as banks and other financial institutions battle botnet threats daily.

    Improving Dependability of Networks with Penalty and Revocation Mechanisms

    Get PDF
    Both malicious and non-malicious faults can dismantle computer networks. Thus, mitigating faults at various layers is essential to ensuring efficient and fair network resource utilization. In this thesis we take a step in this direction and study several ways to deal with faults by means of penalty and revocation mechanisms in networks that lack a centralized coordination point, either because of their scale or design. Compromised nodes can pose a serious threat to infrastructure, end-hosts and services. Such malicious elements can undermine the availability and fairness of networked systems. To deal with such nodes, we design and analyze protocols enabling their removal from the network in a fast and secure way. We design these protocols for two different environments. In the first, we assume that there are multiple, but independent, trusted points in the network which coordinate the other nodes. In the second, we assume that all nodes play equal roles in the network and thus need to cooperate to carry out common functionality. We analyze these solutions and discuss possible deployment scenarios. Next we turn our attention to wireless edge networks. In this context, some nodes, without being malicious, can still behave in an unfair manner. To deal with this situation, we propose several self-penalty mechanisms. We implement the proposed protocols on commodity hardware and conduct experiments in real-world environments. The analysis of data collected in several measurement rounds revealed improvements in terms of higher fairness and throughput. We corroborate the results with simulations and an analytic model. Finally, we discuss how to measure fairness in dynamic settings, where nodes can have heterogeneous resource demands.
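
    The closing question, how to measure fairness, invites a worked example. A standard metric for this kind of analysis is Jain's fairness index; the abstract does not name the thesis's metric, so the following Python sketch is illustrative only, computed over per-node throughput allocations:

        def jain_fairness(allocations):
            """Jain's fairness index: 1.0 for perfectly equal shares,
            falling toward 1/n as a single node monopolizes the resource."""
            xs = [float(x) for x in allocations]
            n = len(xs)
            if n == 0 or all(x == 0.0 for x in xs):
                return 1.0  # degenerate case: nothing allocated
            return sum(xs) ** 2 / (n * sum(x * x for x in xs))

        # Example: equal throughput vs. one node crowding out the others.
        print(jain_fairness([12.0, 12.0, 12.0]))  # 1.0
        print(jain_fairness([30.0, 3.0, 3.0]))    # ~0.47

    Comparing the index across measurement rounds, before and after a self-penalty mechanism is enabled, reduces the reported fairness improvements to a single number.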

    DDoS Hide & Seek: On the effectiveness of a booter services takedown

    Get PDF
    Booter services continue to provide popular DDoS-as-a-service platforms and enable anyone, irrespective of their technical ability, to execute DDoS attacks with devastating impact. Since booters are a serious threat to Internet operations and can cause significant financial and reputational damage, they also draw the attention of law enforcement agencies and related counter activities. In this paper, we investigate booter-based DDoS attacks in the wild and the impact of an FBI takedown targeting 15 booter websites in December 2018, from the perspective of a major IXP and two ISPs. We study and compare the attack properties of multiple booter services by launching Gbps-level attacks against our own infrastructure. To understand spatial and temporal trends of the DDoS traffic originating from booters, we scrutinize 5 months' worth of inter-domain traffic. We observe that the takedown only leads to a temporary reduction in attack traffic. Additionally, one booter was found to quickly resume operation by using a new domain for its website.
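
    To make the traffic-scrutiny step concrete: booter attacks are typically reflection/amplification floods, which appear in inter-domain flow data as large UDP packets sent from well-known reflector ports. The Python sketch below shows a generic heuristic of this kind; it is not the paper's classifier, and the port list and size threshold are illustrative assumptions:

        # Well-known UDP amplification services, keyed by source port.
        AMPLIFICATION_PORTS = {53: "DNS", 123: "NTP", 161: "SNMP",
                               389: "CLDAP", 1900: "SSDP", 11211: "memcached"}

        def looks_like_amplification(flow: dict) -> bool:
            """Flag flows resembling reflection attack traffic: UDP *from*
            a known reflector port with a large average packet size."""
            avg_size = flow["bytes"] / max(flow["packets"], 1)
            return (flow["proto"] == "udp"
                    and flow["src_port"] in AMPLIFICATION_PORTS
                    and avg_size > 400)  # illustrative threshold

        # A flow record as it might appear in sampled IXP flow data.
        flow = {"proto": "udp", "src_port": 123, "bytes": 48000, "packets": 40}
        print(looks_like_amplification(flow))  # True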

    A Defense Framework Against Denial-of-Service in Computer Networks

    Get PDF
    Denial-of-Service (DoS) is a computer security problem that poses a serious challenge to the trustworthiness of services deployed over computer networks. The aim of DoS attacks is to make services unavailable to legitimate users, and current network architectures allow easy-to-launch, hard-to-stop DoS attacks. Particularly challenging are the service-level DoS attacks, whereby the victim service is flooded with legitimate-like requests, and the jamming attack, in which wireless communication is blocked by malicious radio interference. These attacks are overwhelming even for massively-resourced services, and effective and efficient defenses are highly needed. This work contributes a novel defense framework, which I call dodging, against service-level DoS and wireless jamming. Dodging has two components: (1) the careful assignment of servers to clients to achieve accurate and quick identification of service-level DoS attackers and (2) the continuous and unpredictable-to-attackers reconfiguration of the client-server assignment and the radio-channel mapping to withstand service-level and jamming DoS attacks. Dodging creates hard-to-evade baits, or traps, and dilutes the attack "fire power". The traps identify the attackers when they violate the mapping function and even when they attack while correctly following the mapping function. Moreover, dodging keeps attackers "in the dark", trying to follow the unpredictably changing mapping. They may hit a few times but lose "precious" time before they are identified and stopped. Three dodging-based DoS defense algorithms are developed in this work. They are more resource-efficient than state-of-the-art DoS detection and mitigation techniques. Honeybees combines channel hopping and error-correcting codes to achieve bandwidth-efficient and energy-efficient mitigation of jamming in multi-radio networks. In roaming honeypots, dodging enables the camouflaging of honeypots, or trap machines, as real servers, making it hard for attackers to locate and avoid the traps. Furthermore, shuffling requests over servers opens up windows of opportunity, during which legitimate requests are serviced. Live baiting efficiently identifies service-level DoS attackers by employing results from group-testing theory, which discovers defective members of a population using the minimum number of tests. The cost and benefit of the dodging algorithms are analyzed theoretically, in simulation, and using prototype experiments.
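
    A minimal sketch can illustrate the first dodging component, the unpredictable-to-attackers client-server assignment: the mapping is derived from a secret key and changes every epoch, so requests arriving at a server other than the assigned one expose their sender. The secret, epoch length, and HMAC construction below are assumptions for illustration, not the dissertation's parameters:

        import hashlib
        import hmac
        import time

        SECRET = b"hypothetical-dispatcher-secret"  # unknown to attackers
        EPOCH_SECONDS = 60                           # illustrative reshuffle interval

        def assigned_server(client_id: str, num_servers: int, now=None) -> int:
            """Keyed, epoch-dependent client -> server mapping. Without
            SECRET the mapping is unpredictable, and it reshuffles every
            epoch, so attackers who spray requests across servers violate
            it and are identified quickly."""
            now = time.time() if now is None else now
            epoch = int(now) // EPOCH_SECONDS
            mac = hmac.new(SECRET, f"{client_id}:{epoch}".encode(), hashlib.sha256)
            return int.from_bytes(mac.digest()[:4], "big") % num_servers

        # The same client maps to different servers in different epochs.
        print(assigned_server("client-42", 8, now=0))
        print(assigned_server("client-42", 8, now=3600))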

    KeyForge: Mitigating Email Breaches with Forward-Forgeable Signatures

    Full text link
    Email breaches are commonplace, and they expose a wealth of personal, business, and political data that may have devastating consequences. The current email system allows any attacker who gains access to your email to prove the authenticity of the stolen messages to third parties -- a property arising from a necessary anti-spam / anti-spoofing protocol called DKIM. This exacerbates the problem of email breaches by greatly increasing the potential for attackers to damage the users' reputation, blackmail them, or sell the stolen information to third parties. In this paper, we introduce "non-attributable email", which guarantees that a wide class of adversaries are unable to convince any third party of the authenticity of stolen emails. We formally define non-attributability, and present two practical system proposals -- KeyForge and TimeForge -- that provably achieve non-attributability while maintaining the important protection against spam and spoofing that is currently provided by DKIM. Moreover, we implement KeyForge and demonstrate that the scheme is practical, achieving competitive verification and signing speed while also requiring 42% less bandwidth per email than RSA2048.
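
    The mechanism behind forward-forgeable signatures can be sketched compactly: the signer rotates short-lived signing keys and, once a key has expired, publishes its private half, so anyone can then forge signatures for past periods and stolen emails lose their third-party attributability, while fresh mail remains protected against spoofing during delivery. The Python sketch below illustrates that idea only; the rotation period, disclosure delay, and the choice of Ed25519 via the cryptography package are assumptions, not KeyForge's actual parameters:

        # Requires the 'cryptography' package (pip install cryptography).
        import time
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        BUCKET = 15 * 60     # hypothetical rotation: one signing key per 15 minutes
        DELTA = 2 * 60 * 60  # hypothetical delay before expired keys are disclosed

        class ForwardForgeableSigner:
            def __init__(self):
                self._keys = {}  # bucket index -> Ed25519 private key

            def _bucket(self, ts):
                return int(ts) // BUCKET

            def sign(self, message: bytes, ts=None):
                """Sign under the key of the current time bucket."""
                ts = time.time() if ts is None else ts
                b = self._bucket(ts)
                key = self._keys.setdefault(b, Ed25519PrivateKey.generate())
                return b, key.sign(message)

            def keys_to_publish(self, now=None):
                """Private keys old enough to disclose. Once published,
                anyone can forge valid signatures for those periods, so a
                leaked mailbox no longer proves authenticity to outsiders."""
                now = time.time() if now is None else now
                cutoff = self._bucket(now - DELTA)
                return {b: k for b, k in self._keys.items() if b < cutoff}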

    A structured approach to malware detection and analysis in digital forensics investigation

    Get PDF
    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of PhD. Within the World Wide Web (WWW), malware is considered one of the most serious threats to system security, with complex system issues caused by malware and spam. Networks and systems can be accessed and compromised by various types of malware, such as viruses, worms, Trojans, botnets and rootkits, which compromise systems through coordinated attacks. Malware often uses anti-forensic techniques to avoid detection and investigation. Moreover, the results of investigating such attacks are often ineffective and can create barriers to obtaining clear evidence, due to the lack of sufficient tools and the immaturity of forensics methodology. This research addressed various complexities faced by investigators in the detection and analysis of malware. In this thesis, the author identified the need for a new approach towards malware detection that focuses on a robust framework, and proposed a solution based on an extensive literature review and market research analysis. The literature review focussed on the different trials and techniques in malware detection to identify the parameters for developing a solution design, while market research was carried out to understand the precise nature of the current problem. The author termed the new approach and development of the new framework the triple-tier centralised online real-time environment (tri-CORE) malware analysis (TCMA). The tiers come from three distinctive phases of detection and analysis, where the entire research pattern is divided into three different domains. The tiers are the malware acquisition function, detection and analysis, and the database operational function. This framework design will contribute to the field of computer forensics by making the investigative process more effective and efficient. By integrating a hybrid method for malware detection, the limitations associated with both static and dynamic methods are eliminated. This aids forensics experts in carrying out quick investigatory processes to detect the behaviour of the malware and its related elements. The proposed framework will help to ensure system confidentiality, integrity, availability and accountability. The current research also focussed on a prototype (artefact) that was developed in favour of a different approach to digital forensics and malware detection methods. As such, a new toolkit was designed and implemented, based on a simple architectural structure and built from open-source software, which can help investigators develop the skills to critically respond to current cyber incidents and analyses.
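
    The hybrid static/dynamic idea admits a small illustration: a static tier (hash lookup plus pattern scan) whose verdict is combined with flags from dynamic sandbox execution. This Python sketch is not the tri-CORE/TCMA implementation; the blacklist, byte patterns, and combination rule are placeholders:

        import hashlib

        # Placeholder threat intelligence; in practice loaded from a feed.
        KNOWN_BAD_SHA256 = set()
        SUSPICIOUS_PATTERNS = [b"CreateRemoteThread", b"VirtualAllocEx"]

        def static_verdict(path: str) -> dict:
            """Static tier: hash lookup plus a crude byte-pattern scan."""
            with open(path, "rb") as f:
                data = f.read()
            digest = hashlib.sha256(data).hexdigest()
            hits = [p.decode() for p in SUSPICIOUS_PATTERNS if p in data]
            return {"sha256": digest,
                    "known_bad": digest in KNOWN_BAD_SHA256,
                    "pattern_hits": hits}

        def combined_verdict(path: str, dynamic_flags: list) -> bool:
            """Hybrid rule: alarm if either tier fires. dynamic_flags would
            come from sandboxed execution (e.g. observed C&C callbacks),
            which catches packed samples the static tier misses."""
            s = static_verdict(path)
            return s["known_bad"] or bool(s["pattern_hits"]) or bool(dynamic_flags)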

    EFFICIENT AND SCALABLE NETWORK SECURITY PROTOCOLS BASED ON LFSR SEQUENCES

    Get PDF
    The gap between abstract, mathematics-oriented research in cryptography and the engineering approach of designing practical network security protocols is widening. Network researchers experiment with well-known cryptographic protocols suitable for different network models. On the other hand, researchers inclined toward theory often design cryptographic schemes without considering the practical network constraints. The goal of this dissertation is to address problems in these two challenging areas and build bridges between practical network security protocols and theoretical cryptography. This dissertation presents techniques for building performance-sensitive security protocols, using primitives from linear feedback shift register (LFSR) sequences, for a variety of challenging networking applications. The significant contributions of this thesis are:

    1. A common problem faced by large-scale multicast applications, like real-time news feeds, is collecting authenticated feedback from the intended recipients. We design an efficient, scalable, and fault-tolerant technique for combining multiple signed acknowledgments into a single compact one and observe that most signatures (based on the discrete logarithm problem) used in previous protocols do not result in a scalable solution to the problem.

    2. We propose a technique to authenticate on-demand source routing protocols in resource-constrained wireless mobile ad-hoc networks. We develop a single-round multisignature that requires no prior cooperation among nodes to construct the multisignature and supports authentication of cached routes.

    3. We propose an efficient and scalable aggregate signature, tailored for applications like building efficient certificate chains, authenticating distributed and adaptive content management systems, and securing path-vector routing protocols.

    4. We observe that blind signatures could form critical building blocks of privacy-preserving accountability systems, where an authority needs to vouch for the legitimacy of a message but the ownership of the message should be kept secret from the authority. We propose an efficient blind signature that can serve as a protocol building block for performance-sensitive accountability systems.

    All special forms of digital signatures proposed in this dissertation (aggregate, multi-, and blind signatures) are the first to be constructed using LFSR sequences. Our detailed cost analysis shows that, for a desired level of security, the proposed signatures outperform existing protocols in computation cost, number of communication rounds, and storage overhead.
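
    Since every construction in the dissertation builds on LFSR sequences, a minimal example of the primitive itself may help. The sketch below implements a 16-bit Fibonacci LFSR with the common maximal-length feedback polynomial x^16 + x^14 + x^13 + x^11 + 1; the width and taps are illustrative, and the dissertation's signatures use LFSR sequences in a more general algebraic setting:

        from itertools import islice

        def lfsr16(seed: int):
            """16-bit Fibonacci LFSR; taps at 16, 14, 13 and 11 yield a
            maximal-length sequence of period 2**16 - 1 for any nonzero seed."""
            state = seed & 0xFFFF
            assert state != 0, "the all-zero state is a fixed point"
            while True:
                # Feedback bit: XOR (GF(2) sum) of the tapped bits.
                bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)
                yield state & 1

        # Draw the first eight output bits.
        print(list(islice(lfsr16(0xACE1), 8)))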

    Tracking and Mitigation of Malicious Remote Control Networks

    Full text link
    Attacks against end-users are one of the negative side effects of today’s networks. The goal of the attacker is to compromise the victim’s machine and obtain control over it. This machine is then used to carry out denial-of-service attacks, to send out spam mails, or for other nefarious purposes. From an attacker’s point of view, this kind of attack is even more efficient if she manages to compromise a large number of machines in parallel. In order to control all these machines, she establishes a "malicious remote control network", i.e., a mechanism that gives an attacker control over a large number of compromised machines for illicit activities. The most common type of these networks observed so far are so-called "botnets". Since these networks are one of the main factors behind current abuses on the Internet, we need to find novel approaches to stop them in an automated and efficient way.

    In this thesis we focus on this open problem and propose a general root cause methodology to stop malicious remote control networks. The basic idea of our method consists of three steps. In the first step, we use "honeypots" to collect information. A honeypot is an information system resource whose value lies in unauthorized or illicit use of that resource. This technique enables us to study current attacks on the Internet and we can, for example, capture samples of autonomous spreading malware ("malicious software") in an automated way. We analyze the collected data to extract information about the remote control mechanism in an automated fashion. For example, we utilize an automated binary analysis tool to find the Command & Control (C&C) server that is used to send commands to the infected machines. In the second step, we use the extracted information to infiltrate the malicious remote control networks. This can, for example, be implemented by impersonating a bot and infiltrating the remote control channel. Finally, in the third step we use the information collected during the infiltration phase to mitigate the network, e.g., by shutting down the remote control channel such that the attacker cannot send commands to the compromised machines.

    In this thesis we show the practical feasibility of this method. We examine different kinds of malicious remote control networks and discuss how we can track all of them in an automated way. As a first example, we study botnets that use a central C&C server: we illustrate how the three steps can be implemented in practice and present empirical measurement results obtained on the Internet. Second, we investigate botnets that use a peer-to-peer based communication channel. Mitigating these botnets is harder since no central C&C server exists that could be taken offline. Nevertheless, our methodology can also be applied to this kind of network, and we present empirical measurement results substantiating our method. Third, we study fast-flux service networks. The idea behind these networks is that the attacker does not directly abuse the compromised machines, but uses them to establish a proxy network on top of these machines to enable a robust hosting infrastructure. Our method can be applied to this novel kind of malicious remote control network, and we present empirical results supporting this claim. We anticipate that the methodology proposed in this thesis can also be used to track and mitigate other kinds of malicious remote control networks.
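
    The fast-flux part of the methodology relies on a recognizable DNS signature: a flux domain resolves to many distinct IPs spread over many autonomous systems, each answer carrying a very low TTL. A sketch of such a detector follows; the thresholds are illustrative and not taken from the thesis:

        def looks_fast_flux(a_records) -> bool:
            """a_records: (ip, ttl, asn) tuples from repeated lookups of one
            domain. Fast-flux networks rotate through many compromised
            proxies, so many IPs across many ASes with short TTLs appear."""
            ips = {ip for ip, _, _ in a_records}
            asns = {asn for _, _, asn in a_records}
            low_ttl = all(ttl <= 300 for _, ttl, _ in a_records)
            return len(ips) >= 10 and len(asns) >= 3 and low_ttl

        # Example: 12 distinct IPs across 5 ASes, all with 60-second TTLs.
        records = [(f"192.0.2.{i}", 60, 64500 + i % 5) for i in range(12)]
        print(looks_fast_flux(records))  # True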

    A framework for the application of network telescope sensors in a global IP network

    Get PDF
    The use of network telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of the data they collect. The research is based on a primary dataset of 40 million events spanning 50 months, collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed, ranging from low-level protocol-based analysis to higher-level analysis at the geopolitical and network-topology level. Anomalous traffic and illustrative anecdotes are explored in detail and highlighted. A discussion relating to observed bogon traffic is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-real-time analysis, and the second is a two-dimensional fractal-based plotting scheme which allows plots of the entire IPv4 address space to be produced and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented. This includes the evaluation of traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described; it is hoped their use will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation’s integrated network security system.
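
    The two-dimensional fractal plotting scheme is, in all likelihood, a space-filling-curve layout; the usual choice for rendering the whole IPv4 space is a Hilbert curve, which keeps numerically adjacent prefixes visually adjacent so scanning patterns stand out. Assuming a Hilbert layout (the thesis's exact scheme may differ), a minimal Python sketch:

        def d2xy(order: int, d: int):
            """Convert distance d along a Hilbert curve with side 2**order
            into (x, y) grid coordinates (classic iterative algorithm)."""
            n = 1 << order
            x = y = 0
            t = d
            s = 1
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:  # rotate the quadrant where required
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y

        def cell_for_prefix(ip: int):
            """Place each /16 prefix of IPv4 space on a 256x256 grid, so
            the entire address space fits in a single plot."""
            return d2xy(8, ip >> 16)

        # Where does 196.21.0.0 land on the grid?
        print(cell_for_prefix((196 << 24) | (21 << 16)))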