
    Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences

    In this survey, we first briefly review the current state of cyber attacks, highlighting significant recent changes in how and why such attacks are performed. We then investigate the mechanics of malware command and control (C2) establishment: we provide a comprehensive review of the techniques used by attackers to set up such a channel and to hide its presence from the attacked parties and the security tools they use. We then switch to the defensive side of the problem, and review approaches that have been proposed for the detection and disruption of C2 channels. We also map such techniques to widely-adopted security controls, emphasizing gaps or limitations (and success stories) in current best practices. Comment: Work commissioned by CPNI, available at c2report.org. 38 pages. Listing abstract compressed from version appearing in report.

    Inferring malicious network events in commercial ISP networks using traffic summarisation

    With the recent increases in bandwidth available to home users, traffic rates for commercial national networks have also been increasing rapidly. This presents a problem for any network monitoring tool as the traffic rate they are expected to monitor is rising on a monthly basis. Security within these networks is paramount as they are now an accepted home of trade and commerce. Core networks have been demonstrably and repeatedly open to attack; these events have had significant material costs to high profile targets. Network monitoring is an important part of network security, providing information about potential security breaches and in understanding their impact. Monitoring at high data rates is a significant problem, both in terms of processing the information at line rates, and in terms of presenting the relevant information to the appropriate persons or systems. This thesis suggests that the use of summary statistics, gathered over a number of packets, is a sensible and effective way of coping with high data rates. A methodology for discovering which metrics are appropriate for classifying significant network events using statistical summaries is presented. It is shown that the statistical measures found with this methodology can be used effectively as a metric for defining periods of significant anomaly, and further classifying these anomalies as legitimate or otherwise. In a laboratory environment, these metrics were used to detect DoS traffic representing as little as 0.1% of the overall network traffic. The metrics discovered were then analysed to demonstrate that they are appropriate and rational metrics for the detection of network level anomalies. These metrics were shown to have distinctive characteristics during DoS by the analysis of live network observations taken during DoS events. This work was implemented and operated within a live system, at multiple sites within the core of a commercial ISP network.
The statistical summaries are generated at city-based points of presence and gathered centrally to allow for spatial and topological correlation of security events. The architecture chosen was shown to be flexible in its application. The system was used to detect the level of VoIP traffic present on the network through the implementation of packet size distribution analysis in a multi-gigabit environment. It was also used to detect unsolicited SMTP generators injecting messages into the core. Monitoring in a commercial network environment is subject to data protection legislation. Accordingly the system presented processed only network and transport layer headers, all other data being discarded at the capture interface. The system described in this thesis was operational for a period of 6 months, during which a set of over 140 network anomalies, both malicious and benign, were observed over a range of localities. The system design, example anomalies and metric analysis form the majority of this thesis.
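The summary-statistics approach described above can be sketched in a few lines: compute per-window metrics from packet headers only (here mean, variance, and packet-size entropy, which are plausible but assumed choices, not necessarily the thesis's exact metrics) and flag windows that deviate from a learned baseline.

```python
import math
from collections import Counter

def window_summary(packet_sizes):
    """Summary statistics for one window of packet sizes (header-only data)."""
    n = len(packet_sizes)
    mean = sum(packet_sizes) / n
    var = sum((s - mean) ** 2 for s in packet_sizes) / n
    counts = Counter(packet_sizes)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "var": var, "size_entropy": entropy}

def is_anomalous(summary, baseline, k=3.0):
    """Flag a window whose metrics deviate more than k baseline std-devs."""
    return any(abs(summary[m] - baseline[m]["mean"]) > k * baseline[m]["std"]
               for m in summary)
```

A baseline would be built from windows observed during normal operation; the k = 3 deviation threshold is likewise an illustrative assumption.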

    Adaptive Response System for Distributed Denial-of-Service Attacks

    The continued prevalence and severe damaging effects of Distributed Denial of Service (DDoS) attacks in today’s Internet raise growing security concerns and call for an immediate response to come up with better solutions to tackle DDoS attacks. Current DDoS prevention mechanisms are usually inflexible, and determined attackers with knowledge of these mechanisms can work around them. Most existing detection and response mechanisms are standalone systems which do not rely on adaptive updates to mitigate attacks. As different responses vary in their “leniency” in treating detected attack traffic, there is a need for an Adaptive Response System. We designed and implemented our DDoS Adaptive ResponsE (DARE) System, a distributed DDoS mitigation system capable of executing appropriate detection and mitigation responses automatically and adaptively according to the attacks. It supports easy integration of both signature-based and anomaly-based detection modules. Additionally, the design of DARE’s individual components takes into consideration the strengths and weaknesses of existing defence mechanisms, and the characteristics and possible future mutations of DDoS attacks. These components consist of an Enhanced TCP SYN Attack Detector and Bloom-based Filter, a DDoS Flooding Attack Detector and Flow Identifier, and a Non-Intrusive IP Traceback mechanism. The components work together interactively to adapt the detections and responses in accordance with the attack types. Experiments conducted on DARE show that attack detection and mitigation are successfully completed within seconds, with about 60% to 86% of the attack traffic being dropped, while availability for legitimate and new legitimate requests is maintained. DARE is able to detect and trigger appropriate responses in accordance with the attacks being launched with high accuracy, effectiveness and efficiency.
We also designed and implemented a Traffic Redirection Attack Protection System (TRAPS), a stand-alone DDoS attack detection and mitigation system for IPv6 networks. In TRAPS, the victim under attack verifies the authenticity of the source by performing virtual relocations to differentiate legitimate traffic from attack traffic. TRAPS requires minimal deployment effort and does not require modifications to the Internet infrastructure due to its incorporation of the Mobile IPv6 protocol. Experiments to test the feasibility of TRAPS were carried out in a testbed environment to verify that it would work with the existing Mobile IPv6 implementation. It was observed that the operations of each module functioned correctly and that TRAPS was able to successfully mitigate an attack launched with spoofed source IP addresses.
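As an illustration of the Bloom-based filtering idea, the sketch below tracks half-open TCP connections in a counting Bloom filter: increment on SYN, decrement when the handshake completes, and flag a destination whose unmatched-SYN estimate grows large. The counting variant, hash construction, and threshold are assumptions made for illustration, not DARE's actual design.

```python
import hashlib

class CountingBloom:
    """Counting Bloom filter: an approximate multiset of half-open connections."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.counts = [0] * size

    def _slots(self, key):
        # Derive k independent slot indices from one key (illustrative scheme).
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, key):          # seen a SYN for this destination
        for s in self._slots(key):
            self.counts[s] += 1

    def remove(self, key):       # handshake completed, retire the half-open entry
        for s in self._slots(key):
            if self.counts[s] > 0:
                self.counts[s] -= 1

    def estimate(self, key):
        # Minimum over slots bounds the true count from above.
        return min(self.counts[s] for s in self._slots(key))

def syn_flood_suspected(bloom, dst, threshold=100):
    """A destination with many unmatched SYNs is a likely SYN-flood victim."""
    return bloom.estimate(dst) >= threshold
```

The filter stores only small counters rather than per-connection state, which is what makes it attractive at high traffic rates.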

    Security Analysis and Improvement Model for Web-based Applications

    Today the web has become a major conduit for information. As the World Wide Web's popularity continues to increase, information security on the web has become an increasing concern. Web information security is related to availability, confidentiality, and data integrity. According to reports from http://www.securityfocus.com in May 2006, operating systems account for 9% of vulnerabilities, web-based software systems for 61%, and other applications for 30%. In this dissertation, I present a security analysis model using the Markov Process Model. Risk analysis is conducted using a fuzzy logic method and information entropy theory. In a web-based application system, security risk depends mostly on the current states of the software and hardware systems, and is independent of the web application system's past states. Therefore, web-based applications can be approximately modeled by the Markov Process Model. The web-based applications can be conceptually expressed in the discrete states of (web_client_good; web_server_good, web_server_vulnerable, web_server_attacked, web_server_security_failed; database_server_good, database_server_vulnerable, database_server_attacked, database_server_security_failed) as the state space of the Markov Chain. The vulnerable behavior and system response in web-based applications are analyzed in this dissertation. The analyses focus on functional availability-related aspects: the probability of reaching a particular security-failed state and the mean time to the security failure of a system. The vulnerability risk index is classified into three levels as an indicator of the level of security (low, high, and failed). An illustrative application example is provided. As the second objective of this dissertation, I propose a security improvement model for web-based applications using GeoIP services in formal methods.
In the security improvement model, web access is authenticated via role-based access control using user logins, remote IP addresses, and physical locations as subject credentials, combined with the requested objects and privilege modes. Access control algorithms are developed for subjects, objects, and access privileges. A secure implementation architecture is presented. In summary, the dissertation has developed a security analysis and improvement model for web-based applications. Future work will address Markov Process Model validation when security data collection becomes easier. The security improvement model will also be evaluated with respect to performance.
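The availability quantities named above, the probability of reaching a security-failed state and the mean time to security failure, follow from standard absorbing Markov chain algebra: with Q the transition matrix restricted to the transient states, the vector of mean absorption times t solves (I - Q) t = 1. The sketch below uses invented transition probabilities purely for illustration; they are not taken from the dissertation.

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Transient states: 0 = server_good, 1 = server_vulnerable, 2 = server_attacked.
# Q[i][j] = one-step probability among transient states; the remaining mass in
# each row is absorbed in the security-failed state (illustrative numbers).
Q = [[0.90, 0.10, 0.00],
     [0.20, 0.60, 0.20],
     [0.00, 0.30, 0.50]]

# Mean steps to security failure from each state: solve (I - Q) t = 1.
I_minus_Q = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(3)]
             for i in range(3)]
mean_time = solve(I_minus_Q, [1.0, 1.0, 1.0])
```

With these numbers the mean time to security failure is 52.5 steps from the good state, 42.5 from the vulnerable state, and 27.5 from the attacked state, matching the intuition that states closer to failure fail sooner.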

    Mitigating Denial-of-Service Attacks on VoIP Environment

    IP telephony refers to the use of Internet protocols to provide voice, video, and data in one integrated service over LANs, BNs, MANs, and WANs. VoIP provides three key benefits compared to traditional voice telephone services. First, it minimizes the need for extra wiring in new buildings. Second, it provides easy movement of telephones and the ability of phone numbers to move with the individual. Finally, VoIP is generally cheaper to operate because it requires less network capacity to transmit the same voice telephone call over an increasingly digital telephone network (FitzGerald & Dennis, 2007, p. 519). Unfortunately, the benefits of new electronic communications come with proportionate risks. Companies experience losses resulting from attacks on data networks. There are direct losses such as economic theft and theft of trade secrets and digital data, as well as indirect losses that include loss of sales, loss of competitive advantage, etc. Companies need to develop security policies to protect their businesses, but the practice of information security has become more complex than ever. This research paper covers the major DoS threats a company's VoIP environment can experience, as well as the best countermeasures that can be used to prevent them and make the VoIP environment, and therefore the company's networking environment, more secure.

    Not-a-Bot (NAB): Improving Service Availability in the Face of Botnet Attacks

    A large fraction of email spam, distributed denial-of-service (DDoS) attacks, and click-fraud on web advertisements are caused by traffic sent from compromised machines that form botnets. This paper posits that by identifying human-generated traffic as such, one can service it with improved reliability or higher priority, mitigating the effects of botnet attacks. The key challenge is to identify human-generated traffic in the absence of strong unique identities. We develop NAB ("Not-A-Bot"), a system to approximately identify and certify human-generated activity. NAB uses a small trusted software component called an attester, which runs on the client machine with an untrusted OS and applications. The attester tags each request with an attestation if the request is made within a small amount of time of legitimate keyboard or mouse activity. The remote entity serving the request sends the request and attestation to a verifier, which checks the attestation and implements an application-specific policy for attested requests. Our implementation of the attester is within the Xen hypervisor. By analyzing traces of keyboard and mouse activity from 328 users at Intel, together with adversarial traces of spam, DDoS, and click-fraud activity, we estimate that NAB reduces the amount of spam that currently passes through a tuned spam filter by more than 92%, while not flagging any legitimate email as spam. NAB delivers similar benefits to legitimate requests under DDoS and click-fraud attacks.
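A minimal sketch of the attester/verifier interaction might look as follows; the HMAC construction, the key handling, and the one-second window are illustrative assumptions (in NAB the attester key is protected beneath the untrusted OS, and the policy for unattested requests is application-specific).

```python
import hmac
import hashlib

# Illustrative constants: in a real deployment the key would never be visible
# to the untrusted OS, and the window would be tuned to real input behaviour.
ATTESTER_KEY = b"tpm-sealed-demo-key"
DELTA = 1.0  # max seconds between human input and the request

def attest(request: bytes, last_input_time: float, now: float):
    """Tag a request only if it closely follows keyboard/mouse activity."""
    if now - last_input_time > DELTA:
        return None  # no recent human activity: no attestation issued
    msg = request + str(now).encode()
    return hmac.new(ATTESTER_KEY, msg, hashlib.sha256).hexdigest()

def verify(request: bytes, timestamp: float, tag: str) -> bool:
    """Verifier recomputes the tag; unattested requests fall to site policy."""
    msg = request + str(timestamp).encode()
    expected = hmac.new(ATTESTER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Binding the attestation to the request content prevents a bot from reusing one human click to certify arbitrary traffic.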

    Resilience Strategies for Network Challenge Detection, Identification and Remediation

    The enormous growth of the Internet and its use in everyday life make it an attractive target for malicious users. As the network becomes more complex and sophisticated it becomes more vulnerable to attack. There is a pressing need for the future internet to be resilient, manageable and secure. Our research is on distributed challenge detection and is part of the EU ResumeNet project (Resilience and Survivability for Future Networking: Framework, Mechanisms and Experimental Evaluation). It aims to make networks more resilient to a wide range of challenges, including malicious attacks, misconfiguration, faults, and operational overloads. Resilience means the ability of the network to provide an acceptable level of service in the face of significant challenges; it is a superset of commonly used definitions for survivability, dependability, and fault tolerance. Our proposed resilience strategy detects a challenge situation by identifying its occurrence and impact in real time, then initiating appropriate remedial action. Action is taken autonomously to continue operations as far as possible, to mitigate the damage, and to allow an acceptable level of service to be maintained. The contribution of our work is the ability to mitigate a challenge as early as possible and to rapidly detect its root cause. Our proposed multi-stage policy-based challenge detection system also identifies both existing and unforeseen challenges; this has been studied and demonstrated with an unknown worm attack. Our multi-stage approach reduces computational complexity compared to the traditional single stage, where one particular managed object is responsible for all the functions. The approach we propose in this thesis has the flexibility, scalability, adaptability, reproducibility and extensibility needed to assist in the identification and remediation of many future network challenges.
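The multi-stage idea can be sketched as a pipeline in which a cheap first stage watches coarse aggregate metrics and escalates only suspicious traffic to a costlier per-flow stage. The metric names, thresholds, and worm heuristic below are illustrative assumptions, not the thesis's actual detection policies.

```python
def stage1_coarse(metrics, rate_limit=10_000):
    """Cheap check on aggregate link counters; True means 'escalate'."""
    return metrics["pkts_per_sec"] > rate_limit or metrics["syn_ratio"] > 0.5

def stage2_classify(flows):
    """Costlier per-flow analysis, run only for escalated traffic."""
    # Worm-like behaviour: a few sources fanning out to many destinations.
    suspects = [f for f in flows if f["unique_dsts"] > 100]
    return "worm" if suspects else "overload"

def detect(metrics, flows):
    """Two-stage pipeline: most traffic exits after the cheap stage."""
    if not stage1_coarse(metrics):
        return "normal"
    return stage2_classify(flows)
```

Because the expensive stage runs only when the cheap stage fires, the average per-packet cost stays close to the first stage's, which is the computational advantage the multi-stage design claims over a single monolithic detector.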

    Framework for botnet emulation and analysis

    Criminals use the anonymity and pervasiveness of the Internet to commit fraud, extortion, and theft. Botnets are used as the primary tool for this criminal activity. Botnets allow criminals to accumulate and covertly control multiple Internet-connected computers. They use this network of controlled computers to flood networks with traffic from multiple sources, send spam, spread infection, spy on users, commit click fraud, run adware, and host phishing sites. This presents serious privacy risks and financial burdens to businesses and individuals. Furthermore, all indicators show that the problem is worsening because the research and development cycle of the criminal industry is faster than that of security research. To enable researchers to measure botnet connection models and counter-measures, a flexible, rapidly augmentable framework for creating test botnets is provided. This botnet framework, written in the Ruby language, enables researchers to run a botnet on a closed network and to rapidly implement new communication, spreading, control, and attack mechanisms for study. This is a significant improvement over augmenting the C++ code-bases of the most popular botnets, Agobot and SDBot. Rubot allows researchers to implement new threats and their corresponding defenses before the criminal industry can. The Rubot experiment framework includes models for some of the latest trends in botnet operation, such as peer-to-peer based control, fast-flux DNS, and periodic updates. Our approach implements the key network features from existing botnets and provides the required infrastructure to run the botnet in a closed environment. Ph.D. Committee Chair: John Copeland; Committee Members: Gregory Durgin, Seymour Goodman, Henry Owen, George Riley.

    Web attack risk awareness with lessons learned from high interaction honeypots

    Master's thesis, Information Security, Universidade de Lisboa, Faculdade de Ciências, 2009. With the evolution of web 2.0, the majority of enterprises deploy their business over the Internet using web applications. These applications carry important data with crucial requirements such as confidentiality, integrity and availability. The loss of those properties directly influences the business, putting it at risk. Risk awareness provides the necessary know-how on how to act to achieve its mitigation.
In this thesis a collection of high-interaction web honeypots is deployed using multiple applications and diverse operating systems in order to analyse attacker behaviour. The use of virtualization environments, along with widely used honeypot monitoring tools, provides the necessary forensic information that helps the research community to study the modus operandi of the attacker, gathering the latest exploits and malicious tools, and to develop adequate safeguards that deal with the majority of attacking techniques. Using the detailed attack information gathered with the web honeypots, the attacking behaviour is classified across different attacking profiles to analyse the risk mitigation safeguards needed to deal with business losses. Different security frameworks commonly used by enterprises are analysed to evaluate the benefits of the honeypots' security concepts in responding to each framework's requirements and consequently mitigating the risk.

    CHID: conditional hybrid intrusion detection system for reducing false positives and resource consumption on malicious datasets

    Inspecting packets to detect intrusions faces challenges when coping with a high volume of network traffic. Packet-based detection processes every payload on the wire, which degrades the performance of a network intrusion detection system (NIDS). This issue motivates a flow-based NIDS, which reduces the amount of data to be processed by examining aggregated information about related packets. However, flow-based detection still suffers from the generation of false positive alerts due to incomplete data input. This study proposes Conditional Hybrid Intrusion Detection (CHID), which combines flow-based with packet-based detection and also aims to improve the resource consumption of the packet-based detection approach. CHID applies attribute wrapper feature evaluation algorithms that mark malicious flows for further analysis by the packet-based detection. An Input Framework approach was employed for passing packet flows between the packet-based and flow-based detections. A controlled testbed experiment was conducted to evaluate the detection performance of CHID using datasets obtained at different traffic rates. The evaluation showed that CHID gains a significant performance improvement in terms of resource consumption and packet drop rate compared to the default packet-based detection implementation. At 200 Mbps in an IRC-bot scenario, CHID reduces memory usage by 50.6% and CPU utilization by 18.1% without dropping packets. The CHID approach can mitigate the false positive rate of flow-based detection and reduce the resource consumption of packet-based detection while preserving detection accuracy, and can be considered a generic system to be applied to the monitoring of intrusion detection systems.
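The conditional hybrid idea can be sketched as follows: a flow-based stage marks suspicious flows from aggregate features, and only marked flows incur costly payload inspection. The feature thresholds and IRC-style signatures below are illustrative assumptions, not CHID's actual attribute-wrapper features.

```python
def flow_marker(flow):
    """Flow-based stage: mark flows whose aggregate features look malicious."""
    # Illustrative features: very chatty flows, or tiny command-like packets.
    return flow["pkts"] > 1000 or flow["bytes_per_pkt"] < 60

def payload_inspect(packets, signatures=(b"JOIN #botnet", b"PRIVMSG")):
    """Packet-based stage: deep payload inspection, run only for marked flows."""
    return any(sig in p for p in packets for sig in signatures)

def chid(flow, packets):
    """Conditional hybrid: payload inspection only for flow-marked traffic."""
    if not flow_marker(flow):
        return "clean"  # flow skipped entirely by the packet-based stage
    return "alert" if payload_inspect(packets) else "clean"
```

Because unmarked flows skip the packet-based stage entirely, CPU and memory use scale with the (small) fraction of flows marked as suspicious rather than with total traffic, which is the source of the reported resource savings.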