1,573 research outputs found

    Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences

    In this survey, we first briefly review the current state of cyber attacks, highlighting significant recent changes in how and why such attacks are performed. We then investigate the mechanics of malware command and control (C2) establishment: we provide a comprehensive review of the techniques attackers use to set up such a channel and to hide its presence from the attacked parties and the security tools they use. We then switch to the defensive side of the problem and review approaches that have been proposed for the detection and disruption of C2 channels. We also map such techniques to widely adopted security controls, emphasizing gaps or limitations (and success stories) in current best practices. Comment: Work commissioned by CPNI, available at c2report.org. 38 pages. Listing abstract compressed from version appearing in report.
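    The survey itself covers the detection techniques; as a flavour of one widely used C2 detection heuristic (not taken from the report), the sketch below flags algorithmically generated domain names by their character-level Shannon entropy, a common signal for DGA-based C2 channels. The threshold and the example domains are illustrative assumptions.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical queried domains: benign names have low per-character
# entropy; algorithmically generated C2 domains tend to score higher.
domains = ["example.com", "wikipedia.org", "xk7qz9v2mf4labc.net"]
for d in domains:
    label = d.split(".")[0]
    flag = "suspicious" if shannon_entropy(label) > 3.5 else "ok"
    print(f"{d}: {shannon_entropy(label):.2f} bits/char ({flag})")
```

    In practice such a heuristic is only one weak signal and is combined with others (query timing, NXDOMAIN rates, TLS fingerprints) before raising an alert.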

    A Study of Very Short Intermittent DDoS Attacks on the Performance of Web Services in Clouds

    Distributed Denial-of-Service (DDoS) attacks on web applications such as e-commerce are increasing in size, scale, and frequency. Even emerging elastic cloud computing cannot defend against ever-evolving new types of DDoS attacks, since they exploit newly discovered network or system vulnerabilities, even in the cloud platform, bypassing not only state-of-the-art defense mechanisms but also the elasticity mechanisms of cloud computing. In this dissertation, we focus on a new type of low-volume DDoS attack, Very Short Intermittent DDoS Attacks, which can hurt the performance of web applications deployed in the cloud by transiently saturating the critical bottleneck resource of the target systems, either through external attack HTTP requests from outside the cloud or through internal resource contention inside the cloud. We explore external attacks by modeling n-tier web applications with queuing network theory and implementing an attack framework based on feedback control theory. We explore internal attacks by investigating and exploiting resource contention and performance interference to locate a target VM (virtual machine) and degrade its performance.
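    As a minimal illustration of the transient-saturation effect described above (not the dissertation's queuing-network model), the sketch below simulates a fixed-capacity bottleneck under hypothetical bursty arrivals: brief bursts build a large request backlog even though the average request rate stays below capacity, which is what makes such attacks low-volume yet damaging.

```python
# Minimal discrete-time sketch: a bottleneck serves CAPACITY requests
# per tick. Short attack bursts push arrivals past capacity, so the
# backlog (and hence response time) spikes even though the *average*
# load stays below capacity. All constants are illustrative.
CAPACITY = 100           # requests the bottleneck can serve per tick
NORMAL, BURST = 80, 400  # legitimate load vs. load during a burst tick

def simulate(ticks, burst_every, burst_len):
    queue, backlog = 0, []
    for t in range(ticks):
        arrivals = BURST if t % burst_every < burst_len else NORMAL
        queue = max(0, queue + arrivals - CAPACITY)
        backlog.append(queue)
    return backlog

backlog = simulate(ticks=60, burst_every=20, burst_len=1)
print("peak backlog:", max(backlog))                      # → 300
print("mean arrivals/tick:", (BURST + NORMAL * 19) / 20)  # → 96.0 < CAPACITY
```

    The spike-and-drain pattern is why rate-based detectors tuned to average load miss these attacks: the mean stays legitimate-looking while tail latency degrades.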

    Analysis and Classification of Current Trends in Malicious HTTP Traffic

    Web applications are highly prone to coding imperfections that lead to hacker-exploitable vulnerabilities. The contribution of this thesis is a detailed analysis of malicious HTTP traffic based on data collected from four advertised high-interaction honeypots, which hosted different Web applications, each over a period of almost four months. We extract features from Web server logs that characterize malicious HTTP sessions in order to represent them as data vectors in four fully labeled datasets. Our results show that the supervised learning methods Support Vector Machines (SVM) and the Decision Tree-based J48 and PART can be used to efficiently distinguish attack sessions from vulnerability scan sessions, as well as to classify twenty-two different types of malicious activities with a high probability of detection and a very low probability of false alarms in most cases. Furthermore, feature selection methods can be used to select important features in order to reduce the computational complexity of the learners.
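    To illustrate the idea of classifying sessions from log-derived feature vectors (the thesis uses SVM, J48, and PART; the nearest-centroid classifier and the feature values below are simplified stand-ins), a minimal sketch:

```python
# Toy stand-in for the thesis's supervised learners: a nearest-centroid
# classifier over hypothetical per-session features
# (num_requests, kilobytes_transferred, duration_seconds).
def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(x, centroids):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

train = {
    "vuln_scan": [(900, 40, 60), (1100, 55, 90)],   # many requests, little data
    "attack":    [(30, 500, 300), (45, 700, 420)],  # few requests, heavy payloads
}
centroids = {label: centroid(rows) for label, rows in train.items()}
print(classify((1000, 50, 70), centroids))  # → vuln_scan
print(classify((40, 600, 350), centroids))  # → attack
```

    The real learners draw far more flexible decision boundaries, but the pipeline shape is the same: log lines become labeled feature vectors, and a trained model assigns each new session a class.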

    Adversarial behaviours knowledge area

    The technological advancements witnessed by our society in recent decades have brought improvements in our quality of life, but they have also created a number of opportunities for attackers to cause harm. Before the Internet revolution, most crime and malicious activity generally required a victim and a perpetrator to come into physical contact, and this limited the reach that malicious parties had. Technology has removed the need for physical contact to perform many types of crime, and now attackers can reach victims anywhere in the world, as long as they are connected to the Internet. This has revolutionised the characteristics of crime and warfare, allowing operations that would not have been possible before. In this document, we provide an overview of the malicious operations that are happening on the Internet today. We first provide a taxonomy of malicious activities based on the attacker’s motivations and capabilities, and then move on to the technological and human elements that adversaries require to run a successful operation. We then discuss a number of frameworks that have been proposed to model malicious operations. Since adversarial behaviours are not a purely technical topic, we draw from research in a number of fields (computer science, criminology, war studies). While doing this, we discuss how these frameworks can be used by researchers and practitioners to develop effective mitigations against malicious online operations. Published version.

    Security Enhanced Applications for Information Systems

    Every day, more users access services and electronically transmit information, which is usually disseminated over insecure networks and processed by websites and databases that lack proper security protection mechanisms and tools. This may have an impact on both the users’ trust and the reputation of the system’s stakeholders. Designing and implementing security enhanced systems is therefore of vital importance. This book aims to present a number of innovative security enhanced applications. It is titled “Security Enhanced Applications for Information Systems” and includes 11 chapters. The book is a quality guide for teaching purposes as well as for young researchers, since it presents leading innovative contributions on security enhanced applications in various Information Systems. It covers cases based on standalone, network, and Cloud environments.

    Feature modeling and cluster analysis of malicious Web traffic

    Many attackers find Web applications to be attractive targets since they are widely used and have many vulnerabilities to exploit. The goal of this thesis is to study patterns of attacker activity on typical Web-based systems using four data sets collected by honeypots, each over a period of almost four months. The contributions of our work include cluster analysis and modeling of the features of malicious Web traffic. Some of our main conclusions are: (1) Features of malicious sessions, such as Number of Requests, Bytes Transferred, and Duration, follow skewed distributions, including heavy-tailed ones. (2) The number of requests per unique attacker also follows skewed, including heavy-tailed, distributions, with a small number of attackers submitting most of the malicious traffic. (3) Cluster analysis provides an efficient way to distinguish between attack sessions and vulnerability scan sessions.
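    Conclusion (2) can be illustrated with a toy computation: given hypothetical per-attacker request counts with a heavy-tailed shape (the counts below are invented, not from the thesis datasets), a handful of attackers dominate the total traffic.

```python
# Illustrates the heavy-tail conclusion: with Pareto-like per-attacker
# request counts, a few attackers account for most malicious traffic.
requests_per_attacker = [5000, 1200, 400, 90, 60, 40, 25, 15, 10, 10]

total = sum(requests_per_attacker)
top3 = sum(sorted(requests_per_attacker, reverse=True)[:3])
print(f"top 3 of {len(requests_per_attacker)} attackers generate "
      f"{100 * top3 / total:.0f}% of requests")  # → 96%
```

    This concentration is operationally useful: blocking or rate-limiting a small set of sources removes most of the malicious volume.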

    Analysis of attacks on Web based applications

    As the technology used to power Web-based applications continues to evolve, new security threats are emerging. Web 2.0 technology provides attackers with a whole new array of vulnerabilities to exploit. In this thesis, we present an analysis of the attacker activity aimed at a typical Web server, based on data collected on two high-interaction honeypots over a one-month period. The configuration of the honeypots resembles the typical three-tier architecture of many real-world Web servers. Our honeypots ran on the Windows XP operating system and featured attractive attack targets such as the Microsoft IIS Web server, MySQL database, and two Web 2.0-based applications (WordPress and MediaWiki). This configuration allows for attacks on a component directly or through the other components. Our analysis includes detailed inspection of the network traffic and IIS logs, as well as investigation of the System logs where appropriate. We also develop a pattern recognition approach to classify TCP connections as port scans or vulnerability scans/attacks. Some of the conclusions of our analysis are: (1) the vast majority of malicious traffic was over the TCP protocol, (2) the majority of malicious traffic was targeted at Windows file sharing, HTTP, and SSH ports, (3) most attackers found our Web server through search-based strategies rather than IP-based strategies, (4) most of the malicious traffic was generated by a few unique attackers.
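    A simplified stand-in for the classification step (the rule, thresholds, and connection tuples below are illustrative assumptions, not the thesis's actual pattern recognition approach): a source that touches many distinct ports while sending little payload looks like a port scan, while one that sends substantial payloads to few ports looks like a vulnerability scan or attack.

```python
# Hedged sketch: label each source IP from its connection records.
# Connection tuples are hypothetical: (src_ip, dst_port, payload_bytes).
from collections import defaultdict

def label_sources(connections, port_threshold=10, byte_threshold=100):
    ports, payload = defaultdict(set), defaultdict(int)
    for src, dport, nbytes in connections:
        ports[src].add(dport)
        payload[src] += nbytes
    return {
        src: "port_scan"
        if len(ports[src]) >= port_threshold and payload[src] < byte_threshold
        else "vuln_scan_or_attack"
        for src in ports
    }

conns = [("10.0.0.5", p, 0) for p in range(1, 25)]          # sweeps 24 ports
conns += [("10.0.0.9", 80, 4096), ("10.0.0.9", 80, 2048)]   # HTTP payloads
print(label_sources(conns))
```

    Real classifiers additionally use timing, TCP flag patterns, and connection outcomes, since a slow or distributed scan can evade simple per-source thresholds.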

    Cyber Third-Party Risk Management: A Comparison of Non-Intrusive Risk Scoring Reports

    Cybersecurity is a major concern for organizations today. However, strengthening the security of an organization’s internal network may not be sufficient, since modern organizations depend on third parties, and these dependencies may open new attack paths for cybercriminals. Cyber Third-Party Risk Management (C-TPRM) is a relatively new concept in the business world. Every vendor or partner represents a potential security vulnerability and threat. Even if an organization has the best cybersecurity practices, its data, customers, and reputation may be at risk because of a third party. Organizations therefore seek effective and efficient methods to assess their partners’ cybersecurity risks. In addition to intrusive methods for assessing an organization’s cybersecurity risks, such as penetration testing, non-intrusive methods are emerging that conduct C-TPRM more easily by synthesizing publicly available information, without requiring any involvement of the subject organization. In this study, the existing C-TPRM methods built by different companies are presented and compared to discover the commonly used indicators and criteria for the assessments. Additionally, the results of different methods assessing the cybersecurity risks of a specific organization were compared to examine their reliability and consistency. The results showed that even though the individual results are similar, the security scores provided do not entirely converge.
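    One simple way to quantify how far two providers' scores converge, in the spirit of the study's consistency comparison, is a rank correlation over the same set of organizations. The scores below are invented for illustration (real providers use different scales, e.g. 0-850 vs. 0-100); Spearman's coefficient is computed by hand, assuming no tied scores.

```python
# Rank-correlate two providers' risk scores for the same organizations.
# A coefficient near 1 means the providers order organizations alike
# even if their absolute scores differ; lower values mean divergence.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n**2 - 1))

provider_a = [720, 650, 810, 590, 700]   # hypothetical 0-850 scores
provider_b = [68, 75, 90, 55, 66]        # hypothetical 0-100 scores
print(f"rank correlation: {spearman(provider_a, provider_b):.2f}")  # → 0.70
```

    Rank correlation sidesteps the scale mismatch between providers, which is why it suits this kind of cross-vendor comparison better than comparing raw scores.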

    It bends but would it break? Topological analysis of BGP infrastructures in Europe

    The Internet is often thought to be a model of resilience, due to its decentralised, organically grown architecture. This paper puts this perception into perspective through the results of a security analysis of the Border Gateway Protocol (BGP) routing infrastructure. BGP is a fundamental Internet protocol and its intrinsic fragilities have been highlighted extensively in the literature. A seldom studied aspect is how robust the BGP infrastructure actually is, as a result of nearly three decades of perpetual growth. Although global blackouts seem unlikely, local security events raise growing concerns about the robustness of the backbone. In order to better protect this critical infrastructure, it is crucial to understand its topology in the context of the weaknesses of BGP and to identify possible security scenarios. Firstly, we establish a comprehensive threat model that classifies the main attack vectors, including but not limited to BGP vulnerabilities. We then construct maps of the European BGP backbone based on publicly available routing data. We analyse the topology of the backbone and establish several disruption scenarios that highlight the possible consequences of different types of attacks, for different attack capabilities. We also discuss existing mitigation and recovery strategies, and we propose improvements to enhance the robustness and resilience of the backbone. To our knowledge, this study is the first to combine a comprehensive threat analysis of BGP infrastructures with advanced network topology considerations. We find that the BGP infrastructure is at higher risk than previously understood, due to topologies that remain vulnerable to certain targeted attacks as a result of organic deployment over the years. Significant parts of the system are still uncharted territory, which warrants further investigation in this direction.
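    One concrete topological robustness check in the spirit of this analysis is locating cut vertices (articulation points) in an AS-level graph: autonomous systems whose removal disconnects part of the backbone, making them natural targets for the targeted attacks the paper considers. The sketch below uses a made-up toy topology, not real European BGP data.

```python
# Find articulation points (cut vertices) of an undirected AS graph
# with a standard Tarjan-style depth-first search.
def articulation_points(graph):
    disc, low, ap, time = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = time[0]
        time[0] += 1
        children = 0
        for v in graph[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # u separates v's subtree from the rest of the graph
                if parent is not None and low[v] >= disc[u]:
                    ap.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:
            ap.add(u)

    for node in graph:
        if node not in disc:
            dfs(node, None)
    return ap

# Made-up topology: AS3 is the only bridge between the two halves.
topology = {
    "AS1": ["AS2", "AS3"], "AS2": ["AS1", "AS3"],
    "AS3": ["AS1", "AS2", "AS4"],
    "AS4": ["AS3", "AS5"], "AS5": ["AS4"],
}
print(sorted(articulation_points(topology)))  # → ['AS3', 'AS4']
```

    On a real AS-level map, such cut vertices (and, more generally, small vertex cuts) pinpoint where a single compromised or disrupted operator would partition regional connectivity.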