
    Cyber Threat Intelligence Model: An Evaluation of Taxonomies, Sharing Standards, and Ontologies within Cyber Threat Intelligence

    Cyber threat intelligence is the provision of evidence-based knowledge about existing or emerging threats. Benefits of threat intelligence include increased situational awareness, efficiency in security operations, and improved prevention, detection, and response capabilities. Processing, analyzing, and correlating vast amounts of threat information, and deriving highly contextual intelligence that can be shared and consumed in meaningful time, requires machine-understandable knowledge representation formats that are unambiguous and embed the expressivity the industry requires. To a large extent, this is achieved by technologies such as ontologies, interoperability schemas, and taxonomies. This research evaluates existing cyber-threat-intelligence-relevant ontologies, sharing standards, and taxonomies, measuring their high-level conceptual expressivity with regard to the who, what, why, where, when, and how elements of an adversarial attack, as well as courses of action and technical indicators. The results confirm that little emphasis has been placed on developing a comprehensive cyber threat intelligence ontology: existing efforts are not thoroughly designed, are non-interoperable and ambiguous, and lack semantic reasoning capability.
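
    As a rough illustration of the expressivity criteria the paper evaluates against, the who/what/why/where/when/how elements plus courses of action and technical indicators can be treated as a checklist that a schema either covers or does not. The element names, scoring function, and coverage example below are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch: scoring how well a CTI schema covers the who/what/why/
# where/when/how elements plus courses of action and technical indicators.
# Names and coverage values are illustrative, not taken from the paper.

EXPRESSIVITY_ELEMENTS = [
    "who",      # threat actor / adversary identity
    "what",     # capability, malware, tooling
    "why",      # intent, motivation
    "where",    # targeted infrastructure or geography
    "when",     # timing of the campaign
    "how",      # tactics, techniques, and procedures
    "course_of_action",
    "technical_indicators",
]

def coverage_score(schema_concepts: set) -> float:
    """Fraction of the high-level elements a schema can represent."""
    covered = [e for e in EXPRESSIVITY_ELEMENTS if e in schema_concepts]
    return len(covered) / len(EXPRESSIVITY_ELEMENTS)

# Example: a STIX-like schema that models actors, TTPs, indicators, and
# courses of action, but has no explicit motivation or timing concept.
stix_like = {"who", "what", "how", "course_of_action", "technical_indicators"}
print(f"coverage: {coverage_score(stix_like):.0%}")  # -> coverage: 62%
```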

    Cybersecurity knowledge graphs

    Cybersecurity knowledge graphs, which represent cyber-knowledge with a graph-based data model, provide holistic approaches for processing massive volumes of complex cybersecurity data derived from diverse sources. They can help security analysts obtain cyberthreat intelligence, achieve a high level of cyber-situational awareness, discover new cyber-knowledge, visualize networks, data flows, and attack paths, and understand data correlations by aggregating and fusing data. This paper reviews the most prominent graph-based data models used in this domain, along with knowledge organization systems that define the concepts and properties used in formal cyber-knowledge representation, covering both background knowledge and specific expert knowledge about an actual system or attack. The paper also discusses how cybersecurity knowledge graphs enable machine learning and facilitate automated reasoning over cyber-knowledge.
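
    A minimal sketch of the graph-based data model idea, assuming networkx as the library and purely illustrative entities and relations; the paper surveys data models and knowledge organization systems rather than any particular implementation.

```python
# Minimal sketch of a cybersecurity knowledge graph as a labeled property
# graph, using networkx. The entities and relations are illustrative, not
# drawn from the paper.
import networkx as nx

kg = nx.MultiDiGraph()

# Nodes carry a type; edges carry the relation name.
kg.add_node("APT-Example", type="ThreatActor")   # hypothetical actor
kg.add_node("Dropper.X", type="Malware")         # hypothetical malware
kg.add_node("CVE-2021-44228", type="Vulnerability")
kg.add_node("web-server-01", type="Host")

kg.add_edge("APT-Example", "Dropper.X", relation="uses")
kg.add_edge("Dropper.X", "CVE-2021-44228", relation="exploits")
kg.add_edge("CVE-2021-44228", "web-server-01", relation="affects")

# Simple traversal: which hosts are reachable from a given threat actor?
reachable_hosts = [
    n for n in nx.descendants(kg, "APT-Example")
    if kg.nodes[n]["type"] == "Host"
]
print(reachable_hosts)  # -> ['web-server-01']
```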

    Stargazer: Long-Term and Multiregional Measurement of Timing/Geolocation-Based Cloaking

    Malicious hosts have come to play a significant and varied role in today's cyber attacks. Some of these hosts are equipped with a technique called cloaking, which discriminates between access from potential victims and access from others, returning malicious content only to potential victims. This is a serious threat because it can evade detection by security vendors and researchers and cause serious damage. As such, cloaking is being extensively investigated, especially for phishing sites. We are currently engaged in a long-term cloaking study of a broader range of threats. In the present study, we implemented Stargazer, which actively monitors malicious hosts and detects geographic and temporal cloaking, and collected 30,359,410 observations between November 2019 and February 2022 for 18,397 targets from 13 sites where our sensors are installed. Our analysis confirmed that cloaking techniques are widely abused, i.e., not only in the context of specific threats such as phishing. This includes geographic and time-based cloaking, which is difficult to detect with single-site or one-shot observations. Furthermore, we found that malicious hosts that perform cloaking include hosts that survive for relatively long periods of time and hosts whose content is not present in VirusTotal. This suggests that it is not easy to observe and analyze cloaking malicious hosts with existing technologies. The results of this study have deepened our understanding of various types of cloaking, including geographic and temporal ones, and will help in the development of future cloaking detection methods.
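
    The core idea behind multi-vantage-point cloaking detection can be sketched as follows. This is only an assumed, simplified illustration and not Stargazer's actual implementation; the proxy endpoints are hypothetical and the content-hash comparison is deliberately naive.

```python
# Hedged sketch of multi-vantage cloaking detection: fetch the same URL from
# several regions and flag divergent content. Not Stargazer's implementation;
# proxy endpoints and the comparison rule are assumptions.
import hashlib
import requests

VANTAGE_PROXIES = {
    "jp": "http://proxy-jp.example:8080",   # hypothetical per-region proxies
    "us": "http://proxy-us.example:8080",
    "de": "http://proxy-de.example:8080",
}

def fetch_hash(url: str, proxy: str) -> str:
    """Return a digest of the response body as seen through one vantage point."""
    resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
    return hashlib.sha256(resp.content).hexdigest()

def looks_cloaked(url: str) -> bool:
    """Geographic cloaking is suspected when vantage points disagree."""
    digests = {region: fetch_hash(url, proxy)
               for region, proxy in VANTAGE_PROXIES.items()}
    return len(set(digests.values())) > 1

# Temporal cloaking can be probed the same way: repeat the fetch over time
# from a fixed vantage point and compare digests across observations.
```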

    A Machine Learning-Based Empirical Evaluation of Cyber Threat Actors' High-Level Attack Patterns over Low-Level Attack Patterns in Attributing Attacks

    Cyber threat attribution is the process of identifying the actor behind an attack incident in cyberspace. Accurate and timely threat attribution plays an important role in deterring future attacks by enabling appropriate and timely defense mechanisms. Manual analysis of attack patterns gathered from honeypot deployments, intrusion detection systems, firewalls, and trace-back procedures is still the preferred method of security analysts for cyber threat attribution. Such attack patterns are low-level Indicators of Compromise (IOC). They represent the Tactics, Techniques, and Procedures (TTPs) and software tools used by adversaries in their campaigns. Adversaries rarely re-use them, and they can be manipulated, resulting in false and unfair attribution. Empirically evaluating and comparing the effectiveness of both kinds of IOC requires addressing two problems. First, recent research has discussed the ineffectiveness of low-level IOC for cyber threat attribution only intuitively; an empirical evaluation of their effectiveness on a real-world dataset is missing. Second, the available dataset for high-level IOC has a single instance per predictive class label and therefore cannot be used directly to train machine learning models. To address these problems, we empirically evaluate the effectiveness of low-level IOC on a real-world dataset built specifically for comparative analysis with high-level IOC. The experimental results show that models trained on high-level IOC attribute cyberattacks with an accuracy of 95%, compared to 40% for models trained on low-level IOC.
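
    The attribution-as-classification setup can be illustrated with a hedged sketch: one-hot encode high-level IOC (TTP identifiers) per incident and train a classifier to predict the actor label. The technique IDs, actor names, and model choice below are synthetic assumptions, not the paper's dataset or models.

```python
# Hedged sketch of attribution as supervised classification over high-level
# IOC (TTP) features. All data here is synthetic and illustrative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Each incident is described by the set of ATT&CK-style technique IDs
# observed, and labeled with the actor believed responsible (synthetic data).
incidents = [
    ({"T1566", "T1059", "T1071"}, "ActorA"),
    ({"T1566", "T1204", "T1041"}, "ActorA"),
    ({"T1190", "T1505", "T1071"}, "ActorB"),
    ({"T1190", "T1059", "T1505"}, "ActorB"),
]
ttp_sets, actors = zip(*incidents)

mlb = MultiLabelBinarizer()
X = mlb.fit_transform(ttp_sets)          # one-hot TTP feature vectors
clf = RandomForestClassifier(random_state=0).fit(X, actors)

# Attribute a new incident from its observed techniques.
new_incident = mlb.transform([{"T1190", "T1505"}])
print(clf.predict(new_incident))         # -> likely 'ActorB'
```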

    Survey of Machine Learning Techniques for Malware Analysis

    Coping with malware is getting more and more challenging, given the relentless growth in its complexity and volume. One of the most common approaches in the literature is to use machine learning techniques to automatically learn models and patterns behind such complexity, and to develop technologies that keep pace with the speed at which novel malware is developed. This survey aims to provide an overview of how machine learning has been used so far in the context of malware analysis. We systematize the surveyed papers according to their objectives (i.e., the expected output of the analysis), the information about malware they use (i.e., the features), and the machine learning techniques they employ (i.e., the algorithms used to process the input and produce the output). We also outline a number of problems concerning the datasets used in the considered works, and finally introduce the novel concept of malware analysis economics, i.e., the study of trade-offs among key metrics such as analysis accuracy and economic costs.
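
    The survey's framing of objective, features, and algorithm can be illustrated with a hedged sketch: static byte-histogram features feeding a classifier whose objective is malware detection. The data below is synthetic and the pipeline is an assumption, not one drawn from the surveyed works.

```python
# Hedged illustration of the (features, algorithm, objective) framing:
# static byte-histogram features feed a classifier whose objective is
# malware detection. Data is synthetic; no real samples are used.
import numpy as np
from sklearn.linear_model import LogisticRegression

def byte_histogram(sample: bytes) -> np.ndarray:
    """A common static feature: normalized frequency of each byte value."""
    counts = np.bincount(np.frombuffer(sample, dtype=np.uint8), minlength=256)
    return counts / max(len(sample), 1)

# Synthetic stand-ins for benign and malicious binaries.
rng = np.random.default_rng(0)
benign = [rng.integers(0, 128, 4096, dtype=np.uint8).tobytes() for _ in range(20)]
malicious = [rng.integers(128, 256, 4096, dtype=np.uint8).tobytes() for _ in range(20)]

X = np.array([byte_histogram(s) for s in benign + malicious])
y = [0] * len(benign) + [1] * len(malicious)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([byte_histogram(malicious[0])]))  # -> [1]
```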

    On the Use of Machine Learning for Identifying Botnet Network Traffic
