    Artificial intelligence in the cyber domain: Offense and defense

    Artificial intelligence techniques have grown rapidly in recent years, and their applications in practice can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools, but they can equally help adversaries improve their methods of attack: malicious actors are aware of the new prospects and will probably attempt to use them for nefarious purposes. This survey paper aims to provide an overview of how artificial intelligence can be used in the context of cybersecurity, in both offense and defense.

    Efficient Collective Action for Tackling Time-Critical Cybersecurity Threats

    The shrinking latency between the discovery of a vulnerability and the build-up and dissemination of cyber-attacks has put significant pressure on cybersecurity professionals. In response, security researchers have increasingly resorted to collective action in order to reduce the time needed to characterize and tame outstanding threats. Here, we investigate how joining and contribution dynamics on MISP, an open-source threat intelligence sharing platform, influence the time needed to collectively complete threat descriptions. We find that performance, defined as the capacity to quickly characterize a threat event, is influenced (i) negatively by the event's own complexity, (ii) positively by collective action, and (iii) positively by learning, information integration and modularity. Our results inform how collective action can be organized at scale and in a modular way to overcome a large number of time-critical tasks, such as cybersecurity threats. Comment: 23 pages, 3 figures. Presented at the 21st Workshop on the Economics of Information Security (WEIS), 2022, Tulsa, US.

    A compression-based method for detecting anomalies in textual data

    Nowadays, information and communications technology systems are fundamental assets of our social and economic model, and thus they should be properly protected against the malicious activity of cybercriminals. Defence mechanisms are generally articulated around tools that trace and store information in several ways, the simplest one being the generation of plain-text files known as security logs. Such log files are usually inspected, in a semi-automatic way, by security analysts to detect events that may affect system integrity, confidentiality and availability. On this basis, we propose a parameter-free method to detect security incidents from structured text regardless of its nature. We use the Normalized Compression Distance to obtain a set of features that can be used by a Support Vector Machine to classify events from a heterogeneous cybersecurity environment. In particular, we explore and validate the application of our method in four different cybersecurity domains: HTTP anomaly identification, spam detection, Domain Generation Algorithm tracking and sentiment analysis. The results obtained show the validity and flexibility of our approach in different security scenarios with a low configuration burden. This research has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No. 872855 (TRESCA project), from the Comunidad de Madrid (Spain) under the projects CYNAMON (P2018/TCS-4566) and S2017/BMD-3688, co-financed with FSE and FEDER EU funds, by the Consejo Superior de Investigaciones Científicas (CSIC) under the project LINKA20216 (“Advancing in cybersecurity technologies”, i-LINK+ program), and by Spanish project MINECO/FEDER TIN2017-84452-
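    The feature construction at the heart of this abstract, the Normalized Compression Distance, can be sketched in a few lines. The reference "anchor" strings below are hypothetical stand-ins; the paper's actual anchor selection and the downstream SVM classifier are omitted:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size of data; zlib stands in for any real-world compressor."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: small for similar strings,
    close to 1 for unrelated ones."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Feature vector for one log line: its NCD to a few reference samples
# (illustrative anchors, not the paper's configuration).
anchors = [b"GET /index.html HTTP/1.1", b"SELECT * FROM users WHERE id=1"]
line = b"GET /index.html?page=2 HTTP/1.1"
features = [ncd(line, a) for a in anchors]
```

The method is parameter-free in the sense that the compressor needs no tuning to the data's nature, which is why the same pipeline transfers across HTTP logs, spam, DGA domains and sentiment text.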

    Anomaly Detection and Encrypted Programming Forensics for Automation Controllers

    Securing the critical infrastructure of the United States is of utmost importance in ensuring the security of the nation. To secure this complex system, a structured approach such as the NIST Cybersecurity Framework is used, but systems are only as secure as the sum of their parts. Understanding the capabilities of individual devices, developing tools to help detect misoperations, and providing forensic evidence for incident response are all essential to mitigating risk. This thesis examines the SEL-3505 RTAC to demonstrate the importance of existing security capabilities, as well as to create new processes and tools that support the NIST Framework. The research examines the potential pitfalls of having small form-factor devices in poorly secured and geographically disparate locations. Additionally, the research builds a data-collection framework to provide a proof-of-concept anomaly detection system that detects network intrusions by recognizing changes in the task time distribution. Statistical tests distinguish between normal and anomalous behaviour. The high true-positive rates and low false-positive rates show the merit of such an anomaly detection system. Finally, the work presents a network forensic process for recreating control logic from encrypted programming traffic.
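    The idea of flagging intrusions from a shifted task-time distribution can be illustrated with a two-sample Kolmogorov-Smirnov statistic. The test choice, timing values and the 0.2 decision threshold here are illustrative assumptions, not the thesis's calibrated procedure:

```python
import bisect
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of the two samples."""
    a, b = sorted(a), sorted(b)
    def ecdf(xs, v):
        return bisect.bisect_right(xs, v) / len(xs)  # fraction of samples <= v
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

random.seed(0)
baseline  = [random.gauss(10.0, 0.5) for _ in range(500)]  # nominal task times (ms)
normal    = [random.gauss(10.0, 0.5) for _ in range(500)]  # same controller logic
anomalous = [random.gauss(12.0, 0.5) for _ in range(500)]  # injected logic shifts timing

d_normal = ks_statistic(baseline, normal)        # small: same distribution
d_anomalous = ks_statistic(baseline, anomalous)  # large: flag as anomalous
```

A window would be flagged when its statistic exceeds a threshold chosen for the desired false-positive rate, mirroring the true-positive/false-positive trade-off reported in the thesis.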

    Building Resilience in Cybersecurity -- An Artificial Lab Approach

    Based on classical contagion models, we introduce an artificial cyber lab: the digital twin of a complex cyber system in which possible cyber resilience measures may be implemented and tested. Using the lab, in numerical case studies, we identify two classes of measures to control systemic cyber risks: security- and topology-based interventions. We discuss the implications of our findings for selected real-world cybersecurity measures currently applied in insurance and regulation practice or under discussion for future cyber risk control. To this end, we provide a brief overview of current cybersecurity regulation and emphasize the role of insurance companies as private regulators. Moreover, from an insurance point of view, we make a first attempt to design systemic cyber risk obligations and to measure the systemic risk contribution of individual policyholders.
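    The distinction between the two intervention classes can be sketched with a toy SIR-style contagion on a hypothetical clustered network (four cliques joined by bridge edges). This is a minimal illustration of the idea, not the paper's model or topology:

```python
import random

def outbreak_size(adj, p_infect, rng):
    """One SIR-style cascade: each node is infectious for a single step and
    infects each susceptible neighbour with probability p_infect."""
    infected, removed = {0}, set()
    while infected:
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and v not in removed and rng.random() < p_infect:
                    new.add(v)
        removed |= infected
        infected = new
    return len(removed)

def avg_outbreak(adj, p_infect, trials=300, seed=7):
    rng = random.Random(seed)
    return sum(outbreak_size(adj, p_infect, rng) for _ in range(trials)) / trials

# Four 5-node cliques joined by three bridge edges (a hypothetical topology).
adj = {i: set() for i in range(20)}
def link(a, b):
    adj[a].add(b); adj[b].add(a)
for base in (0, 5, 10, 15):
    for i in range(base, base + 5):
        for j in range(i + 1, base + 5):
            link(i, j)
bridges = [(0, 5), (5, 10), (10, 15)]
for a, b in bridges:
    link(a, b)

baseline = avg_outbreak(adj, p_infect=0.8)
secured  = avg_outbreak(adj, p_infect=0.3)   # security-based: harden every node
cut = {u: {v for v in ns if (u, v) not in bridges and (v, u) not in bridges}
       for u, ns in adj.items()}
isolated = avg_outbreak(cut, p_infect=0.8)   # topology-based: cut the bridges
```

Both interventions shrink the average outbreak, but for different reasons: hardening lowers the transmission probability everywhere, while cutting bridges caps the outbreak at the size of one cluster.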

    RCVaR: an Economic Approach to Estimate Cyberattack Costs using Data from Industry Reports

    Digitization increases business opportunities and the risk of companies being victims of devastating cyberattacks. Therefore, managing risk exposure and cybersecurity strategies is essential for digitized companies that want to survive in competitive markets. However, understanding company-specific risks and quantifying their associated costs is not trivial. Current approaches fail to provide individualized and quantitative monetary estimations of cybersecurity impacts, and due to limited resources and technical expertise, SMEs and even large companies struggle to quantify their cyberattack exposure. Therefore, novel approaches must be developed to support the understanding of the financial loss caused by cyberattacks. This article introduces the Real Cyber Value at Risk (RCVaR), an economic approach for estimating cybersecurity costs using real-world information from public cybersecurity reports. RCVaR identifies the most significant cyber risk factors from various sources and combines their quantitative results to estimate company-specific cyberattack costs. Furthermore, RCVaR extends current methods to achieve cost and risk estimations based on historical real-world data instead of only probability-based simulations. The evaluation of the approach on unseen data shows the accuracy and efficiency of the RCVaR in predicting and managing cyber risks, demonstrating that it is a valuable addition to cybersecurity planning and risk management processes.
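    The historical (rather than simulation-based) flavour of Value at Risk that RCVaR builds on can be sketched as an empirical quantile of observed losses. The loss figures below are invented for illustration, and the paper's factor-combination step from industry reports is not shown:

```python
import math

def historical_var(losses, confidence=0.95):
    """Historical Value at Risk: the loss level exceeded in only
    (1 - confidence) of the observed cases."""
    ordered = sorted(losses)
    idx = math.ceil(confidence * len(ordered)) - 1
    return ordered[idx]

def historical_cvar(losses, confidence=0.95):
    """Conditional VaR: the average loss within the worst (1 - confidence) tail."""
    var = historical_var(losses, confidence)
    tail = [loss for loss in losses if loss >= var]
    return sum(tail) / len(tail)

# Hypothetical per-incident losses (kUSD) harvested from industry reports.
losses = [12, 18, 25, 31, 40, 55, 62, 78, 90, 110,
          130, 150, 175, 200, 240, 280, 330, 400, 520, 900]
var95 = historical_var(losses)    # 95% of observed incidents cost less than this
cvar95 = historical_cvar(losses)  # average cost of the worst 5% of incidents
```

Because the quantile is taken over real reported losses rather than a fitted distribution, heavy-tailed incident costs enter the estimate directly, which is the motivation for preferring historical data over purely probability-based simulations.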

    A Comparative Study on Machine Learning Algorithms for Network Defense

    Network security specialists use machine learning algorithms to detect computer network attacks and prevent unauthorized access to their networks. Traditionally, signature and anomaly detection techniques have been used for network defense. However, detection techniques must adapt to keep pace with continuously changing security attacks, and machine learning algorithms, which learn from experience, are appropriate tools for this adaptation. In this paper, ten machine learning algorithms were trained on the labeled KDD99 dataset and then tested on a different, unlabeled dataset. The researchers investigate the speed and efficiency of these machine learning algorithms in terms of several selected benchmarks, such as time to build the model, the kappa statistic, root mean squared error, accuracy by attack class, and the percentage of correctly classified instances of the classifier algorithms.
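    Among the benchmarks listed, the kappa statistic is the least self-explanatory: it measures agreement between predictions and labels corrected for chance agreement, which matters on class-imbalanced data like KDD99. A minimal sketch, using invented attack-class labels rather than the paper's actual results:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement between labels and predictions,
    discounted by the agreement expected from class frequencies alone."""
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    p_observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_expected = sum(
        (sum(t == lab for t in y_true) / n) * (sum(p == lab for p in y_pred) / n)
        for lab in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical predictions over KDD99-style attack classes.
y_true = ["dos", "normal", "probe", "dos", "normal", "r2l"]
y_pred = ["dos", "normal", "probe", "normal", "normal", "r2l"]
kappa = cohens_kappa(y_true, y_pred)
```

A classifier that always predicts the majority class can score high raw accuracy on attack-heavy traffic yet earn a kappa near zero, which is why the paper reports both.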