200 research outputs found

    Automated CVE Analysis for Threat Prioritization and Impact Prediction

    Full text link
    Common Vulnerabilities and Exposures (CVE) records are pivotal inputs to proactive cybersecurity measures, including service patching, security hardening, and more. However, CVEs typically offer low-level, product-oriented descriptions of publicly disclosed cybersecurity vulnerabilities, often lacking the attack-semantic information required for comprehensive weakness characterization and threat impact estimation. This insight is essential for CVE prioritization and the identification of potential countermeasures, particularly when dealing with a large number of CVEs. Current industry practice involves manually evaluating CVEs to assess their attack severity using the Common Vulnerability Scoring System (CVSS) and mapping them to the Common Weakness Enumeration (CWE) to identify potential mitigations. Unfortunately, this manual analysis is a major bottleneck in the vulnerability analysis process, slowing proactive cybersecurity efforts and introducing the potential for human error. In this research, we introduce a novel predictive model and tool, CVEDrill, which revolutionizes CVE analysis and threat prioritization. CVEDrill accurately estimates the CVSS vector for precise threat mitigation and priority ranking, and seamlessly automates the classification of CVEs into the appropriate CWE hierarchy classes. By harnessing CVEDrill, organizations can implement cybersecurity countermeasure mitigation with greater accuracy and timeliness, surpassing in this domain the capabilities of state-of-the-art tools such as ChatGPT.
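To make concrete what CVEDrill's output buys an organization, the sketch below shows how a predicted CVSS v3.1 vector string maps deterministically to a base score for priority ranking. This is not the authors' model, only the standard scoring arithmetic from the CVSS v3.1 specification, restricted for brevity to scope-unchanged vectors.

```python
import math

# Metric weights from the CVSS v3.1 specification (scope unchanged).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # scope-unchanged values
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(value: float) -> float:
    """Round up to one decimal place, as defined in the CVSS v3.1 spec."""
    scaled = round(value * 100000)
    if scaled % 10000 == 0:
        return scaled / 100000
    return (math.floor(scaled / 10000) + 1) / 10.0

def base_score(vector: str) -> float:
    """Compute the CVSS v3.1 base score for a scope-unchanged vector."""
    m = dict(part.split(":") for part in vector.split("/")[1:])
    iss = 1 - (1 - WEIGHTS["C"][m["C"]]) * (1 - WEIGHTS["I"][m["I"]]) \
            * (1 - WEIGHTS["A"][m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * (WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                             * WEIGHTS["PR"][m["PR"]] * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

Because the vector-to-score mapping is mechanical, the hard part a tool like CVEDrill automates is predicting the vector itself from the CVE description.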

    Countering Cybersecurity Vulnerabilities in the Power System

    Get PDF
    Security vulnerabilities in software pose an important threat to power grid security and can be exploited by attackers if not properly addressed. Many vulnerabilities are discovered every month, and all of them must be remediated in a timely manner to reduce the chance of exploitation. In current practice, security operators must manually analyze each vulnerability present in their assets and determine remediation actions within a short time period, which demands a tremendous amount of human resources from electric utilities. To solve this problem, we propose a machine learning-based automation framework that automates vulnerability analysis and determines remediation actions for electric utilities. The determined remediation actions are then applied to the system to remediate the vulnerabilities. However, not all vulnerabilities can be remediated quickly due to limited resources, and the order in which remediation actions are applied significantly affects the system's risk level, so it is important to schedule which vulnerabilities should be remediated first. We model this as a scheduling optimization problem: remediation actions are ordered to minimize total risk, using each vulnerability's impact and probability of being exploited. In addition, an electric utility needs to know whether vulnerabilities have already been exploited in its own power system. An exploited vulnerability must be addressed immediately, so it is important to identify whether attackers have already taken advantage of some vulnerabilities to launch attacks. Different vulnerabilities may require different identification methods.
In this dissertation, we explore identifying exploited vulnerabilities by detecting and localizing false data injection attacks, with a case study in the Automatic Generation Control (AGC) system, a key control system for maintaining the power system's balance. Malicious measurements can be injected into exploited devices to mislead the AGC into making false power generation adjustments that harm power system operations. We propose Long Short-Term Memory (LSTM) neural network-based methods and a Fourier transform-based method to detect and localize such false data injection attacks. Detection and localization of these attacks provide further information to better prioritize vulnerability remediation actions.
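The scheduling idea above can be illustrated with a minimal sketch. If each unpatched vulnerability accrues risk at a rate of impact times exploitation probability until its remediation completes, total risk is a weighted sum of completion times, which the classic Smith's rule minimizes. This is an assumption-laden simplification of the dissertation's optimization problem, not its actual formulation; the field names and numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    name: str
    impact: float     # consequence if exploited (e.g., a CVSS impact subscore)
    p_exploit: float  # estimated probability of exploitation
    hours: float      # effort to remediate

def schedule(vulns):
    """Order remediation to minimize total accumulated risk.

    Under the assumption that each unpatched vulnerability accrues risk at
    rate impact * p_exploit until its fix completes, total risk equals the
    weighted sum of completion times, minimized by Smith's rule: sort by
    risk rate per hour of remediation effort, descending.
    """
    return sorted(vulns,
                  key=lambda v: (v.impact * v.p_exploit) / v.hours,
                  reverse=True)

vulns = [
    Vuln("CVE-A", impact=9.0,  p_exploit=0.6, hours=2.0),  # 2.70 risk/hour
    Vuln("CVE-B", impact=5.0,  p_exploit=0.9, hours=1.0),  # 4.50 risk/hour
    Vuln("CVE-C", impact=10.0, p_exploit=0.1, hours=4.0),  # 0.25 risk/hour
]
print([v.name for v in schedule(vulns)])  # ['CVE-B', 'CVE-A', 'CVE-C']
```

Note that the quick, likely-exploited fix (CVE-B) jumps ahead of the higher-impact but slower one, which is the behavior a risk-rate ordering is meant to capture.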

    SecRush – New Generation Vulnerability Management Framework

    Get PDF
    Master's thesis, Information Security, 2022, Universidade de Lisboa, Faculdade de Ciências. Vulnerabilities have been increasing over the years with no signs of decreasing soon. Given this exponential growth, it is important for organizations to define a vulnerability management plan specifying what should be done when a vulnerability is encountered. However, existing plans and metrics do not fit the current reality. Existing plans are not independent of vulnerability detection tools, and the classification systems currently used (most commonly CVSS) fail to capture the variation in risk that a particular vulnerability entails for the organization: that risk is not constant, being exceptionally high when active exploitation exists, and it depends on the vulnerability's location in the network and on business needs. SecRush presents itself as a new vulnerability management framework with a new risk-based vulnerability management process. It comprises a set of procedures inspired by agile methodologies for mitigating vulnerabilities and a new classification system, SecScore, able to provide prioritization in the context of the organization. SecScore varies its ranking through temporal factors (a specific risk index depending on the organization's risk appetite and the availability of an exploit) and environmental factors (the asset's visibility to the external network and its importance to the organization's mission). This project intends not only to contribute a set of procedures independent of the security tools used, but also to improve the currently existing classification systems for prioritization, which cannot adapt to the different contexts in which they are applied.
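The abstract names SecScore's temporal and environmental factors without giving the formula, so the sketch below is a purely hypothetical scoring function in the same spirit: a base severity scaled by exploit availability, organisational risk appetite, external visibility, and asset criticality, capped at 10. Every multiplier here is an invented placeholder, not SecRush's actual arithmetic.

```python
def contextual_score(base, exploit_available, risk_appetite,
                     externally_visible, asset_criticality):
    """Hypothetical contextual priority score in the spirit of SecScore.

    All factor values are illustrative assumptions: a public exploit
    multiplies risk by 1.5, external visibility by 1.25, and the
    risk-appetite and criticality multipliers are supplied by the
    organisation. The result is capped at 10 like a CVSS score.
    """
    temporal = (1.5 if exploit_available else 1.0) * risk_appetite
    environmental = (1.25 if externally_visible else 1.0) * asset_criticality
    return min(round(base * temporal * environmental, 1), 10.0)

# Same base severity of 6.5: an internet-facing asset with a public
# exploit outranks an internal asset with no known exploit.
print(contextual_score(6.5, True, 1.0, True, 1.0))    # 10.0 (capped)
print(contextual_score(6.5, False, 1.0, False, 1.0))  # 6.5
```

The point of such a scheme, as the thesis argues, is that two vulnerabilities with identical CVSS scores can deserve very different priorities once organisational context is factored in.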

    Smart Intrusion Detection System for DMZ

    Get PDF
    Predicting network attacks and producing machine-understandable descriptions of security vulnerabilities are complex tasks for currently available Intrusion Detection Systems (IDSs). IDS software is important for an enterprise network: it logs security events occurring in the network, helps recognize malicious hacking attempts, and protects against them without requiring changes to client software. Several machine learning approaches have been applied to make IDSs better and smarter. In our work, we propose an approach for making IDSs more analytical using semantic technology. We establish a semantic connection between IDSs and the National Vulnerability Database (NVD) so that the system semantically analyzes each logged attack and can make predictions about incoming attacks or services that might be in danger. We built our ontology skeleton based on standard network security concepts, added classes and relations specific to DMZ network services, and provided an option that allows the user to update the ontology skeleton automatically according to the network's needs. Our work is evaluated and validated using four different methods: we present a prototype that works over the web; we apply the KDDCup99 dataset to the prototype; we model our system with a queuing model; and we simulate it using the AnyLogic simulator. Validation against the KDDCup99 benchmark shows good results with a low false-positive rate in attack prediction, and the queuing model allows us to predict the system's behavior in a multi-user setting under heavy network traffic.

    Vulnerability prediction for secure healthcare supply chain service delivery

    Get PDF
    Healthcare organisations constantly face sophisticated cyberattacks due to the sensitivity and criticality of patient healthcare information and the wide connectivity of medical devices. Such attacks can pose potential disruptions to critical service delivery. A number of existing works focus on using Machine Learning (ML) models for predicting vulnerability and exploitation, but most of them focus on parameterized values to predict severity and exploitability. This paper proposes a novel method that uses ontology axioms to define essential concepts related to the overall healthcare ecosystem and to ensure semantic consistency checking among those concepts. The application of ontology enables the formal specification and description of the healthcare ecosystem and of the key elements used in vulnerability assessment as a set of concepts. Such specification also strengthens the relationships between healthcare-based and vulnerability assessment concepts, in addition to supporting semantic definition and reasoning over the concepts. Our work also makes use of machine learning techniques to predict possible security vulnerabilities in healthcare supply chain services. The paper demonstrates the applicability of our work by using vulnerability datasets to predict exploitation. The results show that conceptualizing healthcare sector cybersecurity with an ontological approach provides mechanisms to better understand the correlation between the healthcare sector and the security domain, while the ML algorithms increase the accuracy of vulnerability exploitability prediction. Our results show that Linear Regression, Decision Tree and Random Forest all provided reasonable results for predicting vulnerability exploitability.
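The decision-tree family of models mentioned above can be illustrated at its smallest scale: a single split (a "stump") chosen by Gini impurity over toy vulnerability features. The features, labels and data here are invented for illustration; the paper itself trains full Linear Regression, Decision Tree and Random Forest models on real vulnerability datasets.

```python
# Toy features per CVE: (cvss_base, has_public_poc) -> exploited label (0/1).
# Illustrative data only, not from the paper's datasets.
data = [
    ((9.8, 1), 1), ((7.5, 1), 1), ((8.1, 0), 1),
    ((5.3, 0), 0), ((4.3, 0), 0), ((6.1, 1), 0), ((3.1, 0), 0),
]

def gini(labels):
    """Gini impurity of a binary label set: 2 * p * (1 - p)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_stump(data):
    """Exhaustively pick the (feature, threshold) split that minimizes
    the size-weighted Gini impurity of the two resulting partitions."""
    best = None
    for f in range(2):
        for threshold in sorted({x[f] for x, _ in data}):
            left = [y for x, y in data if x[f] <= threshold]
            right = [y for x, y in data if x[f] > threshold]
            score = (len(left) * gini(left)
                     + len(right) * gini(right)) / len(data)
            if best is None or score < best[0]:
                best = (score, f, threshold)
    return best

score, feature, threshold = best_stump(data)
print(feature, threshold)  # 0 6.1: splitting on cvss_base > 6.1 is pure here
```

A real decision tree recurses this split selection, and a random forest averages many such trees over bootstrapped samples, which is where the accuracy gains reported in the paper come from.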

    A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities

    Get PDF
    The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is trying to identify a remediation scheme specific to in-house, organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. This means one of the biggest challenges in vulnerability management relates to prioritization. Given that so few vulnerabilities are a focus of real-world attacks, a practical remediation strategy is to identify vulnerabilities likely to be exploited and focus efforts towards remediating those vulnerabilities first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible, public data sources to provide personalized, automated recommendations that an organization can use to prioritize its vulnerability management strategy will offer significant improvements over what is currently realized using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using the Normalized Discounted Cumulative Gain (nDCG). Our results show an average 71.5% to 91.3% improvement towards the identification of vulnerabilities likely to be targeted and exploited by cyber threat actors. 
The ROI of patching using our policies resulted in savings in the range of 23.3% to 25.5% in annualized unit costs. Our results demonstrate the efficiency of creating knowledge graphs that link large data sets to facilitate semantic queries and create data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
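The nDCG metric used above to measure ranking policy performance is compact enough to sketch directly: discounted cumulative gain (DCG) sums graded relevance discounted by log rank, and nDCG normalizes that by the DCG of the ideal ordering. The example relevance labels are illustrative (1 = the vulnerability at that rank was actually exploited), not data from the study.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalize DCG by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal else 0.0

# Relevance = 1 where the ranked vulnerability was actually exploited.
policy_ranking = [1, 0, 1, 0, 0]  # exploited CVEs surfaced at ranks 1 and 3
print(round(ndcg(policy_ranking), 3))
```

A policy that puts every exploited vulnerability at the top scores 1.0, so improvements over a CVSS-ordered baseline show up directly as nDCG gains.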

    Quantifying the security risk of discovering and exploiting software vulnerabilities

    Get PDF
    2016 Summer. Includes bibliographical references. Most attacks on computer systems and networks are enabled by vulnerabilities in software, so assessing the security risk associated with those vulnerabilities is important. Risk models such as the Common Vulnerability Scoring System (CVSS), the Open Web Application Security Project (OWASP) risk rating and the Common Weakness Scoring System (CWSS) have been used to qualitatively assess the security risk presented by a vulnerability. CVSS metrics are the de facto standard, and they need to be independently evaluated. In this dissertation, we propose a quantitative approach that uses actual data, mathematical and statistical modeling, data analysis, and measurement. We introduce a novel vulnerability discovery model, the Folded model, which estimates the risk of vulnerability discovery based on the number of residual vulnerabilities in a given piece of software. In addition to estimating the discovery risk for a whole system, this dissertation introduces a novel metric, time to vulnerability discovery, to assess the risk of discovery of an individual vulnerability. We also propose a novel vulnerability exploitability risk measure, Structural Severity, based on software properties: attack entry points, vulnerability location, the presence of dangerous system calls, and reachability analysis. Beyond measurement, this dissertation also proposes predicting vulnerability exploitability risk using internal software metrics. We further propose two approaches for evaluating the CVSS Base metrics. Using the availability of exploits, we first evaluated the performance of the CVSS Exploitability factor and compared it to the Microsoft (MS) rating system. The results showed that the exploitability metrics of both CVSS and MS have a high false positive rate, a finding that motivated further investigation.
To that end, we introduce vulnerability reward programs (VRPs) as a novel ground truth for evaluating the CVSS Base scores. The results show that the notable lack of exploits for high-severity vulnerabilities may be the result of prioritized fixing of those vulnerabilities.

    Design of risk assessment methodology for IT/OT systems : Employment of online security catalogues in the risk assessment process

    Get PDF
    The revolution brought about by the transition from Industry 1.0 to 4.0 has expanded cyber threats from Information Technology (IT) to Operational Technology (OT) systems. However, unlike in IT systems, identifying the relevant threats in OT is more complex, as penetration testing applications severely restrict OT availability. The complexity is compounded by the significant amount of information available in online security catalogues, such as the Common Weakness Enumeration, Common Vulnerabilities and Exposures, and Common Attack Pattern Enumeration and Classification, and by the incomplete organisation of their relationships. These issues hinder the identification of relevant threats during risk assessment of OT systems. In this thesis, a methodology is proposed to reduce these complexities and improve the relationships among online security catalogues in order to identify the cybersecurity risk of IT/OT systems. The weaknesses, vulnerabilities and attack patterns stored in the online catalogues are extracted and categorised by mapping their potential mitigations to security requirements, which are introduced in security standards the system should comply with, such as ISA/IEC 62443. The system's assets are connected to the potential threats through these security requirements, which, combined with the relationships established among the catalogues, provide the basis for a graphical representation of the results using tree-shaped graphical models. The methodology is tested on the components of an Information and Communication Technology system; the results verify that the threat identification process is simplified but highlight the need for an in-depth understanding of the system. Hence, the methodology offers a significant basis on which further work can build to standardise the risk assessment process of IT/OT systems.
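The catalogue-linking step described above amounts to graph traversal: once security requirements, CWE weaknesses, CAPEC attack patterns and CVE entries are connected by edges, the threats relevant to an asset are simply the entries reachable from its requirements. The sketch below shows that idea with invented placeholder identifiers; real edges would come from the CWE, CVE and CAPEC feeds and the mapping to ISA/IEC 62443 requirements described in the thesis.

```python
from collections import deque

# Hypothetical catalogue links; IDs are placeholders, not real entries.
edges = {
    "SR-3.2 Malware protection": ["CWE-506"],
    "CWE-506": ["CVE-2024-0001", "CAPEC-442"],
    "CAPEC-442": ["CVE-2024-0002"],
}

def reachable_threats(requirement, edges):
    """Walk the catalogue graph breadth-first from a security requirement
    to every weakness, attack pattern and vulnerability connected to it."""
    seen, queue = set(), deque([requirement])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(reachable_threats("SR-3.2 Malware protection", edges))
# ['CAPEC-442', 'CVE-2024-0001', 'CVE-2024-0002', 'CWE-506']
```

Rendering each traversal as a tree rooted at the requirement gives exactly the tree-shaped graphical models the methodology uses to present its results.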