
    An Empirical Analysis of Vulnerabilities in Python Packages for Web Applications

    This paper examines software vulnerabilities in common Python packages used particularly for web development. The empirical dataset is based on the PyPI package repository and the so-called Safety DB, which tracks vulnerabilities in selected packages within the repository. The methodological approach builds on a release-based time series analysis of the conditional probabilities for the releases of the packages to be vulnerable. According to the results, many of the observed Python vulnerabilities seem to be only modestly severe; input validation and cross-site scripting have been the most typical vulnerabilities. In terms of the time series analysis based on the release histories, only the recent past is observed to be relevant for statistical predictions; the classical Markov property holds.
    Comment: Forthcoming in: Proceedings of the 9th International Workshop on Empirical Software Engineering in Practice (IWESEP 2018), Nara, IEEE
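The release-based analysis above conditions each release's vulnerability status on the releases before it. A minimal sketch of estimating the first-order conditional probabilities (the Markov case) from a release history; the history data here is invented for illustration:

```python
from collections import Counter

def transition_probs(history):
    """Estimate first-order conditional probabilities
    P(vulnerable_t | state_{t-1}) from a release history, where each
    entry is 1 (vulnerable release) or 0 (clean release)."""
    pairs = Counter(zip(history, history[1:]))
    probs = {}
    for prev in (0, 1):
        total = pairs[(prev, 0)] + pairs[(prev, 1)]
        if total:
            probs[prev] = pairs[(prev, 1)] / total
    return probs

# invented release history: 1 = vulnerable release, 0 = clean release
history = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(transition_probs(history))
```

Under the Markov property, conditioning on two past releases instead of one would leave these probabilities essentially unchanged.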

    A preliminary analysis of vulnerability scores for attacks in wild

    NVD and Exploit-DB are the de facto standard databases used for research on vulnerabilities, and the CVSS score is the standard measure of risk. One open question is whether such databases and scores are actually representative of attacks found in the wild. To address this question we have constructed a database (EKITS) based on the vulnerabilities currently used in exploit kits from the black market and extracted another database of vulnerabilities from Symantec's Threat Database (SYM). Our final conclusion is that the NVD and EDB databases are not a reliable source of information for exploits in the wild, even after controlling for the CVSS score and exploitability subscore. A high or medium CVSS score shows significant sensitivity (i.e. prediction of attacks in the wild) only for vulnerabilities present in exploit kits (EKITS) in the black market. All datasets exhibit a low specificity.
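The sensitivity and specificity this abstract refers to can be computed from labeled vulnerability data. A toy sketch, treating a HIGH CVSS score as a positive prediction of in-the-wild exploitation; the records are invented for illustration:

```python
def sensitivity_specificity(records, high=7.0):
    """Treat CVSS >= high as a positive prediction of exploitation
    in the wild and score it against the observed exploitation labels."""
    tp = fn = tn = fp = 0
    for score, exploited in records:
        predicted = score >= high
        if exploited and predicted:
            tp += 1
        elif exploited:
            fn += 1
        elif predicted:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# invented (cvss_score, exploited_in_the_wild) pairs for illustration
records = [(9.8, True), (7.5, True), (8.0, False),
           (5.0, False), (6.4, True), (4.3, False)]
print(sensitivity_specificity(records))
```

A low specificity, as the paper reports, means many vulnerabilities with high scores are never exploited, so the score alone cannot be used to rule vulnerabilities out.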

    My Software has a Vulnerability, Should I Worry?

    Rule-based policies to mitigate software risk suggest using the CVSS score to measure individual vulnerability risk and acting accordingly: a HIGH CVSS score according to the NVD (the U.S. National Vulnerability Database) is therefore translated into a "Yes". A key issue is whether such a rule is economically sensible, in particular whether reported vulnerabilities have actually been exploited in the wild, and whether the risk score actually matches the risk of actual exploitation. We compare the NVD dataset with two additional datasets: EDB for the white market of vulnerabilities (such as those present in Metasploit), and EKITS for the exploits traded in the black market. We benchmark them against Symantec's threat explorer dataset (SYM) of actual exploits in the wild. We analyze the whole spectrum of CVSS submetrics and use these characteristics to perform a case-controlled analysis of CVSS scores (similar to those used to link lung cancer and smoking) to test their reliability as a risk factor for actual exploitation. We conclude that (a) fixing just because of a high CVSS score in NVD yields only negligible risk reduction, (b) the additional existence of proof-of-concept exploits (e.g. in EDB) may yield some additional but not large risk reduction, (c) fixing in response to presence in black markets yields the equivalent risk reduction of wearing a safety belt in cars (you might still die, but...). On the negative side, our study shows that as an industry we lack a metric with high specificity (ruling out vulnerabilities for which we shouldn't worry). In order to address the feedback from BlackHat 2013's audience, the final revision (V3) provides additional data in Appendix A detailing how the control variables in the study affect the results.
    Comment: 12 pages, 4 figures
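The case-controlled analysis mentioned above is typically summarized by an odds ratio over a 2x2 table of cases (exploited) and controls (not exploited) by exposure (e.g. a HIGH CVSS score). A minimal sketch with invented counts:

```python
def odds_ratio(a, b, c, d):
    """Case-control odds ratio for a 2x2 table:
    a = exploited with HIGH score,    b = not exploited with HIGH score,
    c = exploited without HIGH score, d = not exploited without HIGH score."""
    return (a * d) / (b * c)

# invented counts for illustration only
print(odds_ratio(30, 170, 10, 390))
```

An odds ratio well above 1 would indicate the exposure (here, a HIGH score) is associated with exploitation; the paper's point is that for NVD scores alone this association is weak.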

    The Effect of Security Education and Expertise on Security Assessments: the Case of Software Vulnerabilities

    In spite of the growing importance of software security and the industry demand for more cyber security expertise in the workforce, the effect of security education and experience on the ability to assess complex software security problems has only recently been investigated. As a proxy for the full range of software security skills, we considered the problem of assessing the severity of software vulnerabilities by means of a structured analysis methodology widely used in industry (the Common Vulnerability Scoring System (CVSS) v3), and designed a study to compare how accurately individuals with a background in information technology but different professional experience and education in cyber security are able to assess the severity of software vulnerabilities. Our results provide some structural insights into the complex relationship between the education or experience of assessors and the quality of their assessments. In particular, we find that individual characteristics matter more than professional experience or formal education; apparently it is the combination of skills one possesses (including actual knowledge of the system under study), rather than specialization or years of experience, that most influences assessment quality. Similarly, we find that the overall advantage conferred by professional expertise depends significantly on the composition of the individual's security skills as well as on the available information.
    Comment: Presented at the Workshop on the Economics of Information Security (WEIS 2018), Innsbruck, Austria, June 2018
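The CVSS v3 assessments at the heart of this study combine submetric choices into a numeric base score. A simplified sketch of the v3.1 base-score computation, restricted to the scope-unchanged case, with metric weights taken from the public specification:

```python
import math

# Metric weights from the CVSS v3.1 specification (scope-unchanged case only)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # attack vector
AC = {"L": 0.77, "H": 0.44}                        # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # privileges required
UI = {"N": 0.85, "R": 0.62}                        # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # impact weights

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, scope-unchanged case only."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # "round up" to one decimal, per the specification
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

The scope-changed case uses different PR weights and a modified impact formula, omitted here for brevity.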

    INTRUSION DEFENSE MECHANISM USING ARTIFICIAL IMMUNE SYSTEM IN CLOUD COMPUTING (CLOUD SECURITY USING COMPUTATIONAL INTELLIGENCE)

    Cloud is a general term used for organizations that host various service and deployment models. As cloud computing offers everything as a service, it suffers from serious security issues. In addition, the multitenancy facility in the cloud provides storage in third-party data centers, which is considered a serious threat. These threats can affect both the providers themselves and their customers. Hence, the complexity of the security should be increased to a great extent so that it provides an effective defense mechanism. Although data isolation is one of the remedies, it cannot be a total solution. Hence, a complete architecture is proposed to provide a complete defense mechanism. This defense mechanism ensures that threats are blocked before they invade the cloud environment. Therefore, we adopt a mechanism called the artificial immune system, which is derived from biologically inspired computing. This security strategy is based on an artificial immune algorithm.
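The abstract does not name the specific immune algorithm; a common choice in artificial immune systems is negative selection, sketched below over toy binary patterns. The "self" profiles, pattern length, and matching radius are all invented for illustration:

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def generate_detectors(self_set, n_detectors, r, length, seed=1):
    """Negative selection: randomly generate binary detectors and keep
    only those that do NOT match any 'self' pattern within Hamming
    distance r; survivors then flag non-self (anomalous) input."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        if all(hamming(cand, s) > r for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, r):
    return any(hamming(sample, d) <= r for d in detectors)

normal = {"0000", "0001"}                       # invented 'self' profiles
dets = generate_detectors(normal, 5, 1, length=4)
print(is_anomalous("0000", dets, 1))            # self is never flagged → False
```

Because detectors are trained only on "self", the scheme can flag previously unseen threats, which matches the blocking-before-invasion goal described above.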

    Vulnerability prediction for secure healthcare supply chain service delivery

    Healthcare organisations are constantly facing sophisticated cyberattacks due to the sensitivity and criticality of patient health care information and the wide connectivity of medical devices. Such attacks can pose potential disruptions to critical service delivery. There are a number of existing works that focus on using Machine Learning (ML) models for predicting vulnerability and exploitation, but most of these works focus on parameterized values to predict severity and exploitability. This paper proposes a novel method that uses ontology axioms to define essential concepts related to the overall healthcare ecosystem and to ensure semantic consistency checking among such concepts. The application of ontology enables the formal specification and description of the healthcare ecosystem and the key elements used in vulnerability assessment as a set of concepts. Such specification also strengthens the relationships that exist between healthcare-based and vulnerability assessment concepts, in addition to semantic definition and reasoning over the concepts. Our work also makes use of Machine Learning techniques to predict possible security vulnerabilities in health care supply chain services. The paper demonstrates the applicability of our work by using vulnerability datasets to predict exploitation. The results show that the conceptualization of healthcare sector cybersecurity using an ontological approach provides mechanisms to better understand the correlation between the healthcare sector and the security domain, while the ML algorithms increase the accuracy of the vulnerability exploitability prediction. Our results show that Linear Regression, Decision Tree and Random Forest provided reasonable results for predicting vulnerability exploitability.
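Exploitability prediction of the kind described above is a supervised classification task over vulnerability features. A self-contained stand-in for the paper's models (which include Linear Regression and Random Forest) is a tiny logistic regression trained by gradient descent; the features and labels below are invented for illustration:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Minimal logistic regression trained with stochastic gradient
    descent; a stand-in for the ML models used in the paper."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))      # predicted exploitation prob.
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) >= 0.5

# invented features: [base_score / 10, public_exploit_available]
X = [[0.98, 1], [0.75, 1], [0.43, 0], [0.31, 0], [0.88, 1], [0.50, 0]]
y = [1, 1, 0, 0, 1, 0]                      # 1 = exploited
w, b = train_logreg(X, y)
print([predict(w, b, x) for x in X])
```

In the paper's setting, the ontology supplies semantically consistent features for such a model instead of hand-picked columns.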

    False data injection attack detection in smart grid

    Smart grid is a distributed and autonomous energy delivery infrastructure that constantly monitors the operational state of its overall network using smart techniques and state estimation. State estimation is a powerful technique used to determine the overall operational state of the system from a limited set of measurements collected through metering systems. Cyber-attacks pose serious risks to smart grid state estimation: they can cause disruptions and power outages resulting in huge economic losses and are therefore a big concern for reliable national grid operation. False data injection attacks (FDIAs), engineered on the basis of knowledge of the network configuration, are difficult to detect using traditional data detection mechanisms; these detection schemes have been found vulnerable and fail to detect such FDIAs. FDIAs specifically target the state data and can manipulate the state measurements in such a way that the false measurements appear real to the main control systems. This research work explores the possibility of FDIA detection using state estimation in a distributed and partitioned smart grid. To detect FDIAs, we use the measurements in a residual-based test built on an objective function; the probability of erroneous data is determined from this residual test, using a preset threshold derived from the prior history of the state data. FDIA cases are simulated within a smart grid considering that the Chi-square detection state estimator fails to identify such attacks. We compute the objective function using the standard weighted least squares problem and then test it against the value in the Chi-square table. The gain matrix and the Jacobian matrix are computed, and the state variables are computed in the form of voltage magnitudes, both before and after the inception of an attack, to assess the resulting state magnitudes.
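The residual-based Chi-square test described above can be sketched for a linearized measurement model z = Hx + e; the measurement matrix, state estimate, and threshold below are invented toy values:

```python
def chi_square_test(z, H, x_hat, sigma, threshold):
    """Residual-based bad-data test: the weighted-least-squares
    objective J(x) = sum_i ((z_i - H_i x)/sigma_i)^2 is compared with
    a Chi-square threshold; J > threshold flags suspected bad data."""
    J = 0.0
    for zi, Hi, si in zip(z, H, sigma):
        r = zi - sum(hij * xj for hij, xj in zip(Hi, x_hat))
        J += (r / si) ** 2
    return J, J > threshold

H = [[1, 0], [0, 1], [1, 1]]          # toy measurement matrix
x_hat = [1.0, 2.0]                    # estimated state (e.g. voltages)
clean = [1.01, 1.98, 3.02]            # measurements with small noise
attacked = [1.01, 1.98, 5.00]         # one injected false measurement
threshold = 3.84                      # Chi-square, 1 dof, 95%
print(chi_square_test(clean, H, x_hat, [0.05] * 3, threshold))
print(chi_square_test(attacked, H, x_hat, [0.05] * 3, threshold))
```

A well-crafted FDIA keeps the residual unchanged by injecting a = Hc for some state shift c, which is exactly why this test alone fails and the thesis adds further estimators.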
    Different partition sizes are used to improve the overall sensitivity of the Chi-square results. Our additional estimator is based on Kalman estimation, which consists of state prediction and state correction steps. In the first step it obtains the state and covariance matrix prediction; in the second step it calculates the Kalman gain and performs the state and covariance matrix updates. A set of sigma points is created for the state vector x at a time instant t, with the initial vector and covariance matrix based on a priori knowledge of the historical estimates. Sigma points are the minimal set of sampling points that are selected and propagated through the nonlinear state update function; the new mean and covariance are formed from these transformed points. The idea is that it is easier to work with a Gaussian distribution than with an arbitrary nonlinear function. The filter gain, the mean and the covariance are used to estimate the next state. Our simulation results show that the combination of Kalman estimation and distributed state estimation improves the overall stability index and vulnerability assessment score of the smart grid. We built a stability index table for a smart grid based on the state estimate values after the inception of an FDIA; the vulnerability assessment score is based on the Common Vulnerability Scoring System (CVSS) and the state estimates under the influence of an FDIA. The simulations are conducted in the MATPOWER program, and different electrical bus systems (IEEE 14, 30, 39, 118 and 300) are tested. All the contributions have been published in reputable journals and conferences.
    Doctor of Philosophy
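The two-step (prediction, correction) Kalman estimator described above reduces, in the scalar linear case, to a few lines; the process and measurement noise values below are invented, and the sigma-point (unscented) extension for nonlinear models is omitted:

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/correct cycle of a scalar Kalman filter, mirroring
    the two-step estimator described above."""
    x_pred = F * x                          # state prediction
    P_pred = F * P * F + Q                  # covariance prediction
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)   # state correction
    P_new = (1 - K * H) * P_pred            # covariance correction
    return x_new, P_new

x, P = 0.0, 1.0                             # priors from historical estimates
for z in [1.02, 0.98, 1.01, 0.99] * 10:     # noisy readings around 1.0
    x, P = kalman_step(x, P, z)
print(x)
```

Comparing the filter's predicted state against the received measurements gives an innovation residual that can expose false data a static Chi-square test misses.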

    A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities

    The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is trying to identify a remediation scheme specific to in-house, organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. This means one of the biggest challenges in vulnerability management relates to prioritization. Given that so few vulnerabilities are a focus of real-world attacks, a practical remediation strategy is to identify vulnerabilities likely to be exploited and focus efforts towards remediating those vulnerabilities first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible, public data sources to provide personalized, automated recommendations that an organization can use to prioritize its vulnerability management strategy will offer significant improvements over what is currently realized using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using the Normalized Discounted Cumulative Gain (nDCG). Our results show an average improvement of 71.5% to 91.3% in identifying vulnerabilities likely to be targeted and exploited by cyber threat actors.
    The ROI of patching using our policies resulted in savings in the range of 23.3% to 25.5% in annualized unit costs. Our results demonstrate the efficiency of creating knowledge graphs to link large data sets, facilitating semantic queries and data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
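The nDCG metric used to score the ranking policies rewards placing truly exploited vulnerabilities near the top of the list. A minimal sketch, where relevance 1 marks a vulnerability later exploited in the wild (the ranking is invented for illustration):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevances."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """nDCG: DCG of the given ranking divided by the DCG of the ideal
    (descending-relevance) ordering; 1.0 means a perfect ranking."""
    ideal = sorted(ranked_relevances, reverse=True)
    return dcg(ranked_relevances) / dcg(ideal)

# relevance 1 = vulnerability later exploited, in ranked order
print(ndcg([1, 0, 1, 0, 0]))
```

Comparing nDCG across policies (e.g. CVSS-ordered vs. knowledge-graph-ordered) is how the percentage improvements above would be measured.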

    WOPR: A Dynamic Cybersecurity Detection and Response Framework

    Malware authors develop software to exploit the flaws in any platform or application that suffers a vulnerability in its defenses, be it through unpatched known attack vectors or zero-day attacks for which there is no current solution. It is the responsibility of cybersecurity personnel to monitor, detect, respond to and protect against such incidents that could affect their organization. Unfortunately, the low number of skilled, available cybersecurity professionals in the job market means that many positions go unfilled and cybersecurity threats are unknowingly allowed to negatively affect many enterprises. The demand for a greater cybersecurity posture has led several organizations to develop automated threat analysis tools which can be operated by less-skilled information security analysts and response teams. However, the diverse needs and organizational factors of most businesses present a challenge for a "one size fits all" cybersecurity solution. Organizations in different industries may not have the same regulatory and standards compliance concerns, since they process different forms and classifications of data. As a result, many common security solutions are ill-equipped to accurately model cybersecurity threats as they relate to each unique organization. We propose WOPR, a framework for automated static and dynamic analysis of software to identify malware threats, classify the nature of those threats, and deliver an appropriate automated incident response. Additionally, WOPR provides the end user the ability to adjust threat models to fit the risks relevant to an organization, allowing for bespoke automated cybersecurity threat management. Finally, WOPR departs from traditional signature-based detection found in anti-virus and intrusion detection systems by learning system-level behavior and matching system calls with malicious behavior.
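The abstract does not specify how WOPR matches system calls against malicious behavior; one common behavioral technique is comparing syscall n-grams against known-malicious profiles, sketched below with invented traces:

```python
def ngrams(seq, n=3):
    """Set of length-n sliding windows over a syscall trace."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def behavior_score(trace, malicious_profiles, n=3):
    """Fraction of the trace's syscall n-grams that appear in any
    known-malicious profile; a toy stand-in for behavioral matching."""
    grams = ngrams(trace, n)
    if not grams:
        return 0.0
    bad = set().union(*(ngrams(p, n) for p in malicious_profiles))
    return len(grams & bad) / len(grams)

# invented syscall traces for illustration
mal = [["open", "read", "socket", "send", "unlink"]]   # exfiltrate-and-delete
benign = ["open", "read", "write", "close"]
suspect = ["open", "read", "socket", "send", "unlink"]
print(behavior_score(benign, mal), behavior_score(suspect, mal))
```

Unlike byte signatures, behavioral matching of this kind can flag repacked or obfuscated variants that preserve the same system-level behavior.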