
    Identification and Estimation in an Incoherent Model of Contagion

    This paper deals with the issues of identification and estimation in the canonical model of contagion advanced in Pesaran and Pick (2007). The model is a two-equation nonlinear simultaneous equations system with endogenous dummy variables; it also represents an extension of univariate threshold autoregressive (TAR) models to a simultaneous equations framework. For a range of economic fundamentals, the model produces multiple (i.e. two) equilibria, and the choice of equilibrium is modeled as being driven by a Bernoulli process; further, the presence of multiple equilibria leads to an incoherent econometric specification. The coherency issue is then reflected in the analytical expression for the likelihood function derived in the paper. It is proved that neither identification nor Full Information Maximum Likelihood (FIML) estimation of the model requires knowledge of the Bernoulli process driving the solution choice in the multiple equilibria region. Monte Carlo experiments show that the FIML estimator performs better than the GIVE estimators proposed in Pesaran and Pick (2007). Finally, an empirical illustration based on stock market returns is provided.
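    As a rough illustration of the kind of system the abstract describes, the sketch below simulates a two-equation model with endogenous dummy variables and resolves the multiple-equilibria region with a Bernoulli draw. The functional form, parameter values, and threshold are illustrative assumptions, not the specification of Pesaran and Pick (2007) or of this paper.

```python
# Hedged sketch: simulate a two-equation system with endogenous dummies,
#   y1 = b1*x1 + d1*1[y2 > c] + u1
#   y2 = b2*x2 + d2*1[y1 > c] + u2
# and resolve the multiple-equilibria region with a Bernoulli(p) draw.
# All names and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
b1, b2, d1, d2, c, p = 0.5, 0.5, 1.0, 1.0, 0.0, 0.5  # toy parameters

x1, x2 = rng.normal(size=n), rng.normal(size=n)       # fundamentals
u1, u2 = rng.normal(size=n), rng.normal(size=n)       # structural shocks
y1, y2 = np.empty(n), np.empty(n)

for i in range(n):
    solutions = []
    # Enumerate the four candidate regime configurations of the two dummies
    for r1 in (0, 1):          # r1 = candidate value of 1[y1 > c]
        for r2 in (0, 1):      # r2 = candidate value of 1[y2 > c]
            cand1 = b1 * x1[i] + d1 * r2 + u1[i]
            cand2 = b2 * x2[i] + d2 * r1 + u2[i]
            # Keep a candidate only if the implied dummies are self-consistent
            if (cand1 > c) == bool(r1) and (cand2 > c) == bool(r2):
                solutions.append((cand1, cand2))
    if len(solutions) == 1:
        y1[i], y2[i] = solutions[0]
    elif len(solutions) >= 2:
        # multiple-equilibria region: a Bernoulli(p) process selects the solution
        y1[i], y2[i] = solutions[0] if rng.random() < p else solutions[1]
    else:
        # incoherency region: no solution exists for this draw
        y1[i], y2[i] = np.nan, np.nan
```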

    My Software has a Vulnerability, should I worry?

    (U.S.) Rule-based policies to mitigate software risk suggest using the CVSS score to measure individual vulnerability risk and acting accordingly: a HIGH CVSS score according to the NVD (the U.S. National Vulnerability Database) is therefore translated into a "Yes". A key issue is whether such a rule is economically sensible, in particular whether reported vulnerabilities have actually been exploited in the wild, and whether the risk score actually matches the risk of actual exploitation. We compare the NVD dataset with two additional datasets: EDB, for the white market of vulnerabilities (such as those present in Metasploit), and EKITS, for the exploits traded in the black market. We benchmark them against Symantec's threat explorer dataset (SYM) of actual exploits in the wild. We analyze the whole spectrum of CVSS submetrics and use these characteristics to perform a case-controlled analysis of CVSS scores (similar to the studies used to link lung cancer and smoking) to test their reliability as a risk factor for actual exploitation. We conclude that (a) fixing just because of a high CVSS score in NVD yields only a negligible risk reduction, (b) the additional existence of proof-of-concept exploits (e.g. in EDB) may yield some additional, but not large, risk reduction, and (c) fixing in response to presence in the black markets yields a risk reduction equivalent to wearing a safety belt in a car (you might still die, but...). On the negative side, our study shows that as an industry we lack a metric with high specificity (one that rules out the vulnerabilities we should not worry about). In order to address the feedback from BlackHat 2013's audience, the final revision (V3) provides additional data in Appendix A detailing how the control variables in the study affect the results. Comment: 12 pages, 4 figures
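    A minimal sketch of the case-control style comparison mentioned above, on toy data with assumed field names (none of this is the paper's actual pipeline): it tabulates HIGH CVSS scores against observed exploitation and computes the odds ratio, the standard measure of association in case-control studies.

```python
# Illustrative case-control sketch: is a HIGH CVSS score in NVD associated
# with actual exploitation in the wild (SYM)? Toy data, assumed column names.
import pandas as pd

vulns = pd.DataFrame({
    "cve":       ["CVE-A", "CVE-B", "CVE-C", "CVE-D", "CVE-E", "CVE-F"],
    "cvss_high": [True,    True,    False,   False,   True,    False],
    "exploited": [True,    False,   False,   True,    False,   False],
})

# 2x2 contingency table: risk factor (HIGH CVSS) vs outcome (exploited in wild)
table = pd.crosstab(vulns["cvss_high"], vulns["exploited"])
a = table.loc[True, True]    # HIGH and exploited
b = table.loc[True, False]   # HIGH, not exploited
c = table.loc[False, True]   # not HIGH, exploited
d = table.loc[False, False]  # not HIGH, not exploited

odds_ratio = (a * d) / (b * c)   # association between the risk factor and exploitation
print("odds ratio:", odds_ratio)
```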

    A preliminary analysis of vulnerability scores for attacks in wild

    NVD and Exploit-DB are the de facto standard databases used for research on vulnerabilities, and the CVSS score is the standard measure for risk. An open question is whether such databases and scores are actually representative of attacks found in the wild. To address this question we have constructed a database (EKITS) based on the vulnerabilities currently used in exploit kits from the black market and extracted another database of vulnerabilities from Symantec's Threat Database (SYM). Our final conclusion is that the NVD and EDB databases are not a reliable source of information for exploits in the wild, even after controlling for the CVSS and exploitability subscore. A high or medium CVSS score shows significant sensitivity (i.e. prediction of attacks in the wild) only for vulnerabilities present in exploit kits (EKITS) from the black market. All datasets exhibit a low specificity.
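    The sensitivity/specificity terminology used above can be made concrete with a small sketch. The set contents and the HIGH/MEDIUM cut-off below are toy assumptions, not the paper's data.

```python
# Hedged sketch: sensitivity and specificity of "HIGH/MEDIUM CVSS in dataset X"
# as a predictor of exploitation in the wild (SYM). Toy sets, assumed names.
def sensitivity_specificity(flagged: set, exploited: set, all_vulns: set):
    """flagged: vulns rated HIGH/MEDIUM in a given database,
    exploited: vulns observed as attacked in the wild (SYM)."""
    tp = len(flagged & exploited)               # attacks correctly predicted
    fn = len(exploited - flagged)               # attacks missed
    fp = len(flagged - exploited)               # flagged but never exploited
    tn = len(all_vulns - flagged - exploited)   # correctly ruled out
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

all_vulns = {"v1", "v2", "v3", "v4", "v5", "v6"}
sym       = {"v1", "v4"}          # exploited in the wild
ekits     = {"v1", "v2"}          # flagged via black-market exploit kits
print(sensitivity_specificity(ekits, sym, all_vulns))  # (0.5, 0.75)
```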

    The Effect of Security Education and Expertise on Security Assessments: the Case of Software Vulnerabilities

    In spite of the growing importance of software security and the industry demand for more cyber security expertise in the workforce, the effect of security education and experience on the ability to assess complex software security problems has only recently been investigated. As a proxy for the full range of software security skills, we considered the problem of assessing the severity of software vulnerabilities by means of a structured analysis methodology widely used in industry (the Common Vulnerability Scoring System (CVSS) v3), and designed a study to compare how accurately individuals with a background in information technology but different professional experience and education in cyber security are able to assess the severity of software vulnerabilities. Our results provide some structural insights into the complex relationship between the education or experience of assessors and the quality of their assessments. In particular, we find that individual characteristics matter more than professional experience or formal education; apparently it is the combination of skills that one possesses (including actual knowledge of the system under study), rather than specialization or years of experience, that most influences assessment quality. Similarly, we find that the overall advantage conferred by professional expertise depends significantly on the composition of the individual's security skills as well as on the available information. Comment: Presented at the Workshop on the Economics of Information Security (WEIS 2018), Innsbruck, Austria, June 2018
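    A toy sketch of how assessment accuracy might be scored against reference CVSS v3 values and compared across groups of assessors. The column names, groups, scores, and error metric are assumptions for illustration, not the study's actual design or data.

```python
# Hedged sketch: compare assessment error against reference CVSS v3 base scores
# across assessor groups. Toy data, assumed column names and groups.
import pandas as pd

assessments = pd.DataFrame({
    "group":    ["student", "student", "professional", "professional"],
    "vuln":     ["CVE-X",   "CVE-Y",   "CVE-X",        "CVE-Y"],
    "assessed": [7.5,        5.0,       8.8,            6.1],
})
reference = {"CVE-X": 8.8, "CVE-Y": 6.5}   # reference base scores (toy values)

# absolute error of each individual assessment against the reference score
assessments["abs_error"] = (
    assessments["assessed"] - assessments["vuln"].map(reference)
).abs()
print(assessments.groupby("group")["abs_error"].mean())
```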

    Vulnerable Open Source Dependencies: Counting Those That Matter

    BACKGROUND: Vulnerable dependencies are a known problem in today's open-source software ecosystems because OSS libraries are highly interconnected and developers do not always update their dependencies. AIMS: In this paper we present a precise methodology that combines the code-based analysis of patches with information on build, test, update dates, and group extracted from the code repository itself, and therefore caters to the needs of industrial practice for the correct allocation of development and audit resources. METHOD: To understand the industrial impact of the proposed methodology, we considered the 200 most popular OSS Java libraries used by SAP in its own software. Our analysis included 10905 distinct GAVs (group, artifact, version) when considering all the library versions. RESULTS: We found that about 20% of the dependencies affected by a known vulnerability are not deployed, and therefore do not represent a danger to the analyzed library because they cannot be exploited in practice. Developers of the analyzed libraries are able to fix (and are actually responsible for) 82% of the deployed vulnerable dependencies. The vast majority (81%) of vulnerable dependencies may be fixed by simply updating to a new version, while 1% of the vulnerable dependencies in our sample are halted, and therefore potentially require a costly mitigation strategy. CONCLUSIONS: Our case study shows that correct counting allows software development companies to receive actionable information about their library dependencies, and therefore correctly allocate costly development and audit resources, which would otherwise be spent inefficiently on the basis of distorted measurements. Comment: This is a pre-print of the paper that appears, with the same title, in the proceedings of the 12th International Symposium on Empirical Software Engineering and Measurement, 2018
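    The counting distinctions above (not deployed, fixable by update, halted) can be expressed as a simple classification. The record fields and rules in the sketch below are illustrative assumptions, not the paper's actual tooling or methodology.

```python
# Hedged sketch: classify vulnerable dependencies the way the abstract counts them.
# Field names, rules, and example GAVs are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dependency:
    gav: str                      # group:artifact:version coordinates
    vulnerable: bool              # affected by a known vulnerability
    deployed: bool                # shipped with the library (vs. build/test only)
    fixed_version: Optional[str]  # newer release without the vulnerability, if any

def classify(dep: Dependency) -> str:
    if not dep.vulnerable:
        return "clean"
    if not dep.deployed:
        return "not deployed (cannot be exploited in practice)"
    if dep.fixed_version is not None:
        return f"fix by updating to {dep.fixed_version}"
    return "halted (potentially requires a costly mitigation strategy)"

deps = [
    Dependency("org.example:parser:1.2", True, True, "1.3"),
    Dependency("org.example:test-util:0.9", True, False, None),
    Dependency("org.example:codec:2.0", True, True, None),
]
for d in deps:
    print(d.gav, "->", classify(d))
```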