
    Network Security Metrics: Estimating the Resilience of Networks Against Zero Day Attacks

    Computer networks play the role of nervous systems in many critical infrastructures, governmental and military organizations, and enterprises today. Protecting such mission-critical networks means more than just patching known vulnerabilities and deploying firewalls or IDSs. Proper metrics are needed to evaluate the security level of networks and to guide security-enhancing solutions. However, without considering unknown zero-day vulnerabilities, security metrics cannot capture the true security level of a network. My Ph.D. work aims to develop a series of novel network security metrics, with a special focus on modeling zero-day attacks, and to study the relationships between software features and vulnerabilities. In the first work, we take the first step toward formally modeling network diversity as a security metric by designing and evaluating a series of diversity metrics. In particular, we first devise a biodiversity-inspired metric based on the effective number of distinct resources. We then propose two complementary diversity metrics, based on the least and the average attacking effort, respectively. In the second topic, we lift the attack surface concept, which captures the intrinsic properties of individual software applications, to the network level as a security metric for evaluating the resilience of networks against potential zero-day attacks. First, we develop models for aggregating the attack surface across different resources inside a network. Second, we design heuristic algorithms that avoid the costly calculation of attack surfaces. Predicting and studying software vulnerabilities helps administrators improve the security deployment of their organizations and choose the right applications among those with similar functionality, and helps software vendors estimate the security level of their applications. In the third topic, we perform a large-scale empirical study on datasets from GitHub and different versions of Chrome to study the relationship between software features and the number of vulnerabilities. Using machine learning techniques, this study quantitatively demonstrates the importance of features in the vulnerability discovery process; those features could serve as inputs for future network-level security metrics.
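    As a minimal illustrative sketch (not code from the thesis), a biodiversity-inspired "effective number of distinct resources" can be read as the exponential of the Shannon entropy of the resource distribution; the function name and the flat list-of-resources input below are assumptions made for illustration only.

        import math
        from collections import Counter

        def effective_resource_number(resources):
            """Effective number of distinct resources: exp(Shannon entropy).

            With k equally common resource types the metric equals k;
            skewed distributions score lower, reflecting less diversity.
            """
            counts = Counter(resources)
            total = sum(counts.values())
            probs = [c / total for c in counts.values()]
            entropy = -sum(p * math.log(p) for p in probs)
            return math.exp(entropy)

        # Three distinct web-server types, but one dominates the network (hypothetical data):
        print(effective_resource_number(["apache"] * 8 + ["nginx", "iis"]))  # ~1.9, well below 3

    Under this reading, a network running three server types in equal proportions would score 3, while the skewed example above scores close to 2, capturing the intuition that dominance by one resource reduces effective diversity.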

    Structured Review of the Evidence for Effects of Code Duplication on Software Quality

    This report presents the detailed steps and results of a structured review of the code clone literature. The aim of the review is to investigate the evidence for the claim that code duplication has a negative effect on code changeability. This report contains only those details of the review for which there was not enough space in the companion conference paper (Hordijk, Ponisio et al. 2009 - Harmfulness of Code Duplication - A Structured Review of the Evidence).

    Evaluation Criteria for Object-oriented Metrics

    In this paper an evaluation model for object-oriented (OO) metrics is proposed. We evaluated the existing evaluation criteria for OO metrics and, based on these observations, propose a model that covers most of the features needed for the evaluation of OO metrics. The model is validated by applying it to existing OO metrics. In contrast to the other existing criteria, the proposed model is simple to implement and includes the practical and important aspects of evaluation; hence it is suitable for evaluating and validating any OO complexity metric.

    BigDataBench: a Big Data Benchmark Suite from Internet Services

    As the architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure to benchmark and evaluate these systems rises. Considering the broad use of big data systems, big data benchmarks must include diverse data and workloads. Most state-of-the-art big data benchmarking efforts target specific types of applications or system software stacks, and hence cannot serve as general-purpose big data benchmarks. This paper presents our joint research efforts on this issue with several industrial partners. Our big data benchmark suite, BigDataBench, not only covers broad application scenarios but also includes diverse and representative data sets. BigDataBench is publicly available from http://prof.ict.ac.cn/BigDataBench . We also comprehensively characterize the 19 big data workloads included in BigDataBench with varying data inputs. On a typical state-of-practice processor, an Intel Xeon E5645, we make the following observations. First, in comparison with traditional benchmarks, including PARSEC, HPCC, and SPECCPU, big data applications have very low operation intensity. Second, the volume of data input has a non-negligible impact on micro-architecture characteristics, which may pose challenges for simulation-based big data architecture research. Last but not least, corroborating the observations in CloudSuite and DCBench (which use smaller data inputs), we find that the numbers of L1 instruction cache misses per 1000 instructions of the big data applications are higher than those of the traditional benchmarks; we also find that L3 caches are effective for the big data applications, corroborating the observation in DCBench.
    Comment: 12 pages, 6 figures. The 20th IEEE International Symposium on High Performance Computer Architecture (HPCA-2014), February 15-19, 2014, Orlando, Florida, US
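    As a brief aside (not from the paper itself), cache-miss figures of the kind reported above are conventionally expressed as misses per kilo-instruction (MPKI); a minimal sketch of the arithmetic, with made-up counter values:

        def mpki(misses: int, instructions: int) -> float:
            """Misses per kilo-instruction: event count per 1000 retired instructions."""
            return misses / (instructions / 1000.0)

        # Counter values below are illustrative only, e.g. as collected with a tool
        # like `perf stat -e L1-icache-load-misses,instructions -- <workload>`.
        l1i_misses = 42_000_000
        instructions = 1_200_000_000
        print(f"L1i MPKI = {mpki(l1i_misses, instructions):.2f}")  # 35.00

    Normalizing by instruction count rather than wall-clock time is what lets the paper compare workloads with very different data-input volumes on the same axis.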

    Software Measurement Activities in Small and Medium Enterprises: an Empirical Assessment

    An empirical study evaluating the implementation of measurement/metric programs in software companies in one region of Turkey is presented. The research questions were discussed and validated with the help of senior software managers (each with more than 15 years' experience) and then used to interview a variety of small and medium-scale software companies in Ankara. Observations show a common reluctance to, and lack of interest in, utilizing measurements/metrics, despite the fact that they are well known in the industry. A side finding of this research is that internationally recognized standards such as ISO and CMMI are pursued only when they are part of project/job requirements; without such requirements, introducing those standards to the companies remains a long-term target for increasing quality.