
    Impact assessment for vulnerabilities in open-source software libraries

    Software applications integrate more and more open-source software (OSS) to benefit from code reuse. As a drawback, each vulnerability discovered in bundled OSS potentially affects the application. Upon the disclosure of every new vulnerability, the application vendor has to decide whether it is exploitable in its particular usage context, and hence whether users require an urgent application patch containing a non-vulnerable version of the OSS. Current decision making is mostly based on high-level vulnerability descriptions and expert knowledge, and is thus effort-intensive and error-prone. This paper proposes a pragmatic approach to facilitate the impact assessment, describes a proof-of-concept for Java, and examines one example vulnerability as a case study. The approach is independent of specific kinds of vulnerabilities or programming languages and can deliver immediate results.
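    A minimal sketch of the general idea behind such code-level impact assessment (not the paper's actual tool; the call-graph representation and all names are illustrative assumptions): a vulnerability matters for a given application only if some construct changed by the security fix is reachable from the application's own code.

    # Illustrative sketch only: decide whether a vulnerability can matter in a
    # given usage context by intersecting the constructs changed by its fix
    # with the constructs reachable from the application's entry points.
    # The call graph is a plain dict {caller -> iterable of callees}.
    def reachable(call_graph, entry_points):
        seen, stack = set(), list(entry_points)
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(call_graph.get(node, ()))
        return seen

    def potentially_exploitable(changed_constructs, call_graph, entry_points):
        # Relevant only if the application can actually reach the patched code.
        return bool(set(changed_constructs) & reachable(call_graph, entry_points))

    # Hypothetical usage:
    # potentially_exploitable({"org.lib.Parser.parse"}, app_call_graph, {"com.app.Main.main"})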

    Vulnerable Open Source Dependencies: Counting Those That Matter

    BACKGROUND: Vulnerable dependencies are a known problem in today's open-source software ecosystems because OSS libraries are highly interconnected and developers do not always update their dependencies. AIMS: In this paper we present a precise methodology that combines the code-based analysis of patches with information on build, test, update dates, and group extracted from the code repository itself, and therefore caters to the needs of industrial practice for the correct allocation of development and audit resources. METHOD: To understand the industrial impact of the proposed methodology, we considered the 200 most popular OSS Java libraries used by SAP in its own software. Our analysis included 10,905 distinct GAVs (group, artifact, version) when considering all the library versions. RESULTS: We found that about 20% of the dependencies affected by a known vulnerability are not deployed, and therefore do not represent a danger to the analyzed library because they cannot be exploited in practice. Developers of the analyzed libraries are able to fix (and are actually responsible for) 82% of the deployed vulnerable dependencies. The vast majority (81%) of vulnerable dependencies may be fixed by simply updating to a new version, while 1% of the vulnerable dependencies in our sample are halted, and therefore potentially require a costly mitigation strategy. CONCLUSIONS: Our case study shows that correct counting allows software development companies to receive actionable information about their library dependencies, and therefore to correctly allocate costly development and audit resources, which are otherwise spent inefficiently on distorted measurements. Comment: This is a pre-print of the paper that appears, with the same title, in the proceedings of the 12th International Symposium on Empirical Software Engineering and Measurement, 201
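    A hedged sketch of the kind of counting the abstract describes (not the authors' implementation; the dependency fields are illustrative assumptions): vulnerable dependencies are split into those that are not deployed, those fixable by a simple version update, and those belonging to halted projects.

    # Illustrative sketch: triage vulnerable dependencies into the buckets used
    # in the abstract, so that audit effort goes only to those that matter.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Dependency:
        gav: str                       # group:artifact:version
        vulnerable: bool               # affected by a known vulnerability
        deployed: bool                 # shipped with the product, not test/build-only
        fixed_version: Optional[str]   # newer non-vulnerable version, if any

    def triage(dependencies):
        report = {"not_deployed": [], "fix_by_update": [], "halted": []}
        for dep in dependencies:
            if not dep.vulnerable:
                continue
            if not dep.deployed:
                report["not_deployed"].append(dep.gav)    # cannot be exploited in practice
            elif dep.fixed_version:
                report["fix_by_update"].append(dep.gav)   # cheap fix: bump the version
            else:
                report["halted"].append(dep.gav)          # potentially costly mitigation
        return report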

    Secure Software Development in the Era of Fluid Multi-party Open Software and Services

    Pushed by market forces, software development has become fast-paced. As a consequence, modern development projects are assembled from third-party components. Security and privacy assurance techniques, once designed for large, controlled updates over months or years, must now cope with small, continuous changes taking place within a week, and happening in sub-components controlled by third-party developers one might not even know existed. In this paper, we aim to provide an overview of current software security approaches and evaluate their appropriateness in the face of the changed nature of software development. Software security assurance could benefit from switching from a process-based to an artefact-based approach. Further, security evaluation might need to be more incremental, automated and decentralized. We believe this can be achieved by supporting mechanisms for lightweight and scalable screenings that are applicable to the entire population of software components, albeit there might be a price to pay. Comment: 7 pages, 1 figure, to be published in Proceedings of International Conference on Software Engineering - New Ideas and Emerging Results
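    As a hypothetical illustration of the incremental, artefact-based evaluation argued for above (nothing of this code appears in the paper; the function and field names are assumptions): a screening loop that re-checks only the components whose content changed since the previous run, so the whole population of components stays covered at low cost.

    # Illustrative sketch: incremental screening keyed on artefact content.
    # `screen` stands for any per-component check (vulnerability scan, license check, ...).
    def incremental_screen(components, previous_results, screen):
        # components: {name: content_hash}
        # previous_results: {name: (content_hash, verdict)} from the last run
        results = {}
        for name, content_hash in components.items():
            cached = previous_results.get(name)
            if cached and cached[0] == content_hash:
                results[name] = cached                        # unchanged artefact: reuse verdict
            else:
                results[name] = (content_hash, screen(name))  # re-screen only what changed
        return results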

    Automated Mapping of Vulnerability Advisories onto their Fix Commits in Open Source Repositories

    The lack of comprehensive sources of accurate vulnerability data represents a critical obstacle to studying and understanding software vulnerabilities (and their corrections). In this paper, we present an approach that combines heuristics stemming from practical experience with machine learning (ML), specifically natural language processing (NLP), to address this problem. Our method consists of three phases. First, an advisory record containing key information about a vulnerability is extracted from an advisory (expressed in natural language). Second, using heuristics, a subset of candidate fix commits is obtained from the source code repository of the affected project by filtering out commits that are known to be irrelevant for the task at hand. Finally, for each such candidate commit, our method builds a numerical feature vector reflecting the characteristics of the commit that are relevant to predicting its match with the advisory at hand. The feature vectors are then exploited for building a final ranked list of candidate fixing commits. The score attributed by the ML model to each feature is kept visible to the users, allowing them to interpret the predictions. We evaluated our approach using a prototype implementation named Prospector on a manually curated data set that comprises 2,391 known fix commits corresponding to 1,248 public vulnerability advisories. When considering the top-10 commits in the ranked results, our implementation could successfully identify at least one fix commit for up to 84.03% of the vulnerabilities (with a fix commit in the first position for 65.06% of the vulnerabilities). In conclusion, our method considerably reduces the effort needed to search OSS repositories for the commits that fix known vulnerabilities.
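    A hedged sketch of the three-phase idea described above (not the actual Prospector code; the field names, heuristics, and weights are illustrative assumptions): candidate commits are filtered by a simple time-window heuristic, scored with interpretable features, and returned as a ranked list.

    # Illustrative sketch of advisory-to-fix-commit ranking.
    from datetime import timedelta

    WINDOW_BEFORE = timedelta(days=365)   # heuristic: fixes rarely predate the advisory by more
    WINDOW_AFTER = timedelta(days=180)    # heuristic: or follow it by much more

    def candidate_commits(commits, advisory):
        # Phase 2: discard commits that cannot plausibly be the fix.
        lo = advisory["published"] - WINDOW_BEFORE
        hi = advisory["published"] + WINDOW_AFTER
        return [c for c in commits if lo <= c["date"] <= hi]

    def features(commit, advisory):
        # Phase 3: per-commit feature vector; each value stays visible to the user.
        msg = commit["message"].lower()
        return {
            "mentions_vuln_id": float(advisory["vuln_id"].lower() in msg),
            "shared_keywords": float(len(set(msg.split()) & set(advisory["description"].lower().split()))),
            "touches_suspect_files": float(any(p in commit["files"] for p in advisory.get("paths", []))),
        }

    def rank(commits, advisory, weights):
        # A linear score keeps the contribution of every feature interpretable.
        scored = [(sum(weights[k] * v for k, v in features(c, advisory).items()), c)
                  for c in candidate_commits(commits, advisory)]
        return [c for _, c in sorted(scored, key=lambda pair: pair[0], reverse=True)]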

    Dependability in dynamic, evolving and heterogeneous systems: the CONNECT approach

    The EU Future and Emerging Technologies (FET) Project Connect aims at dropping the heterogeneity barriers that prevent the eternality of networking systems through a revolutionary approach: to synthesise on-the-fly the Connectors via which networked systems communicate. The Connect approach, however, comes at risk from the standpoint of dependability, stressing the need for methods and tools that ensure resilience to faults, errors and malicious attacks of the dynamically Connected system. We are investigating a comprehensive approach, which combines dependability analysis, security enforcement and trust assessment, and is centred around a lightweight adaptive monitoring framework. In this project paper, we overview the research that we are undertaking towards this objective and propose a unifying workflow process that encompasses all the Connect dependability/security/trust concepts and models.

    Capturing functional and non-functional connector

    The CONNECT Integrated Project aims to develop a novel networking infrastructure that will support composition of networked systems with on-the-fly connector synthesis. The role of this work package is to investigate the foundations and verification methods for composable connectors. In this deliverable, we set the scene for the formulation of the modelling framework by surveying existing connector modelling formalisms. We covered not only classical connector algebra formalisms, but also, where appropriate, their corresponding quantitative extensions. All formalisms have been evaluated against a set of key dimensions of interest agreed upon in the CONNECT project. Based on these investigations, we concluded that none of the modelling formalisms available at present satisfies our eight dimensions. We will use the outcome of the survey to guide the formulation of a compositional modelling formalism tailored to the specific requirements of the CONNECT project. Furthermore, we considered the range of non-functional properties that are of interest to CONNECT, and reviewed existing specification formalisms for capturing them, together with the corresponding model-checking algorithms and tool support. Consequently, we described the scientific advances concerning model-checking algorithms and tools, which are partial contributions towards future deliverables: an approach for online verification (part of D2.2), automated abstraction-refinement for probabilistic real-time systems (part of D2.2 and D2.4), and compositional probabilistic verification within PRISM, to serve as a foundation for future research on quantitative assume-guarantee compositional reasoning (part of D2.2 and D2.4).