
    Vulnerable Open Source Dependencies: Counting Those That Matter

    BACKGROUND: Vulnerable dependencies are a known problem in today's open-source software ecosystems because OSS libraries are highly interconnected and developers do not always update their dependencies. AIMS: In this paper we aim to present a precise methodology that combines the code-based analysis of patches with information on build, test, update dates, and group extracted from the code repository itself, and therefore caters to the needs of industrial practice for the correct allocation of development and audit resources. METHOD: To understand the industrial impact of the proposed methodology, we considered the 200 most popular OSS Java libraries used by SAP in its own software. Our analysis included 10,905 distinct GAVs (group, artifact, version) when considering all the library versions. RESULTS: We found that about 20% of the dependencies affected by a known vulnerability are not deployed, and therefore do not represent a danger to the analyzed library because they cannot be exploited in practice. Developers of the analyzed libraries are able to fix (and are actually responsible for) 82% of the deployed vulnerable dependencies. The vast majority (81%) of vulnerable dependencies may be fixed by simply updating to a new version, while 1% of the vulnerable dependencies in our sample are halted, and therefore potentially require a costly mitigation strategy. CONCLUSIONS: Our case study shows that correct counting gives software development companies actionable information about their library dependencies, and therefore lets them correctly allocate costly development and audit resources, which are otherwise spent inefficiently when measurements are distorted.
    Comment: This is a pre-print of the paper that appears, with the same title, in the proceedings of the 12th International Symposium on Empirical Software Engineering and Measurement, 2018.
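    The counting idea in this abstract lends itself to a small illustration: separate vulnerable dependencies that are actually deployed from those that are not, and flag the deployed ones that a simple version update would fix. The sketch below is a hypothetical Python rendering of that triage, not the paper's actual tooling; the Dependency fields and sample data are invented for illustration.

```python
# A minimal sketch of the triage described in the abstract. All names and
# sample data are hypothetical; the real methodology also analyzes patches
# and repository metadata.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dependency:
    gav: str                      # "group:artifact:version"
    scope: str                    # Maven scope: "compile", "runtime", "test", ...
    vulnerable: bool              # affected by a known vulnerability
    fixed_version: Optional[str]  # newer version with the patch, if any

DEPLOYED_SCOPES = {"compile", "runtime"}

def triage(deps):
    """Bucket vulnerable dependencies the way the abstract counts them."""
    buckets = {"not_deployed": [], "update_fixes": [], "halted": []}
    for d in deps:
        if not d.vulnerable:
            continue
        if d.scope not in DEPLOYED_SCOPES:
            buckets["not_deployed"].append(d.gav)   # cannot be exploited in practice
        elif d.fixed_version is not None:
            buckets["update_fixes"].append(d.gav)   # fixable by simply updating
        else:
            buckets["halted"].append(d.gav)         # needs a costlier mitigation
    return buckets

deps = [
    Dependency("org.example:parser:1.2", "compile", True, "1.3"),
    Dependency("org.example:mock-util:0.9", "test", True, None),
    Dependency("org.example:legacy-io:2.0", "runtime", True, None),
]
print(triage(deps))
```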

    Moving Image Preservation in Libraries

    published or submitted for publication

    Identifying and improving reusability based on coupling patterns

    Open Source Software (OSS) communities have not yet taken full advantage of reuse mechanisms. Typically, many OSS projects that share the same application domain and topic duplicate effort and code without fully leveraging the vast amounts of available code. This study proposes the empirical evaluation of the source code folders of OSS projects in order to determine their actual internal reuse and their potential as shareable, fine-grained and externally reusable software components for future projects. This paper empirically analyzes four OSS systems, identifies which components (in the form of folders) are currently being reused internally, and studies their coupling characteristics. Stable components (i.e., those which act as service providers rather than service consumers) are shown to be more likely to be reusable. As a means of supporting replication of these successful instances of OSS reuse, source folders with similar patterns are extracted from the studied systems and identified as externally reusable components.
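    The notion of stable components as service providers can be made concrete with a standard coupling metric. The sketch below ranks folders by Martin's instability I = Ce / (Ca + Ce), where Ca counts incoming (afferent) and Ce outgoing (efferent) dependencies; the folder graph is a made-up example, not data from the four studied systems.

```python
# Rank folders by stability: a folder many others depend on (service
# provider) scores low, one that mostly depends on others (service
# consumer) scores high. The graph is an invented illustration.
from collections import defaultdict

def instability(graph):
    """graph maps folder -> set of folders it depends on (efferent edges)."""
    ca = defaultdict(int)  # afferent coupling: incoming dependencies
    for src, targets in graph.items():
        for t in targets:
            ca[t] += 1
    scores = {}
    for folder in graph:
        ce = len(graph[folder])          # efferent coupling: outgoing
        total = ca[folder] + ce
        scores[folder] = ce / total if total else 0.0
    return scores

graph = {
    "util":    set(),                    # depended on by everyone: stable
    "core":    {"util"},
    "gui":     {"core", "util"},
    "plugins": {"core", "gui", "util"},  # consumes a lot: unstable
}
for folder, i in sorted(instability(graph).items(), key=lambda kv: kv[1]):
    print(f"{folder:8s} I={i:.2f}")  # low I -> more likely reusable
```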

    SLIS Student Research Journal, Vol.7, Iss.1


    PowerAqua: fishing the semantic web

    The Semantic Web (SW) offers an opportunity to develop novel, sophisticated forms of question answering (QA). Specifically, the availability of distributed semantic markup on a large scale opens the way to QA systems which can make use of such semantic information to provide precise, formally derived answers to questions. At the same time, the distributed, heterogeneous, large-scale nature of the semantic information introduces significant challenges. In this paper we describe the design of a QA system, PowerAqua, which exploits semantic markup on the web to provide answers to questions posed in natural language. PowerAqua does not assume that the user has any prior information about the semantic resources. The system takes as input a natural language query and translates it into a set of logical queries, which are then answered by consulting and aggregating information derived from multiple heterogeneous semantic sources.
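    The pipeline shape described here (natural language in, logical queries out, answers aggregated across sources) can be sketched in a few lines. The toy translation and the two in-memory "sources" below are illustrative stand-ins, assumed for the example; they are not PowerAqua's actual linguistic or mapping components.

```python
# A toy rendering of the pipeline: question -> triple pattern -> query
# several independent sources -> merge the answers.
import re

def question_to_pattern(question):
    """Very naive translation: "who <relation> <object>?" -> (?x, relation, object)."""
    m = re.match(r"who (\w+) (.+)\?", question.lower())
    if not m:
        raise ValueError("unsupported question form in this sketch")
    relation, obj = m.groups()
    return ("?x", relation, obj)

def query_source(triples, pattern):
    """Match a (?x, p, o) pattern against one source's triples."""
    _, p, o = pattern
    return {s for (s, pp, oo) in triples if pp == p and oo == o}

# Two heterogeneous sources that partially overlap in coverage.
source_a = {("ada lovelace", "wrote", "notes on the analytical engine")}
source_b = {("ada lovelace", "wrote", "notes on the analytical engine"),
            ("charles babbage", "designed", "the analytical engine")}

pattern = question_to_pattern("Who wrote notes on the analytical engine?")
answers = query_source(source_a, pattern) | query_source(source_b, pattern)
print(answers)  # aggregated across sources: {'ada lovelace'}
```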

    Towards improved performance and interoperability in distributed and physical union catalogues

    Purpose of this paper: This paper details research undertaken to determine the key differences in the performance of certain centralised (physical) and distributed (virtual) bibliographic catalogue services, and to suggest strategies for improving interoperability and performance in, and between, physical and virtual models. Design/methodology/approach: Methodically defined searches of a centralised catalogue service and selected distributed catalogues were conducted using the Z39.50 information retrieval protocol, allowing search types to be semantically defined. The methodology also entailed two workshops comprising systems librarians and cataloguers to inform suggested strategies for improving performance and interoperability within both environments. Findings: Technical interoperability was achieved easily between centralised and distributed models; however, the various individual configurations permitted only limited semantic interoperability. More prescriptive cataloguing and indexing guidelines, greater participation in the Program for Cooperative Cataloging (PCC), consideration of future 'FRBR' migration, and greater disclosure to end users are some of the suggested strategies for improving performance and semantic interoperability. Practical implications: This paper informs the LIS research community and union catalogue administrators, but also has numerous practical implications for those establishing distributed systems based on Z39.50 and SRW, as well as those establishing centralised systems. Originality/value: The paper moves the discussion of Z39.50-based systems away from anecdotal evidence and provides recommendations that are based on testing and intimately informed by the UK cataloguing and systems librarian community.
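    One way to make a search type "semantically defined" under Z39.50 is to pin it to a Bib-1 use attribute and render it in Prefix Query Format (PQF). The attribute numbers below are standard Bib-1 values; the selection of search types is an assumed example, not the paper's exact test protocol.

```python
# Build semantically typed Z39.50 searches as PQF strings. The Bib-1 use
# attribute numbers (title=4, ISBN=7, subject=21, author=1003) are standard;
# the search-type selection is an illustrative assumption.
BIB1_USE = {
    "title": 4,
    "author": 1003,
    "subject": 21,
    "isbn": 7,
}

def pqf(search_type, term):
    """Render a search with an explicit Bib-1 use attribute."""
    attr = BIB1_USE[search_type]
    return f'@attr 1={attr} "{term}"'

for st in BIB1_USE:
    print(pqf(st, "semantic web"))
# e.g. a title search renders as: @attr 1=4 "semantic web"
```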

    Study of Tools Interoperability

    Interoperability of tools usually refers to a combination of methods and techniques that address the problem of making a collection of tools work together. In this study we survey the different notions used in this context: interoperability, interaction and integration. We point out the relations between these notions and how they map to the interoperability problem. We narrow the problem area to tool development in academia. Tools developed in such an environment have a small basis for development, documentation and maintenance. We scrutinise some of the problems, and potential solutions, related to tool interoperability in such an environment. Moreover, we look at two tools developed in the Formal Methods and Tools group, and analyse the use of different integration techniques.
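    As a rough illustration of one integration technique such surveys typically discuss, the sketch below shows loose coupling through a shared textual format, where one tool's output is piped into another. The "tools" here are ordinary Unix commands standing in for academic tools, so this is an assumed example rather than anything taken from the study.

```python
# Loose coupling via a shared line-oriented text format (Unix-like system
# assumed): tool A produces data, tool B consumes it, and they interoperate
# only through the agreed format, not through shared code.
import subprocess

tool_a = subprocess.run(["printf", "b\\na\\nc\\n"], capture_output=True, text=True)
tool_b = subprocess.run(["sort"], input=tool_a.stdout, capture_output=True, text=True)
print(tool_b.stdout)  # "a\nb\nc\n" -- the tools cooperate without integration glue
```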