Static analysis-based approaches for secure software development
Software security is a matter of major concern for software development enterprises that wish to deliver highly secure software products to their customers. Static analysis is considered one of the most effective mechanisms for adding security to software products. The multitude of static analysis tools that are available provide a large number of raw results that may contain security-relevant information, which may be useful for the production of secure software. Several mechanisms that can facilitate the production of both secure and reliable software applications have been proposed over the years. In this paper, two such mechanisms, namely vulnerability prediction models (VPMs) and optimum checkpoint recommendation (OCR) mechanisms, are theoretically examined, and their potential improvement through static analysis is also investigated. In particular, we review the most significant contributions regarding these mechanisms, identify their most important open issues, and propose directions for future research, emphasizing the potential adoption of static analysis for addressing the identified open issues. Hence, this paper can act as a reference for researchers who wish to contribute to these subfields, helping them gain a solid understanding of the existing solutions and the open issues that require further research.
MLOps: A Review
Machine Learning (ML) has recently become a widely adopted and rapidly evolving approach, as it employs computational methods to teach machines and produce acceptable answers. This study examines the significance of Machine Learning Operations (MLOps) methods, which can provide acceptable answers to such problems. To assist in the creation of software that is simple to use, the authors survey MLOps methods. The authors also assess the features and operability of various MLOps methods in order to choose the best tool structure for particular projects. A total of 22 papers that attempted to apply the MLOps idea were assessed. Finally, the authors acknowledge the scarcity of fully effective MLOps methods on the basis of which advancements could self-regulate by limiting human engagement.
The Software Vulnerability Ecosystem: Software Development In The Context Of Adversarial Behavior
Software vulnerabilities are the root cause of many computer system security failures. This dissertation addresses software vulnerabilities in the context of a software lifecycle, with a particular focus on three stages: (1) improving software quality during development; (2) pre-release bug discovery and repair; and (3) revising software as vulnerabilities are found.
The question I pose regarding software quality during development is whether long-standing software engineering principles and practices such as code reuse help or hurt with respect to vulnerabilities. Using a novel data-driven analysis of large databases of vulnerabilities, I show the surprising result that software quality and software security are distinct. Most notably, the analysis uncovered a counterintuitive phenomenon, namely that newly introduced software enjoys a period with no vulnerability discoveries, and further that this "Honeymoon Effect" (a term I coined) is well-explained by the unfamiliarity of the code to malicious actors. An important consequence for code reuse, intended to raise software quality, is that the protection inherent in the delayed discovery of vulnerabilities in new code is reduced.
The second question I pose concerns the predictive power of this effect. My experimental design exploited a large-scale open source software system, Mozilla Firefox, in which two development methodologies are pursued in parallel, making methodology the sole variable in outcomes. Comparing the methodologies using a novel synthesis of data from vulnerability databases, I find that the rapid-release cycles used in agile software development (in which new software is introduced frequently) have a vulnerability discovery rate equivalent to that of conventional development.
Finally, I pose the question of the relationship between the intrinsic security of software, stemming from design and development, and the ecosystem into which the software is embedded and in which it operates. I use the early development
lifecycle to examine this question, and again use vulnerability data as the means of answering it. Defect discovery rates should decrease in a purely intrinsic model, with software maturity making vulnerabilities increasingly rare. The data, which show that vulnerability rates increase after a delay, contradict this. Software security must therefore be modeled with extrinsic factors included, thus comprising an ecosystem.
Intelligent decision support for maintenance: an overview and future trends
The changing nature of manufacturing in recent years is evident in industry's willingness to adopt network-connected intelligent machines in factory development plans. A number of joint corporate/government initiatives also describe and encourage the adoption of Artificial Intelligence (AI) in the operation and management of production lines. Machine learning will have a significant role to play in the delivery of automated and intelligently supported maintenance decision-making systems. While e-maintenance practice provides a framework for internet-connected operation of maintenance, the advent of IoT has changed the scale of internetworking, and new architectures and tools are needed. While advances in sensors and sensor fusion techniques have been significant in recent years, the possibilities brought by IoT create new challenges in the scale of data and its analysis. The development of audit-trail-style practice for the collection of data, and the provision of a comprehensive framework for its processing, analysis and use, should be a valuable contribution to addressing the new data analytics challenges for maintenance created by internet-connected devices. This paper proposes that further research be conducted into audit trail collection of maintenance data, allowing future systems to enable "human in the loop" interactions.
D3.2 Cost Concept Model and Gateway Specification
This document introduces a Framework supporting the implementation of a cost concept model against which current and future cost models for curating digital assets can be benchmarked. The value built into this cost concept model leverages the comprehensive engagement by the 4C project with various user communities and builds upon our understanding of the requirements, drivers, obstacles and objectives that various stakeholder groups have relating to digital curation. Ultimately, this concept model should provide a critical input to the development and refinement of cost models, as well as helping to ensure that the curation and preservation solutions and services that will inevitably arise from the commercial sector as 'supply' respond to a much better understood 'demand' for cost-effective and relevant tools. To meet acknowledged gaps in current provision, a nested model of curation which addresses both costs and benefits is provided. The goal of this task was not to create a single, functionally implementable cost modelling application, but rather to design a model based on common concepts and to develop a generic gateway specification that can be used by future model developers, service and solution providers, and by researchers in follow-up research and development projects.
The Framework includes:
• A Cost Concept Model, which defines the core concepts that should be included in curation cost models;
• An Implementation Guide for the cost concept model, which provides guidance and proposes questions that should be considered when developing new cost models and refining existing cost models;
• A Gateway Specification Template, which provides standard metadata for each of the core cost concepts and is intended for use by future model developers, model users, and service and solution providers to promote interoperability;
• A Nested Model for Digital Curation, which visualises the core concepts, demonstrates how they interact, and places them into context visually by linking them to A Cost and Benefit Model for Curation.
This Framework provides guidance for data collection and associated calculations in an operational context, but will also provide a critical foundation for more strategic thinking around curation such as the Economic Sustainability Reference Model (ESRM).
Where appropriate, definitions of terms are provided, recommendations are made, and examples from existing models are used to illustrate the principles of the framework.