Predicting Exploitation of Disclosed Software Vulnerabilities Using Open-source Data
Each year, thousands of software vulnerabilities are discovered and reported
to the public. Unpatched known vulnerabilities are a significant security risk.
It is imperative that software vendors provide patches quickly once
vulnerabilities are known and that users install those patches as soon as
they are available. However, most vulnerabilities are never actually exploited.
Since writing, testing, and installing software patches can involve
considerable resources, it would be desirable to prioritize the remediation of
vulnerabilities that are likely to be exploited. Several published research
studies have reported moderate success in applying machine learning techniques
to the task of predicting whether a vulnerability will be exploited. These
approaches typically use features derived from vulnerability databases (such as
the summary text describing the vulnerability) or social media posts that
mention the vulnerability by name. However, these prior studies share multiple
methodological shortcomings that inflate the predictive power of these approaches.
We replicate key portions of the prior work, compare their approaches, and show
how the selection of training and test data critically affects the estimated
performance of predictive models. The results of this study point to important
methodological considerations that should be taken into account so that results
reflect real-world utility.
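One such methodological consideration is temporal leakage: if training and test vulnerabilities are shuffled together, the model can effectively "see the future", inflating measured performance. A minimal sketch of the difference between a time-aware split and a naive shuffled split is shown below; the record layout, dates, and labels are purely illustrative and not taken from the studies above.

```python
from datetime import date
import random

# Hypothetical records: (disclosure_date, exploited) pairs for disclosed
# vulnerabilities; exploited is True if the vulnerability was later
# observed being exploited. Data here is made up for illustration.
records = [(date(2015, m, 1), m % 3 == 0) for m in range(1, 13)]

def temporal_split(records, cutoff):
    """Train only on vulnerabilities disclosed before the cutoff date,
    so no information from the future leaks into training."""
    train = [r for r in records if r[0] < cutoff]
    test = [r for r in records if r[0] >= cutoff]
    return train, test

def random_split(records, test_fraction=0.25, seed=0):
    """Naive shuffled split: the training set may contain vulnerabilities
    disclosed *after* some test items, which can inflate the estimated
    performance of a predictive model."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    k = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:k], shuffled[k:]

train_t, test_t = temporal_split(records, date(2015, 10, 1))
# Under the temporal split, every training item predates every test item:
assert max(r[0] for r in train_t) < min(r[0] for r in test_t)
```

The same ordering guarantee does not hold for `random_split`, which is precisely the evaluation artefact the abstract warns against.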
Identification and Importance of the Technological Risks of Open Source Software in the Enterprise Adoption Context
Open source software (OSS) has reshaped and remodeled various layers of the organizational ecosystem, becoming an important strategic asset for enterprises. Still, many enterprises are reluctant to adopt OSS, and knowledge about the technological risks and their importance to IT executives remains under-researched. We aim to identify the technological risks and their importance for OSS adoption during the risk identification phase in the enterprise context. We conducted an extensive literature review, identifying 34 risk factors from 88 papers, followed by an online survey of 115 IT executives to study the risk factors' importance. Our results will be valuable for practitioners when evaluating, assessing, and calculating the risks related to OSS product adoption. Researchers can also use them as a base for future studies to expand the current theoretical understanding of the OSS phenomenon in relation to IT risk management.
A Framework for the Systematic Evaluation of Malware Forensic Tools
Following a series of high-profile miscarriages of justice linked to questionable expert evidence, the post of the Forensic Science Regulator was created in 2008 with a remit to improve the standard of practitioner competences and forensic procedures. The role has since expanded to incorporate a greater level of scientific practice in these areas, as used in the production of expert evidence submitted to the UK Criminal Justice System. Accreditation to the Regulator's codes of practice and conduct will become mandatory for all forensic practitioners by October 2017. A variety of challenges with expert evidence are explored and linked to the lack of a scientific methodology underpinning the processes followed. In particular, the research focuses upon investigations where malicious software (‘malware’) has been identified.
A framework, called the ‘Malware Analysis Tool Evaluation Framework’ (MATEF), has been developed to address this lack of a methodology for evaluating software tools used during investigations involving malware. A prototype implementation of the framework was used to evaluate two tools against a population of over 350,000 malware samples. Analysis of the findings indicated that the choice of tool can affect the number of artefacts observed in malware forensic investigations, and that the framework can identify the optimal execution time for a given tool when observing malware artefacts.
Three different measures were used to evaluate the framework. The first assessed the framework against its requirements and determined that these were largely met; where they were not, the shortfall is attributed to matters either outside the scope of the work or to the fledgling nature of the research. The second considered the framework's performance in terms of speed and resource utilisation, identifying scope for improvement in the time to complete a test and the need for more economical use of disk space. Finally, the framework provides a scientific means to evaluate malware analysis tools, hence addressing the Research Question, subject to the level at which ground truth is established.
A number of contributions arise from this work. First, it confirms the case that trusted practice is lacking in the field of malware forensics. Second, the MATEF itself, which facilitates the production of empirical evidence of a tool's ability to detect malware artefacts. Third, a set of requirements for establishing trusted practice in the use of malware artefact detection tools. Finally, empirical evidence supporting both the notion that the choice of tool can affect the number of artefacts observed in malware forensic investigations and the identification of the optimal execution time for a given tool when observing malware artefacts.