Predicting Exploitation of Disclosed Software Vulnerabilities Using Open-source Data
Each year, thousands of software vulnerabilities are discovered and reported
to the public. Unpatched known vulnerabilities are a significant security risk.
It is imperative that software vendors provide patches quickly once
vulnerabilities are known and that users install those patches as soon as they
are available. However, most vulnerabilities are never actually exploited.
Since writing, testing, and installing software patches can involve
considerable resources, it would be desirable to prioritize the remediation of
vulnerabilities that are likely to be exploited. Several published research
studies have reported moderate success in applying machine learning techniques
to the task of predicting whether a vulnerability will be exploited. These
approaches typically use features derived from vulnerability databases (such as
the summary text describing the vulnerability) or social media posts that
mention the vulnerability by name. However, these prior studies share several
methodological shortcomings that inflate the apparent predictive power of
their approaches. We replicate key portions of the prior work, compare the
approaches, and show how the selection of training and test data critically
affects the estimated performance of predictive models. The results of this
study point to important methodological considerations that should be taken
into account so that reported results reflect real-world utility.
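
As a hedged sketch of the split-selection issue described above (not the authors' code), the Python fragment below contrasts a random split with a time-ordered split on hypothetical vulnerability records; the column names 'disclosed' and 'exploited' and the choice of classifier are assumptions made for illustration.

    # Minimal sketch (not the authors' code) of how split selection changes
    # the estimated performance of an exploit-prediction model. Assumes a
    # pandas DataFrame with hypothetical columns: 'disclosed' (date),
    # 'exploited' (0/1), plus numeric feature columns.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def compare_splits(df: pd.DataFrame, feature_cols: list) -> tuple:
        X, y = df[feature_cols], df["exploited"]

        # Random split: test vulnerabilities can predate training ones, so
        # information from the "future" leaks into training and inflates AUC.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=0, stratify=y)
        clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
        random_auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

        # Temporal split: train only on earlier disclosures, test on later
        # ones, matching how the model would actually be deployed.
        ordered = df.sort_values("disclosed")
        cut = int(len(ordered) * 0.7)
        train, test = ordered.iloc[:cut], ordered.iloc[cut:]
        clf = RandomForestClassifier(random_state=0).fit(
            train[feature_cols], train["exploited"])
        temporal_auc = roc_auc_score(
            test["exploited"], clf.predict_proba(test[feature_cols])[:, 1])

        return random_auc, temporal_auc

On such data the random split typically reports a higher AUC than the temporal split, which is the kind of inflation the study attributes to training/test selection.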
Operational snow modeling: Addressing the challenges of an energy balance model for National Weather Service forecasts
Prediction of snowmelt has become a critical issue in much of the western United States given the increasing demand for water supply, changing snow cover patterns, and the consequent need for optimal reservoir operation. The increasing importance of hydrologic predictions necessitates that traditional forecasting systems be re-evaluated periodically to ensure that operational systems continue to evolve with scientific advances in hydrology. The National Weather Service (NWS) SNOW17, a conceptually based model used for operational prediction of snowmelt, has been relatively unchanged for decades. In this study, the Snow-Atmosphere-Soil Transfer (SAST) model, which employs the energy balance method, is evaluated against the SNOW17 for the simulation of seasonal snowpack (both accumulation and melt) and basin discharge. We investigate model performance over a 13-year period using data from two basins within the Reynolds Creek Experimental Watershed in southwestern Idaho. Both models are coupled to the NWS runoff model [SACramento Soil Moisture Accounting model (SACSMA)] to simulate basin streamflow. Results indicate that while simulated snowpack and streamflow are similar between the two modeling systems in many years, the SAST more often overestimates snow water equivalent (SWE) during the spring because the model lacks mid-winter melt. The SAST also had more rapid spring melt rates than the SNOW17, leading to larger errors in the timing and amount of discharge on average. In general, the simpler SNOW17 performed consistently well and, in several years, better than the SAST model. Input requirements and related uncertainties, and to a lesser extent calibration, are likely to be the primary factors affecting the implementation of an energy balance model in operational streamflow prediction.
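
For context, conceptual models such as SNOW17 belong to the temperature-index (degree-day) family, whereas SAST resolves the full surface energy balance. The Python sketch below illustrates the degree-day idea only; it is not the operational NWS code, and the melt factor and base temperature values are illustrative assumptions.

    # Simplified degree-day (temperature-index) snowmelt step, the family of
    # methods SNOW17 belongs to. This is not the operational NWS model; the
    # melt factor and base temperature below are illustrative assumptions.
    def degree_day_melt(swe_mm, air_temp_c, melt_factor=3.0, t_base=0.0):
        """Return (melt_mm, remaining_swe_mm) for one daily time step.

        swe_mm      -- snow water equivalent at start of step (mm)
        air_temp_c  -- mean air temperature over the step (deg C)
        melt_factor -- degree-day factor (mm per deg C per day), assumed
        t_base      -- temperature above which melt occurs (deg C)
        """
        potential = max(0.0, melt_factor * (air_temp_c - t_base))
        melt = min(swe_mm, potential)      # cannot melt more than is stored
        return melt, swe_mm - melt

    # Example: a 150 mm SWE snowpack over three progressively warmer days.
    swe = 150.0
    for temp in (-2.0, 1.5, 6.0):
        melt, swe = degree_day_melt(swe, temp)
        print(f"T={temp:+.1f} C  melt={melt:.1f} mm  SWE={swe:.1f} mm")

An energy balance model such as SAST instead resolves net radiation, turbulent heat fluxes, and ground heat flux at each step, which is why its input requirements, and the associated uncertainties, are so much heavier than those of a temperature-index scheme.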
Proceedings of the 2011 New York Workshop on Computer, Earth and Space Science
The purpose of the New York Workshop on Computer, Earth and Space Sciences is
to bring together the New York area's finest astronomers, statisticians,
computer scientists, and space and Earth scientists to explore potential
synergies between their respective fields. The 2011 edition (CESS2011) was a great
success, and we would like to thank all of the presenters and participants for
attending. This year was also special as it included authors from the upcoming
book titled "Advances in Machine Learning and Data Mining for Astronomy". Over
two days, the latest advanced techniques used to analyze the vast amounts of
information now available for the understanding of our universe and our planet
were presented. These proceedings attempt to provide a small window into the
current state of research in this vast interdisciplinary field, and we would
like to thank the speakers who took the time to contribute to this volume.
Comment: Author lists modified. 82 pages. Workshop proceedings from CESS 2011
in New York City, Goddard Institute for Space Studies.
The future of laboratory medicine - A 2014 perspective.
Predicting the future is a difficult task, and unsurprisingly many past predictions and assumptions have proved to be wrong. This review surveys predictions about the future of laboratory medicine and its sub-specialties, such as clinical chemistry and molecular pathology, beginning in 1887. It provides a commentary on the accuracy of those predictions and offers opinions on emerging technologies, economic factors, and social developments that may shape the future of laboratory medicine.
CASP-DM: Context Aware Standard Process for Data Mining
We propose an extension of the Cross Industry Standard Process for Data
Mining (CRISP-DM) that addresses specific challenges of machine learning and
data mining in handling context and model reuse. This new, general
context-aware process model is mapped onto the CRISP-DM reference model,
proposing several new or enhanced outputs.
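
To make the context and model-reuse concern concrete, the sketch below shows one hypothetical way a recorded training context could be checked before a model is reused in a new setting; none of these class or field names are defined by CASP-DM itself.

    # Hypothetical sketch of the context/model-reuse idea; the names below
    # are illustrative assumptions, not artifacts defined by CASP-DM.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ModelContext:
        """Context captured when a model was trained, or where it will run."""
        feature_names: frozenset       # inputs the model expects / offers
        label_name: str                # prediction target
        population: str                # e.g. "EU-retail-2016" (assumed tag)

    def reuse_issues(trained: ModelContext, target: ModelContext) -> list:
        """Return context mismatches; an empty list means reuse looks safe."""
        issues = []
        if not trained.feature_names <= target.feature_names:
            issues.append("target context lacks features the model expects")
        if trained.label_name != target.label_name:
            issues.append("prediction target differs between contexts")
        if trained.population != target.population:
            issues.append("model was trained on a different population")
        return issues

    # Example: a churn model trained on 2016 EU retail data, reused in 2019.
    old = ModelContext(frozenset({"age", "tenure"}), "churn", "EU-retail-2016")
    new = ModelContext(frozenset({"age", "tenure", "plan"}), "churn",
                       "EU-retail-2019")
    print(reuse_issues(old, new))  # population mismatch flagged before reuse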