
    Vulnerability discovery in multiple version software systems: open source and commercial software systems

    Get PDF
    Department Head: L. Darrell Whitley. 2007 Summer. Includes bibliographical references (pages 80-83).
    The vulnerability discovery process for a program describes the rate at which vulnerabilities are discovered. A model of this process can be used to estimate the number of vulnerabilities likely to be discovered in the near future. Past studies have considered vulnerability discovery only for individual software versions, without accounting for the code shared among successive versions or the evolution of the source code. These factors need to be taken into account to estimate future vulnerability discovery trends more accurately. This thesis examines possible approaches for incorporating them into earlier models. We propose a new approach for quantitatively modeling the vulnerability discovery process, based on shared source-code measurements across a multiple-version software system, and examine its applicability using the Apache HTTP web server and the MySQL Database Management System (DBMS). The approach yields a better goodness of fit than the fits reported in previous research. Using this revised discovery process, the superposition effect, which appeared as unexpected vulnerability discoveries in earlier studies, can be explained by the discovery model. The multiple-version vulnerability discovery model (MVDM) shows that the discovery rate differs from that of a single-version vulnerability discovery model (SVDM) once these newly considered factors are included. Based on these results, we create and apply a new SVDM for open-source and commercial software. This single-version discovery process is examined, and model testing shows that the SVDM can serve as an alternative model. The modified vulnerability discovery model is presented to address weaknesses in previous research, and its theoretical basis is discussed to provide a more accurate explanation.
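    For context on what such a discovery model looks like in practice, here is a minimal sketch of fitting a logistic vulnerability discovery model of the kind used in this line of research (the Alhazmi-Malaiya form, Omega(t) = B / (B*C*exp(-A*B*t) + 1)) to cumulative vulnerability counts. The monthly counts and starting parameter values below are illustrative assumptions, not data from the thesis.

```python
# Hedged sketch: fitting a logistic vulnerability discovery model (VDM)
# of the Alhazmi-Malaiya form to cumulative vulnerability counts.
# The `months` and `cumulative` arrays are invented illustrative data.
import numpy as np
from scipy.optimize import curve_fit

def aml_model(t, A, B, C):
    """Alhazmi-Malaiya logistic VDM: Omega(t) = B / (B*C*exp(-A*B*t) + 1).

    B is the total number of vulnerabilities eventually discovered,
    A controls the discovery rate, C sets the initial condition.
    """
    return B / (B * C * np.exp(-A * B * t) + 1.0)

# Hypothetical cumulative vulnerability counts per month for one version.
months = np.arange(1, 25)
cumulative = np.array([1, 2, 2, 4, 6, 9, 13, 18, 24, 30, 36, 41,
                       45, 48, 50, 52, 53, 54, 54, 55, 55, 56, 56, 56])

# Fit the three parameters; reasonable starting guesses help convergence.
(A, B, C), _ = curve_fit(aml_model, months, cumulative,
                         p0=(0.005, 60.0, 1.0), maxfev=10_000)

# Extrapolate: expected cumulative discoveries six months ahead.
print(f"Estimated total vulnerabilities B ~ {B:.1f}")
print(f"Predicted cumulative count at month 30: {aml_model(30, A, B, C):.1f}")
```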

    Some Guidelines for Risk Assessment of Vulnerability Discovery Processes

    Get PDF
    Software vulnerabilities can be defined as software faults that can be exploited in security attacks. Security researchers have used data from vulnerability databases to study trends in the discovery of new vulnerabilities, to propose models fitting the discovery times, and to predict when new vulnerabilities may be discovered. Estimating discovery times for new vulnerabilities is useful both for vendors and for end-users, as it can inform resource-allocation strategies over time. Among the research conducted on vulnerability modeling, only a few studies have tried to provide guidelines about which model should be used in a given situation. In other words, assuming the vulnerability data for a software product is given, the research questions are the following: Is there any feature in the vulnerability data that could be used to identify the most appropriate models for that dataset? Which models are more accurate for modeling the vulnerability discovery process? Can the total number of publicly known exploited vulnerabilities be predicted from all vulnerabilities reported for a given software product? To answer these questions, we propose to characterize the vulnerability discovery process using several common software reliability/vulnerability discovery models, also known as Software Reliability Models (SRMs)/Vulnerability Discovery Models (VDMs). We plan to consider different aspects of vulnerability modeling, including curve fitting and prediction. Some existing SRMs/VDMs lack accuracy in the prediction phase. To remedy the situation, three strategies are considered: (1) finding a new approach for analyzing vulnerability data using common models, i.e., examining whether data-manipulation techniques (clustering, grouping) lead to more accurate predictions; (2) developing a new model with better curve-fitting and prediction capabilities than current models; and (3) developing a new method to predict the total number of publicly known exploited vulnerabilities from all vulnerabilities reported for a given software product. The dissertation is intended to contribute to the science of software reliability analysis and presents guidelines for vulnerability risk assessment that could be integrated into security tools, such as Security Information and Event Management (SIEM) systems.
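    To illustrate the kind of model-selection step such guidelines imply, here is a minimal sketch that fits two candidate discovery curves to the same cumulative counts and picks one by AIC. The data, the candidate models, and the AIC-based selection rule are illustrative assumptions, not the dissertation's method.

```python
# Hedged sketch: comparing two candidate discovery models on the same
# vulnerability data and selecting one by AIC. Data, models, and the
# selection criterion are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, b, c):
    # S-shaped cumulative discovery curve, saturating at b.
    return b / (1.0 + c * np.exp(-a * t))

def linear(t, m, k):
    # Constant-rate discovery baseline.
    return m * t + k

def aic(y, y_hat, n_params):
    # Gaussian-error AIC: n * log(RSS / n) + 2k.
    rss = np.sum((y - y_hat) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * n_params

months = np.arange(1, 25)
cumulative = np.array([1, 2, 2, 4, 6, 9, 13, 18, 24, 30, 36, 41,
                       45, 48, 50, 52, 53, 54, 54, 55, 55, 56, 56, 56])

p_log, _ = curve_fit(logistic, months, cumulative, p0=(0.3, 60.0, 50.0), maxfev=10_000)
p_lin, _ = curve_fit(linear, months, cumulative)

scores = {
    "logistic": aic(cumulative, logistic(months, *p_log), 3),
    "linear": aic(cumulative, linear(months, *p_lin), 2),
}
print(min(scores, key=scores.get), scores)  # the S-shaped model should win here
```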

    Reactive point processes: A new approach to predicting power failures in underground electrical systems

    Full text link
    Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short-term prediction of electrical grid failures ("manhole events"), including outages, fires, explosions and smoking manholes, which can threaten public safety and the reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating and saturating components. Self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. Self-regulation occurs as a result of an external inspection, which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are (i) making continuous-time failure predictions and (ii) cost/benefit analysis for decision making and proactive maintenance. RPPs are naturally suited to both challenges. We use the model to predict power-grid failures in Manhattan over a short-term horizon, and to provide a cost/benefit analysis of different proactive maintenance programs.
    Comment: Published at http://dx.doi.org/10.1214/14-AOAS789 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
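    To make the three components concrete, here is a minimal sketch of an RPP-style conditional intensity with exponentially decaying excitation from past events, decaying regulation from past inspections, and a saturating transfer function bounding each effect. The kernel shapes, parameter values, and saturation function are illustrative assumptions, not the paper's exact specification.

```python
# Hedged sketch: an RPP-style conditional intensity lambda(t) with
# self-excitation (past failures raise risk), self-regulation (past
# inspections lower risk), and saturation (bounded total effect).
# Kernels and parameters are assumptions for illustration only.
import math

def decay_sum(t, times, beta):
    """Sum of exponentially decaying influences from past time stamps."""
    return sum(math.exp(-beta * (t - s)) for s in times if s < t)

def saturate(x, cap):
    """Smoothly bounded effect: approaches `cap` as x grows."""
    return cap * (1.0 - math.exp(-x))

def rpp_intensity(t, event_times, inspection_times,
                  lam0=0.05, beta1=0.1, beta2=0.05, c1=2.0, c2=0.8):
    excite = saturate(decay_sum(t, event_times, beta1), c1)        # past failures
    regulate = saturate(decay_sum(t, inspection_times, beta2), c2) # inspections
    return max(lam0 * (1.0 + excite - regulate), 0.0)  # intensity stays nonnegative

# Usage: risk shortly after an inspection vs. shortly after a failure.
events = [10.0, 12.5]   # hypothetical failure times (e.g., in days)
inspections = [13.0]    # hypothetical inspection time
print(rpp_intensity(13.5, events, inspections))  # regulated, lower
print(rpp_intensity(12.6, events, inspections))  # excited, higher
```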

    Seafloor characterization using airborne hyperspectral co-registration procedures independent from attitude and positioning sensors

    Get PDF
    Remote-sensing technology and data-storage capabilities have advanced over the last decade to the point of commercial multi-sensor data collection. There is a constant need to characterize, quantify and monitor coastal areas for habitat research and coastal management. In this paper, we present work on seafloor characterization that uses hyperspectral imagery (HSI). The HSI data allow the operator to extend seafloor characterization from multibeam backscatter towards land, creating a seamless ocean-to-land characterization of the littoral zone.

    OS diversity for intrusion tolerance: Myth or reality?

    Get PDF
    One of the key benefits of using intrusion-tolerant systems is the possibility of ensuring correct behavior in the presence of attacks and intrusions. These security gains depend directly on the components exhibiting failure diversity. The extent to which failure diversity is observed in practical deployments depends on how diverse the components that constitute the system are. In this paper we present a study of operating system (OS) vulnerability data from the NIST National Vulnerability Database. We analyzed the vulnerabilities of 11 different OSes over a period of roughly 15 years to check how many of these vulnerabilities occur in more than one OS. We found this number to be low for several combinations of OSes. Hence, our analysis provides a strong indication that building a system with diverse OSes may be a useful technique for improving its intrusion-tolerance capabilities.
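    The core counting step such a study implies can be sketched as follows, assuming each NVD entry has already been reduced to a CVE identifier plus the set of OSes it affects; the sample records below are invented, and a real analysis would derive them from NVD CPE data.

```python
# Hedged sketch: counting vulnerabilities shared between OS pairs,
# given CVE -> affected-OS sets (the records here are invented).
from itertools import combinations
from collections import Counter

cve_to_oses = {
    "CVE-2010-0001": {"OpenBSD", "NetBSD", "FreeBSD"},
    "CVE-2011-0002": {"Windows2008"},
    "CVE-2012-0003": {"Debian", "Ubuntu"},
    "CVE-2013-0004": {"Solaris"},
}

shared = Counter()
for oses in cve_to_oses.values():
    for pair in combinations(sorted(oses), 2):
        shared[pair] += 1  # this vulnerability affects both OSes in the pair

# Low counts for a pair suggest the two OSes fail (mostly) independently,
# which is what a diverse, intrusion-tolerant replicated system wants.
for pair, n in shared.most_common():
    print(pair, n)
```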