
    Toward Non-security Failures as a Predictor of Security Faults and Failures

    In the search for metrics that can predict the presence of vulnerabilities early in the software life cycle, there may be some benefit to choosing metrics from the non-security realm. We analyzed non-security and security failure data reported during 2007 for a Cisco software system. We used non-security failure reports as input variables to a classification and regression tree (CART) model to determine the probability that a component will have at least one vulnerability. Using CART, we ranked all of the system components in descending order of their probabilities and found that 57% of the vulnerable components were in the top nine percent of the total component ranking, but with a 48% false positive rate. The results indicate that non-security failures can be used as one of the input variables for security-related prediction models.
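    The abstract describes ranking components by a CART-estimated probability of containing at least one vulnerability. The sketch below illustrates that workflow on synthetic data; the feature choices, component counts, and all numbers are invented for illustration and are not the paper's data or code.

```python
# Illustrative sketch (not the paper's code): rank components by predicted
# probability of containing at least one vulnerability, using non-security
# failure counts as inputs to a CART-style decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_components = 200

# Hypothetical per-component features: counts of non-security failure reports.
X = rng.poisson(lam=[3.0, 1.5, 0.8], size=(n_components, 3))
# Hypothetical label: 1 if the component had at least one reported vulnerability.
y = (rng.random(n_components) < 0.1 + 0.05 * X[:, 0]).astype(int)

cart = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10, random_state=0)
cart.fit(X, y)

# Rank components in descending order of predicted vulnerability probability.
prob = cart.predict_proba(X)[:, 1]
ranking = np.argsort(-prob)

# Inspect how many known-vulnerable components fall in the top 9% of the ranking.
top = ranking[: int(0.09 * n_components)]
print("vulnerable components captured in top 9%:", y[top].sum(), "of", y.sum())
```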

    Investigation of mechanistic deterioration modeling for bridge design and management

    2017 Spring. Includes bibliographical references. The ongoing deterioration of highway bridges in Colorado dictates that an effective method for allocating limited management resources be developed. In order to predict bridge deterioration in advance, mechanistic models, which analyze the physical processes causing deterioration, are capable of supplementing purely statistical models and addressing limitations associated with bridge inspection data and statistical methods. A review of existing analytical models in the literature was conducted. Due to its prevalence throughout the state of Colorado and its frequent need for repair, corrosion-induced cracking of reinforced concrete (RC) decks was selected as the mode of deterioration for further study. A mechanistic model was developed to predict corrosion and concrete cracking as a function of material and environmental inputs. The model was modified to include the effects of epoxy-coated rebar, waterproofing membranes, asphalt overlays, joint deterioration, and deck maintenance. Probabilistic inputs were applied to simulate the inherent randomness associated with deterioration. Model results showed that mechanistic models may be able to address limitations of statistical models and provide a more accurate and precise prediction of bridge degradation in advance. Preventative maintenance may provide longer bridge deck service life with fewer total maintenance actions than current methods. However, experimental study of specific deterioration processes and additional data collection are needed to validate model predictions. Maintenance histories of existing bridges are necessary for predicting bridge deterioration and improving bridge design and management in the future.
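    A common mechanistic sub-model for corrosion-induced deck cracking is the time to corrosion initiation from chloride ingress under Fick's second law, with probabilistic inputs to capture randomness. The sketch below illustrates that idea only; the cover, diffusion coefficient, and chloride thresholds are generic illustrative values, not the thesis's calibrated inputs.

```python
# Minimal sketch of a probabilistic mechanistic sub-model: time to corrosion
# initiation of deck reinforcement from chloride ingress (Fick's second law),
# with Monte Carlo sampling of uncertain inputs.
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(1)
n = 10_000

cover = rng.normal(60e-3, 10e-3, n)          # concrete cover depth [m]
D = rng.lognormal(np.log(5e-12), 0.4, n)     # chloride diffusion coefficient [m^2/s]
Cs, Ccr = 3.5, 0.7                           # surface / critical chloride content [kg/m^3]

# Invert C(x,t) = Cs * (1 - erf(x / (2*sqrt(D*t)))) for the time at which the
# chloride content at the rebar depth reaches the critical value Ccr.
z = erfinv(1.0 - Ccr / Cs)
t_init_years = cover**2 / (4.0 * D * z**2) / (365.25 * 24 * 3600)

print("median time to corrosion initiation [yr]:", round(np.median(t_init_years), 1))
print("10th percentile [yr]:", round(np.percentile(t_init_years, 10), 1))
```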

    Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners (Second Edition)

    Probabilistic Risk Assessment (PRA) is a comprehensive, structured, and logical analysis method aimed at identifying and assessing risks in complex technological systems for the purpose of cost-effectively improving their safety and performance. NASA's objective is to better understand and effectively manage risk, and thus more effectively ensure mission and programmatic success, and to achieve and maintain high safety standards at NASA. NASA intends to use risk assessment in its programs and projects to support optimal management decision making for the improvement of safety and program performance. In addition to using quantitative/probabilistic risk assessment to improve safety and enhance the safety decision process, NASA has incorporated quantitative risk assessment into its system safety assessment process, which until now has relied primarily on a qualitative representation of risk. NASA has also recently adopted the Risk-Informed Decision Making (RIDM) process [1-1] as a valuable addition to supplement existing deterministic and experience-based engineering methods and tools. Over the years, NASA has been a leader in most of the technologies it has employed in its programs, and one would think that PRA should be no exception. In fact, it would be natural for NASA to be a leader in PRA because, as a technology pioneer, NASA uses risk assessment and management implicitly or explicitly on a daily basis. NASA has probabilistic safety requirements (thresholds and goals) for crew transportation system missions to the International Space Station (ISS) [1-2], and intends to have probabilistic requirements for any new human spaceflight transportation system acquisition. Methods to perform risk and reliability assessment originated in U.S. aerospace and missile programs in the early 1960s; fault tree analysis (FTA) is an example. It would have been a reasonable extrapolation to expect that NASA would also become the world leader in the application of PRA. That was, however, not to happen. Early in the Apollo program, estimates of the probability of a successful round-trip human mission to the Moon yielded disappointingly low (and suspect) values, and NASA was discouraged from performing further quantitative risk analyses until some two decades later, when the methods had become more refined, rigorous, and repeatable. Instead, NASA decided to rely primarily on the Hazard Analysis (HA) and Failure Modes and Effects Analysis (FMEA) methods for system safety assessment.

    Condition Assessment Models for Sewer Pipelines

    Underground pipeline systems are complex infrastructure with significant social, environmental, and economic impact, and sewer pipeline networks are considered an extremely expensive asset. This study aims to develop condition assessment models for sewer pipeline networks. Seventeen factors affecting the condition of sewer networks were considered for gravity pipelines, in addition to the operating pressure for pressurized pipelines. Two different methodologies were adopted for model development: the first uses an integrated Fuzzy Analytic Network Process (FANP) and Monte Carlo simulation, and the second uses FANP, fuzzy set theory (FST), and Evidential Reasoning (ER). The models’ output is the assessed pipeline condition. In order to collect the necessary data for developing the models, questionnaires were distributed among experts in sewer pipelines in the state of Qatar. In addition, actual data for an existing sewage network in the state of Qatar was used to validate the models’ outputs. The “Ground Disturbance” factor was found to be the most influential, followed by the “Location” factor, with weights of 10.6% and 9.3% for pipelines under gravity and 8.8% and 8.6% for pipelines under pressure, respectively. On the other hand, the least influential factor was “Length”, followed by “Diameter”, with weights of 2.2% and 2.5% for pipelines under gravity and 2.5% and 2.6% for pipelines under pressure. The developed models were able to satisfactorily assess the conditions of deteriorating sewer pipelines, with an average validity of approximately 85% for the first approach and 86% for the second approach. The developed models are expected to be a useful tool for decision makers to properly plan inspections and provide effective rehabilitation of sewer networks. Acknowledgements: (1) NPRP grant # (NPRP6-357-2-150) from the Qatar National Research Fund (a member of Qatar Foundation); (2) Tarek Zayed, Professor of Civil Engineering at Concordia University, for his support in the analysis; and (3) the Public Works Authority of Qatar (ASHGAL) for their support in data collection.
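    The first approach combines FANP-derived factor weights with Monte Carlo simulation to produce a condition estimate. The sketch below shows only that aggregation step, reusing a few of the reported gravity-pipeline weights; the factor scores, their distributions, and the lumping of the remaining factors are hypothetical stand-ins, not the study's model.

```python
# Rough sketch of weight-based condition aggregation with Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(2)

weights = {
    "ground_disturbance": 0.106,   # reported as most influential (gravity pipelines)
    "location":           0.093,
    "diameter":           0.025,
    "length":             0.022,
    "other_factors":      0.754,   # remaining factors aggregated for this sketch
}

# Hypothetical factor scores on a 0 (worst) to 10 (best) scale, with
# triangular distributions expressing expert uncertainty.
scores = {
    "ground_disturbance": rng.triangular(2, 4, 6, 10_000),
    "location":           rng.triangular(5, 7, 9, 10_000),
    "diameter":           rng.triangular(6, 8, 10, 10_000),
    "length":             rng.triangular(6, 7, 9, 10_000),
    "other_factors":      rng.triangular(4, 6, 8, 10_000),
}

condition = sum(w * scores[f] for f, w in weights.items())
print("mean condition index:", round(condition.mean(), 2))
print("5th-95th percentile:", np.percentile(condition, [5, 95]).round(2))
```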

    Scheduling and shop floor control in commercial airplane manufacturing

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2005. Includes bibliographical references (p. 73-75). Boeing is the premier manufacturer of commercial jetliners and a leader in defense and space systems. Competition in commercial aircraft production is increasing, and in order to retain its competitive position, Boeing must strive to improve its operations by reducing costs. Boeing factories today still schedule and monitor the shop floor much as they have for the past 100 years. This thesis compares and contrasts several different methods for shop floor control and scheduling, including Boeing's barcharts, the Toyota production system, critical chain, and dynamic scheduling. Each system is analyzed with respect to how it handles variability in the labor output required and how that affects which products are typically made under each system. In addition to qualitative comparisons, discrete event simulations comparing the various strategies are presented, and areas for future simulation study are discussed. The recommended approach for commercial airplane assembly is critical chain. A suggested implementation plan is presented along with methods to ease acceptance. By Vikram Neal Sahney.
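    The contrast between per-task padding and critical chain's shared project buffer lends itself to a small Monte Carlo experiment. The sketch below is not from the thesis; the number of tasks, the lognormal duration model, and the buffer sizing rule are all illustrative assumptions.

```python
# Illustrative comparison of two ways to quote a completion date for a serial
# assembly line with variable labor content: padding every task vs. aggressive
# estimates plus one shared project buffer (the critical-chain idea).
import numpy as np

rng = np.random.default_rng(3)
n_tasks, n_runs = 12, 20_000

# Actual task durations: lognormal variability around a 10-hour median.
durations = rng.lognormal(mean=np.log(10.0), sigma=0.35, size=(n_runs, n_tasks))
actual_total = durations.sum(axis=1)

# "Barchart" style: each task individually padded to its 90th percentile,
# and the quoted date is the sum of the padded estimates.
p90_task = np.exp(np.log(10.0) + 0.35 * 1.2816)      # lognormal 90th percentile
quote_padded = n_tasks * p90_task

# Critical-chain style: median estimates plus one shared project buffer,
# here sized at half of the total per-task padding.
quote_ccpm = n_tasks * 10.0 + 0.5 * (quote_padded - n_tasks * 10.0)

for name, quote in [("padded tasks", quote_padded), ("critical chain", quote_ccpm)]:
    on_time = (actual_total <= quote).mean()
    print(f"{name}: quoted {quote:.0f} h, met in {on_time:.1%} of runs")
```

    The aggregated buffer typically quotes a much earlier date while losing little schedule reliability, which is the qualitative argument for critical chain made in the thesis.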

    Sustainable government policy as silver bullet to sustainable business incubation performance In Nigeria

    Business incubation has variously been described as a support programme that assists early-stage entrepreneurs in developing and eventually standing on their own. Furthermore, business incubation programmes have been acknowledged as an economic development tool adopted by most countries globally. The aim of this study is to examine the contribution of government policy to the relationship between the critical success factors (CSFs) and incubator performance in Nigeria. The questionnaire method of data collection was used to gather 113 usable questionnaires from incubatees in Nigeria’s business incubators. Structural Equation Modeling (SEM) was performed using Partial Least Squares (PLS) software. Government policy did not show a significant moderating effect on the relationship between the CSFs and incubator performance.
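    The moderation hypothesis being tested amounts to asking whether a CSF-by-policy interaction term is significant. The study itself used PLS-SEM; the sketch below uses an ordinary interaction-term regression on fully synthetic data as a stand-in, with hypothetical variable names.

```python
# Minimal sketch of the moderation logic: is the CSF x policy interaction
# a significant predictor of incubator performance?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 113  # same sample size as the study; the data here are entirely simulated

csf = rng.normal(size=n)                     # composite critical-success-factor score
policy = rng.normal(size=n)                  # perceived government-policy support
performance = 0.5 * csf + 0.2 * policy + rng.normal(scale=1.0, size=n)

# Moderation is assessed via the significance of the interaction coefficient.
X = sm.add_constant(np.column_stack([csf, policy, csf * policy]))
model = sm.OLS(performance, X).fit()
print(model.summary(xname=["const", "csf", "policy", "csf_x_policy"]))
```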

    Quantification of uncertainty in probabilistic safety analysis

    This thesis develops methods for quantification and interpretation of uncertainty in probabilistic safety analysis, focusing on fault trees. The output of a fault tree analysis is, usually, the probability of occurrence of an undesirable event (top event) calculated from the failure probabilities of identified basic events. The standard method for evaluating the uncertainty distribution is Monte Carlo simulation, but this is a computationally intensive approach to uncertainty estimation and does not readily reveal the dominant reasons for the uncertainty. A closed-form approximation for the fault tree top event uncertainty distribution, for models using only lognormal distributions for model inputs, is developed in this thesis. Its output is compared with the output from two sampling-based approximation methods: standard Monte Carlo analysis, and Wilks’ method, which is based on order statistics using small sample sizes. Wilks’ method can be used to provide an upper bound for the percentiles of the top event distribution, and is computationally cheap. The combination of the lognormal approximation and Wilks’ method can be used to give, respectively, the overall shape of the distribution and high-confidence bounds on particular percentiles of interest. This is an attractive, practical option for the evaluation of uncertainty in fault trees and, more generally, in certain multilinear models. A new practical method of ranking uncertainty contributors in lognormal models, based on cutset uncertainty, is developed which can be evaluated in closed form. The method is demonstrated via examples, including a simple fault tree model and a model the size of a commercial PSA model for a nuclear power plant. Finally, the quantification of “hidden uncertainties” is considered; hidden uncertainties are those which are not typically considered in PSA models, but may contribute considerable uncertainty to the overall results if included. A specific example of the inclusion of a missing uncertainty is explained in detail, and the effects on PSA quantification are considered. It is demonstrated that the effect on the PSA results can be significant, potentially permuting the order of the most important cutsets, which is of practical concern for the interpretation of PSA models. Suggestions are also made for the identification and inclusion of further hidden uncertainties.
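    The contrast between full Monte Carlo and Wilks' small-sample bound can be shown on a toy fault tree. In the sketch below the tree (two minimal cutsets of two basic events each), the lognormal medians, and the error factors are invented for illustration and are not the thesis's models.

```python
# Hedged sketch: top-event uncertainty for a toy fault tree with lognormal
# basic-event probabilities, via full Monte Carlo and via Wilks' method.
import numpy as np

rng = np.random.default_rng(5)

def sample_top_event(n):
    """Sample the top-event probability P(C1 or C2), with C1 = A*B, C2 = C*D."""
    # Basic-event probabilities: lognormal with given medians and error factors.
    medians = np.array([1e-3, 2e-2, 5e-4, 1e-2])
    error_factors = np.array([3.0, 5.0, 3.0, 10.0])
    sigmas = np.log(error_factors) / 1.6449   # EF = exp(1.645 * sigma)
    p = np.exp(np.log(medians) + sigmas * rng.normal(size=(n, 4)))
    c1, c2 = p[:, 0] * p[:, 1], p[:, 2] * p[:, 3]
    return c1 + c2 - c1 * c2                  # exact union of two disjoint cutsets

# Full Monte Carlo: expensive, but gives the whole distribution.
full = sample_top_event(100_000)
print("MC 95th percentile:", np.percentile(full, 95))

# Wilks' method: 59 samples give a one-sided 95%/95% upper bound on the
# 95th percentile (smallest n with 1 - 0.95**n >= 0.95).
wilks = sample_top_event(59)
print("Wilks 95/95 upper bound:", wilks.max())
```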

    Use of Petri Nets to Manage Civil Engineering Infrastructures

    Over recent years there has been a shift in investment and effort within the construction sector in the most developed countries. On the one hand, these countries have, over the last decades, built infrastructure able to respond to current needs, reducing the need for investment in new infrastructure now and in the near future. On the other hand, most of this infrastructure presents clear signs of deterioration, making it fundamental to invest correctly in its recovery. The ageing of infrastructure, together with the scarce budgets available for maintenance and rehabilitation, is the main reason for the development of decision support tools as a means to maximize the impact of investments. The objective of the present work is to develop a methodology for optimizing maintenance strategies, considering the available information on infrastructure degradation and the impact of maintenance in economic terms and loss of functionality, and making possible the implementation of a management system transversal to different types of civil engineering infrastructure. The deterioration model is based on the concept of timed Petri nets. The maintenance model was built from the deterioration model and includes the inspection, maintenance, and renewal processes. The optimization of maintenance is performed through genetic algorithms. The deterioration and maintenance model was applied to components of two types of infrastructure: bridges (pre-stressed concrete decks and bearings) and buildings (ceramic claddings). The complete management system was used to analyse a section of a road network. All examples are based on Portuguese data.
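    A timed Petri net deterioration model can be pictured as condition-state places, a token marking the current state, and stochastic timed transitions moving the token toward worse states, with inspection and repair transitions moving it back. The sketch below illustrates that mechanism only; the state names, sojourn times, inspection interval, and repair rule are illustrative assumptions, not the thesis's calibrated model.

```python
# Minimal sketch of a timed Petri net for condition deterioration of a single
# component, with periodic inspection and a simple repair transition.
import numpy as np

rng = np.random.default_rng(6)

places = ["A", "B", "C", "D", "E"]                        # condition states, best to worst
mean_sojourn = {"A": 8.0, "B": 6.0, "C": 5.0, "D": 4.0}   # mean years in each state
inspection_interval = 2.0                                  # years between inspections
repair_threshold = "D"                                     # repair if state D or worse

def simulate(horizon=60.0):
    t, marking, repairs = 0.0, "A", []
    next_inspection = inspection_interval
    while t < horizon:
        # Enabled deterioration transition (none once the worst state is reached).
        delay = rng.exponential(mean_sojourn[marking]) if marking != "E" else np.inf
        if t + delay < next_inspection:
            t += delay
            marking = places[places.index(marking) + 1]    # fire deterioration transition
        else:
            t = next_inspection
            next_inspection += inspection_interval
            if places.index(marking) >= places.index(repair_threshold):
                repairs.append((t, marking))
                marking = "B"                              # fire repair transition
    return repairs

counts = [len(simulate()) for _ in range(1_000)]
print("mean number of repairs over 60 years:", np.mean(counts))
```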

    Development of failure frequency, shelter and escape models for dense phase carbon dioxide pipelines

    PhD thesis. Carbon Capture and Storage (CCS) is recognised as one of a suite of solutions required to reduce carbon dioxide (CO2) emissions into the atmosphere and prevent catastrophic global climate change. In CCS schemes, CO2 is captured from large scale industrial emitters and transported, predominantly by pipeline, to geological sites, such as depleted oil or gas fields or saline aquifers, where it is injected into the rock formation for storage. The requirement to develop a robust Quantitative Risk Assessment (QRA) methodology for high pressure CO2 pipelines has been recognised as critical to the implementation of CCS. Consequently, failure frequency and consequence models are required that are appropriate for high pressure CO2 pipelines. This thesis addresses key components of both the failure frequency and consequence parts of the QRA methodology. On the failure frequency side, a predictive model has been developed to estimate the failure frequency of a high pressure CO2 pipeline due to third party external interference. The model has been validated for the design requirements of high pressure CO2 pipelines by showing that it is applicable to thick-wall linepipe. Additional validation has been provided through comparison between model predictions, historical data, and the existing industry standard failure frequency model, FFREQ. On the consequence side, models have been developed to describe the impact of CO2 on people sheltering inside buildings, and on those attempting to escape on foot, during a pipeline release event. The models have been coupled to the results of a dispersion analysis from a pipeline release under different environmental conditions to demonstrate how the consequence data required as input to the QRA can be determined. In each model, both constant and changing external concentrations of CO2 have been considered and the toxic effects on people predicted. It has been shown that the models can be used to calculate safe distances in the event of a CO2 pipeline release.
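    A shelter consequence model of the kind described typically tracks how the indoor CO2 concentration rises toward the outdoor cloud concentration through building infiltration. The sketch below shows only that standard single-zone air-exchange relation for a constant outdoor concentration; the exchange rate, concentrations, and exposure times are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of an indoor CO2 build-up (shelter) sub-model: a single-zone
# building exposed to a constant outdoor cloud, governed by one air-exchange rate.
import numpy as np

ambient_co2 = 0.04          # normal indoor CO2 concentration [% v/v]
outdoor_co2 = 10.0          # outdoor concentration during the release [% v/v]
air_changes_per_hour = 0.5  # building infiltration rate [1/h]

def indoor_concentration(t_hours):
    """C_in(t) for constant C_out: C_out + (C_in0 - C_out) * exp(-lambda * t)."""
    return outdoor_co2 + (ambient_co2 - outdoor_co2) * np.exp(-air_changes_per_hour * t_hours)

for t in [0.25, 0.5, 1.0, 2.0]:
    print(f"after {t:4.2f} h indoors: CO2 = {indoor_concentration(t):.2f} % v/v")
```

    Coupling such indoor concentration histories to a toxic-effect criterion is what allows the shelter model to feed consequence data into the QRA, as the abstract describes.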