
    Review of Quantitative Software Reliability Methods

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process for digital systems rests on deterministic engineering criteria. In its 1995 probabilistic risk assessment (PRA) policy statement, the Commission encouraged the use of PRA technology in all regulatory matters to the extent supported by the state-of-the-art in PRA methods and data. Although many activities have been completed in the area of risk-informed regulation, the risk-informed analysis process for digital systems has not yet been satisfactorily developed. Since digital instrumentation and control (I&C) systems are expected to play an increasingly important role in nuclear power plant (NPP) safety, the NRC established a digital system research plan that defines a coherent set of research programs to support its regulatory needs. One of the research programs included in the NRC's digital system research plan addresses risk assessment methods and data for digital systems. Digital I&C systems have some unique characteristics, such as using software, and may have different failure causes and/or modes than analog I&C systems; hence, their incorporation into NPP PRAs entails special challenges. The objective of the NRC's digital system risk research is to identify and develop methods, analytical tools, and regulatory guidance for (1) incorporating models of digital systems into NPP PRAs, and (2) using information on the risks of digital systems to support the NRC's risk-informed licensing and oversight activities. For several years, Brookhaven National Laboratory (BNL) has worked on NRC projects to investigate methods and tools for the probabilistic modeling of digital systems, as documented mainly in NUREG/CR-6962 and NUREG/CR-6997. However, the scope of this research principally focused on hardware failures, with limited reviews of software failure experience and software reliability methods.
NRC also sponsored research at the Ohio State University investigating the modeling of digital systems using dynamic PRA methods. These efforts, documented in NUREG/CR-6901, NUREG/CR-6942, and NUREG/CR-6985, included a functional representation of the system's software but did not explicitly address failure modes caused by software defects or by inadequate design requirements. An important identified research need is to establish a commonly accepted basis for incorporating the behavior of software into digital I&C system reliability models for use in PRAs. To address this need, BNL is exploring the inclusion of software failures into the reliability models of digital I&C systems, such that their contribution to the risk of the associated NPP can be assessed

    A case study in estimating avionics availability from field reliability data

    Under incentivized contractual mechanisms such as availability-based contracts, the support service provider and its customer must share a common understanding of equipment reliability baselines. Emphasis is typically placed on the Information Technology-related solutions for capturing, processing and sharing vast amounts of data. In the case of repairable fielded items, scant attention is paid to the pitfalls within the modelling assumptions that are often endorsed uncritically, and seldom made explicit, during field reliability data analysis. This paper presents a case study in which good practices in reliability data analysis are identified and applied to real-world data with the aim of supporting the effective execution of a defence avionics availability-based contract. The work provides practical guidance on how to make a reasoned choice between available models and methods based on the intelligent exploration of the data available in practical industrial applications
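    The paper's models are not reproduced in the abstract; as a point of reference, the baseline calculation whose assumptions such work cautions against endorsing uncritically can be sketched: steady-state availability estimated from field data as MTBF / (MTBF + MTTR). The function name and sample figures below are hypothetical.

```python
def steady_state_availability(times_between_failures, repair_times):
    """Estimate steady-state availability A = MTBF / (MTBF + MTTR)
    from observed operating intervals and repair durations (hours)."""
    mtbf = sum(times_between_failures) / len(times_between_failures)
    mttr = sum(repair_times) / len(repair_times)
    return mtbf / (mtbf + mttr)

# Hypothetical field data for one repairable item:
tbf = [410.0, 520.0, 390.0, 480.0]   # operating hours between failures
rep = [8.0, 12.0, 10.0, 10.0]        # repair hours per failure
print(round(steady_state_availability(tbf, rep), 4))  # → 0.9783
```

    Note that this point estimate silently assumes identically distributed, independent inter-failure times and repair-to-as-good-as-new behaviour, exactly the kind of implicit modelling assumption the paper urges practitioners to examine.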

    Fault detection and correction modeling of software systems

    Ph.D. thesis (Doctor of Philosophy)

    Research reports: 1991 NASA/ASEE Summer Faculty Fellowship Program

    The basic objectives of the programs, which are in the 28th year of operation nationally, are: (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA Centers. The faculty fellows spent 10 weeks at MSFC engaged in a research project compatible with their interests and background and worked in collaboration with a NASA/MSFC colleague. This is a compilation of their research reports for summer 1991

    Bi-Directional Testing for Change Point Detection in Poisson Processes

    Point processes often serve as a natural language to chronicle an event's temporal evolution, and significant changes in the flow, synonymous with non-stationarity, are usually triggered by assignable and frequently preventable causes, often heralding devastating ramifications. Examples include amplified restlessness of a volcano, increased frequencies of airplane crashes, hurricanes, mining mishaps, among others. Guessing these time points of changes, therefore, merits utmost care. Switching the way time traditionally propagates, we posit a new genre of bidirectional tests which, despite a frugal construct, prove to be exceedingly efficient in culling out non-stationarity under a wide spectrum of environments. A journey surveying a lavish class of intensities, ranging from the tralatitious power laws to the deucedly germane rough steps, tracks the established unidirectional forward and backward test's evolution into a p-value induced dual bidirectional test, the best member of the proffered category. Niched within a hospitable Poissonian framework, this dissertation, through a prudent harnessing of the bidirectional category's classification prowess, incites a refreshing alternative to estimating changes plaguing a soporific flow, by conducting a sequence of tests. Validation tools, predominantly graphical, rid the structure of forbidding technicalities, aggrandizing the swath of applicability. Extensive simulations, conducted especially under hostile premises of hard non-stationarity detection, document minimal estimation error and reveal the algorithm's obstinate versatility at its most unerring
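    The dissertation's bidirectional tests are not spelled out in the abstract. As a generic point of comparison only, the classical single-change-point setup it improves on can be sketched: scan candidate change points of a Poisson process observed on [0, T] and keep the one maximizing the two-rate log-likelihood. The function name and data below are illustrative, not from the dissertation.

```python
import math

def poisson_changepoint(event_times, T):
    """Return the candidate change point tau (taken at interior event
    times) maximizing the log-likelihood of a piecewise-constant-rate
    Poisson process: rate lam1 on (0, tau], rate lam2 on (tau, T]."""
    n = len(event_times)
    best_tau, best_ll = None, -math.inf
    for tau in event_times[1:-1]:        # interior candidates only
        n1 = sum(1 for t in event_times if t <= tau)
        n2 = n - n1
        if n1 == 0 or n2 == 0 or tau <= 0 or tau >= T:
            continue
        lam1, lam2 = n1 / tau, n2 / (T - tau)
        ll = (n1 * math.log(lam1) - lam1 * tau
              + n2 * math.log(lam2) - lam2 * (T - tau))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# Sparse events (rate ~1/hr) that accelerate sharply after t = 5:
events = [1.0, 2.0, 3.0, 4.0, 5.0,
          5.2, 5.4, 5.6, 5.8, 6.0, 6.2, 6.4, 6.6, 6.8, 7.0]
print(poisson_changepoint(events, T=7.5))  # → 5.0
```

    A unidirectional scan like this is exactly the forward test whose bidirectional refinement the dissertation develops.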

    Wildland Fire Mid-story: A generative modeling approach for representative fuels

    Computational models for understanding and predicting fire in wildland and managed lands are increasing in impact. Data characterizing the fuels and environment is needed to continue improvement in the fidelity and reliability of fire outcomes. This paper addresses a gap in the characterization and population of mid-story fuels, which are not easily observable either through traditional survey, where data collection is time consuming, or with remote sensing, where the mid-story is typically obscured by the forest canopy. We present a methodology for populating a mid-story using a generative model for fuel placement that captures key concepts of spatial density and heterogeneity that vary by regional or local environmental conditions. The advantage of using a parameterized generative model is the ability to calibrate (or `tune') the generated fuels based on comparison to limited observation datasets or with expert guidance, and we show how this generative model can balance information from these sources to capture the essential characteristics of the wildland fuels environment. In this paper we emphasize the connection of terrestrial LiDAR (TLS) as the observations used to calibrate the generative model, as TLS is a promising method for supporting forest fuels assessment. Code for the methods in this paper is available. Comment: 21 pages, 9 figures. Code available at: https://github.com/LANL/fuelsge
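    The paper's generative model is not specified in the abstract. A minimal sketch of the general idea, a parameterized parent-child (cluster) point process whose knobs could in principle be calibrated against TLS observations, might look like the following; every name and parameter value here is hypothetical.

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's method; adequate for small positive rates.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def generate_midstory(n_clusters, mean_per_cluster, spread, extent, seed=0):
    """Toy cluster point process for mid-story fuel placement: parent
    stems are placed uniformly in an extent x extent plot, and fuel
    elements are scattered normally around each parent. The three
    parameters (cluster count, offspring mean, spread) are the tunable
    knobs a calibration step would adjust against observations."""
    rng = random.Random(seed)
    points = []
    for _ in range(n_clusters):
        px, py = rng.uniform(0, extent), rng.uniform(0, extent)
        for _ in range(sample_poisson(rng, mean_per_cluster)):
            points.append((rng.gauss(px, spread), rng.gauss(py, spread)))
    return points

fuels = generate_midstory(n_clusters=20, mean_per_cluster=4,
                          spread=1.5, extent=100.0, seed=1)
print(len(fuels))  # total fuel elements; varies with the seed
```

    Calibration would then compare summary statistics of the generated pattern (density, clustering) against TLS-derived ones and adjust the parameters, which is the spirit, though not the substance, of the paper's method.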

    Managed access dependability for critical services in wireless inter domain environment

    The Information and Communications Technology (ICT) industry has, through the last decades, changed and still continues to affect the way people interact with each other and how they access and share information, services and applications in a global market characterized by constant change and evolution. For a networked and highly dynamic society, with consumers and market actors providing infrastructure, networks, services and applications, the mutual dependencies of failure-free operations are getting more and more complex. Service Level Agreements (SLAs) between the various actors and users may be used to describe the offerings along with price schemes and promises regarding the delivered quality. However, there is no guarantee of failure-free operation, whatever efforts and means are deployed. A system fails for a number of reasons, but automatic fault handling mechanisms and operational procedures may be used to decrease the probability of service interruptions. The global number of mobile broadband Internet subscriptions surpassed the number of broadband subscriptions over fixed technologies in 2010. The User Equipment (UE) has become a powerful device supporting a number of wireless access technologies, and the always best connected opportunities have become a reality. Some services, e.g. health care, smart power grid control, surveillance/monitoring etc., called critical services in this thesis, put high requirements on service dependability. A definition of dependability is the ability to deliver services that can justifiably be trusted. For critical services, the access networks become crucial factors for achieving high dependability. A major challenge in a multi-operator, multi-technology wireless environment is the mobility of the user, which necessitates handovers according to the physical movement. This thesis proposes an approach for optimizing the dependability of critical services in a multi-operator, multi-technology wireless environment.
    This approach allows the service availability and continuity to be predicted in real time. Predictions of the optimal service availability and continuity are considered crucial for critical services. To increase the dependability of critical services, dual homing is proposed, where the use of combinations of access points, possibly owned by different operators and using different technologies, is optimized for the specific location and movement of the user. A central part of the thesis is how to ensure the disjointedness of physical and logical resources, which is essential for realizing the dependability increase that dual homing can potentially provide. To address the interdependency issues between physical and logical resources, a study of Operations, Administration, and Maintenance (OA&M) processes related to the access network of a commercial Global System for Mobile Communications (GSM)/Universal Mobile Telecommunications System (UMTS) operator was performed. The insight obtained by the study provided valuable information about the interwoven dependencies between different actors in the delivery chain of services. Based on the insight gained from the study of OA&M processes, a technology-neutral information model of physical and logical resources in the access networks is proposed. The model is used for service availability and continuity prediction and to unveil interdependencies between resources of the infrastructure. The model is proposed as an extension of the Media Independent Handover (MIH) framework. A field trial in a commercial network was conducted to verify the feasibility of retrieving the model-related information from the operators' Operational Support Systems (OSSs) and to emulate the extension and usage of the MIH framework. The thesis further proposes how measurement reports from the UE and signaling in the networks can be used to define virtual cells as part of the proposed extension of the MIH framework.
    Virtual cells are limited geographical areas where the radio conditions are homogeneous. Virtual cells have radio coverage from a number of access points. A Markovian model is proposed for predicting the service continuity of a dual-homed critical service, where both the infrastructure and the radio links are considered. A dependability gain is obtained by choosing a globally optimal sequence of access points. Great emphasis has been placed on developing computationally efficient techniques and near-optimal solutions, considered important for being able to predict service continuity in real time for critical services. The proposed techniques for obtaining the globally optimal sequence of access points may be used by handover and multi-homing mechanisms/protocols for timely handover decisions and access point selections. With the proposed extension of the MIH framework, a globally optimal sequence of access points providing the highest reliability may be predicted in real time
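    The thesis's Markovian continuity model is not reproduced in the abstract, but the basic dependability gain from dual homing, and why disjointedness of resources matters so much, can be shown with a back-of-the-envelope steady-state availability calculation. The function and figures below are illustrative assumptions, not the thesis's model.

```python
def dual_homed_availability(a1, a2, a_shared=1.0):
    """Steady-state availability seen by a dual-homed service: the
    service is down only when both access points are down, scaled by
    the availability of any resource the two paths share.
    a_shared = 1.0 models fully disjoint paths; a_shared < 1.0 models
    a common physical or logical dependency."""
    return a_shared * (1.0 - (1.0 - a1) * (1.0 - a2))

# Two 99%-available access points with disjoint infrastructure:
disjoint = dual_homed_availability(0.99, 0.99)
# The same pair, but sharing a 99.9%-available transport resource:
shared = dual_homed_availability(0.99, 0.99, a_shared=0.999)
print(round(disjoint, 6), round(shared, 6))
```

    With disjoint paths the pair reaches roughly "four nines", while a single shared 99.9% resource caps the combination near 99.89%: hidden interdependencies, not the individual access points, dominate the result, which is why the thesis invests in unveiling them.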

    Opportunity costs calculation in agent-based vehicle routing and scheduling

    In this paper we consider a real-time, dynamic pickup and delivery problem with time windows, where orders should be assigned to one of a set of competing transportation companies. Our approach decomposes the problem into a multi-agent structure, where vehicle agents are responsible for the routing and scheduling decisions and the assignment of orders to vehicles is done using a second-price auction. The system performance will therefore be heavily dependent on the pricing strategy of the vehicle agents. We propose a pricing strategy for vehicle agents based on dynamic programming, where not only the direct cost of a job insertion is taken into account, but also its impact on future opportunities. We also propose a waiting strategy based on the same opportunity valuation. Simulation is used to evaluate the benefit of pricing opportunities compared to simple pricing strategies in different market settings. Numerical results show that the proposed approach provides high quality solutions, in terms of profits, capacity utilization and delivery reliability
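    The opportunity-cost valuation is the paper's contribution and is not reproduced here, but the auction mechanism it plugs into, a reverse second-price (Vickrey) award of an order to the cheapest vehicle quote, can be sketched as follows; the names and figures are illustrative.

```python
def second_price_auction(bids):
    """Award the order to the lowest bidder; the payment equals the
    second-lowest bid (reverse Vickrey auction). `bids` maps vehicle
    id -> quoted insertion cost, which, in the paper's spirit, would
    include an opportunity-cost term on top of the direct detour cost."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]   # second-lowest quote sets the price
    return winner, payment

bids = {"veh_A": 42.0, "veh_B": 37.5, "veh_C": 55.0}
winner, price = second_price_auction(bids)
print(winner, price)  # → veh_B 42.0
```

    The second-price rule is what makes truthful quoting of the full (direct plus opportunity) cost the vehicle agents' dominant strategy, which is why the quality of the opportunity valuation drives system performance.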