
    EVMDD-Based Analysis and Diagnosis Methods of Multi-State Systems with Multi-State Components

    A multi-state system with multi-state components is a system model in which performance, capacity, or reliability levels are represented as states. Such a system usually has more than two states and can therefore be described by a multi-valued function called a structure function. Since many structure functions are monotone increasing, their multi-state systems can be represented compactly by edge-valued multi-valued decision diagrams (EVMDDs). This paper presents an EVMDD-based analysis method for multi-state systems with multi-state components. Experimental results show that EVMDDs represent structure functions more compactly than existing methods using ordinary MDDs, while yielding comparable computation time for system analysis. The paper also proposes a new EVMDD-based diagnosis method and shows that it can infer the most probable causes of system failures more efficiently than conventional methods based on Bayesian networks.
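For context, a minimal Python sketch of what a structure function is and what an EVMDD would compute compactly. This is not taken from the paper; the two-component system, state spaces, and probabilities are illustrative assumptions, and the brute-force enumeration stands in for the graph traversal an EVMDD enables.

```python
from itertools import product

def structure_function(x1: int, x2: int) -> int:
    """System state as a function of component states (monotone increasing)."""
    return min(x1, x2)

# Hypothetical per-component state probabilities (states 0, 1, 2).
p1 = {0: 0.1, 1: 0.3, 2: 0.6}
p2 = {0: 0.2, 1: 0.3, 2: 0.5}

# Brute-force system-state distribution; an EVMDD encodes the same function
# compactly and allows this computation via a traversal of the diagram.
dist = {s: 0.0 for s in range(3)}
for x1, x2 in product(p1, p2):
    dist[structure_function(x1, x2)] += p1[x1] * p2[x2]

print(dist)  # {0: 0.28, 1: 0.42, 2: 0.30}
```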

    Managing a portfolio of risks


    Critical Asset and Portfolio Risk Analysis for Homeland Security

    Providing a defensible basis for allocating resources for critical infrastructure and key resource protection is an important and challenging problem. Investments can be made in countermeasures that improve the security and hardness of a potential target exposed to a security hazard, in deterrence measures that decrease the likelihood of a security event, and in capabilities to mitigate human, economic, and other types of losses following an incident. Multiple threat types must be considered, ranging from natural hazards and industrial accidents to human-caused security threats. In addition, investment decisions can be made at multiple levels of abstraction and leadership, from tactical decisions for real-time protection of assets to operational and strategic decisions affecting individual assets or assets comprising a region or sector. The objective of this research is to develop a probabilistic risk analysis methodology for critical asset protection, called Critical Asset and Portfolio Risk Analysis (CAPRA), that supports operational and strategic resource allocation decisions at any level of leadership or system abstraction. The CAPRA methodology consists of six analysis phases: scenario identification, consequence and severity assessment, overall vulnerability assessment, threat probability assessment, actionable risk assessment, and benefit-cost analysis. The results from the first four phases combine in the fifth phase to produce actionable risk information that informs decision makers on where to focus attention for cost-effective risk reduction. If the risk is determined to be unacceptable and potentially mitigable, the sixth phase offers methods for conducting a probabilistic benefit-cost analysis of alternative risk mitigation strategies. Several case studies are provided to demonstrate the methodology, including an asset-level analysis that leverages systems reliability analysis techniques and a regional-level portfolio analysis that leverages techniques from approximate reasoning. The main achievements of this research are threefold. First, it develops methods for security risk analysis that specifically accommodate the dynamic behavior of intelligent adversaries, including their tendency to shift attention toward attractive targets and to seek opportunities to exploit defender ignorance of plausible targets and attack modes to achieve surprise. Second, it develops and employs an expanded definition of vulnerability that takes into account all system weaknesses from initiating event to consequence; that is, it formally extends the meaning of vulnerability beyond security weaknesses to include target fragility, the intrinsic resistance to loss of the systems comprising the asset, and weaknesses in response and recovery capabilities. Third, it demonstrates that useful actionable risk information can be produced even with limited information supporting precise estimates of model parameters.
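A hedged sketch of the kind of per-scenario risk roll-up the fifth phase performs. The field names, the multiplicative decomposition, and all numbers are illustrative assumptions, not CAPRA's actual equations or data.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    p_threat: float     # annual probability the threat scenario occurs
    p_success: float    # vulnerability: P(loss event | threat occurs)
    consequence: float  # expected loss given a successful event ($)

def annual_risk(s: Scenario) -> float:
    # Generic conditional risk decomposition: threat x vulnerability x consequence.
    return s.p_threat * s.p_success * s.consequence

portfolio = [
    Scenario("vehicle bomb at asset A", 0.02, 0.4, 50e6),
    Scenario("flood at asset B",        0.10, 0.7, 10e6),
]

# Rank scenarios to focus attention for cost-effective risk reduction.
for s in sorted(portfolio, key=annual_risk, reverse=True):
    print(f"{s.name}: expected annual loss ${annual_risk(s):,.0f}")
```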

    Making decisions about screening cargo containers for nuclear threats using decision analysis and optimization

    One of the most pressing concerns in homeland security is the illegal passing of weapons-grade nuclear material through the borders of the United States. If terrorists can gather the materials needed to construct a nuclear bomb or radiological dispersion device (RDD, i.e., dirty bomb) while inside the United States, the consequences would be devastating. Preventing plutonium, highly enriched uranium (HEU), tritium gas, or other materials that can be used to construct a nuclear weapon from illegally entering the United States is an area of vital concern. There are enormous economic consequences when our nation's port security system is compromised. Interdicting nuclear material being smuggled into the United States on cargo containers is an issue of vital national interest, since it is a critical aspect of protecting the United States from nuclear attacks. However, the efforts made to prevent nuclear material from entering the United States via cargo containers have been disjoint, piecemeal, and reactive, not the result of coordinated, systematic planning and analysis. Our economic well-being is intrinsically linked with the success and security of the international trade system. International trade accounts for more than thirty percent of the United States economy (Rooney, 2005). Ninety-five percent of international goods that enter the United States come through one of 361 ports, adding up to more than 11.4 million containers every year (Fritelli, 2005; Rooney, 2005; US DOT, 2007). Port security has emerged as a critically important yet vulnerable component of the homeland security system. Applying game-theoretic methods to counterterrorism gives defenders a structured technique for analyzing how adversaries will interact under different circumstances and scenarios. This way of thinking is somewhat counterintuitive, but it is an extremely useful tool for analyzing potential defender strategies. Decision analysis can handle very large and complex problems by integrating multiple perspectives and providing a structured process for evaluating the preferences and values of the individuals involved, while ensuring that the decision remains focused on achieving the fundamental objectives. In the decision analysis process, value tradeoffs are evaluated to review alternatives, and attitudes toward risk can be quantified to help decision makers understand which aspects of the problem are not under their control. Most of all, decision analysis provides insight that might otherwise not be captured or fully understood. All of these factors make decision analysis essential to making an informed decision. Game theory and decision analysis both play important roles in counterterrorism efforts, but both have weaknesses. Decision analysis techniques such as probabilistic risk analysis can provide incorrect assessments of risk when modeling intelligent adversaries as uncertain hazards. Game-theoretic analysis also has limitations: when analyzing a terrorist or terrorist group with game theory, only one side of the problem can be optimized at a time, so the analysis takes either the defender's perspective or the attacker's perspective. Parnell et al. (2009) developed a model that simultaneously maximizes the effects achieved by the terrorist and minimizes the consequences for the defender.
    The question this thesis aims to answer is whether investing in new detector technology for screening cargo containers is a worthwhile investment for protecting the country from a terrorist attack. The thesis introduces an intelligent adversary risk analysis model for determining whether to use new radiological screening technologies at the nation's ports. The technique provides a more realistic assessment of the situation being modeled and determines whether it is cost effective for the United States to invest in new cargo container screening technology. The optimal decision determined by our model is for the United States to invest in a new detector, and for the terrorists to choose the agent cobalt-60 (Figure 18). This result is driven mainly by the prevalence of false alarms and the high costs of screening them, since we assume that every cargo container that triggers an alarm is physically inspected. With the new detector technology, the false alarm rate decreases and the true alarm rate increases, and the resulting cost savings outweigh the cost of the technology, accounting for the possibility of technical success or failure. Since the United States is attempting to minimize its expected cost per container, the optimal choice is to invest in the new detector. Our intelligent adversary risk analysis model can simultaneously determine the best decision for the United States, which is trying to minimize the expected cost, and for the terrorist, who is trying to maximize the expected cost to the United States. Simultaneously modeling the decisions of the defender and the attacker provides a more accurate picture of reality and can yield insights that other techniques would miss. The model is highly sensitive to certain inputs and parameters; even though the values used are in line with the literature, it is important to understand these sensitivities. Two inputs found to be particularly important are the expected cost of physically inspecting a cargo container and the cost of implementing the technology needed for the new screening device. Using this model, decision makers can form more accurate judgments about the true situation, and this increase in accuracy could save lives. The model can also help decision makers understand its interdependencies and see visually how resource allocations affect the optimal decisions of the defender and the attacker.
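A minimal sketch of the cost comparison described above, under the assumption that every alarm triggers a physical inspection. All rates and costs are hypothetical placeholders, not the thesis's inputs, and the attacker's game-theoretic choice of agent is omitted.

```python
def expected_cost_per_container(false_alarm_rate: float,
                                detection_rate: float,
                                p_threat: float,
                                inspect_cost: float,
                                attack_cost: float,
                                detector_cost: float) -> float:
    """Every alarm (true or false) triggers a physical inspection; a missed
    threat incurs the expected cost of a successful attack."""
    p_alarm = (1 - p_threat) * false_alarm_rate + p_threat * detection_rate
    p_missed = p_threat * (1 - detection_rate)
    return detector_cost + p_alarm * inspect_cost + p_missed * attack_cost

# Hypothetical current detector vs. new detector with fewer false alarms.
current = expected_cost_per_container(0.05, 0.80, 1e-7, 1_000, 1e10, 0.0)
new_tech = expected_cost_per_container(0.01, 0.95, 1e-7, 1_000, 1e10, 5.0)
print(f"current detector: ${current:,.2f} per container")
print(f"new detector:     ${new_tech:,.2f} per container")
```

With these placeholder numbers, the reduction in false-alarm inspections dominates the per-container cost of the new detector, mirroring the qualitative conclusion of the thesis.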

    Bayesian Networks with Expert Elicitation as Applicable to Student Retention in Institutional Research

    The application of Bayesian networks within the field of institutional research is explored through the development of a Bayesian network used to predict first- to second-year retention of undergraduates. A hybrid approach to model development is employed, in which formal elicitation of subject-matter expertise is combined with machine learning in designing the model structure and specifying the model parameters. The subject-matter experts are two academic advisors at a small, private liberal arts college in the Southeast, and the data used in machine learning comprise six years of historical student-related information (i.e., demographic, admissions, academic, and financial) on 1,438 first-year students. Netica 5.12, a software package designed for constructing Bayesian networks, is used for building and validating the model. The resulting model's predictive capabilities are evaluated, along with its sensitivity, internal validity, and complexity. Additionally, the utility of using Bayesian networks within institutional research and higher education is discussed. The importance of comprehensive evaluation is highlighted, owing to the study's use of an unbalanced data set. Best practices and experiences with expert elicitation are also noted, including recommendations for the use of formal elicitation frameworks and careful consideration of operating definitions. Academic preparation and financial need risk profile are identified as key variables related to retention, and the need for enhanced data collection surrounding such variables is also revealed; for example, the experts emphasize study skills as an important predictor of retention while noting that no quantitative data on students' study skills are collected. Finally, the importance and value of the model development process itself is stressed, as stakeholders are required to articulate, define, discuss, and evaluate model components, assumptions, and results.
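To illustrate the kind of query such a network supports, here is a hand-coded sketch of a single retention node with two parents. The variables, states, and probabilities are invented for illustration; the study itself built a larger network in Netica rather than hand-coded tables.

```python
# Prior probabilities of the two parent variables (illustrative).
p_prep = {"high": 0.6, "low": 0.4}   # academic preparation
p_need = {"low": 0.7, "high": 0.3}   # financial need risk profile

# Conditional probability table: P(retained = True | prep, need).
cpt = {
    ("high", "low"): 0.92,
    ("high", "high"): 0.80,
    ("low", "low"): 0.70,
    ("low", "high"): 0.55,
}

# Predictive query: marginal probability of retention.
p_retained = sum(p_prep[a] * p_need[n] * cpt[(a, n)]
                 for a in p_prep for n in p_need)
print(f"P(retained) = {p_retained:.3f}")

# Diagnostic query by Bayes' rule: P(low preparation | not retained).
p_not = 1 - p_retained
p_low_given_not = sum(p_prep["low"] * p_need[n] * (1 - cpt[("low", n)])
                      for n in p_need) / p_not
print(f"P(low preparation | not retained) = {p_low_given_not:.3f}")
```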

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Why and How to Extract Conditional Statements From Natural Language Requirements

    Functional requirements often describe system behavior by relating events to each other, e.g., "If the system detects an error (e_1), an error message shall be shown (e_2)". Such conditionals consist of two parts, the antecedent (see e_1) and the consequent (e_2), which convey strong semantic information about the intended behavior of a system. Automatically extracting conditionals from text enables several analytical disciplines and is already used for information retrieval and question answering. We found that automated conditional extraction can also add value to Requirements Engineering (RE) by facilitating the automatic derivation of acceptance tests from requirements. However, the potential of extracting conditionals has not yet been leveraged for RE. We are convinced that this has two principal reasons: 1) The extent, form, and complexity of conditional statements in RE artifacts are not well understood. We do not know how conditionals are formulated and logically interpreted by RE practitioners, which hinders the development of suitable approaches for extracting conditionals from RE artifacts. 2) Existing methods fail to extract conditionals from Unrestricted Natural Language (NL) in fine-grained form. That is, they do not consider the combinatorics between antecedents and consequents, and they do not allow them to be split into finer-grained text fragments (e.g., variable and condition), rendering the extracted conditionals unsuitable for RE downstream tasks such as test case derivation. This thesis contributes to both areas. In Part I, we present empirical results on the prevalence and logical interpretation of conditionals in RE artifacts. Our case study corroborates that conditionals are widely used in both traditional and agile requirements such as acceptance criteria. We found that conditionals in requirements mainly occur in explicit, marked form and may include up to three antecedents and two consequents. Hence, an extraction approach needs to understand conjunctions, disjunctions, and negations to fully capture the relation between antecedents and consequents. We also found that conditionals are a source of ambiguity and that there is not just one way to interpret them formally. This affects any automated analysis that builds upon formalized requirements (e.g., inconsistency checking) and may also influence guidelines for writing requirements. Part II presents our tool-supported approach CiRA, which is capable of detecting conditionals in NL requirements and extracting them in fine-grained form. For detection, CiRA uses syntactically enriched BERT embeddings combined with a softmax classifier and outperforms existing methods (macro-F_1: 82%). Our experiments show that a sigmoid classifier built on RoBERTa embeddings is best suited to extracting conditionals in fine-grained form (macro-F_1: 86%). We disclose our code, data sets, and trained models to facilitate replication. CiRA is available at http://www.cira.bth.se/demo/. In Part III, we highlight how the extraction of conditionals from requirements can help to create acceptance tests automatically. First, we motivate this use case in an empirical study and demonstrate that the lack of adequate acceptance tests is one of the major problems in agile testing. Second, we show how extracted conditionals can be mapped to a Cause-Effect Graph from which test cases can be derived automatically. We demonstrate the feasibility of our approach in a case study with three industry partners: of 578 manually created test cases, 71.8% could be generated automatically, and our approach discovered 80 relevant test cases that had been missed in manual test case design. At the end of this thesis, the reader will have an understanding of (1) the notion of conditionals in RE artifacts, (2) how to extract them in fine-grained form, and (3) the added value that the extraction of conditionals can provide to RE.
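A small sketch of the downstream step the abstract describes: once a conditional has been extracted in fine-grained form (antecedents plus consequents), test cases can be enumerated from the antecedent combinatorics. The requirement text, data layout, and the biconditional reading of the negative cases are illustrative assumptions, not CiRA's actual data model.

```python
from itertools import product

# One conditional extracted in fine-grained form: two antecedents joined by
# AND, and one consequent (from the running example requirement).
antecedents = ["the system detects an error", "the user is logged in"]
consequents = ["an error message is shown"]

for assignment in product([True, False], repeat=len(antecedents)):
    condition_holds = all(assignment)  # AND-combination of the antecedents
    # Treating the requirement as a biconditional is one possible formal
    # interpretation; as the thesis notes, such conditionals are ambiguous.
    expected = consequents if condition_holds else [f"NOT ({c})" for c in consequents]
    given = ", ".join(f"'{a}' = {'yes' if v else 'no'}"
                      for a, v in zip(antecedents, assignment))
    print(f"GIVEN {given} THEN expect: {'; '.join(expected)}")
```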

    Decision support for choice of security solution: the Aspect-Oriented Risk Driven Development (AORDD) framework

    In security assessment and management there is no single correct solution to the identified security problems or challenges; there are only choices and tradeoffs. The main reason for this is that modern information systems, and security-critical information systems in particular, must perform at the contracted or expected security level, make effective use of available resources, and meet end users' expectations. Balancing these needs while also satisfying development, project, and financial perspectives, such as budget and time-to-market (TTM) constraints, means that decision makers have to evaluate alternative security solutions.

    This work describes parts of an approach that supports decision makers in choosing one or a set of security solutions among alternatives. The approach, called the Aspect-Oriented Risk Driven Development (AORDD) framework, combines Aspect-Oriented Modeling (AOM) and Risk Driven Development (RDD) techniques and consists of seven components: (1) an iterative AORDD process; (2) a security solution aspect repository; (3) an estimation repository to store experience from the estimation of security risks and security solution variables involved in security solution decisions; (4) RDD annotation rules for security risk and security solution variable estimation; (5) the AORDD security solution trade-off analysis and trade-off tool BBN topology; (6) a rule set for how to transfer RDD information from the annotated UML diagrams into the trade-off tool BBN topology; and (7) a trust-based information aggregation schema to aggregate disparate information in the trade-off tool BBN topology. This work focuses on components 5 and 7, which are the two core components of the AORDD framework.
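A hedged sketch of trust-weighted aggregation of disparate estimates, the kind of operation component 7 performs before the evidence enters the BBN-based trade-off tool. The actual AORDD aggregation schema is not reproduced here; the sources, values, and the simple trust-weighted average are illustrative assumptions.

```python
# Disparate estimates of the same quantity (e.g., a security solution's annual
# risk reduction), each with a trust weight in [0, 1] assigned to its source.
sources = [
    ("vendor claim",        0.50, 0.3),
    ("internal pen test",   0.30, 0.8),
    ("industry statistics", 0.35, 0.6),
]

total_trust = sum(trust for _, _, trust in sources)
aggregate = sum(value * trust for _, value, trust in sources) / total_trust
print(f"trust-weighted estimate: {aggregate:.2f}")
```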