Modeling the Impact of Testing on Diverse Programs
This paper presents a model of diverse programs that assumes there is a common set of potential software faults that are more or less likely to exist in a specific program version. Testing is modeled as a specific ordering of the removal of faults from each program version. Different models of testing are examined where common and diverse test strategies are used for the diverse program versions. Under certain assumptions, the theory suggests that a common test strategy could leave the proportion of common faults unchanged, while diverse test strategies are likely to reduce the proportion of common faults. A review of the available empirical evidence gives some support to the assumptions made in the fault-based model. We also consider how the proportion of common faults can be related to the expected reliability improvement.
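The fault-removal model described above can be illustrated with a small simulation. Everything here is a hypothetical sketch: the fault counts, presence probabilities, and removal fraction are invented for illustration, not taken from the paper.

```python
import random

random.seed(42)

N = 2000        # potential faults (hypothetical)
P_COMMON = 0.3  # chance a potential fault appears in both versions
P_SINGLE = 0.2  # chance it appears in exactly one version

faults_a, faults_b = set(), set()
for f in range(N):
    r = random.random()
    if r < P_COMMON:
        faults_a.add(f); faults_b.add(f)
    elif r < P_COMMON + P_SINGLE:
        (faults_a if random.random() < 0.5 else faults_b).add(f)

def remaining_after(faults, order, frac):
    """Testing as an ordering of fault removal: drop the first
    `frac` share of this version's faults, taken in `order`."""
    present = [f for f in order if f in faults]
    return set(present[int(len(present) * frac):])

def common_proportion(a, b):
    """Share of all remaining faults that are common to both versions."""
    return len(a & b) / len(a | b)

shared = list(range(N)); random.shuffle(shared)  # common test strategy
ord_a = list(range(N)); random.shuffle(ord_a)    # diverse strategy, version A
ord_b = list(range(N)); random.shuffle(ord_b)    # diverse strategy, version B

frac = 0.5
ca = remaining_after(faults_a, shared, frac)
cb = remaining_after(faults_b, shared, frac)
da = remaining_after(faults_a, ord_a, frac)
db = remaining_after(faults_b, ord_b, frac)

print(f"common-fault proportion before testing: {common_proportion(faults_a, faults_b):.2f}")
print(f"after a common test strategy:           {common_proportion(ca, cb):.2f}")
print(f"after diverse test strategies:          {common_proportion(da, db):.2f}")
```

With these numbers, the common strategy removes each common fault from both versions (or neither), so the proportion of common faults among the survivors stays roughly where it started; independent orderings remove common faults from the intersection faster, so the proportion drops. That is the qualitative effect the abstract describes.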
Improved Methodologies in Modeling and Predicting Failure in AASHTO M-180 Guardrail Steel Using Finite Element Analysis - Phase I
Steel guardrail systems have historic and widespread applications throughout the nation's highways and roadways. However, catastrophic system failure can occur if the guardrail element ruptures, allowing an errant vehicle to pass uncontrolled through the system and potentially allowing fractured ends to pierce the occupant compartment. To aid in the analysis and design of guardrail systems, further efforts are needed to develop and implement more reliable material failure criteria to predict and model guardrail steel rupture under all vehicle impact loading scenarios within impact simulation finite element method (FEM) software, such as LS-DYNA.
This Phase I study accomplished a number of tasks to aid in this objective. First, historical and state-of-the-art failure criteria, with emphasis on stress state dependent failure criteria, were reviewed. Next, various failure surface methods that estimate the triaxiality and Lode parameter vs. effective plastic strain at failure were reviewed and analyzed. It was determined that more flexible failure surface fitting methods may provide better estimations, and that larger, more diverse testing programs are required to estimate the failure surface through all stress states. A failure surface method using a Smoothed Thin-Plate Spline was also proposed to overcome shortcomings in existing failure surface estimation methods. Based on the review of the existing failure surfaces' performance, a steel material testing program was developed, and testing was performed on 21 different specimen configurations representing a range of stress states. The specimens were prepared using ASTM A572 Grade 50 steel with material properties similar to AASHTO M-180 guardrail steel. Test results and calculated material properties were presented herein. Lastly, a preliminary FEM modeling effort was conducted. Various modeling parameters were examined, including the effects of hourglass controls, mesh-size effects, inertial effects from load rate, and solid vs. shell element behavior. Based on this analysis, preliminary models of the testing specimens were developed. A preliminary material model was also calibrated and presented herein. Conclusions were made, and recommendations were provided for continuing a Phase II effort.
Advisor: Ronald K. Faller
Toward optimal implementation of cancer prevention and control programs in public health: A study protocol on mis-implementation
Background: Much of the cancer burden in the USA is preventable through application of existing knowledge. State-level funders and public health practitioners are in ideal positions to affect programs and policies related to cancer control. Mis-implementation refers to ending effective programs and policies prematurely or continuing ineffective ones. Greater attention to mis-implementation should lead to use of effective interventions and more efficient expenditure of resources, which, in the long term, will lead to more positive cancer outcomes. Methods: This is a three-phase study that takes a comprehensive approach, leading to the elucidation of tactics for addressing mis-implementation. Phase 1: We assess the extent to which mis-implementation is occurring among state cancer control programs in public health. This initial phase will involve a survey of 800 practitioners representing all states. The programs represented will span the full continuum of cancer control, from primary prevention to survivorship. Phase 2: Using data from phase 1 to identify organizations in which mis-implementation is particularly high or low, the team will conduct eight comparative case studies to gain a richer understanding of mis-implementation and the contextual differences involved. These case studies will highlight lessons learned about mis-implementation and identify hypothesized drivers. Phase 3: Agent-based modeling will be used to identify dynamic interactions between individual capacity, organizational capacity, use of evidence, funding, and external factors driving mis-implementation. The team will then translate and disseminate findings from phases 1 to 3 to practitioners and practice-related stakeholders to support the reduction of mis-implementation.
Discussion: This study is innovative and significant because it will (1) be the first to refine and further develop reliable and valid measures of mis-implementation of public health programs; (2) bring together a strong, transdisciplinary team with significant expertise in practice-based research; (3) use agent-based modeling to address cancer control implementation; and (4) use a participatory, evidence-based, stakeholder-driven approach that will identify key leverage points for addressing mis-implementation among state public health programs. This research is expected to provide replicable computational simulation models that can identify leverage points and public health system dynamics to reduce mis-implementation in cancer control, and it may be of interest to other health areas.
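The agent-based approach of phase 3 can be sketched in miniature. Everything below is a hypothetical illustration of how such a model might couple individual capacity and evidence use to mis-implementation; the decision rule, capacity values, and effectiveness rate are invented, not the study's actual model.

```python
import random

random.seed(7)

def simulate(capacity, years=50, n_programs=200):
    """Toy agent-based sketch: each program is effective or not; each
    year an agency either uses evidence (probability = capacity) or
    decides on non-evidence grounds (funding shifts, politics).
    Mis-implementation = ending an effective program prematurely, or
    continuing an ineffective one for another year."""
    mis, decisions = 0, 0
    for _ in range(n_programs):
        effective = random.random() < 0.6  # hypothetical base rate
        for _ in range(years):
            decisions += 1
            if random.random() < capacity:
                # evidence-informed: keep effective, end ineffective
                if not effective:
                    break              # correctly ended
            else:
                if random.random() < 0.5:
                    if effective:
                        mis += 1       # effective program ended early
                    break
                elif not effective:
                    mis += 1           # ineffective program continued
    return mis / decisions

low = simulate(capacity=0.2)
high = simulate(capacity=0.8)
print(f"mis-implementation rate, low-capacity agencies:  {low:.3f}")
print(f"mis-implementation rate, high-capacity agencies: {high:.3f}")
```

Even this toy version shows the kind of leverage-point question the study targets: raising the share of evidence-informed decisions lowers the mis-implementation rate, and a full model would let capacity, funding, and external factors interact dynamically rather than stay fixed.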
Misunderstanding Models in Environmental and Public Health Regulation
Computational models are fundamental to environmental regulation, yet their capabilities tend to be misunderstood by policymakers. Rather than rely on models to illuminate dynamic and uncertain relationships in natural settings, policymakers too often use models as "answer machines." This fundamental misperception that models can generate decisive facts leads to a perverse negative feedback loop that begins with policymaking itself and radiates into the science of modeling and into regulatory deliberations, where participants can exploit the misunderstanding in strategic ways. This paper documents the pervasive misperception of models as truth machines in U.S. regulation and the multi-layered problems that result from this misunderstanding. The paper concludes with a series of proposals for making better use of models in environmental policy analysis.
The Kay Bailey Hutchison Center for Energy, Law, and Business
Modeling the effects of combining diverse software fault detection techniques
The software engineering literature contains many studies of the efficacy of fault finding techniques. Few of these, however, consider what happens when several different techniques are used together. We show that the effectiveness of such multi-technique approaches depends upon quite subtle interplay between their individual efficacies and the dependence between them. The modelling tool we use to study this problem is closely related to earlier work on software design diversity. The earliest of these results showed that, under quite plausible assumptions, it would be unreasonable even to expect software versions that were developed "truly independently" to fail independently of one another. The key idea here was a "difficulty function" over the input space. Later work extended these ideas to introduce a notion of "forced" diversity, in which it became possible to obtain system failure behaviour even better than could be expected if the versions failed independently. In this paper we show that many of these results for design diversity have counterparts in diverse fault detection in a single software version. We define measures of fault finding effectiveness, and of diversity, and show how these might be used to give guidance for the optimal application of different fault finding procedures to a particular program. We show that the effects upon reliability of repeated applications of a particular fault finding procedure are not statistically independent: in fact, such an incorrect assumption of independence will always give results that are too optimistic. For diverse fault finding procedures, on the other hand, things are different: here it is possible for effectiveness to be even greater than it would be under an assumption of statistical independence. We show that diversity of fault finding procedures is, in a precisely defined way, "a good thing," and should be applied as widely as possible.
The new model and its results are illustrated using some data from an experimental investigation into diverse fault finding on a railway signalling application.
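The core probabilistic point (repeated runs of one procedure are worse than independence would suggest, while diverse procedures can beat independence) can be checked with a toy difficulty function. The per-fault detection probabilities below are invented for illustration; they simply make procedure A strong where procedure B is weak.

```python
# Hypothetical per-fault detection probabilities ("difficulty function")
# for two fault finding procedures over a four-fault program. A is
# strong where B is weak, so their difficulty functions are negatively
# correlated.
theta_a = [0.9, 0.8, 0.2, 0.1]
theta_b = [0.1, 0.2, 0.8, 0.9]

def mean(xs):
    return sum(xs) / len(xs)

surv_a = [1 - t for t in theta_a]  # per-fault survival under one run of A
surv_b = [1 - t for t in theta_b]

# Two applications of the SAME procedure, averaged over faults:
twice_a = mean([s * s for s in surv_a])
indep_a = mean(surv_a) ** 2
# twice_a > indep_a: assuming the two runs are statistically
# independent is optimistic, as the paper argues.

# One application each of two DIVERSE procedures:
both = mean([sa * sb for sa, sb in zip(surv_a, surv_b)])
indep_ab = mean(surv_a) * mean(surv_b)
# both < indep_ab: diverse procedures can beat independence.

print(f"same procedure twice: {twice_a:.3f} vs independence {indep_a:.3f}")
print(f"diverse procedures:   {both:.3f} vs independence {indep_ab:.3f}")
```

The mechanism is Jensen's inequality in one direction and negative correlation in the other: averaging (1 - theta)^2 over faults always exceeds the square of the average, while averaging the product of two negatively correlated survival probabilities can fall below the product of the averages.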