On the use of testability measures for dependability assessment
Program “testability” is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to “ultra-high reliability” requirements), measures of testability can, in theory, be used to draw inferences on program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results, the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical “confidence level”. We also show that high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault-free, increasing testability may produce a program which will be less trustworthy, even after successful testing.
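To make the Bayesian step concrete, here is a minimal sketch (not the paper's exact model; the prior, testability value, and test counts are illustrative placeholders) of how a posterior probability of correctness follows from failure-free testing, treating testability as the per-test failure probability of a faulty program:

```python
# Minimal sketch, assuming: testability = probability a faulty program fails
# on a single test, and a prior probability that the program is correct.
# All parameter values below are illustrative, not taken from the paper.

def posterior_correctness(prior_correct: float, testability: float, n_tests: int) -> float:
    """P(correct | n failure-free tests) via Bayes' rule."""
    # Likelihood of surviving n tests: 1 if correct, (1 - testability)^n if faulty.
    survive_if_faulty = (1.0 - testability) ** n_tests
    return prior_correct / (prior_correct + (1.0 - prior_correct) * survive_if_faulty)

if __name__ == "__main__":
    # A moderate prior rises substantially after 100 failure-free tests...
    print(posterior_correctness(prior_correct=0.5, testability=0.01, n_tests=100))   # ~0.73
    # ...but a complex program unlikely to be fault-free gains far less assurance.
    print(posterior_correctness(prior_correct=0.01, testability=0.01, n_tests=100))  # ~0.03
```

The second call illustrates the abstract's caveat: when the prior probability of correctness is low, even a clean test campaign leaves the posterior modest unless testability is high.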
Testing fluvial erosion models using the transient response of bedrock rivers to tectonic forcing in the Apennines, Italy
The transient response of bedrock rivers to a drop in base level can be used to discriminate between competing fluvial erosion models. However, some recent studies of bedrock erosion conclude that transient river long profiles can be approximately characterized by a transport-limited erosion model, while other authors suggest that a detachment-limited model best explains their field data. The difference is thought to be due to the relative volume of sediment being fluxed through the fluvial system. Using a pragmatic approach, we address this debate by testing the ability of end-member fluvial erosion models to reproduce the well-documented evolution of three catchments in the central Apennines (Italy) which have been perturbed to various extents by an independently constrained increase in relative uplift rate. The transport-limited model is unable to account for the catchments' response to the increase in uplift rate, consistent with the observed low rates of sediment supply to the channels. Instead, a detachment-limited model with a threshold corresponding to the field-derived median grain size of the sediment plus a slope-dependent channel width satisfactorily reproduces the overall convex long profiles along the studied rivers. Importantly, we find that the prefactor in the hydraulic scaling relationship is uplift dependent, leading to landscapes responding faster the higher the uplift rate, consistent with field observations. We conclude that a slope-dependent channel width and an entrainment/erosion threshold are necessary ingredients when modeling landscape evolution or mapping the distribution of fluvial erosion rates in areas where the rate of sediment supply to channels is low.
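A minimal numerical sketch of the detachment-limited end member with an erosion threshold may help make the model structure concrete. The stream-power form dz/dt = U - max(0, K A^m S^n - theta_c) and every parameter value below are illustrative placeholders, not the paper's calibrated model (which additionally includes a slope-dependent channel width):

```python
import numpy as np

# Illustrative detachment-limited long-profile model with an erosion threshold:
#   dz/dt = U - max(0, K * A^m * S^n - theta_c)
# Coefficients, grid, and drainage-area scaling are placeholders.

def evolve_profile(x, area, uplift, K=1e-5, m=0.5, n=1.0, theta_c=1e-4,
                   dt=100.0, n_steps=5000):
    """March a 1-D river long profile forward in time (explicit Euler)."""
    z = np.zeros_like(x)
    for _ in range(n_steps):
        slope = np.maximum(-np.gradient(z, x), 0.0)    # downstream-positive slope
        erosion = np.maximum(K * area**m * slope**n - theta_c, 0.0)
        z += (uplift - erosion) * dt
        z[-1] = 0.0                                    # fixed base level at the outlet
    return z

x = np.linspace(0.0, 50e3, 501)           # distance downstream (m)
area = 1e6 + 20.0 * x**1.8                # Hack's-law-style drainage area (m^2)
z = evolve_profile(x, area, uplift=1e-3)  # uplift rate in m/yr
```

Raising the threshold theta_c or the uplift rate in such a sketch shifts the steady-state slopes, which is the kind of behaviour the end-member comparison above exploits.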
Quantifying the slip rates, spatial distribution and evolution of active normal faults from geomorphic analysis: Field examples from an oblique-extensional graben, southern Turkey
Quantifying the extent to which geomorphic features can be used to extract tectonic signals is a key challenge in the Earth Sciences. Here we analyse the drainage patterns, geomorphic impact, and long profiles of bedrock rivers that drain across and around normal faults in a regionally significant oblique-extensional graben (Hatay Graben) in southern Turkey that has been mapped geologically, but for which there are poor constraints on the activity, slip rates and Plio-Pleistocene evolution of basin-bounding faults. We show that drainage in the Hatay Graben is strongly asymmetric, and by mapping the distribution of wind gaps, we are able to evaluate how the drainage network has evolved through time. By comparing the presence, size, and distribution of long-profile convexities, we demonstrate that the northern margin of the graben is tectonically quiescent, whereas the southern margin is bounded by active faults. Our analysis suggests that rivers crossing these latter faults are undergoing a transient response to ongoing tectonic uplift, and this interpretation is supported by classic signals of transience such as gorge formation and hillslope rejuvenation within the convex reach. Additionally, we show that the height of long-profile convexities varies systematically along the strike of the southern margin faults, and we argue that this effect is best explained if fault linkage has led to an increase in slip rate on the faults through time from ∼ 0.1 to 0.45 mm/yr. By measuring the average length of the original fault segments, we estimate the slip rate enhancement along the faults, and thus calculate the range of times for which fault acceleration could have occurred, given geological estimates of fault throw. These values are compared with the times and slip rates required to grow the documented long-profile convexities, enabling us to quantify both the present-day slip rate on the fault (0.45 ± 0.05 mm/yr) and the timing of fault acceleration (1.4 ± 0.2 Ma). Our results have substantial implications for predicting earthquake hazard in this densely populated area (calculated potential M_w = 6.0-6.6), enable us to constrain the tectonic evolution of the graben through time, and more widely, demonstrate that geomorphic analysis can be used as an effective tool for estimating fault slip rates over time periods > 10^6 years, even in the absence of direct geodetic constraints.
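As a rough order-of-magnitude check on the quoted numbers (an illustrative simplification, treating slip rate as a direct proxy for footwall uplift rate and the convexity height as the extra uplift accumulated since fault acceleration):

$h \approx (U_2 - U_1)\,\Delta t = (0.45 - 0.10)\ \mathrm{mm/yr} \times 1.4\ \mathrm{Myr} \approx 490\ \mathrm{m}$

so a slip-rate increase of this size sustained since ∼1.4 Ma is consistent with convexities a few hundred metres high.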
Modeling the shortening history of a fault tip fold using structural and geomorphic records of deformation
We present a methodology to derive the growth history of a fault tip fold above a basal detachment. Our approach is based on modeling the stratigraphic and geomorphic records of deformation, as well as the finite structure of the fold constrained from seismic profiles. We parameterize the spatial deformation pattern using a simple formulation of the displacement field derived from sandbox experiments. Assuming a stationary spatial pattern of deformation, we simulate the gradual warping and uplift of stratigraphic and geomorphic markers, which provides an estimate of the cumulative amounts of shortening they have recorded. This approach allows modeling of isolated terraces or growth strata. We apply this method to the study of two fault tip folds in the Tien Shan, the Yakeng and Anjihai anticlines, documenting their deformation history over the past 6–7 Myr. We show that the modern shortening rates can be estimated from the width of the fold topography provided that the sedimentation rate is known, yielding respective rates of 2.15 and 1.12 mm/yr across Yakeng and Anjihai, consistent with the deformation recorded by fluvial and alluvial terraces. This study demonstrates that the shortening rates across both folds accelerated significantly since the onset of folding. It also illustrates the usefulness of a simple geometric folding model and highlights the importance of considering local interactions between tectonic deformation, sedimentation, and erosion
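The core of the approach (a stationary spatial uplift pattern scaled by cumulative shortening) can be sketched in a few lines. The Gaussian uplift pattern, fold width, and shortening amounts below are invented placeholders, not the sandbox-derived displacement field the authors use:

```python
import numpy as np

# Minimal sketch of the forward-model idea: deformation is a stationary spatial
# pattern scaled by cumulative shortening, so a marker deposited after s_dep of
# shortening records uplift u(x) * (s_total - s_dep). All values are invented.

def uplift_pattern(x, fold_width=10e3):
    """Hypothetical stationary uplift per unit shortening across the fold."""
    return np.exp(-(x / fold_width) ** 2)   # peaked at the fold crest (x = 0)

def marker_relief(x, s_total, s_dep):
    """Structural relief recorded by a marker since its deposition."""
    return uplift_pattern(x) * (s_total - s_dep)

x = np.linspace(-20e3, 20e3, 401)
# Two hypothetical terraces abandoned after 100 m and 250 m of a cumulative
# 300 m of shortening; the older terrace records more warping:
young = marker_relief(x, s_total=300.0, s_dep=250.0)
old = marker_relief(x, s_total=300.0, s_dep=100.0)
```

Inverting this relationship, the relief of a dated marker yields the shortening accumulated since its abandonment, which is the quantity the terrace and growth-strata records constrain.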
Modeling software design diversity
Design diversity has been used for many years now as a means of achieving a degree of fault tolerance in software-based systems. Whilst there is clear evidence that the approach can be expected to deliver some increase in reliability compared with a single version, there is no agreement about the extent of this. More importantly, it remains difficult to evaluate exactly how reliable a particular diverse fault-tolerant system is. This difficulty arises because assumptions of independence of failures between different versions have been shown not to be tenable: assessment of the actual level of dependence present is therefore needed, and this is hard. In this tutorial we survey the modelling issues here, with an emphasis upon the impact these have upon the problem of assessing the reliability of fault-tolerant systems. The intended audience is designers, assessors and project managers with only a basic knowledge of probabilities, as well as reliability experts without detailed knowledge of software, who seek an introduction to the probabilistic issues in decisions about design diversity.
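The failure-dependence point can be illustrated with a calculation in the style of the Eckhardt-and-Lee model, one of the standard models in this literature. The demand profile and difficulty function below are made up for illustration:

```python
import numpy as np

# Illustrative Eckhardt-Lee-style calculation (invented numbers): demands vary
# in "difficulty" theta(x), the probability that an independently developed
# version fails on demand x. Versions fail independently *conditional on the
# demand*, yet coincident failure exceeds what unconditional independence predicts.

p_demand = np.array([0.4, 0.3, 0.2, 0.08, 0.02])   # demand profile
theta = np.array([1e-5, 1e-4, 1e-3, 1e-2, 1e-1])   # difficulty per demand

pfd_single = np.sum(p_demand * theta)        # E[theta]        ~ 3.0e-3
pfd_pair = np.sum(p_demand * theta**2)       # E[theta^2]      ~ 2.1e-4
pfd_independent = pfd_single**2              # (E[theta])^2    ~ 9.2e-6

print(f"single version pfd:              {pfd_single:.2e}")
print(f"1-out-of-2 pair, actual:         {pfd_pair:.2e}")
print(f"1-out-of-2 pair, if independent: {pfd_independent:.2e}")
# E[theta^2] >= (E[theta])^2 (Jensen's inequality), so whenever difficulty
# actually varies, the independence assumption is optimistic.
```

Because the inequality is strict for any non-constant difficulty function, assuming independent failures systematically overestimates the reliability of a 1-out-of-2 pair, which is why assessing the actual level of dependence matters.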
The effect of testing on reliability of fault-tolerant software
Previous models have investigated the impact upon diversity - and hence upon the reliability of fault-tolerant software built from 'diverse' versions - of the variation in 'difficulty' of demands over the demand space. These models are essentially static, taking a single snapshot view of the system. In this paper we consider a generalisation in which the individual versions are allowed to evolve - and their reliability to grow - through debugging. In particular, we examine the trade-off that occurs in testing between, on the one hand, the increasing reliability of individual versions, and on the other hand, the possible diminution of diversity.
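One way to explore this trade-off numerically is to extend the difficulty-function toy model above with a crude debugging step. Everything below (the demand profile, difficulty values, and fault-removal rule) is invented for illustration, not the paper's model:

```python
import numpy as np

# Toy model (invented numbers): a fault affecting demand x is found and fixed
# when a test draws x and the version fails there, so the expected residual
# difficulty after n tests is theta * (1 - p * theta)^n. Tracking the single
# version pfd against the 1-out-of-2 pair pfd shows how much failure
# correlation remains as both versions are debugged on the same profile.

p_demand = np.array([0.4, 0.3, 0.2, 0.08, 0.02])    # demand profile
theta0 = np.array([1e-5, 1e-4, 1e-3, 1e-2, 1e-1])   # initial difficulty

def residual_theta(theta, n_tests):
    """Expected difficulty remaining after n_tests profile-driven tests."""
    return theta * (1.0 - p_demand * theta) ** n_tests

for n in (0, 10_000, 100_000):
    th = residual_theta(theta0, n)
    pfd_single = np.sum(p_demand * th)
    pfd_pair = np.sum(p_demand * th**2)
    print(f"n={n:>6}: single pfd {pfd_single:.2e}, pair pfd {pfd_pair:.2e}, "
          f"actual/independent ratio {pfd_pair / pfd_single**2:.1f}")
```

How the actual-to-independent ratio evolves depends entirely on where the surviving faults sit in the demand space, which is precisely the interaction between reliability growth and diversity that the paper's models formalise.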
Sources of uncertainties and artefacts in back-projection results
Back-projecting high-frequency (HF) waves is a common procedure for imaging rupture processes of large earthquakes (i.e. M_w > 7.0). However, the obtained back-projection (BP) results can suffer from large uncertainties, since high-frequency seismic waveforms are strongly affected by factors such as source depth, focal mechanism, and the Earth's 3-D velocity structure. So far, these uncertainties have not been thoroughly investigated. Here, we use synthetic tests to investigate these influencing factors, designing scenarios with various source and/or velocity set-ups and using Tohoku-Oki (Japan), Kaikoura (New Zealand), and the Java/Wharton Basin (Indonesia) as test areas. For each scenario, we generate either 1-D or 3-D teleseismic synthetic data, which are then back-projected using a representative BP method, MUltiple SIgnal Classification (MUSIC). We also analyse corresponding real cases to verify the synthetic test results. The Tohoku-Oki scenario shows that depth phases of a point source can be back-projected as artefacts at their bounce points on the Earth's surface; these artefacts lie far from the epicentre when earthquakes occur at large depths, which can significantly contaminate BP images of large intermediate-depth earthquakes. The Kaikoura scenario shows that for complicated earthquakes composed of multiple subevents with varying focal mechanisms, BP tends to image subevents emitting large-amplitude coherent waveforms while missing subevents whose P nodal directions point to the arrays, leading to discrepancies either between BP images from different arrays or between BP images and other source models. Using the Java event, we investigate the impact of 3-D source-side velocity structures: the 3-D bathymetry, together with a water layer, can generate strong and long-lasting coda waves, which are mirrored as artefacts far from the true source location. Finally, we use a Wharton Basin outer-rise event to show that the wavefields generated by 3-D near-trench structures contain frequency-dependent coda waves, leading to frequency-dependent BP results. In summary, our analyses indicate that depth phases, focal mechanism variations, and 3-D source-side structures can affect various aspects of BP results. We therefore suggest that target-oriented synthetic tests, for example synthetic tests for subduction earthquakes using realistic 3-D source-side velocity structures, should be conducted to understand the uncertainties and artefacts before detailed BP images are interpreted to infer earthquake rupture kinematics and dynamics.
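For readers unfamiliar with the method, here is a minimal sketch of the narrowband MUSIC step at the core of the BP technique named above. The array snapshots, travel-time grid, dominant frequency, and single-source assumption are all illustrative placeholders (real teleseismic BP operates on filtered P waveforms with travel times from a reference Earth model):

```python
import numpy as np

# Minimal narrowband MUSIC sketch for array back-projection (illustrative).

def music_pseudospectrum(windows, delays, n_sources=1, freq_hz=1.0):
    """MUSIC pseudospectrum over a grid of candidate source locations.

    windows: (n_stations, n_samples) complex analytic-signal snapshots
             for one sliding time window.
    delays:  (n_grid, n_stations) predicted relative travel times (s)
             from each grid node to each station.
    """
    n_sta = windows.shape[0]
    R = windows @ windows.conj().T / windows.shape[1]   # array covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)                # eigenvalues ascending
    noise = eigvecs[:, : n_sta - n_sources]             # noise-subspace eigenvectors
    omega = 2.0 * np.pi * freq_hz                       # assumed dominant frequency
    power = np.empty(delays.shape[0])
    for k, tau in enumerate(delays):
        a = np.exp(-1j * omega * tau) / np.sqrt(n_sta)  # steering vector for node k
        proj = noise.conj().T @ a
        power[k] = 1.0 / np.real(proj.conj() @ proj)    # peaks where a is orthogonal
    return power                                        # to the noise subspace
```

Sliding this over successive time windows and mapping the peak of the pseudospectrum at each step yields the rupture image; the artefacts catalogued above arise when the waveforms or predicted delays violate the point-source and 1-D velocity assumptions built into this step.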