
    Probabilistic prediction of rupture length, slip and seismic ground motions for an ongoing rupture: implications for early warning for large earthquakes

    Earthquake Early Warning (EEW) predicts future ground shaking from presently available data. Long ruptures present the best opportunities for EEW, since many heavily shaken areas are distant from the earthquake epicentre and may receive long warning times. Predicting the shaking from large earthquakes, however, requires some estimate of how an ongoing rupture is likely to evolve. An EEW system that anticipates future rupture using the present magnitude (or rupture length) together with Gutenberg-Richter frequency-size statistics will likely never predict a large earthquake, because of the rare occurrence of 'extreme events'. However, it seems reasonable to assume that large slip amplitudes increase the probability of evolving into a large earthquake. To investigate the relationship between the slip and the eventual size of an ongoing rupture, we simulate suites of 1-D rupture series from stochastic models of spatially heterogeneous slip. We find that while large slip amplitudes increase the probability that a rupture continues and possibly evolves into a 'Big One', the recognition that rupture is occurring on a spatially smooth fault has an even stronger effect. We conclude that an EEW system for large earthquakes needs some mechanism for the rapid recognition of the causative fault (e.g., from real-time GPS measurements) and consideration of its 'smoothness'. An EEW system for large earthquakes on smooth faults, such as the San Andreas Fault, could be implemented in two ways. First, the system could issue a warning whenever slip on the fault exceeds a few metres, because the probability of a large earthquake is then high and strong shaking is expected over large areas around the fault.
    Second, a more sophisticated EEW system could use the present slip on the fault to estimate the future slip evolution and final rupture dimensions and, using this information, could provide probabilistic predictions of seismic ground motions along the evolving rupture. The decision on whether an EEW system should be realized in the first way or the second (or a combination of both) is user-specific.
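    The 1-D stochastic rupture simulation described above can be sketched as a toy Monte Carlo model. All specifics here (the slip random walk, the "smoothness" parameter, the 100-cell "Big One" threshold) are illustrative assumptions, not the paper's actual model:

```python
import random

def simulate_rupture(smoothness, max_len=200, seed=None):
    """Toy 1-D rupture: slip performs a random walk along the fault and
    the rupture keeps propagating while slip stays positive.  A smoother
    fault (smoothness near 1) means weaker slip heterogeneity, so
    ruptures tend to run longer before arresting."""
    rng = random.Random(seed)
    slip, length = 1.0, 0
    for _ in range(max_len):
        slip += rng.gauss(0.0, 1.0 - smoothness)
        if slip <= 0.0:          # slip dies out: rupture arrests
            break
        length += 1
    return length

def prob_big_one(smoothness, big=100, trials=2000, seed=42):
    """Monte Carlo estimate of P(final rupture length >= big)."""
    rng = random.Random(seed)
    hits = sum(simulate_rupture(smoothness, seed=rng.random()) >= big
               for _ in range(trials))
    return hits / trials
```

    Under these assumptions `prob_big_one(0.9)` comes out far above `prob_big_one(0.1)`, reproducing the qualitative finding that fault smoothness, even more than current slip, controls the probability of evolving into a 'Big One'.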

    Implications of fault current limitation for electrical distribution networks

    This paper explores the potential future need for fault current limitation in the UK's power system, and some of the technical implications of this change. Based on a statistical analysis of projected fault level "headroom" (or its violation), it is estimated that approximately 300-400 distribution substations will require fault current limitation. The analysis uses a UK electrical system scenario that satisfies the Government's target of an 80% cut in CO2 emissions by 2050. A case study involving the connection of distributed generation (DG) via a superconducting fault current limiter (SFCL) is used to illustrate the potential protection and control issues. In particular, DG fault ride-through, autoreclosure schemes, and transformer inrush current can be problematic for SFCLs that require a recovery period. Potential solutions to these issues are discussed, such as the use of islanding or automation to reduce the fault level.

    Model-based sensor supervision in inland navigation networks: Cuinchy-Fontinettes case study

    In recent years, inland navigation networks have benefited from innovations in instrumentation and SCADA systems. These data acquisition and control systems improve the management of these networks. Moreover, they allow the implementation of more accurate automatic control to guarantee the navigation requirements. However, sensors and actuators are subject to faults due to the strong effects of the environment, aging, etc. Thus, before implementing automatic control strategies that rely on a fault-free mode, it is necessary to design a fault diagnosis scheme. This scheme has to detect and isolate possible faults in the system to guarantee fault-free data and the efficiency of the automatic control algorithms. Moreover, the proposed supervision scheme could predict future incipient faults, which is necessary for predictive maintenance of the equipment. In this paper, a general architecture for sensor fault detection and isolation using model-based approaches is proposed for inland navigation networks. The approach is particularized for the Cuinchy-Fontinettes reach, located in the north of France. Preliminary results show the effectiveness of the proposed fault diagnosis methodologies using a realistic simulator and fault scenarios. (Peer-reviewed postprint, author's final draft.)
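    The model-based residual test at the heart of such a scheme can be sketched minimally as below. The one-step mass-balance level model, the numbers and the threshold are illustrative assumptions, not the Cuinchy-Fontinettes simulator:

```python
def detect_level_sensor_fault(levels, inflow, outflow,
                              area=1000.0, dt=1.0, threshold=0.05):
    """Flag samples where the measured water level (m) deviates from a
    one-step mass-balance prediction by more than `threshold` (m).
    Returns the indices of the alarmed samples."""
    alarms = []
    for k in range(len(levels) - 1):
        # predicted level from inflow/outflow volume balance over dt
        predicted = levels[k] + (inflow[k] - outflow[k]) * dt / area
        residual = levels[k + 1] - predicted
        if abs(residual) > threshold:
            alarms.append(k + 1)
    return alarms
```

    In a fault-free run the residual stays near zero, while an injected sensor bias immediately raises alarms; this is the detection half of FDI (isolation would compare residuals generated from several redundant models or sensors).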

    A way to synchronize models with seismic faults for earthquake forecasting: Insights from a simple stochastic model

    Numerical models are starting to be used to determine the future behaviour of seismic faults and fault networks. Their ultimate goal is to forecast future large earthquakes. To use them for this task, it is necessary to synchronize each model with the current state of the actual fault or fault network it simulates (just as, for example, meteorologists synchronize their models with the atmosphere by incorporating current atmospheric data into them). However, lithospheric dynamics is largely unobservable: important parameters cannot (or can only rarely) be measured in nature. Earthquakes, though, provide indirect but measurable clues to the stress and strain state of the lithosphere, which should be helpful for synchronizing the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault models with one another and to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture areas of this model's synthetic earthquakes on other models, the latter become partially synchronized with the first one. We use these partially synchronized models to successfully forecast most of the largest earthquakes generated by the first model. This forecasting strategy outperforms others that take into account only the earthquake series. Our results suggest that a good way to synchronize more detailed models with real faults may be to force them to reproduce the sequence of previous earthquake ruptures on the faults. This hypothesis could be tested in the future with more detailed models and actual seismic data. Comment: revised version; recommended for publication in Tectonophysics.
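    A toy version of this synchronization strategy can be sketched as follows. The stress model, thresholds and loading rates are invented for illustration; the paper's own stochastic fault model differs:

```python
import random

def fault_step(stress, rng, forced=None):
    """One loading/rupture cycle of a toy stochastic fault model.
    Random loading raises stress everywhere; the most stressed cell
    nucleates a rupture that grows while neighbouring stress exceeds
    0.5.  If `forced` is given, that rupture extent is imposed instead,
    which is the synchronization mechanism.  Returns the extent."""
    n = len(stress)
    for i in range(n):
        stress[i] += rng.uniform(0.0, 0.1)   # heterogeneous loading
    if forced is None:
        start = max(range(n), key=lambda i: stress[i])
        lo = hi = start
        while lo > 0 and stress[lo - 1] > 0.5:
            lo -= 1
        while hi < n - 1 and stress[hi + 1] > 0.5:
            hi += 1
        forced = (lo, hi)
    lo, hi = forced
    for i in range(lo, hi + 1):
        stress[i] = 0.0                      # rupture resets the stress
    return forced

def synchronize(master, replica, steps, seed=1):
    """Drive the replica with the master's rupture extents, partially
    synchronizing their stress fields.  Returns the last extent."""
    rng_m, rng_r = random.Random(seed), random.Random(seed + 1)
    area = None
    for _ in range(steps):
        area = fault_step(master, rng_m)
        fault_step(replica, rng_r, forced=area)
    return area
```

    After enough common ruptures, the replica's stress field tracks the master's closely enough that the replica can be used to anticipate the master's next large events, which is the forecasting strategy evaluated in the paper.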

    Experimental set-up for investigation of fault diagnosis of a centrifugal pump

    Centrifugal pumps are complex machines that can experience different types of fault. Condition monitoring can be used for centrifugal pump fault detection through vibration analysis of mechanical and hydraulic forces. Vibration analysis methods have the potential to be combined with artificial intelligence systems so that an automatic diagnostic method can be developed. An automatic fault diagnosis approach could be a good option to minimize human error and to provide precise machine fault classification. This work aims to introduce an approach to centrifugal pump fault diagnosis based on artificial intelligence and genetic algorithm systems. An overview of the future work, research methodology and proposed experimental set-up is presented and discussed. The expected results and outcomes of the experimental work are illustrated.
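    For context, the kind of time-domain vibration features such an automatic classifier would typically consume can be computed as below. This feature set is generic condition-monitoring practice, not the specific features chosen in the proposed set-up:

```python
import math

def vibration_features(signal):
    """RMS, peak and kurtosis of a vibration record: common inputs to
    pump-fault classifiers (high kurtosis, for example, often indicates
    impulsive faults such as bearing damage)."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    kurtosis = (sum((x - mean) ** 4 for x in signal) / n) / (var ** 2)
    return {"rms": rms, "peak": peak, "kurtosis": kurtosis}
```

    A genetic algorithm, as proposed in the abstract, could then be used to select which of many such features best separate the fault classes.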

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured against the IEC 61508 functional safety standard) and tight power and performance constraints.

    Analysis of energy dissipation in resistive superconducting fault-current limiters for optimal power system performance

    Fault levels in electrical distribution systems are rising due to the increasing presence of distributed generation, and this trend is expected to continue. Superconducting fault-current limiters (SFCLs) are a promising solution to this problem. This paper describes the factors that govern the selection of the optimal SFCL resistance. The total energy dissipated in an SFCL during a fault is particularly important for estimating its recovery time, which in turn affects the design, planning, and operation of electrical systems that use SFCLs to manage fault levels. Generic equations for the energy dissipation are established in terms of fault duration, SFCL resistance, source impedance, source voltage, and fault inception angle. Furthermore, using an analysis that is independent of the superconductor material, it is shown that the minimum required volume of superconductor varies linearly with SFCL resistance but, for a given level of fault-current limitation and power rating, is independent of system voltage and superconductor resistivity. Hence, there is a compromise between a shorter recovery time, which is desirable, and the cost of the superconductor volume needed to achieve the resistance required for that shorter recovery time.
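    The core trade-off can be illustrated with the simplest possible energy estimate, using only the steady-state RMS fault current; the transients and fault-inception angle treated in the paper's full derivation are ignored here, and the numbers are invented:

```python
def sfcl_energy(v_rms, z_source, r_sfcl, t_fault):
    """Energy (J) dissipated in a resistive SFCL during a fault of
    duration t_fault (s), assuming a purely resistive circuit and a
    steady-state RMS fault current i = V / (Z_source + R_sfcl)."""
    i_fault = v_rms / (z_source + r_sfcl)
    return i_fault ** 2 * r_sfcl * t_fault
```

    In this simplified model the dissipation E = V^2 R t / (Z + R)^2 peaks when the SFCL resistance equals the source impedance and falls off for larger resistances, so a larger (more strongly limiting, but more superconductor-hungry) resistance reduces the dissipated energy and hence the recovery time: the compromise described above.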

    Implementing fault tolerant applications using reflective object-oriented programming

    Abstract: This paper shows how reflection and object-oriented programming can be used to ease the implementation of classical fault tolerance mechanisms in distributed applications. When the underlying runtime system does not provide fault tolerance transparently, classical approaches to implementing fault tolerance mechanisms often imply mixing functional programming with non-functional programming (e.g. error-processing mechanisms). The use of reflection improves the transparency of fault tolerance mechanisms to the programmer and, more generally, provides a clearer separation between functional and non-functional programming. Implementations of some classical replication techniques using a reflective approach are presented in detail and illustrated by several examples, which have been prototyped on a network of Unix workstations. Lessons learnt from these experiments are drawn and future work is discussed.
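    The reflective idea can be sketched in a few lines of Python: a proxy intercepts every method call (runtime reflection here standing in for a compile-time metaobject protocol) and applies a replication policy without touching the functional code. The `Counter` class and the error-masking policy are illustrative assumptions, not the paper's prototypes:

```python
class Replicated:
    """Transparent active-replication proxy: intercepts method calls
    via __getattr__, applies them to every replica, returns the first
    successful result and masks replicas that raise."""

    def __init__(self, *replicas):
        self._replicas = list(replicas)

    def __getattr__(self, name):
        def invoke(*args, **kwargs):
            result, ok = None, False
            for replica in self._replicas:
                try:
                    r = getattr(replica, name)(*args, **kwargs)
                    if not ok:
                        result, ok = r, True
                except Exception:
                    continue          # faulty replica: mask the error
            if not ok:
                raise RuntimeError("all replicas failed")
            return result
        return invoke

class Counter:
    """Purely functional application code: knows nothing about
    replication or error processing."""
    def __init__(self):
        self.n = 0
    def incr(self):
        self.n += 1
        return self.n
```

    Swapping the replication policy (e.g. primary-backup instead of active replication) changes only the proxy, leaving `Counter` untouched, which is exactly the functional/non-functional separation the paper advocates.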