
    Integrated analysis of error detection and recovery

    An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagation, which seriously degrades its fault-tolerant capability. Several detection models are developed that enable analysis of the effect of detection mechanisms on the subsequent error-handling operations and the overall system reliability. Once the faulty unit has been detected and the system reconfigured, the contaminated processes or tasks must be recovered. The error-recovery strategies employed depend on the detection mechanisms and the available redundancy. Several recovery methods, including rollback recovery, are considered, and the recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
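    The rollback-recovery overhead discussed above can be sketched with a toy model: on each fault, execution rolls back to the most recent checkpoint, so the overhead is the progress lost since that checkpoint. This is a minimal illustration, not the paper's actual model; all names and parameters are hypothetical.

```python
def rollback_overhead(total_work=100.0, checkpoint_interval=10.0,
                      fault_times=(23.0, 57.0)):
    """Estimate re-executed work (recovery overhead) under rollback recovery.

    Each fault at time t forces a rollback to the last checkpoint taken
    before t, so the overhead contributed by that fault is the work done
    since that checkpoint. Names and defaults are illustrative only.
    """
    overhead = 0.0
    for t in fault_times:
        last_checkpoint = (t // checkpoint_interval) * checkpoint_interval
        overhead += t - last_checkpoint  # work redone after rolling back
    return overhead

print(rollback_overhead())  # (23-20) + (57-50) = 10.0
```

Shortening the checkpoint interval reduces this overhead at the cost of more frequent checkpointing, which is the trade-off such an evaluation index captures.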

    Energy managed reporting for wireless sensor networks

    In this paper, we propose a technique to extend the network lifetime of a wireless sensor network, whereby each sensor node decides its individual network involvement based on its own energy resources and the information contained in each packet. The information content is ascertained through a system of rules describing prospective events in the sensed environment and how important such events are. While the packets deemed most important are propagated by all sensor nodes, low-importance packets are handled only by the nodes with high energy reserves. Results obtained from simulations depicting a wireless sensor network used to monitor pump temperature in an industrial environment show that a considerable increase in network lifetime and network connectivity can be obtained. The results also show that, when coupled with a form of energy harvesting, our technique can enable perpetual network operation.
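    The per-node decision rule described above can be sketched as follows. The function name and thresholds are illustrative assumptions, not taken from the paper:

```python
def should_forward(packet_importance, node_energy,
                   importance_threshold=0.7, energy_threshold=0.5):
    """Decide whether a node participates in relaying a packet.

    High-importance packets (e.g. alarm events) are forwarded by every
    node; low-importance packets (e.g. routine readings) are relayed
    only by nodes with ample remaining energy. All thresholds are
    hypothetical values on a normalized [0, 1] scale.
    """
    if packet_importance >= importance_threshold:
        return True  # critical event: every node relays it
    return node_energy >= energy_threshold  # routine data: rich nodes only

# A low-battery node still relays a critical alarm...
print(should_forward(0.9, 0.1))   # True
# ...but drops a routine reading to conserve energy.
print(should_forward(0.2, 0.1))   # False
```

The effect is that energy-poor nodes shed low-value traffic first, which is what extends overall network lifetime and connectivity.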

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
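    Importance sampling, mentioned above as a way to accelerate Monte Carlo simulation, can be illustrated with a small rare-event example: estimating a Gaussian tail probability by sampling from a distribution shifted toward the rare region and reweighting by the likelihood ratio. The example is purely illustrative and not drawn from the paper:

```python
import math
import random

def rare_prob_importance_sampling(threshold=4.0, shift=4.0, n=100_000, seed=1):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) by importance sampling.

    Samples are drawn from the proposal N(shift, 1), which places most
    mass near the rare region, and each hit is weighted by the likelihood
    ratio phi(x) / phi(x - shift) of target to proposal densities.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            # likelihood ratio: N(0,1) density over N(shift,1) density
            total += math.exp(-x * x / 2 + (x - shift) ** 2 / 2)
    return total / n

est = rare_prob_importance_sampling()
print(est)  # close to the true tail probability, about 3.17e-5
```

Plain Monte Carlo with the same budget would see only a handful of samples beyond the threshold; the shifted proposal makes nearly half the samples count, which is exactly the acceleration the review refers to.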

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would heavily, even prohibitively, impact the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.

    Current challenges for preseismic electromagnetic emissions: shedding light from micro-scale plastic flow, granular packings, phase transitions and self-affinity notion of fracture process

    Are there credible electromagnetic (EM) earthquake (EQ) precursors? This is a question debated in the scientific community, and there may be legitimate reasons for the critical views. The negative view concerning the existence of EM precursors is reinforced by features accompanying their observation that are considered paradoxical, namely, these signals: (i) are not observed at the time of EQ occurrence or during the aftershock period, (ii) are not accompanied by large precursory strain changes, (iii) are not accompanied by simultaneous geodetic or seismological precursors, and (iv) are of problematic traceability. In this work, the detected candidate EM precursors are studied through a shift in thinking towards basic-science findings on granular packings, micron-scale plastic flow, interface depinning, fracture size effects, concepts drawn from phase transitions, the self-affine notion of the fracture and faulting process, universal features of fracture surfaces, recent high-quality laboratory studies, theoretical models, and numerical simulations. Strict criteria are established for classifying an emerged EM anomaly as a preseismic one, while the precursory EM features previously considered paradoxes are explained. A three-stage model for EQ generation by means of preseismic fracture-induced EM emissions is proposed. The claim that the observed EM precursors may permit real-time, step-by-step monitoring of EQ generation is tested.

    Fuzzy-model-based robust fault detection with stochastic mixed time-delays and successive packet dropouts

    This is the post-print version of the article. The official published version can be accessed from the link below. Copyright @ 2012 IEEE.

    This paper is concerned with the network-based robust fault detection problem for a class of uncertain discrete-time Takagi–Sugeno fuzzy systems with stochastic mixed time delays and successive packet dropouts. The mixed time delays comprise both the multiple discrete time delays and the infinite distributed delays. A sequence of stochastic variables is introduced to govern the random occurrences of the discrete time delays, distributed time delays, and successive packet dropouts, where all the stochastic variables are mutually independent but obey the Bernoulli distribution. The main purpose of this paper is to design a fuzzy fault detection filter such that the overall fault detection dynamics is exponentially stable in the mean square and, at the same time, the error between the residual signal and the fault signal is made as small as possible. Sufficient conditions are first established via intensive stochastic analysis for the existence of the desired fuzzy fault detection filters, and then the corresponding solvability conditions for the desired filter gains are established. In addition, the optimal performance index for the addressed robust fuzzy fault detection problem is obtained by solving an auxiliary convex optimization problem. An illustrative example is provided to show the usefulness and effectiveness of the proposed design method.

    This work was supported in part by the National Natural Science Foundation of China under Grants 61028008, 60825303, and 61004067, the National 973 Project under Grant 2009CB320600, the Key Laboratory of Integrated Automation for the Process Industry (Northeastern University), Ministry of Education, the Engineering and Physical Sciences Research Council (EPSRC) of the U.K. under Grant GR/S27658/01, the Royal Society of the U.K., the University of Hong Kong under Grant HKU/CRCG/200907176129, and the Alexander von Humboldt Foundation of Germany.
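    The Bernoulli-distributed dropout model described in the abstract can be sketched with a toy simulation: each measurement packet is lost independently with a fixed probability, and on a loss the receiver holds the last successfully received value. This is an illustrative sketch only; the function name and hold-last-value policy are assumptions, not the paper's filter design.

```python
import random

def simulate_measurements(signal, dropout_prob=0.2, seed=0):
    """Model successive packet dropouts with i.i.d. Bernoulli variables.

    Each packet arrives with probability 1 - dropout_prob; when a packet
    is lost, the receiver reuses the last received value (0.0 before any
    packet has arrived). Parameters are illustrative.
    """
    rng = random.Random(seed)
    received, last = [], 0.0
    for y in signal:
        if rng.random() >= dropout_prob:  # Bernoulli trial: packet arrives
            last = y
        received.append(last)
    return received

# With dropout_prob=0 every sample gets through unchanged.
print(simulate_measurements([1.0, 2.0, 3.0], dropout_prob=0.0))  # [1.0, 2.0, 3.0]
```

A fault detection filter for such a channel must generate residuals from the `received` sequence, which is why the mean-square stability analysis has to account for the Bernoulli variables explicitly.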

    A virtual actuator approach for the secure control of networked LPV systems under pulse-width modulated DoS attacks

    In this paper, we formulate and analyze the problem of secure control in the context of networked linear parameter-varying (LPV) systems. We consider an energy-constrained, pulse-width modulated (PWM) jammer, which corrupts the control communication channel by performing a denial-of-service (DoS) attack. In particular, the malicious attacker is able to erase the data sent to one or more actuators. In order to achieve secure control, we propose a virtual actuator technique under the assumption that the behavior of the attacker has been identified. The main advantage brought by this technique is that the existing components of the control system can be maintained without the need to retune them, since the virtual actuator performs a reconfiguration of the plant, hiding the attack from the controller's point of view. Using Lyapunov-based results that take into account the possible behavior of the attacker, design conditions for calculating the virtual actuator gains are obtained. A numerical example is used to illustrate the proposed secure control strategy.

    Predictability of catastrophic events: material rupture, earthquakes, turbulence, financial crashes and human birth

    We propose that catastrophic events are "outliers" with statistically different properties from the rest of the population, resulting from mechanisms involving amplifying critical cascades. Applications and the potential for prediction are discussed in relation to the rupture of composite materials, great earthquakes, turbulence and abrupt changes of weather regimes, financial crashes, and human parturition (birth).