
    Predictability of Critical Transitions

    Critical transitions in multistable systems have been discussed as models for a variety of phenomena, ranging from the extinction of species to socio-economic changes and climate transitions between ice ages and warm ages. From bifurcation theory we can expect certain critical transitions to be preceded by a decreased recovery from external perturbations. The consequences of this critical slowing down have been observed as an increase in variance and autocorrelation prior to the transition. However, especially in the presence of noise, it is not clear whether these changes in observed variables are statistically significant enough that they could be used as indicators for critical transitions. In this contribution we investigate the predictability of critical transitions in conceptual models. We study the quadratic integrate-and-fire model and the van der Pol model under the influence of external noise. We focus especially on the statistical analysis of the success of predictions and the overall predictability of the system. The performance of different indicator variables turns out to depend on the specific model under study and on the conditions under which it is observed. Furthermore, we study the influence of the magnitude of transitions on predictive performance.
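The critical-slowing-down signature described above, rising variance and lag-1 autocorrelation as recovery from perturbations weakens, can be sketched on a toy stochastic model. The Ornstein-Uhlenbeck process, drift schedule, and window sizes below are illustrative choices, not the paper's actual models:

```python
import numpy as np

def rolling_indicators(x, window, step=20):
    """Rolling variance and lag-1 autocorrelation: the two classic
    early-warning indicators of critical slowing down."""
    var, ac1 = [], []
    for i in range(window, len(x), step):
        w = x[i - window:i]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# Toy system: Ornstein-Uhlenbeck relaxation dx = -k(t) x dt + sigma dW,
# with the restoring rate k(t) slowly decaying toward zero, i.e. recovery
# from perturbations weakens as the transition is approached.
rng = np.random.default_rng(1)
n, dt, sigma = 20000, 0.01, 0.1
k = np.linspace(50.0, 1.0, n)   # critical slowing down: k -> 0
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t-1] - k[t] * x[t-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

var, ac1 = rolling_indicators(x, window=1000)
print(f"variance: early {var[:50].mean():.5f} -> late {var[-50:].mean():.5f}")
print(f"autocorr: early {ac1[:50].mean():.3f} -> late {ac1[-50:].mean():.3f}")
```

Both indicators rise toward the transition; the paper's point is that deciding whether such a rise is statistically significant against the noise is the hard part.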

    Design for Time-Predictability

    Many safety-critical embedded systems have to satisfy hard real-time constraints, which calls for sound methods and tools to derive reliable run-time guarantees. The guaranteed run times should not only be reliable but also precise. The achievable precision depends strongly on characteristics of the target architecture as well as on the implementation methods and system layers of the software. Trends in hardware and software design run contrary to predictability. This article describes threats to the time-predictability of systems and proposes design principles that support time-predictability. The ultimate goal is to design performant systems with sharp upper and lower bounds on execution times.

    Predicting catastrophes: the role of criticality

    Is prediction feasible in systems at criticality? While conventional scale-invariance arguments suggest a negative answer, evidence from simulations of driven-dissipative systems and from real systems such as ruptures in materials and crashes in financial markets suggests otherwise. In this dissertation, I address the question of predictability at criticality by investigating two non-equilibrium systems: the OFC model, a driven-dissipative model used to describe earthquakes, and damage spreading in the Ising model. Both systems display a phase transition at the critical point. Using machine learning, I show that in the OFC model scaling events are indistinguishable from one another, and only the large, non-scaling events are distinguishable from the small, scaling events. I also show that predictability falls as the critical point is approached. For damage spreading in the Ising model, the opposite behavior is seen: the accuracy of predicting whether damage will spread or heal increases as the critical point is approached. I also use machine learning to identify which precursors are useful for the prediction problem.
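The headline result, that events in the scaling regime are mutually indistinguishable while large non-scaling events are easy to separate, can be illustrated with a toy experiment. The power-law sampler and the single-threshold "classifier" below are stand-ins I chose for illustration, not the dissertation's models or ML methods:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law(n, alpha=1.5, xmin=1.0):
    """Sample event sizes from a scale-free (power-law) distribution
    via inverse-CDF sampling."""
    return xmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))

def best_threshold_accuracy(a, b):
    """Accuracy of the best single-threshold classifier separating a from b,
    a minimal stand-in for a trained ML classifier."""
    data = np.concatenate([a, b])
    labels = np.concatenate([np.zeros(len(a)), np.ones(len(b))])
    best = 0.5
    for thr in np.quantile(data, np.linspace(0.01, 0.99, 99)):
        pred = (data > thr).astype(float)
        best = max(best, (pred == labels).mean(), (pred != labels).mean())
    return best

# Two samples from the SAME scaling distribution: near-chance accuracy.
a, b = power_law(5000), power_law(5000)
acc_same = best_threshold_accuracy(a, b)

# Scaling events vs. hypothetical large non-scaling events: easy to separate.
large = 50.0 + rng.exponential(10.0, 5000)
acc_diff = best_threshold_accuracy(a, large)
print(f"scaling vs scaling: {acc_same:.3f}   scaling vs large: {acc_diff:.3f}")
```

The first accuracy hovers near 0.5 (indistinguishable), the second is high, mirroring the qualitative claim about scaling versus non-scaling events.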

    BB-RTE: a Budget-Based RunTime Engine for Mixed and Safety Critical Systems

    The safety-critical industry is considering a shift from single-core to multi-core COTS processors for safety- and time-critical computers, in order to maximize performance while reducing costs. In a domain where time-predictability is a major concern due to regulation standards, multi-core processors introduce new sources of timing variation, caused by contention when software accesses shared hardware resources and characterized as timing interference. The solutions proposed in the literature to deal with timing interference all involve a trade-off between performance efficiency, time-predictability, and intrusiveness in the software; in particular, none of them can fully exploit multi-core efficiency while allowing untouched, already-certified legacy software to run. In this paper, we introduce and evaluate BB-RTE, a Budget-Based RunTime Engine for mixed- and safety-critical systems. BB-RTE guarantees the deadlines of high-criticality tasks 1) by computing, for each shared hardware resource, a budget in terms of the extra accesses that the critical tasks can tolerate before their runtime is significantly impacted, and 2) by temporarily suspending low-criticality tasks at runtime once this budget has been consumed.
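A minimal sketch of the budget mechanism in points 1) and 2). All class and method names here are hypothetical; the real BB-RTE works with hardware-level access counters and task scheduling, not Python objects:

```python
class ResourceBudget:
    """Per-resource budget: extra accesses that high-criticality tasks can
    tolerate before their runtime is significantly impacted."""
    def __init__(self, name, max_extra_accesses):
        self.name = name
        self.max_extra_accesses = max_extra_accesses
        self.consumed = 0

    def charge(self, n=1):
        """Account for n accesses; return False once the budget is exhausted."""
        self.consumed += n
        return self.consumed <= self.max_extra_accesses

class BudgetRuntimeEngine:
    def __init__(self, budgets):
        self.budgets = {b.name: b for b in budgets}
        self.low_crit_suspended = False

    def on_low_crit_access(self, resource, n=1):
        """Called on each shared-resource access by a low-criticality task.
        Returns True if the access is allowed."""
        if self.low_crit_suspended:
            return False
        if not self.budgets[resource].charge(n):
            self.low_crit_suspended = True   # point 2): suspend low-crit tasks
            return False
        return True

    def on_period_boundary(self):
        """Replenish budgets and resume low-criticality tasks."""
        for b in self.budgets.values():
            b.consumed = 0
        self.low_crit_suspended = False

engine = BudgetRuntimeEngine([ResourceBudget("dram", max_extra_accesses=3)])
results = [engine.on_low_crit_access("dram") for _ in range(5)]
print(results)  # first 3 accesses allowed, then low-crit tasks are suspended
```

The design point is that high-criticality tasks never need modification: only the low-criticality side is throttled, which is what lets certified legacy software run untouched.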

    Building Responsive Systems from Physically-correct Specifications

    Predictability - the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements - is a crucial, highly desirable property of responsive embedded systems. This paper presents an overview of a development methodology for responsive systems which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems - possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing - cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems - not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the CLEOPATRA programming language. CLEOPATRA features a C-like imperative syntax for the description of computation, which makes it easy to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process-control applications. It is object-oriented and compositional, thus advocating modularity and reusability. CLEOPATRA is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of CLEOPATRA has been in use as a specification and simulation language for embedded time-critical robotic processes.
    Harvard University; DARPA (N00039-88-C-0163)

    Improving the Effectiveness of Integral Property Calculation in a CSG Solid Modeling System by Exploiting Predictability

    Integral property calculation is an important application of solid modeling systems. Algorithms for computing integral properties for various solid representation schemes are fairly well known. It is important for designers and users of solid modeling systems to understand the behavior of such algorithms. Specifically, the trade-off between execution time and accuracy is critical to effective use of integral property calculation. The average behavior of two algorithms for Constructive Solid Geometry (CSG) representations is investigated. Experimental results from the PADL-2 solid modeling system show that coarse decompositions can be used to predict execution times and error estimates for finer decompositions. Exploiting this predictability allows effective use of the algorithms in a solid modeling system.
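The coarse-to-fine predictability the abstract exploits can be illustrated with a generic cell-decomposition integral property (the volume of a sphere). This is a simple sketch of the idea, not PADL-2's actual algorithm:

```python
import numpy as np

def cell_volume_estimate(n):
    """Integral property (volume) via uniform cell decomposition: classify
    each of the n^3 cells of [-1,1]^3 against a unit sphere by its center."""
    h = 2.0 / n
    centers = -1.0 + h * (np.arange(n) + 0.5)
    X, Y, Z = np.meshgrid(centers, centers, centers, indexing="ij")
    inside = X**2 + Y**2 + Z**2 <= 1.0
    return float(inside.sum()) * h**3

true_vol = 4.0 / 3.0 * np.pi
ests = {}
for n in (8, 16, 32, 64):
    ests[n] = cell_volume_estimate(n)
    # Cost grows as n^3 while error shrinks with n, so the time and error
    # observed on coarse runs extrapolate to the finer decompositions.
    print(f"n={n:3d}  cells={n**3:7d}  vol={ests[n]:.5f}  "
          f"err={abs(ests[n] - true_vol):.5f}")
```

Because both cost and error follow regular trends in the decomposition size, a user can pick the cheapest resolution meeting a target accuracy after only a coarse trial run, which is the predictability being exploited.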

    Traffic at the Edge of Chaos

    We use a very simple description of human driving behavior to simulate traffic. The regime of maximum vehicle flow in a closed system shows near-critical behavior and, as a result, a sharp decrease in the predictability of travel time. Since Advanced Traffic Management Systems (ATMSs) tend to drive larger parts of the transportation system towards this regime of maximum flow, we argue that in consequence the traffic system as a whole will be driven closer to criticality, thus making predictions much harder. A simulation of a simplified transportation network supports our argument.
    Comment: Postscript version including most of the figures available from http://studguppy.tsasa.lanl.gov/research_team/. Paper has been published in Brooks RA, Maes P, Artificial Life IV: ..., MIT Press, 199
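The "very simple description of human driving behavior" is in the spirit of the Nagel-Schreckenberg cellular automaton. Below is a sketch assuming those standard rules, which may differ in detail from the paper's, showing how mean speed collapses as density passes the regime of maximum flow:

```python
import numpy as np

def nasch_step(pos, vel, L, vmax=5, p_slow=0.3, rng=None):
    """One update of a Nagel-Schreckenberg-style cellular automaton:
    accelerate, brake to the gap ahead, random slowdown, move."""
    rng = rng or np.random.default_rng()
    order = np.argsort(pos)                      # keep cars ordered on the ring
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L      # empty cells to the next car
    vel = np.minimum(vel + 1, vmax)              # accelerate toward vmax
    vel = np.minimum(vel, gaps)                  # brake: never hit the car ahead
    slow = rng.random(len(vel)) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)  # random slowdown
    return (pos + vel) % L, vel

# Closed ring of L cells: mean speed collapses once density passes the
# regime of maximum flow, where travel times are hardest to predict.
rng = np.random.default_rng(0)
L = 200
mean_speeds = {}
for n_cars in (20, 60, 120):
    pos = np.sort(rng.choice(L, n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    samples = []
    for step in range(500):
        pos, vel = nasch_step(pos, vel, L, rng=rng)
        if step >= 250:                          # discard the transient
            samples.append(vel.mean())
    mean_speeds[n_cars] = float(np.mean(samples))
    print(f"density={n_cars / L:.2f}  mean speed={mean_speeds[n_cars]:.2f}")
```

At low density cars travel near free-flow speed; near and beyond maximum flow, jams form spontaneously and mean speed (hence travel time) becomes both low and erratic.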