Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems
Learning-based control algorithms require data collection with abundant
supervision for training. Safe exploration algorithms ensure the safety of this
data collection process even when only partial knowledge is available. We
present a new approach for optimal motion planning with safe exploration that
integrates chance-constrained stochastic optimal control with dynamics learning
and feedback control. We derive an iterative convex optimization algorithm that
solves an Information-cost Stochastic Nonlinear Optimal Control problem
(Info-SNOC). The optimization objective encodes both optimal performance and
exploration for learning, and safety is incorporated as distributionally
robust chance constraints. The dynamics are predicted from a robust regression
model that is learned from data. The Info-SNOC algorithm is used to compute a
sub-optimal pool of safe motion plans that aid in exploration for learning
unknown residual dynamics under safety constraints. A stable feedback
controller is used to execute the motion plan and collect data for model
learning. We prove the safety of rollouts from our exploration method and the
reduction in uncertainty over epochs, thereby guaranteeing the consistency of
our learning method. We validate the effectiveness of Info-SNOC by designing
and implementing a pool of safe trajectories for a planar robot. We demonstrate
that our approach achieves a higher success rate in ensuring safety than a
deterministic trajectory optimization approach.
Comment: Submitted to RA-L 2020, review-
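As a hedged illustration of the distributionally robust chance constraints described above, the sketch below tightens a one-dimensional state bound with the moment-based (Cantelli) back-off, which holds for any distribution with the given mean and variance. The function names and the scalar setting are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def dr_backoff(eps):
    """Moment-based (Cantelli) tightening factor: for ANY distribution with
    the given mean and variance, mu + kappa * sigma <= limit implies
    P(x > limit) <= eps, i.e. P(x <= limit) >= 1 - eps."""
    return np.sqrt((1.0 - eps) / eps)

def satisfies_chance_constraint(mu, var, limit, eps=0.05):
    """Deterministic surrogate of the distributionally robust chance
    constraint P(x <= limit) >= 1 - eps, checked at one trajectory point."""
    return mu + dr_backoff(eps) * np.sqrt(var) <= limit

# Hypothetical example: predicted position mean/variance from a learned
# dynamics model, tested against a safety boundary at limit = 1.0.
print(satisfies_chance_constraint(mu=0.8, var=0.001, limit=1.0))  # True
```

Replacing the probabilistic constraint with this deterministic inequality is what makes each iteration of such a scheme a convex program in the mean trajectory.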
Robust safety of timed automata
Timed automata are governed by an idealized semantics that assumes a perfectly precise behavior of the clocks. The traditional semantics is not robust because the slightest perturbation in the timing of actions may lead to completely different behaviors of the automaton. Following several recent works, we consider a relaxation of this semantics, in which guards on transitions are widened by Δ > 0 and clocks can drift by ε > 0. The relaxed semantics encompasses the imprecisions that are inevitably present in an implementation of a timed automaton, due to the finite precision of digital clocks. We solve the safety verification problem for this robust semantics: given a timed automaton and a set of bad states, our algorithm decides if there exist positive values for the parameters Δ and ε such that the timed automaton never enters the bad states under the relaxed semantics.
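A minimal sketch of the relaxed semantics, assuming interval guards: guards are enlarged by Δ on both sides, and a clock whose rate drifts within [1 - ε, 1 + ε] may read anywhere in a band around the true elapsed time. The Guard type and function names are illustrative, not taken from a timed-automata tool.

```python
from dataclasses import dataclass

@dataclass
class Guard:
    lower: float  # transition enabled when clock >= lower ...
    upper: float  # ... and clock <= upper

def widen(g: Guard, delta: float) -> Guard:
    """Enlarged guard [lower - delta, upper + delta] of the relaxed semantics."""
    return Guard(g.lower - delta, g.upper + delta)

def may_fire(g: Guard, elapsed: float, delta: float, eps: float) -> bool:
    """Can the transition fire after `elapsed` real time units?  A clock with
    rate drift in [1 - eps, 1 + eps] reads anywhere in
    [elapsed*(1-eps), elapsed*(1+eps)]; we check whether that interval
    intersects the widened guard."""
    wg = widen(g, delta)
    lo, hi = elapsed * (1.0 - eps), elapsed * (1.0 + eps)
    return lo <= wg.upper and hi >= wg.lower

# Example: guard 2 <= x <= 3, relaxed with delta = 0.1 and eps = 0.05.
print(may_fire(Guard(2.0, 3.0), elapsed=1.95, delta=0.1, eps=0.05))  # True
```

The safety question the abstract poses is whether some positive Δ and ε exist for which no sequence of such relaxed transitions reaches a bad state.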
Integrating model checking with HiP-HOPS in model-based safety analysis
The ability to perform an effective and robust safety analysis on the design of modern safety-critical systems is crucial. Model-based safety analysis (MBSA) has been introduced in recent years to support the assessment of complex system design by focusing on the system model as the central artefact, and by automating the synthesis and analysis of failure-extended models. Model checking and failure logic synthesis and analysis (FLSA) are two prominent MBSA paradigms. Extensive research has placed emphasis on the development of these techniques, but discussion of their integration remains limited. In this paper, we propose a technique in which model checking and Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS), an advanced FLSA technique, can be applied synergistically to the benefit of the MBSA process. The application of the technique is illustrated through an example of a brake-by-wire system.
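To make the FLSA idea concrete, the sketch below encodes, in plain boolean functions, how an output deviation of one component can be caused by internal failure modes or by deviations propagated from its inputs, using the brake-by-wire setting of the example. The component names and the boolean encoding are assumptions for illustration, not HiP-HOPS syntax.

```python
# Illustrative failure-logic sketch in the FLSA style: each component
# declares how an output deviation arises from its own failure modes or
# from deviations at its inputs.

def brake_demand_omission(pedal_sensor_failed, bus_failed, controller_failed):
    """Omission of the brake demand at the controller output: caused by an
    internal controller failure, or by omission of its input, which in turn
    is caused by the pedal sensor or the communication bus failing."""
    input_omission = pedal_sensor_failed or bus_failed
    return controller_failed or input_omission

# Brute-force check for single-point causes of the top-level deviation,
# the kind of cut-set information an FLSA tool synthesizes automatically.
modes = ["pedal_sensor_failed", "bus_failed", "controller_failed"]
for i, mode in enumerate(modes):
    flags = [j == i for j in range(len(modes))]
    if brake_demand_omission(*flags):
        print(f"single-point cause: {mode}")
```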
AI Safety and Reproducibility: Establishing Robust Foundations for the Neuropsychology of Human Values
We propose the creation of a systematic effort to identify and replicate key
findings in neuropsychology and allied fields related to understanding human
values. Our aim is to ensure that research underpinning the value alignment
problem of artificial intelligence has been sufficiently validated to play a
role in the design of AI systems.
Comment: 5 pages
Synthesizing Robust Systems with RATSY
Specifications for reactive systems often consist of environment assumptions
and system guarantees. An implementation should not only be correct, but also
robust in the sense that it behaves reasonably even when the assumptions are
(temporarily) violated. We present an extension of the requirements analysis
and synthesis tool RATSY that is able to synthesize robust systems from GR(1)
specifications, i.e., systems in which a finite number of safety assumption
violations is guaranteed to induce only a finite number of safety guarantee
violations. We show how the specification can be turned into a two-pair Streett
game, and how a winning strategy corresponding to a correct and robust
implementation can be computed. Finally, we provide some experimental results.
Comment: In Proceedings SYNT 2012, arXiv:1207.055
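The Streett acceptance condition at the core of this reduction can be checked concretely on a lasso-shaped run, where the states visited infinitely often are exactly the loop states. The pair shown here, encoding "infinitely many guarantee violations imply infinitely many assumption violations" (the contrapositive of the robustness requirement), is an illustrative encoding, not RATSY's actual construction.

```python
def streett_accepts(loop_states, pairs):
    """Streett acceptance on a lasso-shaped run: for every pair (E, F),
    if the loop (the infinitely-visited states) meets E, it must also
    meet F."""
    loop = set(loop_states)
    return all(not (E & loop) or bool(F & loop) for E, F in pairs)

# Hypothetical pair: E = states violating a safety guarantee,
# F = states violating a safety assumption.
E, F = {"g_viol"}, {"a_viol"}
print(streett_accepts({"idle", "g_viol"}, [(E, F)]))            # False
print(streett_accepts({"idle", "g_viol", "a_viol"}, [(E, F)]))  # True
```

A winning strategy for the system in such a game yields an implementation whose guarantee violations are always eventually "paid for" by assumption violations, which is the robustness notion the abstract describes.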
