Verifiably-safe software-defined networks for CPS
Next-generation cyber-physical systems (CPS) are expected to be deployed in domains which require scalability as well as performance under dynamic conditions. This scale and dynamicity will require that CPS communication networks be programmatic (i.e., not requiring manual intervention at any stage), but still maintain iron-clad safety guarantees. Software-defined networking standards like OpenFlow provide a means for scalably building tailor-made network architectures, but there is no guarantee that these systems are safe, correct, or secure. In this work we propose a methodology and accompanying tools for specifying and modeling distributed systems such that existing formal verification techniques can be transparently used to analyze critical requirements and properties prior to system implementation. We demonstrate this methodology by iteratively modeling and verifying an OpenFlow learning switch network with respect to network correctness, network convergence, and mobility-related properties. We posit that a design strategy based on the complementary pairing of software-defined networking and formal verification would enable the CPS community to build next-generation systems without sacrificing the safety and reliability that these systems must deliver.
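The verification style the abstract describes (exhaustively checking a model of the switch before implementation) can be illustrated with a minimal sketch. This is our own toy construction, not the paper's tool: a learning switch is reduced to a MAC table, and bounded explicit-state exploration checks the safety property "a destination whose location has been learned is never flooded". All names (`HOSTS`, `LOCATION`, `step`) are illustrative.

```python
from itertools import product

# Toy model of an OpenFlow learning switch (hypothetical simplification):
# state = MAC table mapping host -> port; events = packet arrivals (src, dst).
HOSTS = ["h1", "h2"]
LOCATION = {"h1": 1, "h2": 2}  # assumed fixed host placement on switch ports

def step(table, src, dst):
    """Process one packet: learn src's port, then forward or flood to dst."""
    table = dict(table)
    table[src] = LOCATION[src]          # learning rule
    action = ("forward", table[dst]) if dst in table else ("flood",)
    return table, action

def check_no_flood_after_learning():
    """Bounded exhaustive exploration over packet orderings; verify that
    once a destination has been learned, it is never flooded again."""
    packets = [(s, d) for s, d in product(HOSTS, HOSTS) if s != d]
    for order in product(packets, repeat=3):  # all length-3 traces
        table, learned = {}, set()
        for src, dst in order:
            table, action = step(table, src, dst)
            if dst in learned:
                assert action[0] == "forward", "learned host was flooded"
            learned.add(src)
    return True
```

A real model checker would also cover mobility (hosts changing ports) and convergence, as the paper does; the bounded loop above only conveys the exploration-plus-safety-assertion shape of the approach.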
A Domain-Specific Language for Incremental and Modular Design of Large-Scale Verifiably-Safe Flow Networks (Preliminary Report)
We define a domain-specific language (DSL) to inductively assemble flow networks from small networks or modules to produce arbitrarily large ones, with interchangeable functionally-equivalent parts. Our small networks or modules are "small" only as the building blocks in this inductive definition (there is no limit on their size). Associated with our DSL is a type theory, a system of formal annotations to express desirable properties of flow networks together with rules that enforce them as invariants across their interfaces, i.e., the rules guarantee the properties are preserved as we build larger networks from smaller ones. A prerequisite for a type theory is a formal semantics, i.e., a rigorous definition of the entities that qualify as feasible flows through the networks, possibly restricted to satisfy additional efficiency or safety requirements. This can be carried out in one of two ways, as a denotational semantics or as an operational (or reduction) semantics; we choose the first in preference to the second, partly to avoid exponential-growth rewriting in the operational approach. We set up a typing system and prove its soundness for our DSL.
Comment: In Proceedings DSL 2011, arXiv:1109.032
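The core idea, a type on each module that bounds its feasible flows, with composition typable only when the interface invariant is preserved, can be sketched in a few lines. This is our illustration, not the paper's DSL or its actual typing rules: a module's "type" is modeled as a single interval of feasible flow values, and series composition intersects intervals.

```python
from dataclasses import dataclass

# Illustrative sketch (names and the interval abstraction are ours):
# a module's "type" records the interval of feasible flow across its
# interface; composition is typable only when the intervals intersect.

@dataclass(frozen=True)
class Module:
    name: str
    lo: float  # minimum feasible flow
    hi: float  # maximum feasible flow

def compose(a: Module, b: Module) -> Module:
    """Series composition: the feasible flow of the whole is the
    intersection of the parts' intervals -- the typing rule that keeps
    the safety invariant as larger networks are built from smaller ones."""
    lo, hi = max(a.lo, b.lo), min(a.hi, b.hi)
    if lo > hi:
        raise TypeError(f"({a.name};{b.name}) is not typable: no feasible flow")
    return Module(f"({a.name};{b.name})", lo, hi)

pipe = compose(Module("A", 0, 10), Module("B", 2, 8))
# pipe carries the composed invariant: feasible flow lies in [2, 8]
```

Soundness in this toy setting means the interval computed for a composite never admits a flow infeasible in a component, which the intersection guarantees by construction; the paper proves the analogous property for its full typing system.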
NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems
This paper presents the Neural Network Verification (NNV) software tool, a set-based verification framework for deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS). The crux of NNV is a collection of reachability algorithms that make use of a variety of set representations, such as polyhedra, star sets, zonotopes, and abstract-domain representations. NNV supports both exact (sound and complete) and over-approximate (sound) reachability algorithms for verifying safety and robustness properties of feed-forward neural networks (FFNNs) with various activation functions. For learning-enabled CPS, such as closed-loop control systems incorporating neural networks, NNV provides exact and over-approximate reachability analysis schemes for linear plant models and FFNN controllers with piecewise-linear activation functions, such as ReLUs. For similar neural network control systems (NNCS) that instead have nonlinear plant models, NNV supports over-approximate analysis by combining the star set analysis used for FFNN controllers with zonotope-based analysis for nonlinear plant dynamics building on CORA. We evaluate NNV using two real-world case studies: the first is safety verification of ACAS Xu networks and the second deals with the safety verification of a deep learning-based adaptive cruise control system.
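Set-based reachability of the kind NNV performs can be sketched with a much coarser abstraction than star sets: plain interval arithmetic. The following is our simplified illustration (not NNV's algorithm or API) of propagating an input box through a ReLU network and checking an output threshold, which is sound but over-approximate in the same sense the abstract describes.

```python
import numpy as np

# Over-approximate reachability sketch in the spirit of NNV, using
# interval (box) abstraction instead of star sets (ours, and coarser).
def relu_layer_bounds(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = ReLU(W @ x + b)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    ylo = Wp @ lo + Wn @ hi + b     # lower bound of the affine map
    yhi = Wp @ hi + Wn @ lo + b     # upper bound of the affine map
    return np.maximum(ylo, 0), np.maximum(yhi, 0)

def reach(layers, lo, hi):
    """Output box over-approximating all reachable outputs of the FFNN."""
    for W, b in layers:
        lo, hi = relu_layer_bounds(W, b, lo, hi)
    return lo, hi

# Safety check on a tiny 2-input, 1-output ReLU net (weights are ours):
layers = [(np.array([[1.0, -1.0]]), np.array([0.5]))]
lo, hi = reach(layers, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
safe = hi[0] <= 2.0  # sound: if the over-approximation is safe, so is the net
```

Exact methods (complete, like NNV's star-set analysis for piecewise-linear activations) would instead split the set at each ReLU, trading cost for precision; the interval version above never splits and may report false alarms but never misses a violation.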
Future Vision of Dynamic Certification Schemes for Autonomous Systems
As software becomes increasingly pervasive in critical domains like autonomous driving, new challenges arise, necessitating a rethinking of system engineering approaches. The gradual takeover of all critical driving functions by autonomous driving adds to the complexity of certifying these systems. Namely, certification procedures do not fully keep pace with the dynamism and unpredictability of future autonomous systems, and they may not fully guarantee compliance with the requirements imposed on these systems.
In this paper, we have identified several issues with the current certification strategies that could pose serious safety risks. As an example, we highlight the inadequate reflection of software changes in constantly evolving systems and the lack of support for systems' cooperation necessary for managing coordinated movements. Other shortcomings include the narrow focus of awarded certification, neglecting aspects such as the ethical behavior of autonomous software systems. The contribution of this paper is threefold. First, we analyze the existing international standards used in certification processes in relation to the requirements derived from dynamic software ecosystems and autonomous systems themselves, and identify their shortcomings. Second, we outline six suggestions for rethinking certification to foster comprehensive solutions to the identified problems. Third, a conceptual Multi-Layer Trust Governance Framework is introduced to establish a robust governance structure for autonomous ecosystems and associated processes, including envisioned future certification schemes. The framework comprises three layers, which together support safe and ethical operation of autonomous systems.
Neural Simplex Architecture
We present the Neural Simplex Architecture (NSA), a new approach to runtime assurance that provides safety guarantees for neural controllers (obtained, e.g., using reinforcement learning) of autonomous and other complex systems without unduly sacrificing performance. NSA is inspired by the Simplex control architecture of Sha et al., but with some significant differences. In the traditional approach, the advanced controller (AC) is treated as a black box; when the decision module switches control to the baseline controller (BC), the BC remains in control forever. There is relatively little work on switching control back to the AC, and there are no techniques for correcting the AC's behavior after it generates a potentially unsafe control input that causes a failover to the BC. Our NSA addresses both of these limitations. NSA not only provides safety assurances in the presence of a possibly unsafe neural controller, but can also improve the safety of such a controller in an online setting via retraining, without overly degrading its performance. To demonstrate NSA's benefits, we have conducted several significant case studies in the continuous control domain. These include a target-seeking ground rover navigating an obstacle field, and a neural controller for an artificial pancreas system.
Comment: 12th NASA Formal Methods Symposium (NFM 2020)
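The switching logic at the heart of a Simplex-style architecture can be sketched in a few lines. This is our own 1-D toy (the safe region, dynamics, and controller names are assumptions, not the paper's models): the decision module forwards the advanced controller's output when a one-step lookahead stays in the safe region, fails over to the baseline otherwise, and, as in NSA but unlike classic Simplex, lets the AC regain control once its proposals are safe again.

```python
# Toy sketch of NSA-style decision logic (our simplification).
SAFE_MIN, SAFE_MAX = -1.0, 1.0       # assumed 1-D safe region for x' = x + u

def bc(x):
    """Baseline controller: conservatively steer the state toward 0."""
    return -0.5 * x

def is_safe(x, u):
    """One-step lookahead: would applying u keep the next state safe?"""
    return SAFE_MIN <= x + u <= SAFE_MAX

def decision_module(x, u_ac, log):
    """Forward the AC's output if safe; otherwise fail over to the BC.
    Because the check is re-run every step, the AC regains control as
    soon as it proposes safe inputs again (reverse switching)."""
    if is_safe(x, u_ac):
        log.append("AC")
        return u_ac
    log.append("BC")                  # in NSA, u_ac would also drive retraining
    return bc(x)

log, x = [], 0.9
x += decision_module(x, 0.5, log)    # unsafe proposal (0.9 + 0.5 > 1) -> BC
x += decision_module(x, 0.1, log)    # safe proposal -> control reverts to AC
# log is now ["BC", "AC"], showing failover followed by reverse switching
```

NSA's additional contribution, online retraining of the AC from the unsafe inputs that triggered failover, is only marked by a comment here; the sketch captures the forward and reverse switching alone.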
Reliable Industrial IoT-Based Distributed Automation
Reconfigurable manufacturing systems supported by the Industrial Internet-of-Things (IIoT) are modular and easily integrable, promoting efficient system/component reconfigurations with minimal downtime. Industrial systems are commonly based on sequential controllers described with Control Interpreted Petri Nets (CIPNs). Existing design methodologies to distribute centralized automation/control tasks focus on maintaining functional properties of the system during the process, while disregarding failures that may occur during execution (e.g., communication packet drops, sensing or actuation failures). Consequently, in this work, we provide a missing link for reliable IIoT-based distributed automation. We introduce a method to transform distributed control models based on CIPNs into Stochastic Reward Nets that enable integration of realistic fault models (e.g., probabilistic link models). We show how to specify desired system properties to enable verification under the adopted communication/fault models, both at design- and run-time; we also show feasibility of runtime verification on the edge, with a continuously updated system model. Our approach is used on real industrial systems, resulting in modifications of local controllers to guarantee reliable system operation in realistic IIoT environments.
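The abstract's move from a deterministic net to a stochastic one with link-failure probabilities can be illustrated with a minimal Monte Carlo sketch. This is our construction, not the paper's transformation or tool: a two-place net whose "send" transition fires successfully only with probability p_link (modeling packet drops), and the estimated reward is the probability that a command is delivered within k retransmissions.

```python
import random

# Sketch of a stochastic net with a probabilistic link model (ours):
# places "ready" and "done"; firing "send" succeeds with probability p_link.
def delivered_within(k, p_link, rng):
    """Simulate one run: up to k firing attempts of the lossy transition."""
    marking = {"ready": 1, "done": 0}
    for _ in range(k):
        if marking["ready"] and rng.random() < p_link:   # firing succeeds
            marking = {"ready": 0, "done": 1}            # token moves
            break
    return marking["done"] == 1

def estimate_reliability(k=3, p_link=0.9, n=20000, seed=0):
    """Monte Carlo estimate of the reward 'delivered within k tries';
    should approach the analytic value 1 - (1 - p_link)**k."""
    rng = random.Random(seed)
    hits = sum(delivered_within(k, p_link, rng) for _ in range(n))
    return hits / n
```

Stochastic Reward Net tools compute such measures analytically from the net's underlying Markov chain rather than by simulation; the sketch only conveys how a fault probability attached to a transition turns a functional property ("the command is delivered") into a quantitative reliability figure.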