Scalable Synthesis and Verification: Towards Reliable Autonomy
We have seen the growing deployment of autonomous systems in our daily lives, ranging from safety-critical self-driving cars to dialogue agents. While impactful and impressive, these systems often come without guarantees and are not rigorously evaluated for failure cases. This is in part due to the limited scalability of the tools available for designing correct-by-construction systems or for verifying them post hoc. Another key limitation is the lack of models for the complex environments with which autonomous systems often have to interact. To overcome these bottlenecks to designing reliable autonomous systems, this thesis makes contributions along three fronts.
First, we develop an approach for parallelized synthesis from linear-time temporal logic specifications corresponding to the generalized reactivity (1) fragment. We begin by identifying a special case, corresponding to singleton liveness goals, that allows for a decomposition of the synthesis problem, which facilitates parallelized synthesis. Based on the intuition from this special case, we propose a more general approach to parallelized synthesis that relies on identifying equicontrollable states.
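GR(1) synthesis is typically computed by nested fixpoints over a two-player game graph. As a minimal illustration of the underlying building block (not the parallelized algorithm of the thesis), the sketch below computes the attractor, i.e. the set of states from which the system player can force a visit to a goal set, on a toy game graph with hypothetical states and transitions:

```python
# Minimal sketch (not the thesis algorithm): the controllable-predecessor
# fixpoint underlying reactive synthesis, on a toy turn-based game graph.
# States and transitions below are hypothetical illustrations.

def attractor(states, system_states, trans, goal):
    """Set of states from which the system can force a visit to `goal`.

    At system states, one successor inside the attractor suffices;
    at environment states, all successors must lie in the attractor.
    """
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            succs = trans.get(s, [])
            if not succs:
                continue
            if s in system_states:
                ok = any(t in win for t in succs)
            else:
                ok = all(t in win for t in succs)
            if ok:
                win.add(s)
                changed = True
    return win

# Toy game: system states {0, 2}, environment states {1, 3}, goal {4}.
states = {0, 1, 2, 3, 4}
system_states = {0, 2}
trans = {0: [1, 2], 1: [0, 3], 2: [4], 3: [3], 4: [4]}

print(sorted(attractor(states, system_states, trans, {4})))  # [0, 2, 4]
```

State 1 is losing because the environment can divert to the sink 3; iterating such attractors for each liveness goal is the step that the decomposition above parallelizes.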
Second, we consider learning-based approaches to enable verification at scale for complex systems, and for autonomous systems that interact with black-box environments. For the former, we propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. For the latter, we present a data-driven approach based on chance-constrained optimization that allows for a system to be evaluated for specification conformance without an accurate model of the environment. We demonstrate this approach on several tasks, including a lane-change scenario with real-world driving data.
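A minimal sketch of the data-driven idea (with a hypothetical monitor, not the chance-constrained formulation of the thesis): sample closed-loop traces, check each against the specification, and report a distribution-free Hoeffding upper confidence bound on the violation probability:

```python
# Illustrative sketch (not the thesis method): bounding a system's
# specification-violation probability from sampled traces alone,
# without an environment model, via a Hoeffding confidence bound.
import math
import random

def violation_bound(violations, n, delta=0.05):
    """Upper bound on the true violation probability that holds
    with confidence 1 - delta, given `violations` failures in n trials."""
    p_hat = violations / n
    return p_hat + math.sqrt(math.log(1 / delta) / (2 * n))

# Hypothetical monitor: a trace "violates" the spec if any sampled
# headway drops below a threshold (a stand-in for a lane-change check).
random.seed(0)
def sample_trace_violates():
    return min(random.gauss(2.0, 0.5) for _ in range(20)) < 0.8

n = 1000
violations = sum(sample_trace_violates() for _ in range(n))
print(violation_bound(violations, n))
```

The bound shrinks as O(1/sqrt(n)), which is what makes purely trace-based conformance evaluation feasible against black-box environments.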
Lastly, we consider the problem of interpreting and verifying learning-based components such as neural networks. We introduce a new method based on Craig interpolants for computing compact symbolic abstractions of pre-images for neural networks. Our approach relies on iteratively computing approximations that provably overapproximate and underapproximate the pre-images at all layers. Further, building on existing work on training neural networks for verifiability in the classification setting, we propose extensions that generalize the approach to more general architectures and to temporal specifications.
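As a simplified illustration of layer-wise approximation (interval analysis rather than the interpolant-based method described above), the sketch below soundly overapproximates the image of an input box through a hypothetical affine-plus-ReLU layer:

```python
# Simplified illustration (not the interpolant-based method in the text):
# sound interval overapproximation of one affine + ReLU layer, the kind of
# layer-wise approximation that pre-image analyses build on.

def interval_affine(lo, hi, W, b):
    """Propagate an input box through y = W x + b, coordinate-wise."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps boxes to boxes exactly."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Hypothetical 2-in/2-out layer applied to the input box [0,1] x [-1,1].
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -0.25]
lo, hi = interval_affine([0.0, -1.0], [1.0, 1.0], W, b)
lo, hi = interval_relu(lo, hi)
print(lo, hi)  # [0.0, 0.0] [2.0, 0.75]
```

Every true output is contained in the computed box, giving the "provably overapproximate" direction; under-approximations require the converse containment and are where interpolant-based reasoning earns its keep.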
Partial Information Decomposition as a Unified Approach to the Specification of Neural Goal Functions
In many neural systems anatomical motifs are present repeatedly, but despite
their structural similarity they can serve very different tasks. A prime
example for such a motif is the canonical microcircuit of six-layered
neo-cortex, which is repeated across cortical areas, and is involved in a
number of different tasks (e.g. sensory, cognitive, or motor tasks). This
observation has spawned interest in finding a common underlying principle, a
'goal function', of information processing implemented in this structure. By
definition such a goal function, if universal, cannot be cast in
processing-domain specific language (e.g. 'edge filtering', 'working memory').
Thus, to formulate such a principle, we have to use a domain-independent
framework. Information theory offers such a framework. However, while the
classical framework of information theory focuses on the relation between one
input and one output (Shannon's mutual information), we argue that neural
information processing crucially depends on the combination of
multiple inputs to create the output of a processor. To account for
this, we use a very recent extension of Shannon information theory, called
partial information decomposition (PID). PID allows us to quantify the information
that several inputs provide individually (unique information), redundantly
(shared information) or only jointly (synergistic information) about the
output. First, we review the framework of PID. Then we apply it to reevaluate
and analyze several earlier proposals of information theoretic neural goal
functions (predictive coding, infomax, coherent infomax, efficient coding). We
find that PID allows us to compare these goal functions in a common framework, and
also provides a versatile approach to design new goal functions from first
principles. Building on this, we design and analyze a novel goal function,
called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
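The standard XOR example makes the synergy term concrete: with Y = X1 XOR X2 and independent fair inputs, each input alone carries zero bits about Y while both together carry one bit, so under PID that bit is purely synergistic. A small self-contained check:

```python
# Classic illustration of synergy: for Y = X1 XOR X2 with fair inputs,
# each input alone gives 0 bits about Y, but jointly they give 1 bit,
# so under PID the full bit is synergistic.
import math
from collections import Counter
from itertools import product

def mutual_info(pairs):
    """I(A;B) in bits for a list of equiprobable (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

# All four equiprobable joint outcomes of (X1, X2, Y) with Y = X1 XOR X2.
table = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]

i1 = mutual_info([(x1, y) for x1, x2, y in table])         # I(X1;Y)
i2 = mutual_info([(x2, y) for x1, x2, y in table])         # I(X2;Y)
i12 = mutual_info([((x1, x2), y) for x1, x2, y in table])  # I(X1,X2;Y)
print(i1, i2, i12)  # 0.0 0.0 1.0
```

Unique and shared terms vanish here, so any PID measure must assign the whole bit to synergy; goal functions like 'coding with synergy' reward exactly this combination of inputs.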
Transactive Control of Coupled Electric Power and District Heating Networks
The aim of decarbonizing the energy supply represents a major technical and social challenge. The design of approaches for future energy network operation faces the technical challenge of coordinating a vast number of new network participants spatially and temporally, in order to balance energy supply and demand while achieving secure network operation. At the same time, these approaches should ideally provide economically optimal solutions. To meet this challenge, the research field of transactive control emerged, which is based on an appropriate interaction of market and control mechanisms. Such approaches have been studied extensively for electric power networks. To account for the strong differences between the operation of electric power networks and that of other energy networks, new approaches need to be developed. Therefore, within this work a new transactive control approach for Coupled Electric Power and District Heating Networks (CEPDHNs) is presented. As it builds on a model-based control approach, a suitable model is designed first, which enables coupled electric power and district heating networks to be operated as efficiently as possible. In addition, a procedure tailored to the transactive control approach is developed to determine market clearing prices in the multi-energy system. Further, a distributed form of district heating network operation is designed in this context. The effectiveness of the presented approach is analyzed in multiple simulations based on real-world networks.
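As a generic illustration of market clearing (merit-order uniform pricing with hypothetical offers and bids, not the fitted procedure developed in the paper):

```python
# Generic illustration (not the paper's clearing procedure): uniform-price
# clearing by merit order -- accept the cheapest supply offers against the
# highest-value demand bids until they no longer match.

def clear_market(offers, bids):
    """offers/bids: lists of (price_per_unit, quantity).
    Returns (clearing_price, traded_volume)."""
    offers = sorted(offers)                      # cheapest supply first
    bids = sorted(bids, reverse=True)            # highest willingness first
    volume, price = 0.0, None
    i = j = 0
    rem_o, rem_b = offers[0][1], bids[0][1]
    while i < len(offers) and j < len(bids) and offers[i][0] <= bids[j][0]:
        q = min(rem_o, rem_b)
        volume += q
        price = offers[i][0]                     # marginal accepted offer
        rem_o -= q
        rem_b -= q
        if rem_o == 0:
            i += 1
            rem_o = offers[i][1] if i < len(offers) else 0.0
        if rem_b == 0:
            j += 1
            rem_b = bids[j][1] if j < len(bids) else 0.0
    return price, volume

# Hypothetical offers and bids (price in EUR/MWh, quantity in MWh).
offers = [(20.0, 10.0), (35.0, 10.0), (60.0, 10.0)]
bids = [(70.0, 8.0), (40.0, 8.0), (30.0, 8.0)]
print(clear_market(offers, bids))  # (35.0, 16.0)
```

A multi-energy clearing additionally couples such markets through conversion units, which is what the network model in the paper has to capture.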
Sampling-Based Methods for Factored Task and Motion Planning
This paper presents a general-purpose formulation of a large class of
discrete-time planning problems, with hybrid state and control-spaces, as
factored transition systems. Factoring allows state transitions to be described
as the intersection of several constraints each affecting a subset of the state
and control variables. Robotic manipulation problems with many movable objects
involve constraints that each affect only a few variables at a time and therefore
exhibit large amounts of factoring. We develop a theoretical framework for
solving factored transition systems with sampling-based algorithms. The
framework characterizes conditions on the submanifold in which solutions lie,
leading to a characterization of robust feasibility that incorporates
dimensionality-reducing constraints. It then connects those conditions to
corresponding conditional samplers that can be composed to produce values on
this submanifold. We present two domain-independent, probabilistically complete
planning algorithms that take, as input, a set of conditional samplers. We
demonstrate the empirical efficiency of these algorithms on a set of
challenging task and motion planning problems involving picking, placing, and
pushing.
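The composition of conditional samplers can be sketched as follows (hypothetical one-dimensional constraints, not the paper's algorithms): an unconditional sampler proposes a placement, a conditional sampler proposes a grasp consistent with it, and draws are retried until all constraints hold jointly:

```python
# Toy sketch of composing conditional samplers (hypothetical constraints,
# not the paper's algorithms): each sampler outputs values satisfying one
# constraint, conditioned on values produced upstream, so composed draws
# land on the intersection of the constraints.
import random

random.seed(1)

def sample_placement():
    """Unconditional sampler: a placement x on a 1-D shelf [0, 10]."""
    return random.uniform(0.0, 10.0)

def sample_grasp(x):
    """Conditional sampler: a grasp g with |g - x| <= 1 (reachability)."""
    return x + random.uniform(-1.0, 1.0)

def sample_plan_values(collision_free, tries=100):
    """Compose samplers; retry until all constraints hold jointly."""
    for _ in range(tries):
        x = sample_placement()
        g = sample_grasp(x)
        if collision_free(x) and abs(g - x) <= 1.0:
            return x, g
    return None

# Hypothetical obstacle occupying [4, 6] on the shelf.
x, g = sample_plan_values(lambda x: not 4.0 <= x <= 6.0)
print(abs(g - x) <= 1.0)  # True
```

The robust-feasibility conditions in the paper characterize when such compositions can reach a solution with probability one, even though the joint constraint set is lower-dimensional than the raw state space.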
Scalable Design Space Exploration via Answer Set Programming
The design of embedded systems is becoming continuously more complex, such that the application of efficient high-level design methods is crucial for competitive results regarding design time and performance. Recently, advances in Boolean constraint solvers for Answer Set Programming (ASP) allow for easy integration of background theories and more control over the solving process. The goal of this research is to leverage those advances for system-level design space exploration, while using specialized techniques from electronic design automation that drive new application-originated ideas for multi-objective combinatorial optimization.
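The multi-objective combinatorial optimization targeted by such design space exploration can be illustrated by brute force (ASP solvers do this symbolically and at scale; the design points below are hypothetical):

```python
# Illustrative brute force (ASP-based exploration does this symbolically
# and at scale): enumerate hypothetical design points and keep only the
# Pareto-optimal ones under two minimized objectives, latency and cost.
from itertools import product

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical design space: 3 processor types x 3 clock scalings.
procs = [("small", 4.0, 1.0), ("mid", 3.0, 2.5), ("big", 1.5, 3.0)]  # (name, latency, cost)
freqs = [1.0, 1.5, 2.0]
designs = {(n, f): (lat / f, cost * f)
           for (n, lat, cost), f in product(procs, freqs)}

front = pareto_front(list(designs.values()))
print(sorted(front))
```

Every "mid" configuration is dominated by a "small" or "big" one and drops out, leaving a latency/cost trade-off curve; ASP encodings prune such dominated mappings without enumerating the full space.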
Modelling a multichannel security protocol to address Man in the Middle attacks
Unlike wired networks, wireless networks cannot be physically protected, making them highly vulnerable. This study looks into advanced ways of implementing
security techniques in wireless networks. It proposes using model checking and theorem proving to prove and validate a security protocol for data transmission over multiple channels in Wireless Local Area Networks (WLANs) between two sources.
This can help to reduce the risk of wireless networks being vulnerable to Man
in the Middle (MitM) attacks. We model secure transmission over a two-host
two-channel wireless network and consider the transmission in the presence of a
MitM attack. The main goal of adding an extra channel to the main channel is
to provide security by stopping MitM from getting any readable data once one of
these channels has been attacked.
We analyse the model for vulnerabilities and specify assertions for secure data
transmission over a multi-channel WLAN. Our approach uses the model analyser
Alloy which uses a Satisfiability (SAT) solver to find a model of a Boolean formula.
Alloy characterizations of security models are written to analyse and verify that the
implementation of a system is correct and achieves security relative to assertions
about the model of our security protocol. Further, we use the Z3 theorem prover
to check satisfiability using the Satisfiability Modulo Theories (SMT) solver to
generate results. Using Z3 does not involve high costs and can help with providing
reliable results that are accurate and practical for complex designs.
We conclude that, based on the results we achieved from analysing our protocol using Alloy and the Z3 SMT solver, the solvers complement each other in their
strengths and weaknesses. The common weakness is that neither can tell us why
the model is inconsistent, if it is inconsistent. We suggest that an approach of beginning with modelling a problem using Alloy and then turning to prove it using Z3 increases overall confidence in a model.
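One concrete way the two-channel property can hold (a generic XOR secret-sharing sketch, not the protocol modelled in this work): send a random pad on one channel and the message XORed with the pad on the other, so a MitM reading a single channel observes a value independent of the message:

```python
# Illustration of the two-channel idea (a generic XOR secret-sharing
# sketch, not the protocol modelled in the text): a MitM who reads only
# one channel sees a value that carries no information about the message.

BITS = 4

def split(message, pad):
    """Send `pad` on channel 1 and message XOR pad on channel 2."""
    return pad, message ^ pad

# For every message, each single channel can carry any value in the full
# range, so observing one compromised channel reveals nothing.
for msg in range(2 ** BITS):
    ch1 = {split(msg, pad)[0] for pad in range(2 ** BITS)}
    ch2 = {split(msg, pad)[1] for pad in range(2 ** BITS)}
    assert ch1 == ch2 == set(range(2 ** BITS))

# Both channels together recover the message exactly.
assert all(p1 ^ p2 == msg
           for msg in range(2 ** BITS)
           for p1, p2 in [split(msg, 7)])
print("single-channel views are message-independent")
```

Assertions of exactly this shape, i.e. one compromised channel yields no readable data while both channels reconstruct the message, are what the Alloy analysis and the Z3 proofs above are checking against the protocol model.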