3,915 research outputs found
HySIA: Tool for Simulating and Monitoring Hybrid Automata Based on Interval Analysis
We present HySIA: a reliable runtime verification tool for nonlinear hybrid
automata (HA) and signal temporal logic (STL) properties. HySIA simulates an HA
with interval analysis techniques so that a trajectory is enclosed sharply
within a set of intervals. Then, HySIA computes whether the simulated
trajectory satisfies a given STL property; the computation is performed again
with interval analysis to achieve reliability. Simulation and verification
using HySIA are demonstrated through several example HA and STL formulas.
Comment: Appeared in RV'17; the final publication is available at Springer.
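The core idea of interval-based monitoring can be illustrated with a toy sketch (this is an illustration of the technique, not HySIA's implementation): each sample of the simulated trajectory is an interval guaranteed to contain the true value, so an STL predicate evaluates to a reliable three-valued verdict.

```python
# Toy sketch of interval-based STL checking (not HySIA's actual code).
# Each sample is an enclosure (lo, hi) containing the true signal value,
# so atomic verdicts are three-valued: True, False, or None ("unknown").

def atomic_gt(sample, c):
    """Verdict of 'x > c' on an interval sample (lo, hi)."""
    lo, hi = sample
    if lo > c:
        return True      # the whole enclosure satisfies x > c
    if hi <= c:
        return False     # the whole enclosure violates x > c
    return None          # enclosure straddles c: inconclusive

def eventually(verdicts):
    """Verdict of 'F (x > c)' over a finite interval trace."""
    if any(v is True for v in verdicts):
        return True
    if all(v is False for v in verdicts):
        return False
    return None

trace = [(0.0, 0.1), (0.4, 0.6), (0.9, 1.2)]   # interval enclosures
verdicts = [atomic_gt(s, 0.8) for s in trace]
print(eventually(verdicts))  # True: the last enclosure lies entirely above 0.8
```

A `True`/`False` verdict is trustworthy despite numerical error; only `None` requires refining the enclosures.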
Statistical Model Checking of e-Motions Domain-Specific Modeling Languages
Domain experts may use novel tools that allow them to design and model their systems in a notation very close to the problem domain. However, the use of tools for the statistical analysis of stochastic systems requires software engineers to carefully specify such systems in low-level and specific languages. In this work we bring together both scenarios: domain-specific modeling and statistical analysis. Specifically, we have extended the e-Motions system, a framework to develop real-time domain-specific languages where behavior is specified in a natural way by in-place transformation rules, to support the statistical analysis of systems defined using it. We discuss how restricted e-Motions systems are used to produce corresponding Maude specifications, via a model transformation from e-Motions to Maude, which comply with the restrictions of the VeStA tool and can therefore be used to perform statistical analysis on the stochastic systems thus generated. We illustrate our approach with a very simple distributed messaging system.
Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. Research Project TIN2014-52034-R.
Event Stream Processing with Multiple Threads
Current runtime verification tools seldom make use of multi-threading to
speed up the evaluation of a property on a large event trace. In this paper, we
present an extension to the BeepBeep 3 event stream engine that allows the use
of multiple threads during the evaluation of a query. Various parallelization
strategies are presented and described on simple examples. The implementation
of these strategies is then evaluated empirically on a sample of problems.
Compared to the previous, single-threaded version of the BeepBeep engine,
allocating just a few threads to specific portions of a query yields
dramatic improvements in running time.
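One parallelization strategy of this kind can be sketched as follows (a hypothetical illustration, not BeepBeep 3's actual API): partition the event trace among a small pool of worker threads that apply a stateless per-event computation, then merge the partial outputs back into trace order.

```python
# Hypothetical sketch (not BeepBeep 3's API): parallelizing a stateless
# per-event computation by striping the trace across worker threads,
# then reinterleaving results so output order matches input order.
from concurrent.futures import ThreadPoolExecutor

def process_event(e):
    # stand-in for an expensive per-event query step
    return e * e

def evaluate_parallel(trace, n_threads=4):
    chunks = [trace[i::n_threads] for i in range(n_threads)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        partials = list(pool.map(lambda c: [process_event(e) for e in c],
                                 chunks))
    out = [None] * len(trace)
    for i, part in enumerate(partials):   # reinterleave stripes in order
        out[i::n_threads] = part
    return out

print(evaluate_parallel([1, 2, 3, 4, 5]))  # [1, 4, 9, 16, 25]
```

This only works when the per-event computation is independent of trace history; stateful processors need the more elaborate strategies the paper describes.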
Uncovering Bugs in Distributed Storage Systems during Testing (not in Production!)
Testing distributed systems is challenging due to multiple sources of nondeterminism. Conventional testing techniques, such as unit, integration and stress testing, are ineffective in preventing serious but subtle bugs from reaching production. Formal techniques, such as TLA+, can only verify high-level specifications of systems at the level of logic-based models, and fall short of checking the actual executable code. In this paper, we present a new methodology for testing distributed systems. Our approach applies advanced systematic testing techniques to thoroughly check that the executable code adheres to its high-level specifications, which significantly improves coverage of important system behaviors. Our methodology has been applied to three distributed storage systems in the Microsoft Azure cloud computing platform. In the process, numerous bugs were identified, reproduced, confirmed and fixed. These bugs required a subtle combination of concurrency and failures, making them extremely difficult to find with conventional testing techniques. An important advantage of our approach is that a bug is uncovered in a small setting and witnessed by a full system trace, which dramatically increases the productivity of debugging
MultiVeStA: Statistical Model Checking for Discrete Event Simulators
The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. Due to the size and complexity of the considered systems, an approach typically followed by engineers consists in performing simulations of system models to obtain statistical estimations of quantitative properties. Similarly, a technique used by computer scientists working on quantitative analysis is Statistical Model Checking (SMC), where rigorous mathematical languages (typically logics) are used to express system properties of interest. Such properties can then be automatically estimated by tools performing simulations of the model at hand. These property specification languages, often not popular among engineers, provide a formal, compact and elegant way to express system properties without needing to hard-code them in the model definition. This paper presents MultiVeStA, a statistical analysis tool which can be easily integrated with existing discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
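The basic loop such SMC tools automate can be sketched in a few lines (names and the stopping bound are illustrative, not MultiVeStA's API): run independent simulations until a confidence interval around the estimated probability of a property is narrow enough.

```python
# Minimal SMC sketch: estimate p = P(property holds) by repeated
# simulation, stopping when the normal-approximation confidence
# interval is narrower than a target half-width.  Illustrative only.
import math
import random

def smc_estimate(simulate_once, delta=0.05, max_runs=100_000):
    """Estimate p to half-width delta at ~95% confidence."""
    z = 1.96                      # normal quantile for 95% confidence
    successes, n = 0, 0
    while n < max_runs:
        successes += 1 if simulate_once() else 0
        n += 1
        p = successes / n
        half_width = z * math.sqrt(max(p * (1 - p), 1e-9) / n)
        if n > 30 and half_width < delta:
            break
    return p, half_width

# Toy model: an event that occurs with probability 0.7 per run.
random.seed(0)
p, hw = smc_estimate(lambda: random.random() < 0.7)
print(round(p, 2))  # close to 0.7
```

The appeal for simulator integration is that `simulate_once` is the only model-specific piece; the statistical machinery is generic.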
Learning Robust and Correct Controllers from Signal Temporal Logic Specifications Using BarrierNet
In this paper, we consider the problem of learning a neural network
controller for a system required to satisfy a Signal Temporal Logic (STL)
specification. We exploit STL quantitative semantics to define a notion of
robust satisfaction. Guaranteeing the correctness of a neural network
controller, i.e., ensuring the satisfaction of the specification by the
controlled system, is a difficult problem that received a lot of attention
recently. We provide a general procedure to construct a set of trainable High
Order Control Barrier Functions (HOCBFs) enforcing the satisfaction of formulas
in a fragment of STL. We use the BarrierNet, implemented by a differentiable
Quadratic Program (dQP) with HOCBF constraints, as the last layer of the neural
network controller, to guarantee the satisfaction of the STL formulas. We train
the HOCBFs together with other neural network parameters to further improve the
robustness of the controller. Simulation results demonstrate that our approach
ensures satisfaction and outperforms existing algorithms.
Comment: Submitted to CDC 202
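The quantitative semantics underlying this notion of robust satisfaction can be sketched for simple temporal operators on a discrete-time trace (a textbook-style illustration of STL robustness, not the paper's HOCBF construction):

```python
# Sketch of STL quantitative (robustness) semantics on a discrete trace.
# rho > 0 means the formula holds with margin rho; rho < 0 means violation.

def rho_atomic(trace, t, f):
    """Robustness of the predicate f(x) >= 0 at time t."""
    return f(trace[t])

def rho_globally(trace, f, a, b):
    """Robustness of G_[a,b] (f(x) >= 0): worst case over the window."""
    return min(f(trace[t]) for t in range(a, b + 1))

def rho_eventually(trace, f, a, b):
    """Robustness of F_[a,b] (f(x) >= 0): best case over the window."""
    return max(f(trace[t]) for t in range(a, b + 1))

trace = [0.2, 0.5, 1.3, 0.9]
margin = lambda x: x - 0.1          # encodes the predicate x >= 0.1
print(rho_globally(trace, margin, 0, 3))    # smallest margin, about 0.1
print(rho_eventually(trace, margin, 0, 3))  # largest margin, about 1.2
```

Training a controller to maximize such a margin, rather than a Boolean verdict, is what gives a gradient signal for robust satisfaction.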
DALiuGE: A Graph Execution Framework for Harnessing the Astronomical Data Deluge
The Data Activated Liu Graph Engine - DALiuGE - is an execution framework for
processing large astronomical datasets at a scale required by the Square
Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex
data reduction pipelines consisting of both data sets and algorithmic
components and an implementation run-time to execute such pipelines on
distributed resources. By mapping the logical view of a pipeline to its
physical realisation, DALiuGE separates the concerns of multiple stakeholders,
allowing them to collectively optimise large-scale data processing solutions in
a coherent manner. The execution in DALiuGE is data-activated, where each
individual data item autonomously triggers the processing on itself. Such
decentralisation also makes the execution framework very scalable and flexible,
supporting pipeline sizes ranging from less than ten tasks running on a laptop
to tens of millions of concurrent tasks on the second fastest supercomputer in
the world. DALiuGE has been used in production for reducing interferometry data
sets from the Karl E. Jansky Very Large Array and the Mingantu Ultrawide
Spectral Radioheliograph; and is being developed as the execution framework
prototype for the Science Data Processor (SDP) consortium of the Square
Kilometre Array (SKA) telescope. This paper presents a technical overview of
DALiuGE and discusses case studies from the CHILES and MUSER projects that use
DALiuGE to execute production pipelines. In a companion paper, we provide
in-depth analysis of DALiuGE's scalability to very large numbers of tasks on
two supercomputing facilities.
Comment: 31 pages, 12 figures; currently under review by Astronomy and Computing.
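The data-activated execution model can be sketched in miniature (an illustration of the stated idea, not DALiuGE's API): each data item, once written, autonomously triggers the tasks that consume it, so no central scheduler drives the pipeline.

```python
# Illustrative sketch of data-activated execution (not DALiuGE's API):
# completing a data item triggers its consumers, which in turn write
# further data items, propagating work through the pipeline.

class DataDrop:
    def __init__(self, name):
        self.name, self.value, self.consumers = name, None, []

    def write(self, value):
        self.value = value
        for task in self.consumers:   # completion triggers downstream work
            task.input_ready(self)

class Task:
    def __init__(self, fn, output):
        self.fn, self.output = fn, output

    def input_ready(self, drop):
        self.output.write(self.fn(drop.value))

raw = DataDrop("raw")
calibrated = DataDrop("calibrated")
raw.consumers.append(Task(lambda v: [x * 2 for x in v], calibrated))
raw.write([1, 2, 3])       # writing the data item activates the pipeline
print(calibrated.value)    # [2, 4, 6]
```

Because each item only knows its own consumers, there is no global coordination point, which is what makes the scheme scale from a laptop to millions of concurrent tasks.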
Propagation of Pericentral Necrosis During Acetaminophen-Induced Liver Injury: Evidence for Early Interhepatocyte Communication and Information Exchange.
Acetaminophen (APAP)-induced liver injury is clinically significant, and APAP overdose in mice often serves as a model for drug-induced liver injury in humans. By specifying that APAP metabolism, reactive metabolite formation, glutathione depletion, and mitigation of mitochondrial damage within individual hepatocytes are functions of intralobular location, an earlier virtual model mechanism provided the first concrete multiattribute explanation for how and why early necrosis occurs close to the central vein (CV). However, two characteristic features could not be simulated consistently: necrosis occurring first adjacent to the CV, and subsequent necrosis occurring primarily adjacent to hepatocytes that have already initiated necrosis. We sought parsimonious model mechanism enhancements that would manage spatiotemporal heterogeneity sufficiently to enable meeting two new target attributes and conducted virtual experiments to explore different ideas for model mechanism improvement at intrahepatocyte and multihepatocyte levels. For the latter, evidence supports intercellular communication via exosomes, gap junctions, and connexin hemichannels playing essential roles in the toxic effects of chemicals, including facilitating or counteracting cell death processes. Logic requiring hepatocytes to obtain current information about whether downstream and lateral neighbors have triggered necrosis enabled virtual hepatocytes to achieve both new target attributes. A virtual hepatocyte that is glutathione-depleted uses that information to determine if it will initiate necrosis. When a less-stressed hepatocyte is flanked by at least two neighbors that have triggered necrosis, it too will initiate necrosis. We hypothesize that the resulting intercellular communication-enabled model mechanism is analogous to the actual explanation for APAP-induced hepatotoxicity at comparable levels of granularity
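The neighbor-triggering logic described above can be read as a simple cellular-automaton rule, sketched here on a one-dimensional row of cells (a toy rendering of the stated rule, not the authors' virtual model):

```python
# Toy 1D cellular-automaton sketch of the stated rule: a glutathione-
# depleted hepatocyte initiates necrosis when at least two of its
# neighbors have already triggered necrosis.  Illustration only.

def step(cells):
    """cells: list of states 'ok', 'depleted', or 'necrotic'."""
    nxt = list(cells)
    for i, state in enumerate(cells):
        if state != "depleted":
            continue
        neighbors = cells[max(i - 1, 0):i] + cells[i + 1:i + 2]
        if sum(n == "necrotic" for n in neighbors) >= 2:
            nxt[i] = "necrotic"
    return nxt

row = ["necrotic", "depleted", "necrotic", "depleted", "ok"]
print(step(row))
# the first depleted cell, flanked by two necrotic neighbors, turns necrotic;
# the second, with only one necrotic neighbor, does not
```

Iterating `step` propagates necrosis outward from an initial pericentral focus, mirroring the qualitative behavior the abstract targets.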
Modelling and analyzing adaptive self-assembling strategies with Maude
Building adaptive systems with predictable emergent behavior is a challenging task and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various nature: from software architectures, to programming paradigms, to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model. Moreover, we exploit this model to specify, validate and analyse a prominent example of adaptive system: robot swarms equipped with self-assembly strategies. The analysis exploits the statistical model checker PVeStA