Adventures in monitorability: From branching time to linear time and back again.
This paper establishes a comprehensive theory of runtime monitorability for Hennessy-Milner logic with recursion, a very expressive variant of the modal µ-calculus. It investigates the monitorability of that logic with a linear-time semantics and then compares the results obtained with those previously presented in the literature for a branching-time setting. Our work establishes an expressiveness hierarchy of monitorable fragments of Hennessy-Milner logic with recursion in a linear-time setting and identifies exactly what kinds of guarantees runtime monitors can give for each fragment in the hierarchy. Each fragment is shown to be complete, in the sense that it can express all properties that can be monitored under the corresponding guarantees. The study is carried out using a principled approach to monitoring that connects the semantics of the logic and the operational semantics of monitors. The proposed framework supports the automatic, compositional synthesis of correct monitors from monitorable properties.
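As a minimal illustration of the kind of monitor such a synthesis framework produces (this sketch is invented for illustration, not taken from the paper; the property, event names, and class are assumptions), consider a safety property such as "no 'send' event may occur after 'close'". A monitor for a safety property can only ever reach an irrevocable rejection verdict; no finite trace can confirm satisfaction:

```python
# Illustrative sketch only (property and names invented, not from the paper):
# a runtime monitor for the safety property "no 'send' after 'close'",
# which lies in a monitorable safety fragment. The monitor reaches an
# irrevocable 'no' verdict on violation and otherwise stays inconclusive.

class SafetyMonitor:
    def __init__(self):
        self.closed = False
        self.verdict = None          # None = inconclusive, "no" = rejection

    def step(self, event):
        if self.verdict is not None: # verdicts are irrevocable
            return self.verdict
        if event == "close":
            self.closed = True
        elif event == "send" and self.closed:
            self.verdict = "no"      # violation witnessed by a finite prefix
        return self.verdict

m = SafetyMonitor()
verdicts = [m.step(e) for e in ["send", "close", "send", "send"]]
```

The verdict is reached at the third event and persists thereafter, reflecting the irrevocability guarantee the paper's hierarchy classifies.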
Dynamic contracts for verification and enforcement of real-time systems properties
Programa de Doutoramento em Informática (MAP-i) das Universidades do Minho, de Aveiro e do Porto
Runtime verification is an emerging discipline that investigates methods and tools to enable the verification of program properties during the execution of the application. The goal is to complement static analysis approaches, in particular when static verification leads to the explosion of states. Non-functional properties, such as the ones present in real-time systems, are an ideal target for this kind of verification methodology, as they are usually out of the range of the power and expressiveness of classic static analyses.
Current real-time embedded systems development frameworks lack support for the verification of properties using explicit time, where counting time (i.e., durations) may play an important role in the development process. Temporal logics targeting real-time systems are traditionally undecidable. Based on a restricted fragment of metric temporal logic with durations (MTL-R), we present synthesis mechanisms 1) for target systems as runtime monitors and 2) for SMT solvers, as a way to get, respectively, a verdict at runtime and a schedulability problem to be solved before execution. The latter is able to partially solve the schedulability analysis for periodic resource models and fixed-priority scheduling algorithms. A domain-specific language is also proposed to describe such schedulability analysis problems at a higher level of abstraction.
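As an entirely invented sketch of the kind of duration property such monitors check (trace, names, and parameters are assumptions, not the thesis's synthesis output), consider the constraint "in every window of length W, a task's accumulated execution time stays within budget B", evaluated over a finite timed trace of busy intervals:

```python
# Hedged sketch, in the spirit of MTL-R duration constraints: the trace is a
# list of (start, end) intervals during which a task executes; the property
# is "every window [t, t+W] accumulates at most B units of execution time".
# Windows are sampled at a fixed step; this is exact here because the trace
# only changes at multiples of the step, so all maxima fall on the grid.

def duration_in(intervals, lo, hi):
    """Total busy time inside the window [lo, hi]."""
    return sum(max(0.0, min(b, hi) - max(a, lo)) for a, b in intervals)

def check_budget(intervals, horizon, W, B, step=0.5):
    """Return False iff some window [t, t + W] exceeds budget B."""
    t = 0.0
    while t + W <= horizon:
        if duration_in(intervals, t, t + W) > B:
            return False
        t += step
    return True

busy = [(0.0, 2.0), (3.0, 4.0), (5.0, 8.0)]   # invented execution trace
ok = check_budget(busy, horizon=10.0, W=4.0, B=3.5)
```

With this trace every length-4 window accumulates at most 3.0 time units, so the budget 3.5 holds; tightening the budget to 2.5 yields a violation in the first window.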
Finally, we validate both approaches, the first using empirical scheduling scenarios for uni- and multi-processor settings, and the second using the use case of the lightweight autopilot system Px4/Ardupilot, widely used for industrial and entertainment purposes. The former also shows that certain classes of real-time scheduling problems can be solved, even though without scaling well. The latter shows that, for the cases where the former cannot be used, the proposed synthesis technique for monitors is well applicable in a real-world scenario such as an embedded autopilot
flight stack.
This thesis was partially supported by National Funds through FCT/MEC (Portuguese
Foundation for Science and Technology) and co-financed by ERDF (European Regional Development Fund) under the PT2020 Partnership, within the CISTER Research Unit (CEC/04234); FCOMP-01-0124-FEDER-015006 (VIPCORE) and FCOMP-01-0124-FEDER-020486 (AVIACC); also by FCT and EU ARTEMIS JU, within project ARTEMIS/0003/2012, JU grant nr. 333053 (CONCERTO); and by FCT/MEC and the EU ARTEMIS JU within project ARTEMIS/0001/2013, JU grant nr. 621429 (EMC2).
An approach for representing safety standards with an ontology-based requirements engineering tool
Safety-critical systems are systems whose failure can cause loss of life, significant material damage, or damage to the environment.
Safety-critical systems must comply with safety norms and safety standards as a way of guaranteeing that they cannot cause undue risks to people, property, or the environment. A safety standard is a document that gathers a set of good practices, agreed upon by a consortium of companies and professionals, for the development and assurance of safety-critical systems.
Compliance with safety standards is a very demanding activity, since the standards can consist of hundreds of pages and practitioners generally have to demonstrate compliance with thousands of safety-related criteria.
These documents are usually long, ambiguous, and hard to understand, so several experts recommend representing them explicitly and in a structured way to ease their comprehension and application.
Since producing these representations can be complex, it is advisable to use tools that support the task.
The goal of this final degree project (TFG) is to define an approach for representing safety standards in KM, an ontology-based requirements engineering tool currently used in industry to represent, for example, the requirements and structure of systems.
The approach will also build on the most recent existing proposals for modelling safety standards.
Double Degree in Computer Science and Engineering and Business Administration
Multivariate linear mixed models for statistical genetics
In the last decade, genome-wide association studies have helped to advance our understanding of the genetic architecture of many important traits, including diseases. However, the statistical analysis of genotype-phenotype associations remains challenging due to multiple factors. First, many traits have polygenic architectures, which means that they are controlled by a large number of variants with small individual effects. Second, as increasingly deep phenotype data are being generated there is a need for multivariate analysis approaches to leverage multiple related phenotypes while retaining computational efficiency. Additionally, genetic analyses are confronted by strong confounding factors that can create spurious associations when not properly accounted for in the statistical model. We here derive more flexible methods that allow integrating genetic effects across variants and multiple quantitative traits. To do so, we build on the classical linear mixed model (LMM), a widely adopted framework for genetic studies.
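The classical LMM just described can be made concrete with a toy example (simulated data, a simple grid-search fit, and all names invented; this is an illustrative sketch, not LIMIX or mtSet code). It also shows the standard eigendecomposition trick that keeps the per-model cost low once the kinship matrix has been diagonalised:

```python
# Hedged sketch of the classical LMM used in genetic association studies:
#     y = x * beta + g + e,   g ~ N(0, sg2 * K),   e ~ N(0, se2 * I)
# With the eigendecomposition K = U S U^T, rotating data by U^T makes the
# covariance diagonal, so the profiled likelihood for each candidate
# variance ratio delta = se2 / sg2 is evaluated in O(n).

import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 50
Z = rng.standard_normal((n, m))
K = Z @ Z.T / m                        # toy genetic relatedness (kinship)
x = rng.standard_normal(n)             # candidate variant
g = Z @ rng.standard_normal(m) * np.sqrt(0.5 / m)   # polygenic effect
y = 0.3 * x + g + rng.standard_normal(n) * np.sqrt(0.5)

S, U = np.linalg.eigh(K)
yr, xr = U.T @ y, U.T @ x              # rotated data: independent coordinates

def neg_loglik(delta):
    """Profiled negative log-likelihood at variance ratio delta = se2/sg2."""
    d = S + delta                      # rotated covariance diagonal (per sg2)
    beta = (xr * yr / d).sum() / (xr * xr / d).sum()  # GLS effect estimate
    r = yr - beta * xr
    sg2 = (r * r / d).sum() / n        # genetic variance, profiled out
    return 0.5 * (n * np.log(sg2) + np.log(d).sum() + n)

deltas = np.exp(np.linspace(-5.0, 5.0, 101))
best = min(deltas, key=neg_loglik)     # crude 1-D fit over the grid
```

The same diagonalisation underlies efficient mixed-model software: the expensive eigendecomposition is done once, after which each variant (or variant set) can be tested cheaply.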
The first contribution of this thesis is mtSet, an efficient mixed-model approach that enables genome-wide association testing between sets of genetic variants and multiple traits while accounting for confounding factors. In both simulations and real-data applications we demonstrate that mtSet effectively combines the advantages of variant-set and multi-trait analyses.
Next, we present a new model for gene-context interactions that builds on mtSet. The proposed interaction set test (iSet) yields increased statistical power for detecting polygenic interactions. Additionally, iSet enables the identification of genetic loci that are associated with different configurations of causal variants across contexts. After benchmarking the proposed method using simulated data, we consider two applications to real datasets, where we investigate genetic effects on gene expression across different cellular contexts and sex-specific genetic effects on lipid levels.
Finally, we describe LIMIX, a software framework for the flexible implementation of different LMMs. Most of the models considered in this thesis, including mtSet and iSet, are implemented and available in LIMIX. A unique aspect of the software is an inference framework that allows a large class of genetic models to be defined and, in many cases, to be efficiently fitted by exploiting specific algebraic properties. We demonstrate the utility of this software suite in two applied collaboration projects.
Taken together, this thesis demonstrates the value of flexible and integrative modelling in genetics and contributes new statistical methods for genetic analysis. These approaches generalise previous models, yet retain the computational efficiency that is needed to tackle large genetic datasets.
EMBL-European Bioinformatics Institute
Algorithmic debugging for complex lazy functional programs
An algorithmic debugger finds defects in programs by systematic search. It relies on the programmer to direct the search by answering a series of yes/no questions about the correctness of specific function applications and their results. Existing algorithmic debuggers for a lazy functional language work well for small, simple programs but cannot be used to locate defects in complex programs, for two reasons. Firstly, to collect the information required for algorithmic debugging, existing debuggers use different but complex implementations. Therefore, these debuggers are hard to maintain and do not support all the latest language features. As a consequence, programs with unsupported language features cannot be debugged. Also, inclusion of a library using unsupported language features can make algorithmic debugging unusable even when the programmer is not interested in debugging the library. Secondly, algorithmic debugging breaks down when the size or number of questions is too great for the programmer to handle. This is a pity because, even though algorithmic debugging is a promising method for locating defects, many real-world programs are too complex for the method to be usable. I claim that the techniques in this thesis make algorithmic debugging usable for much more complex lazy functional programs. I present a novel method for collecting the information required for algorithmically debugging a lazy functional program. The method is non-invasive, uses program annotations in suspected modules only, and has a simple implementation. My method supports all of Haskell, including laziness, higher-order functions and exceptions. Future language extensions can be supported without changes, or with minimal changes, to the implementation of the debugger. With my method the programmer can focus on untrusted code -- lots of trusted libraries are unaffected. This makes traces, and hence the number of questions that need to be answered, more manageable.
I give a type-generic definition to support custom types defined by the programmer. Furthermore, I propose a method that re-uses properties to automatically answer some of the questions arising during algorithmic debugging, and to replace others by simpler questions. Properties may already be present in the code for testing; the programmer can also encode a specification or reference implementation as a property, or add a new property in response to a statement they are asked to judge.
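The core search that algorithmic debugging performs can be sketched in a few lines (an invented toy, not the thesis implementation; the example trace and oracle are assumptions). Each node of the computation tree records a function application and its result; a node judged wrong whose children are all judged right localises the defect:

```python
# Illustrative sketch of algorithmic debugging: systematic search over a
# computation tree, directed by an oracle that answers the yes/no question
# "is this application's result correct?" (normally asked of the programmer).

class Node:
    def __init__(self, call, result, children=()):
        self.call, self.result, self.children = call, result, list(children)

def debug(node, oracle):
    """Return the faulty node, or None if this subtree is judged correct."""
    if oracle(node):                  # result correct => no defect below
        return None
    for child in node.children:
        faulty = debug(child, oracle)
        if faulty is not None:
            return faulty
    return node                       # wrong result, all children correct

# Toy trace of an insertion sort with a defective 'insert'.
tree = Node("sort [4,1,3]", "[1,3]",
            [Node("insert 4 []", "[4]"),
             Node("insert 1 [4]", "[1,4]"),
             Node("insert 3 [1,4]", "[1,3]")])
# An automatic oracle standing in for the programmer's answers.
expected = {"sort [4,1,3]": "[1,3,4]", "insert 4 []": "[4]",
            "insert 1 [4]": "[1,4]", "insert 3 [1,4]": "[1,3,4]"}
oracle = lambda n: expected[n.call] == n.result
faulty = debug(tree, oracle)
```

The search blames the application `insert 3 [1,4]`: its result is wrong while everything it called behaved correctly. The thesis's property-based technique corresponds to replacing some of the oracle's calls to the programmer with automatic checks.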
Probabilistic Semantics: Metric and Logical Characterizations for Nondeterministic Probabilistic Processes
In this thesis we focus on processes with nondeterminism and probability in the PTS model, and we propose novel techniques to study their semantics, in terms of both classic behavioral relations and the more recent behavioral metrics.
Firstly, we propose a method for decomposing modal formulae in a probabilistic extension of the Hennessy-Milner logic. This decomposition method allows us to derive the compositional properties of probabilistic (bi)simulations.
Then, we propose original notions of metrics measuring the disparities in the behavior of processes with respect to (decorated) trace and testing semantics.
To capture the differences in the expressive power of the metrics we order them by the relation 'makes processes further than'.
Thus, we obtain the first spectrum of behavioral metrics on the PTS model.
From this spectrum we derive an analogous one for the kernels of the metrics, ordered by the relation 'makes strictly less identification than'.
Finally, we introduce a novel technique for the logical characterization of both behavioral metrics and their kernels, based on the notions of mimicking formula and distance on formulae.
This kind of characterization allows us to obtain the first example of a spectrum of distances on processes obtained directly from logics.
Moreover, we show that the kernels of the metrics can be characterized by simply comparing the mimicking formulae of processes.
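One very coarse instance of a behavioural metric and its kernel can be sketched concretely (an invented illustration, not the thesis's construction): for fully probabilistic processes represented by the distributions they induce over finite traces, take the supremum over traces of the difference in probability. Its kernel, distance zero, is exactly trace equivalence:

```python
# Hedged sketch: a simple trace-based behavioural distance between two fully
# probabilistic processes, each given as a dict from finite traces (tuples of
# actions) to the probability of producing that trace. Distance zero holds
# iff the two processes are trace equivalent (the metric's kernel).

def trace_distance(p, q):
    """sup over traces of |p(t) - q(t)|, over traces either process emits."""
    traces = set(p) | set(q)
    return max(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in traces)

# Two invented processes over actions {a, b, c}.
p = {("a", "b"): 0.5, ("a", "c"): 0.5}
q = {("a", "b"): 0.7, ("a", "c"): 0.3}
d = trace_distance(p, q)   # sup over traces; here 0.2 up to float rounding
```

Finer metrics in the spectrum (e.g. for decorated traces or testing semantics) refine this quantity, just as their kernels refine trace equivalence.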
Arrows for knowledge-based circuits
Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour in a way that has proven useful for specifying distributed systems. Here we present a scheme for compiling KBPs to executable automata in finite environments, with a proof of correctness in Isabelle/HOL. We use Arrows, a functional programming abstraction, to structure a prototype domain-specific synchronous language embedded in Haskell. By adapting our compilation scheme to use symbolic representations, we can apply it to several examples of reasonable size.