17 research outputs found

    Selective applicative functors & probabilistic programming

    Integrated master's dissertation in Informatics Engineering. In functional programming, selective applicative functors (SAF) are an abstraction between applicative functors and monads. This abstraction requires all effects to be declared statically, but provides a way to select dynamically which effects to execute. SAF have been shown to be a useful abstraction in several examples, including two industrial case studies, and selective functors have been used for their static analysis capabilities. Because they collect information about all possible effects in a computation and enable speculative execution, they can be used to describe probabilistic computations instead of monads. In particular, selective functors appear to provide a way to obtain a more efficient implementation of probability distributions than monads. This dissertation develops a probabilistic interpretation for the Arrow and Selective abstractions in the light of the linear algebra of programming discipline, and explores ways of offering SAF capabilities to probabilistic programming by exposing sampling as a concurrency problem. As a result, it provides a type-safe Haskell matrix library capable of expressing probability distributions and probabilistic computations as typed matrices, and a probabilistic programming eDSL that explores various techniques in order to offer a novel, performant solution to probabilistic functional programming.
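
    The selective interface the abstract refers to can be made concrete with a small sketch. The Haskell fragment below is illustrative only and is not the dissertation's library: it defines a local Selective class with the select combinator (following the signature used in the selective applicative functors literature), a toy finite distribution type, and a probabilistic branch in which both possible effects are declared statically but only one is selected at run time. All names (Dist, coin, example) are hypothetical.

```haskell
-- Minimal sketch (not the dissertation's library): Selective sits between
-- Applicative and Monad; all effects are declared statically, but `select`
-- may skip the second effect at run time.
class Applicative f => Selective f where
  select :: f (Either a b) -> f (a -> b) -> f b

-- A toy finite probability distribution: weighted outcomes.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Selective Dist where
  -- Both branches are visible statically, which is what enables the
  -- matrix / static-analysis reading of probabilistic programs.
  select (Dist es) (Dist fs) =
    Dist $ concat
      [ case e of
          Left a  -> [ (f a, p * q) | (f, q) <- fs ]
          Right b -> [ (b, p) ]
      | (e, p) <- es ]

coin :: Rational -> Dist Bool
coin p = Dist [(True, p), (False, 1 - p)]

-- Conditional behaviour without Monad: flip a coin, and only on heads
-- run a second, statically known probabilistic step.
example :: Dist Int
example =
  select (fmap pick (coin (1/3)))
         (fmap const (Dist [(1, 1/2), (2, 1/2)]))
  where
    pick b = if b then Left () else Right 0
```

    Because both branches of select are statically known, an interpreter can enumerate or pre-compile all possible effects (for instance as a stochastic matrix) before any sampling takes place, which is the property the dissertation exploits.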

    Automatically Comparing Memory Consistency Models

    A memory consistency model (MCM) is the part of a programming language or computer architecture specification that defines which values can legally be read from shared memory locations. Because MCMs take into account various optimisations employed by architectures and compilers, they are often complex and counterintuitive, which makes them challenging to design and to understand. We identify four tasks involved in designing and understanding MCMs: generating conformance tests, distinguishing two MCMs, checking compiler optimisations, and checking compiler mappings. We show that all four tasks are instances of a general constraint-satisfaction problem to which the solution is either a program or a pair of programs. Although this problem is intractable for automatic solvers when phrased over programs directly, we show how to solve analogous constraints over program executions, and then construct programs that satisfy the original constraints. Our technique, which is implemented in the Alloy modelling framework, is illustrated on several software- and architecture-level MCMs, both axiomatically and operationally defined. We automatically recreate several known results, often in a simpler form, including: distinctions between variants of the C11 MCM; a failure of the ‘SC-DRF guarantee’ in an early C11 draft; that x86 is ‘multi-copy atomic’ and Power is not; bugs in common C11 compiler optimisations; and bugs in a compiler mapping from OpenCL to AMD-style GPUs. We also use our technique to develop and validate a new MCM for NVIDIA GPUs that supports a natural mapping from OpenCL.
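
    The move from constraints over programs to constraints over executions can be illustrated with a small sketch. The Haskell fragment below is not the paper's Alloy model; it only shows the general shape of an axiomatic check, under the assumption that a candidate execution is a set of events with relations over them and a memory model is a predicate on those relations (here a toy "SC-like" axiom requiring the union of the relations to be acyclic). Distinguishing two MCMs then amounts to searching for an execution accepted by one predicate and rejected by the other.

```haskell
-- Illustrative sketch only (not the paper's Alloy model): an axiomatic MCM
-- as a predicate over candidate executions, checked as acyclicity of the
-- union of program order, reads-from and coherence edges.
import qualified Data.Set as S

type Event = Int
type Rel   = S.Set (Event, Event)

data Execution = Execution
  { events :: [Event]
  , po, rf, co :: Rel      -- program order, reads-from, coherence
  }

-- Is `to` reachable from `from` via at least one edge of r?
reachable :: Rel -> Event -> Event -> Bool
reachable r from to = go (S.singleton from) [from]
  where
    succs e = [ y | (x, y) <- S.toList r, x == e ]
    go _    []       = False
    go seen (e:rest)
      | to `elem` succs e = True
      | otherwise =
          let new = [ y | y <- succs e, not (y `S.member` seen) ]
          in go (foldr S.insert seen new) (rest ++ new)

acyclic :: Rel -> Bool
acyclic r = not (any (\e -> reachable r e e) (S.toList (S.map fst r)))

-- Toy "SC" axiom: the union of the execution relations must be acyclic.
scConsistent :: Execution -> Bool
scConsistent ex = acyclic (po ex `S.union` rf ex `S.union` co ex)

-- An execution containing a po ∪ rf cycle, hence rejected by the toy axiom.
cyclicExample :: Execution
cyclicExample = Execution [1, 2, 3, 4]
  (S.fromList [(1, 2), (3, 4)])   -- po
  (S.fromList [(2, 3), (4, 1)])   -- rf (illustrative)
  S.empty
```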

    Actes des Cinquièmes journées nationales du Groupement De Recherche CNRS du Génie de la Programmation et du Logiciel

    National audience. This document contains the proceedings of the fifth national days of the CNRS research group on programming and software engineering (Groupement De Recherche CNRS du Génie de la Programmation et du Logiciel, GDR GPL), held in Nancy from 3 to 5 April 2013. The contributions presented in this document were selected by the GDR's working groups. They consist of abstracts, new versions, posters and demonstrations corresponding to work that has already been validated by the programme committees of other conferences and journals, and whose rights belong exclusively to their authors.

    Development and Evaluation of Methodologies for Vulnerability Analysis of Ad-hoc Routing Protocols

    This thesis presents a number of methodologies for computer-assisted vulnerability analysis of routing protocols in ad-hoc networks, towards the goal of automating the process of finding vulnerabilities (possible attacks) on such network routing protocols and correcting the protocols. The methodologies developed are each based on a different representation (model) of the routing protocol, which in turn determines the quantitative methods and algorithms used. Each methodology is evaluated with respect to effectiveness, feasibility and applicability to realistically sized networks. The first methodology studied is based on formal models of the protocols and associated symbolic partially ordered model checkers. Using this methodology, a simple attack on unsecured AODV is demonstrated. An extension of the Strands model is developed which is suitable for such routing protocols. The second methodology is based on timed-probabilistic formal models, which is necessary due to the probabilistic nature of ad-hoc routing protocols; it uses natural extensions of the first one. A nondeterministic-timing model based on partially ordered events is considered for application to the model checking problem. Determining probabilities within this structure requires computing the volume of a particular type of convex body, which is known to be #P-hard. A new algorithm is derived, exploiting the particular problem structure, that reduces the time needed to compute these quantities compared with conventional algorithms. We show that timed-probabilistic formal models can be linked to trace-based techniques by sampling methods, and conversely how execution traces can serve as starting points for formal exploration of the state space. We show that an approach combining both trace-based and formal methods can converge faster than either alone on a set of problems. However, the applicability of both of these techniques to ad-hoc network routing protocols is limited to small networks and relatively simple attacks, and we provide evidence to this end. To address this limitation, a final technique employing only trace-based methods within an optimization framework is developed. In an application of this third methodology, it is shown that it can be used to evaluate the effects of a simple attack on OLSR. The result can be viewed (from a certain perspective) as an example of automatically discovering a new attack on the OLSR routing protocol.
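
    The probability computation mentioned above can be made concrete with a toy sketch. The Haskell fragment below is not the thesis's specialised algorithm; it is a naive Monte Carlo estimator, under the simplifying assumption that event times are independent and uniformly distributed on [0, 1]. The probability that sampled times respect a given partial order equals the volume of the corresponding order polytope, which is exactly the kind of convex-body volume whose exact computation is #P-hard.

```haskell
-- Naive Monte Carlo sketch (not the thesis's algorithm): estimate the
-- probability that independent, uniformly distributed event times respect
-- a given partial order, i.e. the volume of the order polytope.
import System.Random (randomRIO)
import Control.Monad (replicateM)

type EventId = Int

-- The partial order as a list of precedence constraints (a must precede b).
type PartialOrder = [(EventId, EventId)]

respects :: PartialOrder -> [Double] -> Bool
respects order times = all (\(a, b) -> times !! a < times !! b) order

estimateVolume :: Int -> Int -> PartialOrder -> IO Double
estimateVolume samples nEvents order = do
  hits <- replicateM samples $ do
    times <- replicateM nEvents (randomRIO (0, 1))
    pure (if respects order times then 1 else 0 :: Int)
  pure (fromIntegral (sum hits) / fromIntegral samples)

main :: IO ()
main = do
  -- Three events where event 0 must precede both 1 and 2; the true
  -- probability (polytope volume) is 1/3.
  p <- estimateVolume 100000 3 [(0, 1), (0, 2)]
  print p
```

    The same sampling view is what links the formal timed-probabilistic model to trace-based techniques: each sample corresponds to one concrete execution trace.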

    Automated Reasoning in Quantified Modal and Temporal Logics

    Centre for Intelligent Systems and their Applications. This thesis is about automated reasoning in quantified modal and temporal logics, with an application to formal methods. Quantified modal and temporal logics are extensions of classical first-order logic in which the notion of truth is extended to take into account its necessity or, equivalently in the temporal setting, its persistence through time. Due to their high complexity, these logics are less widely known and studied than their propositional counterparts, and little is known so far about their mechanisability and usefulness for formal methods. The contributions of this thesis are threefold: firstly, we devise a sound and complete set of sequent calculi for quantified modal logics; secondly, we extend the approach to the quantified temporal logic of linear, discrete time and develop a framework for automated reasoning in it via Proof Planning; thirdly, we present a set of experimental results obtained by applying the framework to the problem of Feature Interactions in telecommunication systems. These results indicate that (a) the problem can be concisely and effectively modelled in the aforementioned logic, (b) Proof Planning actually captures common structures in the related proofs, and (c) the approach is also viable from the point of view of efficiency.
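
    To fix intuitions about the object language, here is a small illustrative Haskell datatype for quantified modal formulas. It is not the thesis's calculus, only a sketch of the syntax such sequent calculi manipulate, together with the Barcan formula as an example of the quantifier/modality interactions that make these logics harder than their propositional counterparts.

```haskell
-- Minimal syntax sketch (not the thesis's sequent calculi): quantified modal
-- logic extends first-order formulas with necessity (Box) and possibility
-- (Dia); in the linear-time reading Box is "always" and Dia is "eventually".
type Var  = String
type Name = String

data Term
  = V Var
  | Fun Name [Term]
  deriving (Show, Eq)

data Formula
  = Pred Name [Term]
  | Not Formula
  | And Formula Formula
  | Or  Formula Formula
  | Imp Formula Formula
  | Forall Var Formula
  | Exists Var Formula
  | Box Formula          -- necessarily / always
  | Dia Formula          -- possibly / eventually
  deriving (Show, Eq)

-- The Barcan formula, a classic quantifier/modality interaction:
-- (forall x. Box p(x)) -> Box (forall x. p(x))
barcan :: Formula
barcan =
  Imp (Forall "x" (Box (Pred "p" [V "x"])))
      (Box (Forall "x" (Pred "p" [V "x"])))
```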

    Second Conference on Artificial Intelligence for Space Applications

    The proceedings of the conference are presented. This second conference on Artificial Intelligence for Space Applications brings together a diversity of scientific and engineering work and is intended to provide an opportunity for those who employ AI methods in space applications to identify common goals and to discuss issues of general interest in the AI community

    FICCS: A Fact Integrity Constraint Checking System


    Design method and management utility enabling the concurrent exercise of distributed expertise

    Concurrency of engineering activities requires a utility allowing designers, working at all phases of design, to: communicate the design requirements to specialists and external technologists, elicit responses and integrate the resulting actions with the design solution; acquire resources which are functionally and geographically distributed; communicate a formally agreed product description to the collaborating agents. The creation of such a utility is presented here which employs techniques of knowledge engineering to represent the entities and methods used in design. The utility manages representations within existing standards and methods, including communication at interfaces, resolves constraint conflict during design by referring to dependency relationships, is unitary and can be made recursive in its operation. The Glasgow Utility for the Integration of Design (GUIDE) employs the methods of knowledge engineering to secure a basis for design by a multidisciplinary team, the membership of which may be distributed and will vary as the product emerges through successive design phases. GUIDE offers designers a range of design functions which may be applied to the task performed through a single interface and without operational prescription. GUIDE maintains a single product description, which includes integrally a record of the entire design activity. It also provides distributed database access and communications facilities. GUIDE employs a representation scheme which involves structures, atoms and methods as its elements. Additional characteristics have been invested in these elements to provide for their manipulation and control. With GUIDE and the tools it provides, designers can create graphical, data and information related working entities and involve active processes. Process entities may invoke proprietary tools, provide translation at their interfaces and sustain the required communication with various engineering and product centred databases. Operations on design entities and information generation processes are managed by control functions which can also cause data transformations. GUIDE has the capacity to aggregate generic, modularly defined knowledge representations to create higher level, formally constructed unique design solutions or part solutions and to manage associations between design entities and the constraints affecting them. GUIDE's design record - the route taken and the structured information generated during design - provides a mechanism for the accumulation of expertise which can be used in future designs. In addition to the actual outputs of a design, such as the part description in its various forms, a designer could obtain information concerning the design tasks undertaken and their sequence. The design record enables design traceability and audit of the design process, sustains status evaluations and provides for regression. The concepts, design and implementation of GUIDE are described. Three examples are used to illustrate GUIDE's capacity to support the operations of design teams, the constant availability of a multidimensional product model which exposes tasks more quickly and precisely, and the ability logically to collocate design teams through product model coincidence. GUIDE provides an extension to knowledge representation using frames through the characteristics of the elements it employs and by the way its control mechanism manages operations upon and communication between them.
Links formed between elements and between elements and methods can be described in a structured way. Constraints are represented as methods which can evolve over time and may influence the use of other GUIDE elements. Relational databases are used to hold the knowledge representations employed, and GUIDE exploits the relational architecture to physically distribute the representations and maintain their integrity. The design record contains comprehensive meta-knowledge and supports the abstraction of formal generic representations from specific instances.
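
    As an illustration of the representation scheme described above (structures, atoms and methods as elements, links between elements, constraints as methods, and an integral design record), the Haskell sketch below renders those ideas as data types. It is purely illustrative: GUIDE itself is described as a frame-based scheme held in relational databases, and all names below are hypothetical.

```haskell
-- Illustrative rendering only (GUIDE itself is frame-based over relational
-- databases): elements are structures, atoms and methods; links connect
-- elements, and constraints are methods that may restrict other elements.
import qualified Data.Map.Strict as M

type ElementId = String

data Atom = AText String | ANumber Double | AGeometry FilePath
  deriving Show

data Element
  = Structure { slots :: M.Map String ElementId }   -- frame-like aggregate
  | AtomE Atom                                      -- primitive design datum
  | Method { applyTo :: [ElementId] }               -- active process / constraint
  deriving Show

data Link = Link { from, to :: ElementId, role :: String }
  deriving Show

-- A single product description plus the design record accumulated so far.
data Design = Design
  { elements :: M.Map ElementId Element
  , links    :: [Link]
  , record   :: [String]   -- design record: tasks undertaken, in order
  } deriving Show

-- Recording every operation keeps the design record integral to the product
-- description, which is what enables traceability and audit.
addElement :: ElementId -> Element -> Design -> Design
addElement eid el d =
  d { elements = M.insert eid el (elements d)
    , record   = ("added " ++ eid) : record d }
```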

    Formal Guaranties for Safety Critical Code Generation: the Case of Highly Variable Languages

    Control and command software plays a key role in safety-critical embedded systems used for human-related activities such as transportation, healthcare or energy. Its impact on safety makes the assessment of its correctness the central point of the development activities. Verification of such systems is usually conducted according to normative certification guidelines providing objectives to be reached in order to ensure the reliability of the development process and thus prevent flaws. Verification activities traditionally rely on tests and code reviews, but recent versions of the certification guidelines take into account the deployment of new development paradigms, such as model-based development and formal methods, and the use of tools in assistance of the development processes. Automatic code generators are used in most safety-critical embedded systems development in order to avoid human-related software production errors and to ensure the respect of development quality standards. As these tools are supposed to replace humans in the software code production activities, errors in these tools may result in embedded software flaws. It is thus mandatory to ensure the same level of correctness for the tool itself as for the expected produced code; tool verification shall be done according to qualification guidelines. In our work we advocate the use of model-based development and formal methods for the development of these tools in order to reach a higher quality level. Critical control and command software is mostly designed using graphical dataflow languages. These languages are used to express complex systems relying on atomic operations embedded in blocks that are gathered in block libraries. Blocks may be sophisticated pieces of software with highly variable structure and semantics. This variability depends on the values of the block parameters and on the block's context of use. In our work, we focus first on the formal specification of such blocks and on the verification of these specifications. We experimented with various techniques in order to ensure a formal, sound, verifiable and usable specification for blocks, and developed a domain-specific, formal, model-based language specifically tailored for the specification of the structure and semantics of blocks. This specification language is inspired by software product line concepts in order to ensure a correct and scalable management of block variability. We have applied this specification and verification approach to block examples chosen from common industrial use cases and have validated it on tool prototypes. Blocks are the core elements of the input language of automatic code generators used for control and command systems development. We show how our blocks' formal specification can be translated into code annotations in order to ease and automate the verification of the generated code. Code annotations are then verified using specialised static code analysis tools. Relying on synchronous observers to express high-level requirements at the input model level, we show how the formal block specification can also be used to translate high-level requirements into verifiable code annotations discharged using the same specialised tooling. We finally target the assistance of code generation tool qualification activities by arguing for the ability to automatically generate qualification data such as requirements, tests or simulation results for the verification and development of automatic code generators from the formal block specification.
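
    The chain from block specification to code annotation can be illustrated with a toy example. The Haskell sketch below is not the thesis's specification language or annotation format: it models a hypothetical saturation block, gives one variant of its reference semantics, emits an ACSL-like "ensures" post-condition for the generated code, and phrases a synchronous-observer-style requirement as a Boolean function over the block's behaviour.

```haskell
-- Illustrative sketch only (not the thesis's DSL or annotation format):
-- a "saturation" block whose semantics varies with its parameters, plus
-- the contract-style annotation one could emit for the generated code.
data SaturationBlock = SaturationBlock
  { lowerBound :: Double
  , upperBound :: Double
  }

-- Reference semantics of the block (one variant; block variability would
-- add cases for, e.g., disabled bounds or vector inputs).
saturate :: SaturationBlock -> Double -> Double
saturate b x = max (lowerBound b) (min (upperBound b) x)

-- Emit a post-condition for the generated code as an annotation string,
-- in an ACSL-like "ensures" style, to be discharged by a static analyser.
annotation :: SaturationBlock -> String
annotation b =
  "ensures " ++ show (lowerBound b) ++ " <= \\result && \\result <= "
             ++ show (upperBound b)

-- A synchronous-observer-style requirement over the block's behaviour:
-- the output never exceeds the configured bounds.
observer :: SaturationBlock -> Double -> Bool
observer b x = let y = saturate b x
               in lowerBound b <= y && y <= upperBound b
```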

    Extending relational model transformations to better support the verification of increasingly autonomous systems

    Over the past decade the capabilities of autonomous systems have been steadily increasing. Unmanned systems are moving from systems that are predominantly remotely operated to systems that include a basic decision-making capability. This trend is expected to continue, with autonomous systems making decisions in increasingly complex environments, based on more abstract, higher-level missions and goals. These changes have significant implications for how these systems should be designed and engineered. Indeed, as the goals and tasks these systems are to achieve become more abstract, and the environments they operate in become more complex, are current approaches to verification and validation sufficient? Domain Specific Modelling is a key technology for the verification of autonomous systems: verifying these systems will ultimately involve understanding a significant number of domains, including goals/tasks, environments, system functions and their associated performance. Relational Model Transformations provide a means to utilise, combine and check models for consistency across these domains. In this thesis an approach that utilises relational model transformation technologies for systems verification, Systems MDD, is presented, along with the results of a series of trials conducted with an existing relational model transformation language (QVT-Relations). These trials identified a number of problems with existing model transformation languages, including poorly or loosely defined semantics, differing interpretations of specifications across different tools, and the lack of a guarantee that a model transformation would generate a model compliant with its associated meta-model. To address these problems, two related solvers were developed to assist with realising the Systems MDD approach. The first solver, MMCS, is concerned with partial model completion, where a partial model is defined as a model that does not fully conform to its associated meta-model; it identifies appropriate modifications to be made to a partial model in order to bring it into full compliance. The second solver, TMPT, is a relational model transformation engine that prioritises target models: it considers multiple interpretations of a relational transformation specification, chooses an interpretation that results in a compliant target model (if one exists) and, optionally, maximises some other attribute associated with the model. A series of experiments applied these solvers to common transformation problems from the published literature.
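
    Partial model completion, the task addressed by MMCS, can be illustrated with a deliberately tiny example. The Haskell sketch below is not MMCS: the "meta-model" is reduced to a single well-formedness constraint (every referenced node must exist), and completion simply adds the minimal set of missing nodes, whereas the real solver must handle arbitrary meta-model constraints and choose among many possible completions.

```haskell
-- Toy illustration only (not MMCS): a model is a set of nodes plus named
-- references between nodes; the "meta-model" constraint is that every
-- referenced node must exist. Completion adds the missing nodes so that a
-- partial model becomes fully conformant.
import qualified Data.Set as S

data Model = Model
  { nodes :: S.Set String
  , refs  :: [(String, String)]   -- (source, target) references
  } deriving Show

conforms :: Model -> Bool
conforms m = all present (refs m)
  where present (a, b) = a `S.member` nodes m && b `S.member` nodes m

-- Partial model completion: add every node that is referenced but missing.
complete :: Model -> Model
complete m = m { nodes = nodes m `S.union` missing }
  where
    mentioned = S.fromList (concatMap (\(a, b) -> [a, b]) (refs m))
    missing   = mentioned `S.difference` nodes m

-- Example: a partial model referencing a node "engine" it never declares;
-- conforms (complete partial) == True.
partial :: Model
partial = Model (S.fromList ["car"]) [("car", "engine")]
```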