Differentially Testing Soundness and Precision of Program Analyzers
In the last decades, numerous program analyzers have been developed by both
academia and industry. Despite their abundance, however, there is currently no
systematic way of comparing the effectiveness of different analyzers on
arbitrary code. In this paper, we present the first automated technique for
differentially testing the soundness and precision of program analyzers. We used
our technique to compare six mature, state-of-the-art analyzers on tens of
thousands of automatically generated benchmarks. Our technique detected
soundness and precision issues in most analyzers, and we evaluated the
implications of these issues for both designers and users of program analyzers.
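The core idea — running analyzers on the same generated program and cross-checking their verdicts against the program's actual behavior — can be sketched as follows. The analyzers and programs here are hypothetical stand-ins, not the six tools evaluated in the paper:

```python
# Differential testing sketch: compare analyzer verdicts against ground truth.
# A "program" is modeled as a predicate returning True when its assertion fails;
# an "analyzer" maps a program to a verdict, "safe" or "unsafe".

def ground_truth(program):
    """Oracle: exhaustively run the program on a small input space."""
    return "unsafe" if any(program(i) for i in range(-100, 101)) else "safe"

def classify(verdict, truth):
    # Claiming "safe" for an unsafe program is a soundness issue;
    # claiming "unsafe" for a safe program is an imprecision (false alarm).
    if verdict == "safe" and truth == "unsafe":
        return "soundness issue"
    if verdict == "unsafe" and truth == "safe":
        return "precision issue"
    return "agrees"

# Two toy analyzers (hypothetical): one unsound, one imprecise.
unsound   = lambda program: "safe"     # never reports anything
imprecise = lambda program: "unsafe"   # always raises an alarm

program = lambda x: x * x < 0          # assertion x*x >= 0 never fails -> safe
truth = ground_truth(program)
print(classify(unsound(program), truth))    # agrees (program happens to be safe)
print(classify(imprecise(program), truth))  # precision issue
```

Disagreements between analyzers on the same benchmark narrow down which tool is at fault once the ground truth is known.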
Automated Random Testing of Numerical Constrained Types
We propose an automated testing framework based on constraint programming techniques. Our framework allows the developer to attach a numerical constraint to a type that restricts its set of possible values. We use this constraint as a partial specification of the program, our goal being to derive property-based tests for such annotated programs. To achieve this, we rely on the user-provided constraints on the types of a program: for each function f in the program that returns a constrained type, we generate a test. Each test consists of generating uniformly pseudo-random inputs and checking whether f's output satisfies the constraint. We are able to automate this process by providing a set of generators for primitive types and generator combinators for composite types. To derive generators for constrained types, we present in this paper a technique that characterizes their inhabitants as the solution set of a numerical CSP. This is done by combining abstract interpretation and constraint solving techniques that allow us to efficiently and uniformly generate solutions of numerical CSPs. We validated our approach by implementing it as a syntax extension for the OCaml language.
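The workflow the abstract describes — attach a numerical constraint to a type, generate uniform pseudo-random inputs, and check that the function's output satisfies the constraint — might look like the following Python sketch. The paper's actual implementation is an OCaml syntax extension; all names here are illustrative:

```python
import random

# A "constrained type": a base generator plus a numerical predicate
# restricting the type's set of valid values.
def constrained(gen, predicate):
    return {"gen": gen, "pred": predicate}

# Example: percentages are floats constrained to the interval [0, 100].
percentage = constrained(lambda: random.uniform(-50.0, 150.0),
                         lambda v: 0.0 <= v <= 100.0)

def check_function(f, input_gen, output_type, trials=1000):
    """Property-based test: f's output must satisfy the output constraint."""
    for _ in range(trials):
        x = input_gen()
        y = f(x)
        if not output_type["pred"](y):
            return ("FAIL", x, y)        # counterexample found
    return ("OK", None, None)

clamp = lambda x: min(100.0, max(0.0, x))   # correct: result always in range
buggy = lambda x: x                         # violates the constraint

print(check_function(clamp, percentage["gen"], percentage)[0])  # OK
print(check_function(buggy, percentage["gen"], percentage)[0])
```

The paper goes further than this sketch: instead of filtering outputs, it derives generators whose values satisfy the constraint by construction, by solving the constraint as a numerical CSP.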
Random Testing For Language Design
Property-based random testing can facilitate formal verification, exposing errors early in the proving process and guiding users towards correct specifications and implementations. However, effective random testing often requires users to write custom generators for well-distributed random data satisfying complex logical predicates, a task which can be tedious and error-prone.
In this work, I aim to reduce the cost of property-based testing by making such generators easier to write, read and maintain. I present a domain-specific language, called Luck, in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated.
I also aim to increase the applicability of testing to formal verification by bringing advanced random testing techniques to the Coq proof assistant. I describe QuickChick, a QuickCheck clone for Coq, and improve it by incorporating ideas explored in the context of Luck to automatically derive provably correct generators for data constrained by inductive relations.
Finally, I evaluate both QuickChick and Luck in a variety of complex case studies from the programming-languages literature, such as information-flow abstract machines and type systems for lambda calculi.
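The problem this abstract circles around — generating well-distributed random data satisfying a logical predicate — can be illustrated by contrasting naive generate-and-filter with a generator that builds the predicate in. This is a toy illustration in Python, not Luck or QuickChick code:

```python
import random

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def naive_sorted_list(n, bound=100):
    """Generate-and-filter: retry until the predicate holds (wasteful:
    the expected number of retries grows factorially with n)."""
    tries = 0
    while True:
        tries += 1
        xs = [random.randrange(bound) for _ in range(n)]
        if is_sorted(xs):
            return xs, tries

def smart_sorted_list(n, bound=100):
    """Constraint-aware: each element is drawn at or above the previous one,
    so every generated list satisfies the predicate by construction."""
    xs, lo = [], 0
    for _ in range(n):
        lo = random.randrange(lo, bound)
        xs.append(lo)
    return xs, 1

random.seed(0)
xs, tries = naive_sorted_list(5)
ys, _ = smart_sorted_list(5)
print(is_sorted(xs), is_sorted(ys))  # True True
print("rejection-sampling attempts:", tries)
```

Luck's lightweight annotations let users control exactly this trade-off: how much of the predicate is solved before a variable is instantiated, and with what distribution.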
Automated Derivation of Random Generators for Algebraic Data Types
Many testing techniques, such as generational fuzzing or random property-based testing, require some sort of random generation process for the values used as test inputs. Implementing such generators is usually a task left to end-users, who do their best to come up with somewhat sensible implementations after several iterations of trial and error. This effort is no surprise: implementing good random data generators is a hard task. It requires deep knowledge about both the domain of the data being generated and the behavior of the stochastic process generating such data. In addition, when the data we want to generate has a large number of possible variations, this process is not only intricate but also very cumbersome. To mitigate these issues, this thesis explores different ideas for automatically deriving random generators based on existing static information. In this light, we design and implement different derivation algorithms in Haskell for obtaining random generators of values encoded using Algebraic Data Types (ADTs). Although there exist other tools designed directly or indirectly for this very purpose, they are not without disadvantages. In particular, we aim to tackle the lack of flexibility and of static guarantees about the distribution induced by derived generators. We show how automatically derived generators for ADTs can be framed using a simple yet powerful stochastic model. This model can be used to obtain analytical guarantees about the distribution of values produced by the derived generators, which in turn can be used to optimize the stochastic generation parameters of the derived generators towards target distributions set by the user, providing more flexible derivation mechanisms.
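The stochastic model the thesis alludes to can be pictured as a branching process: each constructor of the ADT is chosen with some probability, and those probabilities determine the distribution (for instance, the expected size) of the generated values. Below is a minimal Python analogue of a derived generator for a binary-tree ADT; the Haskell tooling of the thesis itself is not shown:

```python
import random

# The ADT:  data Tree = Leaf | Node Tree Tree
# A derived generator picks a constructor at random; the probability p of
# choosing Node is the stochastic parameter that controls the size distribution.

def gen_tree(p_node, depth_limit=50):
    if depth_limit == 0 or random.random() >= p_node:
        return "Leaf"
    return ("Node", gen_tree(p_node, depth_limit - 1),
                    gen_tree(p_node, depth_limit - 1))

def size(t):
    return 1 if t == "Leaf" else 1 + size(t[1]) + size(t[2])

# The branching-process model predicts an expected size of 1 / (1 - 2p)
# for p < 1/2, so p can be tuned analytically toward a target distribution.
random.seed(1)
samples = [size(gen_tree(0.25)) for _ in range(10000)]
print(round(sum(samples) / len(samples), 1))  # empirically close to 2.0
```

This is exactly the kind of analytical guarantee the thesis is after: rather than tweaking a generator by trial and error, the constructor probabilities are optimized against a closed-form model of the induced distribution.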
Towards Model Checking Electrum Specifications with LTSmin
Master's dissertation in Informatics Engineering (Mestrado Integrado em Engenharia Informática).
Model checking is a common verification technique to guarantee the consistency and integrity
of any system by an exhaustive exploration of all possible states. Due to the large amount of
interleavings, models on distributed systems often end up with a huge state-space. In this
dissertation, we will explore the effects of partial order reduction — a technique to mitigate
the effects of this state-explosion problem — by implementing an Electrum-like language
with LTSmin. We will also propose an event layer over Electrum and a syntactic analysis to
extract valuable information for this technique to be implemented.
This work is financed by the ERDF – European Regional Development Fund through
the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020
Programme and by National Funds through the Portuguese funding agency, FCT – Fundação
para a Ciência e a Tecnologia, within project POCI-01-0145-FEDER-01682.
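Partial order reduction exploits the fact that independent events (events touching disjoint parts of the state) commute, so exploring a single interleaving of them suffices. A toy explicit-state illustration in Python — not LTSmin or Electrum code:

```python
from itertools import permutations

# Each event increments one component of the state tuple (a, b, c),
# so all three events are pairwise independent: they commute.
events = {"incA": 0, "incB": 1, "incC": 2}

def apply_event(state, ev):
    s = list(state)
    s[events[ev]] += 1
    return tuple(s)

def explore_full(state, evs):
    """Naive search: visit the states of every interleaving of the events."""
    seen = {state}
    for order in permutations(evs):
        s = state
        for ev in order:
            s = apply_event(s, ev)
            seen.add(s)
    return seen

def explore_por(state, evs):
    """All events here are pairwise independent, so one fixed order reaches
    the same final state while visiting far fewer intermediate states."""
    seen, s = {state}, state
    for ev in sorted(evs):
        s = apply_event(s, ev)
        seen.add(s)
    return seen

full = explore_full((0, 0, 0), ["incA", "incB", "incC"])
por = explore_por((0, 0, 0), ["incA", "incB", "incC"])
print(len(full), len(por))        # 8 vs 4 states visited
assert max(full) == max(por)      # same final state (1, 1, 1)
```

The syntactic analysis the dissertation proposes serves precisely to detect this kind of independence in an Electrum-like specification so that LTSmin can prune redundant interleavings.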
Implementation of a Lambda Expression Evaluator
The λ-calculus is a fundamental concept in computer science and, as such, is taught at almost all universities with a computer science programme, including FIT CTU. But for many students, learning the λ-calculus and understanding its significance and impact on programming languages is a challenging task. This thesis describes a λ-calculus evaluator and its front-end, designed to help students understand the λ-calculus by treating it more like a programming language and by integrating effortlessly with existing course materials.
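The heart of any such evaluator is capture-avoiding substitution plus a reduction strategy. A minimal normal-order sketch in Python — illustrative only, not the thesis's implementation:

```python
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)

def subst(term, x, val, fresh=[0]):
    """Capture-avoiding substitution of val for variable x in term."""
    kind = term[0]
    if kind == "var":
        return val if term[1] == x else term
    if kind == "app":
        return ("app", subst(term[1], x, val), subst(term[2], x, val))
    y, body = term[1], term[2]
    if y == x:
        return term                       # x is shadowed under this binder
    fresh[0] += 1                         # rename the bound variable so that
    z = f"{y}_{fresh[0]}"                 # free variables of val are not captured
    return ("lam", z, subst(subst(body, y, ("var", z)), x, val))

def reduce_once(term):
    """One normal-order (leftmost-outermost) step, or None if in normal form."""
    if term[0] == "app":
        f, a = term[1], term[2]
        if f[0] == "lam":                 # beta-redex: (λx. body) a
            return subst(f[2], f[1], a)
        r = reduce_once(f)
        if r is not None:
            return ("app", r, a)
        r = reduce_once(a)
        return None if r is None else ("app", f, r)
    if term[0] == "lam":
        r = reduce_once(term[2])
        return None if r is None else ("lam", term[1], r)
    return None                           # a variable is already normal

def normalize(term, limit=1000):
    for _ in range(limit):
        nxt = reduce_once(term)
        if nxt is None:
            return term
        term = nxt
    raise RuntimeError("no normal form within step limit")

# The K combinator (λx. λy. x) applied to free variables a and b
# selects its first argument.
K = ("lam", "x", ("lam", "y", ("var", "x")))
print(normalize(("app", ("app", K, ("var", "a")), ("var", "b"))))  # ('var', 'a')
```

A front-end like the one the thesis describes would sit on top of such a core, parsing surface syntax into these terms and pretty-printing each reduction step for students.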