5,274 research outputs found
Programming language complexity analysis and its impact on Checkmarx activities
Integrated Master's dissertation in Informatics Engineering
Tools for programming language processing, such as static analysers (for instance, a Static Application Security Testing (SAST) tool, one of Checkmarx's main products), must be adapted to cope with a given input when the source programming language changes.
The complexity of the programming language is one of the key factors that deeply impacts the time needed to support it.
This Master's project proposes an approach for assessing language complexity by measuring, at a first stage, the complexity of the language's underlying context-free grammar (CFG). From the analysis of concrete case studies, factors were identified that make the support process more time-consuming, in particular during language recognition and during the transformation to an abstract syntax tree (AST). Accordingly, at a second stage, a set of language features is analysed in order to take into account those factors, which also affect language processing.
The main objective of the Master's work reported here is to help development teams improve their estimates of the time and effort needed to adapt the SAST tool to cope with a new programming language.
In this dissertation a tool is proposed that evaluates the complexity of a language based on a set of metrics classifying the complexity of its grammar, together with a set of language properties. The tool compares the complexity of the new language, determined in this way, with that of previously supported languages, in order to predict the effort required to process the new language.
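As a rough sketch of what grammar-based complexity metrics can look like, the snippet below computes a few simple size measures over a toy grammar. The grammar representation and the particular metric set are illustrative assumptions, not the ones defined in the dissertation:

```python
# Illustrative sketch: simple size metrics over a context-free grammar,
# represented as {non_terminal: [list of alternative right-hand sides]}.
# This metric set is hypothetical, chosen only to show the idea.

def grammar_metrics(grammar):
    nonterminals = set(grammar)
    productions = [(lhs, rhs) for lhs, alts in grammar.items() for rhs in alts]
    symbols = {s for _, rhs in productions for s in rhs}
    terminals = symbols - nonterminals
    avg_rhs_len = sum(len(rhs) for _, rhs in productions) / len(productions)
    return {
        "nonterminals": len(nonterminals),
        "terminals": len(terminals),
        "productions": len(productions),
        "avg_rhs_length": avg_rhs_len,
    }

# Toy expression grammar:  E -> E '+' T | T ;   T -> 'id' | '(' E ')'
toy = {
    "E": [["E", "+", "T"], ["T"]],
    "T": [["id"], ["(", "E", ")"]],
}
print(grammar_metrics(toy))
```

Aggregating such counts across a grammar gives a first, coarse proxy for how much work a recogniser and AST transformation will require; richer property-based measures would refine it.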
A Temporal Logic for Hyperproperties
Hyperproperties, as introduced by Clarkson and Schneider, characterize the
correctness of a computer program as a condition on its set of computation
paths. Standard temporal logics can only refer to a single path at a time, and
therefore cannot express many hyperproperties of interest, including
noninterference and other important properties in security and coding theory.
In this paper, we investigate an extension of temporal logic with explicit path
variables. We show that the quantification over paths naturally subsumes other
extensions of temporal logic with operators for information flow and knowledge.
The model checking problem for temporal logic with path quantification is
decidable. For alternation depth 1, the complexity is PSPACE in the length of
the formula and NLOGSPACE in the size of the system, as for linear-time
temporal logic.
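To illustrate why quantifying over several paths matters, the sketch below brute-forces a two-trace hyperproperty (a simple form of noninterference) over a finite set of finite traces. This is not the paper's model-checking algorithm; it only mirrors the "for all pairs of paths" quantification on toy data:

```python
from itertools import product

# Illustrative sketch: check a two-trace hyperproperty over finite traces.
# Each trace is a list of (low_input, high_input, low_output) steps.
# Noninterference here: any two traces agreeing on low inputs must also
# agree on low outputs, so secrets cannot influence what is observable.

def noninterference(traces):
    # The "forall pi, forall pi'" quantifiers become a loop over pairs.
    for t1, t2 in product(traces, repeat=2):
        if [s[0] for s in t1] == [s[0] for s in t2]:      # same low inputs
            if [s[2] for s in t1] != [s[2] for s in t2]:  # different outputs
                return False                              # secret leaks
    return True

# Same low input, but the low output depends on the high input: a leak.
leaky = [
    [(0, "secret-a", 1)],
    [(0, "secret-b", 2)],
]
safe = [
    [(0, "secret-a", 1)],
    [(0, "secret-b", 1)],
]
print(noninterference(leaky))  # -> False
print(noninterference(safe))   # -> True
```

A standard temporal-logic formula, evaluated one path at a time, cannot relate `t1` and `t2` to each other; the pairwise comparison is exactly what explicit path variables add.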
Parallel Architectures for Planetary Exploration Requirements (PAPER)
The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is essentially research oriented towards technology-insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment the long-term efforts for space exploration, with particular reference to NASA/LaRC's (NASA Langley Research Center) research needs for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep-space probes due to high cost and complexity. The MAX concept appears to be a promising candidate, except that more detailed information is required. The feasibility of adding neural computation capability to this architecture needs to be studied. Key impact issues for the architectural design of computing systems meant for planetary missions were also identified.
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at
formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
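A minimal flavour of the properties such verification work targets: given per-switch forwarding tables, does traffic for a destination actually reach it, without loops or black holes? The topology and table format below are invented for illustration and do not correspond to any tool surveyed in the paper:

```python
# Illustrative sketch: follow forwarding entries for a destination and
# report whether the packet is delivered, dropped, or caught in a loop.
# Tables map each switch to {destination: next_hop}; made-up topology.

def delivers(tables, src, dst):
    node, seen = src, set()
    while node != dst:
        if node in seen or node not in tables:
            return False          # forwarding loop or unknown node
        seen.add(node)
        next_hop = tables[node].get(dst)
        if next_hop is None:
            return False          # no rule for this destination: black hole
        node = next_hop
    return True

# A -> B -> C delivers to C; for destination D, A and B forward to each
# other, which is a loop.
tables = {
    "A": {"C": "B", "D": "B"},
    "B": {"C": "C", "D": "A"},
}
print(delivers(tables, "A", "C"))  # -> True
print(delivers(tables, "A", "D"))  # -> False (forwarding loop)
```

Real network-verification systems check such reachability, loop-freedom, and isolation properties symbolically over all packet headers rather than by simulating one flow at a time.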
Quantitative reactive modeling and verification
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
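One common quantitative fitness measure for reactive behaviour is a long-run average reward, which can separate systems that a boolean verdict treats as equally correct. The sketch below is illustrative only; the reward function, traces, and measure are assumptions for this example, not QUAREM's theory:

```python
# Illustrative sketch: score an ultimately periodic ("lasso") trace by
# the long-run average of a per-step reward. The finite prefix does not
# affect the limit, so only the repeated cycle matters.

def limit_average(prefix_rewards, cycle_rewards):
    return sum(cycle_rewards) / len(cycle_rewards)

# Two servers that both satisfy "every request is eventually granted"
# (same boolean verdict: correct), with reward 1 on a grant step and 0
# otherwise. The quantitative measure tells them apart by responsiveness.
fast = limit_average(prefix_rewards=[0], cycle_rewards=[1, 0])        # 0.5
slow = limit_average(prefix_rewards=[0], cycle_rewards=[1, 0, 0, 0])  # 0.25
print(fast > slow)  # -> True
```

This is the sense in which a quantitative measure refines the correct/incorrect partition: both behaviours pass the qualitative specification, but one scores strictly higher.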