185 research outputs found
Incremental Semantic Evaluation for Interactive Systems: Inertia, Pre-emption, and Relations
Although schemes for incremental semantic evaluation have been explored and refined for more than two decades, the demands of user interaction continue to outstrip the capabilities of these schemes. The feedback produced by a semantic evaluator must support the user's programming activities: it must be structured in a way that provides the user with meaningful insight into the program (directly, or via other tools in the environment) and it must be timely. In this paper we extend an incremental attribute evaluation scheme with three techniques to better meet these demands within the context of a modeless editing system with a flexible tool integration paradigm. Efficient evaluation in the presence of syntax errors (which arise often under modeless editing) is supported by giving semantic attributes inertia: a tendency not to change unless necessary. Pre-emptive evaluation helps to reduce the delays associated with a sequence of edits, allowing an evaluator to "keep pace" with the user. Relations provide a general means to capture semantic structure (for the user, other tools, and as attributes within an evaluation) and are treated efficiently using a form of differential propagation. The combination of these three techniques meets the demands of user interaction; leaving out any one does not.
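The inertia idea can be illustrated with a small change-propagation sketch. This is a minimal illustration under assumed names (`Node`, `evaluate`), not the paper's implementation: re-evaluation spreads from an edited attribute only to dependents whose inputs actually changed.

```python
# Hypothetical sketch of attribute "inertia": after an edit, a recomputed
# attribute triggers re-evaluation of its dependents only when its value
# actually changed. Structure and names are illustrative, not the paper's.

class Node:
    def __init__(self, name, compute, deps=()):
        self.name, self.compute, self.deps = name, compute, list(deps)
        self.value = None
        self.dependents = []
        for d in self.deps:
            d.dependents.append(self)

def evaluate(changed):
    """Propagate from edited nodes; stop where values are unchanged."""
    work = list(changed)
    evaluations = 0
    while work:
        node = work.pop()
        evaluations += 1
        new = node.compute(*(d.value for d in node.deps))
        if new != node.value:   # inertia: an unchanged value propagates no further
            node.value = new
            work.extend(node.dependents)
    return evaluations
```

On an edit that leaves an attribute's recomputed value unchanged, propagation stops immediately at that node instead of re-evaluating the whole downstream dependency chain.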
Incremental Dynamic Semantics for Language-Based Programming Environments
Attribute grammars are a formal notation for expressing the static semantics of programming languages — those properties that can be derived from inspection of the program text. Attribute grammars have become popular as a mechanism for generating language-based programming environments that incrementally perform symbol resolution, type checking, code generation and derivation of other static semantic properties as the program is modified. However, attribute grammars are not suitable for expressing dynamic semantics — those properties that reflect the history of program execution and/or user interactions with the programming environment. This article presents action equations, an extension of attribute grammars suitable for specifying the static and the dynamic semantics of programming languages. It describes how action equations can be used to generate language-based programming environments that incrementally derive static and dynamic properties as the user modifies and debugs the program
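The static half of this picture, attributes derived from the program text, can be sketched with the classic synthesized-attribute example of the value of a binary numeral (a minimal Python sketch of a bottom-up attribute computation; the tree encoding is our own, not from the article):

```python
# Synthesized attribute over a parse tree: the value of a binary numeral.
# A node is either ('bit', '0'|'1') or ('num', left_subtree, bit_subtree).
# Encoding is an illustrative assumption, not the article's notation.

def value(node):
    kind = node[0]
    if kind == 'bit':
        return int(node[1])          # bit.value = 0 or 1
    _, left, bit = node
    # num -> num bit : num.value = left.value * 2 + bit.value
    return value(left) * 2 + value(bit)
```

An incremental evaluator recomputes only the attributes on the path from an edited subtree to the root, rather than re-running this computation over the whole tree.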
GAEA Action Equations Paradigm
This technical report consists of two papers describing the GAEA action equations paradigm. Incremental Dynamic Semantics for Language-based Programming Environments explains why attribute grammars are not suitable for expressing dynamic semantics and presents action equations, an extension of attribute grammars suitable for specifying the static and the dynamic semantics of programming languages. It describes how action equations can be used to generate language-based programming environments that incrementally derive static and dynamic properties as the user modifies and debugs the program. Rapid Prototyping of Concurrent Programming Languages extends this technology to a concurrent framework. It describes an (unimplemented) system that generates a parallel interpreter for the language and provides runtime support for the synchronization primitives and other facilities in the language
Grammar-based fuzzing using input features
In grammar-based fuzz testing, a formal grammar is used to produce test inputs that are syntactically valid, in order to reach the business logic of a program under test. In this setting, it is advantageous to ensure a high diversity of inputs so as to test more of the program's behavior. How can we characterize the features that make inputs diverse, and associate them with the execution of particular parts of the program? Previous work does not answer this question satisfactorily: most attempts consider only superficial features defined by the structure of the grammar, such as the presence of production rules or terminal symbols, regardless of their context. We present a measure of input coverage called k-path coverage, which takes into account combinations of grammar entities up to a given context depth k and makes it possible to efficiently express, assess, and achieve input diversity. In a series of experiments, we demonstrate and evaluate how to systematically attain k-path coverage, and how it correlates with code coverage and can thus be used as its predictor. By automatically inferring explicit associations between k-path features and the coverage of individual methods, we further show how to generate inputs that specifically target the execution of given code locations. We expect the presented instrument of k-paths to prove useful in numerous additional applications, such as assessing the quality of grammars, serving as an adequacy criterion for input test suites, enabling test case prioritization, and facilitating program comprehension.
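A rough sketch of the k-path idea, under the simplifying reading that a k-path is a length-k sequence of grammar symbols along a root-to-leaf path of a derivation tree (our own illustrative encoding, not the authors' tool):

```python
# Illustrative sketch: collect length-k symbol sequences along the paths of
# a derivation tree given as (symbol, [children]), and measure what fraction
# of a grammar's k-paths a set of inputs exercises.

def k_paths(tree, k):
    paths = set()

    def walk(node, prefix):
        symbol, children = node
        prefix = (prefix + (symbol,))[-k:]   # keep the last k symbols of context
        if len(prefix) == k:
            paths.add(prefix)
        for child in children:
            walk(child, prefix)

    walk(tree, ())
    return paths

def k_path_coverage(trees, all_paths, k):
    """Fraction of the grammar's k-paths covered by the given derivation trees."""
    covered = set().union(*(k_paths(t, k) for t in trees))
    return len(covered & all_paths) / len(all_paths)
```

With k = 1 this degenerates to the superficial "which symbols occur" measure the abstract criticizes; larger k distinguishes the contexts in which a symbol is derived.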
Protecting Systems From Exploits Using Language-Theoretic Security
Any computer program processing input from the user or network must validate that input. Input-handling vulnerabilities occur when the software component responsible for filtering malicious input---the parser---does not perform validation adequately. Consequently, parsers are among the most targeted components, since they defend the rest of the program from malicious input. This thesis adopts the Language-Theoretic Security (LangSec) principle to understand what tools and research are needed to prevent exploits that target parsers. LangSec proposes specifying the syntactic structure of the input format as a formal grammar; a recognizer built for this grammar then validates any input before the rest of the program acts on it. To ensure that these recognizers faithfully represent the data format, programmers often rely on parser generators or parser combinator tools to build them. This thesis advances several sub-fields of LangSec by proposing new techniques to find bugs in implementations, novel categorizations of vulnerabilities, and new parsing algorithms and tools that handle practical data formats. To this end, the thesis comprises five parts, each tackling a tenet of LangSec. First, I categorize input-handling vulnerabilities and exploits using two frameworks: the mismorphisms framework, which helps us reason about the root causes of vulnerabilities, and a categorization framework built from LangSec anti-patterns such as parser differentials and insufficient input validation. A catalog of more than 30 popular vulnerabilities demonstrates both frameworks. Second, I built parsers for various Internet of Things and power grid network protocols and for the iccMAX file format using parser combinator libraries.
The parsers I built for power grid protocols were deployed and tested on power grid substation networks as an intrusion detection tool. The parser I built for the iccMAX file format led to several corrections and modifications to the iccMAX specifications and reference implementations. Third, I present SPARTA, a novel tool I built that generates Rust code to type-check Portable Document Format (PDF) files. The type checker I helped build strictly enforces the constraints in the PDF specification to find deviations. Our checker has contributed to at least four significant clarifications and corrections to the PDF 2.0 specification and to various open-source PDF tools. In addition to the checker, we built a practical tool, PDFFixer, to dynamically patch type errors in PDF files. Fourth, I present ParseSmith, a tool to build verified parsers for real-world data formats. Most parsing tools available for data formats cannot handle practical formats or have not been verified for correctness. I built a verified parsing tool in Dafny that builds on ideas from attribute grammars, data-dependent grammars, and parsing expression grammars to tackle constructs commonly seen in network formats, and I prove that the resulting parsers run in linear time and always terminate for well-formed grammars. Finally, I provide the first systematic comparison of various data description languages (DDLs) and their parser generation tools. DDLs are used to describe and parse commonly used data formats, such as image formats. I conducted an expert-elicitation qualitative study to derive metrics for comparing the DDLs, and I systematically compare them based on the sample data descriptions available with each DDL---checking for correctness and resilience.
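The recognizer-before-processing discipline at the heart of LangSec can be sketched with a few toy parser combinators. The combinator names and the dash-separated digit format below are assumptions for illustration, not any of the thesis's parsers:

```python
# Toy parser combinators. A parser maps (string, index) to (new_index,) on
# success or None on failure; the LangSec discipline is to accept input only
# if a recognizer for the format's grammar consumes it entirely.

def char(pred):
    def parse(s, i):
        return (i + 1,) if i < len(s) and pred(s[i]) else None
    return parse

def seq(*parsers):
    def parse(s, i):
        for p in parsers:
            r = p(s, i)
            if r is None:
                return None
            i = r[0]
        return (i,)
    return parse

def many1(p):
    def parse(s, i):
        r = p(s, i)
        if r is None:
            return None
        while r is not None:
            i = r[0]
            r = p(s, i)
        return (i,)
    return parse

digit = char(str.isdigit)
dash = char(lambda c: c == '-')
date_like = seq(many1(digit), dash, many1(digit))   # hypothetical format

def recognize(parser, s):
    """Accept only if the parser consumes ALL of the input."""
    r = parser(s, 0)
    return r is not None and r[0] == len(s)
```

Requiring the recognizer to consume the entire input is what closes the classic gap where trailing malicious bytes slip past a partial parse into the business logic.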
An engine for coordination-based architectural reconfigurations
Master Course in Computing Engineering
In service-oriented architectures (SOA), services are seen as loosely coupled components interacting with each other via connections between their public interfaces. Such interaction follows a (coordination) protocol usually established at design time.
However, in an environment where change is the rule rather than the exception, several factors may create a need to change the way these services interact. Understanding the consequences of such changes before they are applied is an essential requirement for SOA design.
The dissertation of this MSc project proposes a practical approach to model reconfigurations of service coordination patterns. Its main contributions are a language for designing coordination reconfigurations and a reconfiguration engine. Reconfigurations are specified (before being applied at run time) through a domain-specific language, ReCooPLa, aimed at manipulating the software coordination structures typically used in SOA. A processor for the language, built according to the traditional compiler-construction approach, is also presented; it comprises the parser, the semantic analyser, and the translator. The main result of this work is a reconfiguration engine that takes ReCooPLa specifications, suitably translated into Java code, and applies them to coordination structures. This project is part of a broader research initiative aiming at formally modelling, reasoning about, and analysing reconfigurations of coordination patterns in the context of SOA and cloud computing.
Verification and Application of Program Transformations
Program transformation and refactoring are fundamental elements of the software development process. From the beginning, there have been efforts to support refactoring with software tools that carry out quality-improving, behaviour-preserving program transformations reliably and efficiently. Static-analysis-based bug detection and refactoring transformations attract great interest both in academia and in research and development, and their role is even more important at companies building highly complex software. Tools supporting software development are becoming ever more precise and reliable, but there is still plenty of room for improvement.
The dissertation discusses definition and verification methods with which more reliable and more widely usable program transformation tools can be built, covering both static and dynamic verification. First, it presents a novel, concise description language for L-attributed grammars, which is mapped to a random data generator used for property-based testing. This is accompanied by a case study that presents the grammar of the Erlang programming language and its use in testing. Besides testing, the question of formal correctness proofs is also examined: a language for describing refactorings is introduced, in which executable and automatically provable specifications can be given. The language is based on context-sensitive and conditional term rewriting, strategies, and so-called refactoring schemes. Last but not least, a special application of program transformations is presented, in which a refactoring framework is used as a preprocessor to extend the programming language being processed. With this method, code migration can easily be implemented in the Erlang language.
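The grammar-to-generator mapping described above can be sketched as a plain context-free random generator (a simplification: the toy expression grammar and depth bound are assumptions for illustration, without the L-attribute machinery or the dissertation's Erlang grammar):

```python
import random

# Illustrative sketch: mapping a grammar to a random sentence generator for
# property-based testing. The grammar and depth cutoff are assumptions.

GRAMMAR = {
    "expr":  [["term", "+", "expr"], ["term"]],
    "term":  [["digit"], ["(", "expr", ")"]],
    "digit": [["0"], ["1"], ["7"]],
}

def generate(symbol, rng, depth=0, max_depth=8):
    if symbol not in GRAMMAR:                  # terminal symbol
        return symbol
    choices = GRAMMAR[symbol]
    if depth >= max_depth:                     # force termination: shortest rule
        choices = [min(choices, key=len)]
    rule = rng.choice(choices)
    return "".join(generate(s, rng, depth + 1, max_depth) for s in rule)
```

In a property-based test, each generated sentence is fed to the system under test and a property (for instance, that a parser accepts it, or that parsing and pretty-printing round-trip) is asserted over many random seeds.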