185 research outputs found

    Incremental Semantic Evaluation for Interactive Systems: Inertia, Pre-emption, and Relations

    Although schemes for incremental semantic evaluation have been explored and refined for more than two decades, the demands of user interaction continue to outstrip the capabilities of these schemes. The feedback produced by a semantic evaluator must support the user's programming activities: it must be structured in a way that provides the user with meaningful insight into the program (directly, or via other tools in the environment), and it must be timely. In this paper we extend an incremental attribute evaluation scheme with three techniques to better meet these demands within the context of a modeless editing system with a flexible tool integration paradigm. Efficient evaluation in the presence of syntax errors (which arise often under modeless editing) is supported by giving semantic attributes inertia: a tendency not to change unless necessary. Pre-emptive evaluation helps to reduce the delays associated with a sequence of edits, allowing an evaluator to "keep pace" with the user. Relations provide a general means to capture semantic structure (for the user, for other tools, and as attributes within an evaluation) and are treated efficiently using a form of differential propagation. The combination of these three techniques meets the demands of user interaction; leaving out any one of them does not.
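    The inertia principle translates directly into a change-propagation loop. The following is a minimal Python sketch, with hypothetical names and no connection to the paper's actual evaluator, of a dependency-graph evaluator that stops propagating as soon as a recomputed attribute keeps its old value:

        # Minimal sketch of attribute inertia in incremental evaluation.
        # All names are hypothetical illustrations of the principle.

        class Node:
            def __init__(self, name, rule, deps):
                self.name = name          # attribute name
                self.rule = rule          # computes the value from dependency values
                self.deps = deps          # attributes this one reads
                self.consumers = []       # attributes that read this one
                self.value = None
                for d in deps:
                    d.consumers.append(self)

        def reevaluate(node, worklist):
            new_value = node.rule(*(d.value for d in node.deps))
            if new_value == node.value:
                return                            # inertia: unchanged, stop here
            node.value = new_value
            worklist.extend(node.consumers)       # only changes wake consumers

        def propagate(changed):
            worklist = list(changed)
            while worklist:
                reevaluate(worklist.pop(0), worklist)

        a = Node("a", lambda: 1, [])
        b = Node("b", lambda x: x + 1, [a])
        c = Node("c", lambda x: x * 0, [b])       # c is 0 whatever b is
        d = Node("d", lambda x: x + 1, [c])
        propagate([a, b, c, d])                   # initial evaluation
        a.rule = lambda: 5                        # simulate an edit
        propagate([a])                            # b changes; c keeps 0; d untouched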

    Incremental rewriting

    Grammar-based fuzzing using input features

    In grammar-based fuzz testing, a formal grammar is used to produce test inputs that are syntactically valid, in order to reach the business logic of a program under test. In this setting, it is advantageous to ensure a high diversity of inputs so as to test more of the program's behavior. How can we characterize the features that make inputs diverse, and associate them with the execution of particular parts of the program? Previous work does not answer this question satisfactorily: most attempts consider only superficial features defined by the structure of the grammar, such as the presence of production rules or terminal symbols, regardless of their context. We present a measure of input coverage called k-path coverage, which takes into account combinations of grammar entities up to a given context depth k and makes it possible to efficiently express, assess, and achieve input diversity. In a series of experiments, we demonstrate and evaluate how to systematically attain k-path coverage and how it correlates with code coverage, so that it can be used as a predictor of code coverage. By automatically inferring explicit associations between k-path features and the coverage of individual methods, we further show how to generate inputs that specifically target the execution of given code locations. We expect k-paths to prove useful in numerous additional applications, such as assessing the quality of grammars, serving as an adequacy criterion for input test suites, enabling test case prioritization, facilitating program comprehension, and perhaps beyond.
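    As a rough illustration of the idea, here is a minimal Python sketch that collects the chains of up to k grammar symbols covered by a set of derivation trees; the tree encoding and function names are hypothetical, not the authors' implementation:

        # Minimal sketch of k-path input coverage over derivation trees.
        # A tree is (symbol, [children]); the encoding is a hypothetical
        # illustration of the technique.

        def k_paths(tree, k):
            """All chains of up to k symbols along root-to-leaf paths."""
            paths = set()
            def walk(node, prefix):
                sym, children = node
                chain = (prefix + (sym,))[-k:]    # keep at most k symbols of context
                for i in range(len(chain)):
                    paths.add(chain[i:])          # every suffix ending here is covered
                for child in children:
                    walk(child, chain)
            walk(tree, ())
            return paths

        def k_path_coverage(trees, reachable_paths, k):
            """Fraction of the grammar's reachable k-paths exercised by the inputs."""
            covered = set()
            for t in trees:
                covered |= k_paths(t, k)
            return len(covered & reachable_paths) / len(reachable_paths)

        # Derivation tree of "1+2" under a toy expression grammar:
        tree = ("expr", [("expr", [("digit", [])]),
                         ("+", []),
                         ("expr", [("digit", [])])])
        print(sorted(k_paths(tree, 2)))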

    Protecting Systems From Exploits Using Language-Theoretic Security

    Any computer program processing input from the user or network must validate that input. Input-handling vulnerabilities occur when the software component responsible for filtering malicious input---the parser---does not perform validation adequately. Consequently, parsers are among the most targeted components, since they defend the rest of the program from malicious input. This thesis adopts the Language-Theoretic Security (LangSec) principle to understand what tools and research are needed to prevent exploits that target parsers. LangSec proposes specifying the syntactic structure of the input format as a formal grammar and building a recognizer for that grammar to validate any input before the rest of the program acts on it. To ensure that these recognizers faithfully represent the data format, programmers often rely on parser generators or parser combinator libraries to build them. This thesis advances several sub-fields of LangSec by proposing new techniques to find bugs in implementations, novel categorizations of vulnerabilities, and new parsing algorithms and tools for practical data formats. To this end, the thesis comprises five parts that tackle various tenets of LangSec. First, I categorize input-handling vulnerabilities and exploits using two frameworks: the mismorphisms framework, which helps us reason about the root causes leading to vulnerabilities, and a categorization framework built from LangSec anti-patterns such as parser differentials and insufficient input validation. We demonstrate both frameworks on a catalog of more than 30 popular vulnerabilities. Second, I built parsers for various Internet of Things and power grid network protocols and for the iccMAX file format using parser combinator libraries. The power grid parsers were deployed and tested on substation networks as an intrusion detection tool; the iccMAX parser led to several corrections and modifications to the iccMAX specification and reference implementations. Third, I present SPARTA, a novel tool that generates Rust code to type-check Portable Document Format (PDF) files. The type checker strictly enforces the constraints in the PDF specification to find deviations, and has contributed to at least four significant clarifications and corrections to the PDF 2.0 specification and to various open-source PDF tools. We also built a practical companion tool, PDFFixer, to dynamically patch type errors in PDF files. Fourth, I present ParseSmith, a tool for building verified parsers for real-world data formats. Most available parsing tools either cannot handle practical formats or have not been verified for correctness; ParseSmith is a verified parsing tool, written in Dafny, that builds on ideas from attribute grammars, data-dependent grammars, and parsing expression grammars to handle constructs commonly seen in network formats. I prove that the resulting parsers run in linear time and always terminate for well-formed grammars. Finally, I provide the first systematic comparison of data description languages (DDLs) and their parser generation tools. DDLs are used to describe and parse commonly used data formats, such as image formats. I conducted a qualitative expert-elicitation study to derive the metrics used in the comparison, and I systematically compare the DDLs on the sample data descriptions that ship with them, checking for correctness and resilience.
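    The LangSec recognizer pattern itself is easy to illustrate. The Python sketch below uses tiny hand-rolled parser combinators (all names hypothetical, unrelated to the tools in the thesis) to accept a toy number '=' number format and to reject everything else before any business logic runs:

        # Minimal sketch of the LangSec pattern: recognize the full input
        # against a grammar before acting on it. Hand-rolled combinators,
        # hypothetical names; a parser takes (text, index) and returns
        # (new_index, value) on success or None on failure.

        def char(c):
            def p(s, i):
                return (i + 1, c) if i < len(s) and s[i] == c else None
            return p

        def digit(s, i):
            return (i + 1, s[i]) if i < len(s) and s[i].isdigit() else None

        def many1(p):                    # one or more repetitions of p
            def q(s, i):
                out, r = [], p(s, i)
                while r is not None:
                    i, v = r
                    out.append(v)
                    r = p(s, i)
                return (i, out) if out else None
            return q

        def seq(*ps):                    # each parser in order
            def q(s, i):
                vals = []
                for p in ps:
                    r = p(s, i)
                    if r is None:
                        return None
                    i, v = r
                    vals.append(v)
                return (i, vals)
            return q

        kv = seq(many1(digit), char('='), many1(digit))

        def recognize(s):
            r = kv(s, 0)
            return r is not None and r[0] == len(s)   # whole input must match

        assert recognize("42=7")
        assert not recognize("42=7; DROP TABLE users")  # rejected up front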

    An engine for coordination-based architectural reconfigurations

    Master Course in Computing Engineering. In service-oriented architectures (SOA), services are seen as loosely coupled components that interact with each other through connections between their public interfaces. Such interaction follows a (coordination) protocol usually established at design time. However, in an environment where change is the rule rather than the exception, several factors may create a need to change the way these services interact, and understanding the consequences of such changes beforehand is an essential requirement in SOA design. This MSc dissertation proposes a practical approach to modelling reconfigurations of service coordination patterns. Reconfigurations are specified, before being applied at run time, in a domain-specific language, ReCooPLa, designed for manipulating the software coordination structures typically used in SOA. A processor for the language, built following the traditional compiler-construction approach, comprises the parser, the semantic analyser, and the translator. The main result of this work is a reconfiguration engine that takes ReCooPLa specifications, suitably translated into Java code, and applies them to coordination structures. This project is part of a broader research initiative aimed at formally modelling, reasoning about, and analysing reconfigurations of coordination patterns in the context of SOA and cloud computing.
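    To make the engine's role concrete, here is a minimal Python sketch under the assumption that a coordination structure can be modelled as a set of channels between service ports; the operation names and encoding are hypothetical illustrations, not ReCooPLa:

        # Minimal sketch of a reconfiguration engine over a coordination
        # structure. Hypothetical encoding, unrelated to ReCooPLa itself.
        from dataclasses import dataclass, field

        @dataclass
        class Coordination:
            channels: set = field(default_factory=set)  # (source, sink) port pairs

        def remove_channel(coord, src, dst):
            coord.channels.discard((src, dst))

        def insert_channel(coord, src, dst):
            coord.channels.add((src, dst))

        def apply_reconfiguration(coord, steps):
            """Engine loop: apply each primitive step in order."""
            for op, args in steps:
                op(coord, *args)

        c = Coordination({("client.out", "auth.in"), ("auth.out", "db.in")})
        apply_reconfiguration(c, [        # re-route the client directly to db
            (remove_channel, ("client.out", "auth.in")),
            (insert_channel, ("client.out", "db.in")),
        ])
        print(c.channels)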

    Verification and Application of Program Transformations

    Program transformation and refactoring are fundamental elements of the software development process. From the beginning, there have been efforts to support refactoring with software tools that reliably and efficiently carry out behaviour-preserving program transformations that improve software quality. Bug finding based on static analysis and refactoring transformations attract great interest both in academia and in research and development, and their role is even more important at companies building highly complex software. Tools supporting software development are becoming ever more precise and reliable, but there is still ample room for improvement. This dissertation discusses definition and verification methods with which more reliable and more widely usable program transformation tools can be built, covering both static and dynamic verification. First, it presents a novel, concise description language for L-attributed grammars, which is mapped to a random data generator used for property-based testing. This is accompanied by a case study that presents a grammar for the Erlang programming language and its use in testing. Besides testing, we also investigate formal correctness proofs: we introduce a language for describing refactorings in which executable and automatically provable specifications can be given. The language is based on context-sensitive and conditional term rewriting, strategies, and so-called refactoring schemes. Last but not least, a special application of program transformations is presented, in which a refactoring framework is used as a preprocessor to extend the programming language it processes. With the latter method, code migration can be easily implemented in the Erlang language.
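    The mapping from a grammar to a random test-data generator can be sketched compactly. The Python below uses a hypothetical grammar encoding, not the dissertation's description language: it derives random sentences for property-based testing and biases deep derivations toward the shortest alternative so that generation terminates:

        # Minimal sketch: turn a grammar into a random sentence generator
        # for property-based testing. Encoding is a hypothetical illustration.
        import random

        GRAMMAR = {
            "expr": [["term", "+", "expr"], ["term"]],
            "term": [["num"], ["(", "expr", ")"]],
        }

        def generate(symbol, depth=0, max_depth=6):
            if symbol == "num":                    # terminal with random content
                return str(random.randint(0, 99))
            if symbol not in GRAMMAR:              # literal terminal
                return symbol
            alts = GRAMMAR[symbol]
            if depth >= max_depth:                 # force short derivations
                alts = [min(alts, key=len)]
            return "".join(generate(s, depth + 1, max_depth)
                           for s in random.choice(alts))

        # Property: every generated sentence is accepted by the system under
        # test (Python's evaluator stands in for it here).
        for _ in range(100):
            assert isinstance(eval(generate("expr")), int)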