
    Mining Butterflies in Streaming Graphs

    This thesis introduces two main-memory systems, sGrapp and sGradd, for performing the fundamental analytic tasks of biclique counting and concept drift detection over a streaming graph. A data-driven heuristic is used to architect the systems. To this end, the growth patterns of bipartite streaming graphs are first mined and the emergence principles of streaming motifs are discovered. Next, the discovered principles are (a) explained by a graph generator called sGrow; and (b) used to establish the requirements for efficient, effective, explainable, and interpretable management and processing of streams. sGrow is used to benchmark stream analytics, particularly for concept drift detection. sGrow displays robust realization of streaming growth patterns independent of initial conditions, scale, temporal characteristics, and model configurations. Extensive evaluations confirm that sGrapp and sGradd are simultaneously effective and efficient. sGrapp achieves a mean absolute percentage error of at most 0.05/0.14 for the cumulative butterfly count in streaming graphs with uniform/non-uniform temporal distribution, and a processing throughput of 1.5 million data records per second. Its throughput is 160x higher, and its estimation error 0.02x lower, than those of the baselines. sGradd demonstrates improving performance over time, achieves zero false detection rates both when there is no drift and when a drift has already been detected, and detects sequential drifts within zero to a few seconds of their occurrence, regardless of drift intervals.
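
    As a point of reference for the butterfly-counting task, the sketch below is a naive exact incremental counter for butterflies (2x2 bicliques) in a bipartite edge stream. It is not sGrapp's windowed approximation algorithm, and all identifiers are hypothetical; it only makes concrete what is being counted and why exact per-edge counting becomes expensive at stream rates of millions of records per second.

        from collections import defaultdict

        class ExactButterflyCounter:
            """Naive exact counter for butterflies (2x2 bicliques) in a bipartite
            edge stream; a hypothetical baseline, not sGrapp's estimator."""

            def __init__(self):
                self.nbrs_u = defaultdict(set)   # u-side vertex -> its v-neighbours
                self.nbrs_v = defaultdict(set)   # v-side vertex -> its u-neighbours
                self.total = 0

            def insert(self, u, v):
                # Butterflies closed by the new edge (u, v): pick another u2 already
                # adjacent to v and count the common v-neighbours of u and u2.
                # The new edge is added only after counting, so v itself cannot
                # appear in the intersection.
                for u2 in self.nbrs_v[v]:
                    if u2 != u:
                        self.total += len(self.nbrs_u[u] & self.nbrs_u[u2])
                self.nbrs_u[u].add(v)
                self.nbrs_v[v].add(u)
                return self.total

        c = ExactButterflyCounter()
        for u, v in [("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u2", "v2")]:
            total = c.insert(u, v)
        print(total)  # 1: vertices u1, u2, v1, v2 form a single butterfly

    Each insertion walks every current neighbour of v and intersects u-side neighbour sets, which quickly becomes the bottleneck on high-throughput streams and motivates window-based estimation of the kind the thesis evaluates.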

    Evaluation Methodologies in Software Protection Research

    Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections, because MATE attackers can reach their goals in many different ways and a universally accepted evaluation methodology does not exist. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, through sample treatment, to the measurements performed. We provide detailed insights into how the academic state of the art evaluates both the protections and analyses thereon. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks.

    Representing Guardedness in Call-By-Value

    Just as the notion of computation via (strong) monads serves to classify various flavours of impurity, including exceptions, non-determinism, probability, and local and global store, the notion of guardedness classifies well-behavedness of cycles in various settings. In its most general form, the guardedness discipline applies to general symmetric monoidal categories and further specializes to Cartesian and co-Cartesian categories, where it governs guarded recursion and guarded iteration, respectively. Here, even more specifically, we deal with the semantics of call-by-value guarded iteration. It was shown by Levy, Power and Thielecke that call-by-value languages can be generally interpreted in Freyd categories, but in order to represent effectful function spaces, such a category must canonically arise from a strong monad. We generalize this fact by showing that representing guarded effectful function spaces calls for certain parametrized monads (in the sense of Uustalu). This provides a description of guardedness as an intrinsic categorical property of programs, complementing the existing description of guardedness as a predicate on a category.
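
    A common operational intuition for guardedness, used here only as an informal analogy to the categorical treatment in the paper, is productivity of corecursive definitions: a cyclic definition is well behaved when every recursive call sits behind a guard that first produces observable output. The Python sketch below is hypothetical and not taken from the paper; it merely contrasts a guarded and an unguarded self-referential stream.

        from itertools import islice

        def ones():
            # Guarded: the recursive unfolding happens only after a 'yield'
            # (the guard), so any finite prefix of the stream can be produced.
            yield 1
            yield from ones()

        def unguarded():
            # Unguarded: the recursive call is not preceded by any yield, so
            # consuming this produces no output and eventually raises
            # RecursionError.
            yield from unguarded()

        print(list(islice(ones(), 5)))  # [1, 1, 1, 1, 1]: finite observations succeed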

    A Core Calculus for Equational Proofs of Cryptographic Protocols

    Many proofs of interactive cryptographic protocols (e.g., as in Universal Composability) operate by proving the protocol at hand to be observationally equivalent to an idealized specification. While pervasive, formal tool support for observational equivalence of cryptographic protocols is still a nascent area of research. Current mechanization efforts tend to either focus on diff-equivalence, which establishes observational equivalence between protocols with identical control structures, or require an explicit witness for the observational equivalence in the form of a bisimulation relation. Our goal is to simplify proofs for cryptographic protocols by introducing a core calculus, IPDL, for cryptographic observational equivalences. Via IPDL, we aim to address a number of theoretical issues for cryptographic proofs in a simple manner, including probabilistic behaviors, distributed message-passing, and resource-bounded adversaries and simulators. We demonstrate IPDL on a number of case studies, including a distributed coin toss protocol, Oblivious Transfer, and the GMW multi-party computation protocol. All proofs of case studies are mechanized via an embedding of IPDL into the Coq proof assistant.
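
    The real-versus-ideal style of reasoning that IPDL mechanizes can be previewed with a toy coin-toss example: the "real" protocol XORs two locally sampled bits, the "ideal" functionality samples a single fair bit, and observational equivalence says an adversary cannot distinguish the two output distributions. The sketch below is only a Monte-Carlo illustration with made-up names; IPDL establishes such equivalences symbolically and mechanizes them in Coq.

        import random
        from collections import Counter

        def real_coin_toss(rng):
            # "Real" protocol: each party samples a bit and the bits are XORed.
            a = rng.randrange(2)
            b = rng.randrange(2)
            return a ^ b

        def ideal_coin_toss(rng):
            # "Ideal" functionality: a trusted party samples one fair bit.
            return rng.randrange(2)

        rng = random.Random(0)
        real = Counter(real_coin_toss(rng) for _ in range(100_000))
        ideal = Counter(ideal_coin_toss(rng) for _ in range(100_000))
        print(real, ideal)  # both close to 50/50, as the exact equivalence predicts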

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Linear-Time Temporal Answer Set Programming

    In this survey, we present an overview of (Modal) Temporal Logic Programming in view of its application to Knowledge Representation and Declarative Problem Solving. The syntax of this extension of logic programs is the result of combining usual rules with temporal modal operators, as in Linear-time Temporal Logic (LTL). In the paper, we focus on the main recent results on the non-monotonic formalism called Temporal Equilibrium Logic (TEL), which is defined for the full syntax of LTL but involves a model selection criterion based on Equilibrium Logic, a well-known logical characterization of Answer Set Programming (ASP). As a result, we obtain a proper extension of the stable models semantics for the general case of temporal formulas in the syntax of LTL. We recall the basic definitions of TEL and its monotonic basis, the temporal logic of Here-and-There (THT), and study the differences between finite and infinite trace length. We also provide further useful results, such as translations into other formalisms like Quantified Equilibrium Logic and second-order LTL, and some techniques for computing temporal stable models based on automata constructions. In the remainder of the paper, we focus on practical aspects, defining a syntactic fragment called (modal) temporal logic programs that is closer to ASP, and explaining how this has been exploited in the construction of the solver telingo, a temporal extension of the well-known ASP solver clingo that uses its incremental solving capabilities.
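
    To make the finite-trace setting concrete, the sketch below evaluates classical LTL formulas over a finite trace. It is an illustration only, with a hypothetical formula encoding, and it implements neither TEL's equilibrium-based model selection nor telingo's incremental solving.

        # Minimal evaluator for LTL over a finite trace (a list of sets of atoms).
        # Formulas are nested tuples, e.g. ("always", ("implies", "shoot", ("next", "dead"))).
        # Illustrative only: classical finite-trace LTL, not TEL's equilibrium semantics.

        def holds(phi, trace, i=0):
            if isinstance(phi, str):                      # atom
                return phi in trace[i]
            op = phi[0]
            if op == "not":
                return not holds(phi[1], trace, i)
            if op == "and":
                return holds(phi[1], trace, i) and holds(phi[2], trace, i)
            if op == "implies":
                return (not holds(phi[1], trace, i)) or holds(phi[2], trace, i)
            if op == "next":                              # false at the last state of a finite trace
                return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
            if op == "eventually":
                return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
            if op == "always":
                return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
            if op == "until":                             # phi[1] U phi[2]
                return any(holds(phi[2], trace, j)
                           and all(holds(phi[1], trace, k) for k in range(i, j))
                           for j in range(i, len(trace)))
            raise ValueError(f"unknown operator: {op}")

        trace = [{"loaded"}, {"loaded", "shoot"}, {"dead"}]
        print(holds(("always", ("implies", "shoot", ("next", "dead"))), trace))  # True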

    Semantics for Noninterference with Interaction Trees

    Noninterference is the strong information-security property that a program does not leak secrets through publicly visible behavior. In the presence of effects such as nontermination, state, and exceptions, reasoning about noninterference quickly becomes subtle. We advocate using interaction trees (ITrees) to provide compositional mechanized proofs of noninterference for multi-language, effectful, nonterminating programs, while retaining executability of the semantics. We develop important foundations for security analysis with ITrees: two indistinguishability relations, leading to two standard notions of noninterference with adversaries of different strength, along with metatheory libraries for reasoning about each. We demonstrate the utility of our results using a simple imperative language with embedded assembly, along with a compiler into that assembly language.
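
    The property itself can be phrased without ITrees: a program is noninterfering if any two runs starting from states that agree on public (low) inputs also agree on public outputs. The sketch below is a randomized checker for that condition on simple state-transforming functions; it is a hypothetical illustration, not the ITree-based mechanization developed in the paper.

        import random

        # A "program" here is a function from a state dict to a state dict.
        # Noninterference (informally): if two input states agree on LOW
        # variables, the output states must agree on LOW variables too.
        LOW = {"pub_in", "pub_out"}

        def low_equiv(s1, s2):
            return all(s1.get(x) == s2.get(x) for x in LOW)

        def check_noninterference(program, trials=1000):
            rng = random.Random(0)
            for _ in range(trials):
                low = rng.randrange(100)
                s1 = {"pub_in": low, "secret": rng.randrange(100)}
                s2 = {"pub_in": low, "secret": rng.randrange(100)}
                if not low_equiv(program(dict(s1)), program(dict(s2))):
                    return False, (s1, s2)   # counterexample: a leak was observed
            return True, None

        def safe(s):
            s["pub_out"] = s["pub_in"] * 2    # depends only on public data
            return s

        def leaky(s):
            s["pub_out"] = s["secret"] % 2    # leaks one bit of the secret
            return s

        print(check_noninterference(safe))   # (True, None)
        print(check_noninterference(leaky))  # (False, <counterexample>) with high probability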

    PureCake: A verified compiler for a lazy functional language

    We present PureCake, a mechanically verified compiler for PureLang, a lazy, purely functional programming language with monadic effects. PureLang syntax is Haskell-like and indentation-sensitive, and its constraint-based Hindley-Milner type system guarantees safe execution. We derive sound equational reasoning principles over its operational semantics, dramatically simplifying some proofs. We prove end-to-end correctness for the compilation of PureLang down to machine code, the first such result for any lazy language, by targeting CakeML and composing with its verified compiler. Multiple optimisation passes are necessary to handle realistic lazy idioms effectively. We develop PureCake entirely within the HOL4 interactive theorem prover.
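
    One lazy idiom such a compiler must treat correctly is call-by-need sharing: a suspended computation is evaluated at most once and its result is memoized. The sketch below models this with explicit thunks; it is an illustration with hypothetical names, not PureLang's semantics or PureCake's compilation strategy.

        class Thunk:
            """Call-by-need: delay a computation and memoize its result."""
            def __init__(self, compute):
                self._compute = compute
                self._forced = False
                self._value = None

            def force(self):
                if not self._forced:
                    self._value = self._compute()
                    self._compute = None      # drop the closure once evaluated
                    self._forced = True
                return self._value

        def expensive():
            print("evaluating...")
            return 42

        x = Thunk(expensive)      # nothing is evaluated yet
        print(x.force())          # prints "evaluating..." and then 42
        print(x.force())          # prints only 42: the result was memoized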

    Evaluating verification awareness as a method for assessing adaptation risk

    Self-integration requires a system to be self-aware and self-protecting of its functionality and communication processes in order to mitigate interference with accomplishing its goals. Incorporating self-protection into a framework for reasoning about compliance with critical requirements is a major challenge when the system's operational environment has uncertainties that result in runtime changes. The reasoning should cover a range of impacts and tradeoffs so that the system can address an issue immediately, even if only partially or imperfectly. Even assuming that critical requirements can be formally specified and embedded as part of system self-awareness, runtime verification often demands extensive on-board resources, suffers from state explosion, and provides minimal explanation of results. Model checking partially mitigates these issues by abstracting the system's operations and architecture. However, validating the consistency of a model after a runtime change is generally performed outside the system and translated back to the operational environment, which can be inefficient.

    This paper focuses on codifying and embedding verification awareness into a system. Verification awareness is a type of self-awareness concerned with reasoning about compliance with critical properties at runtime when a system adaptation is needed. The premise is that an adaptation that interferes with a design-time proof process for requirement compliance increases the risk that the original proof process cannot be reused; the greater the risk to proof process reuse, the higher the probability that the adaptation would violate the requirement. Applying Rice's 1953 theorem to this domain indicates that determining whether a given adaptation inherently inhibits proof reuse is undecidable, which motivates the heuristic, comparative approach based on proof meta-data that we adopt. To demonstrate our deployment of verification awareness, we predefine four adaptations, all available to three distinct wearable simulations (stress, insulin delivery, and hearables). We capture meta-data from applying automated theorem proving to the wearables' requirements and assess, for each wearable, the risk that each of the four adaptations limits proof process reuse for its requirements. The results show that the adaptations affect proof process reuse differently on each wearable. We evaluate our reasoning framework by embedding checkpoints on requirement compliance within the wearable code and logging the execution trace of each adaptation. The logs confirm that the adaptation each wearable selects as having the lowest risk of inhibiting proof process reuse for its requirements also causes the fewest requirement failures in execution.
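
    The comparative heuristic can be pictured as a scoring function over proof meta-data: the more of a requirement's proof dependencies an adaptation disturbs, the higher the assessed risk that the design-time proof process cannot be reused. The sketch below is entirely hypothetical (invented requirements, adaptations, dependencies, and scoring) and only illustrates the ranking idea, not the paper's framework.

        # Hypothetical sketch of comparative "verification awareness" scoring:
        # rank candidate adaptations by how much they overlap with the elements
        # that each requirement's design-time proof process depends on.
        # All names and numbers below are invented for illustration.

        proof_metadata = {
            # requirement -> set of code elements its proof process depends on
            "R1_dose_bound": {"pump_rate", "sensor_read", "dose_calc"},
            "R2_alert_latency": {"sensor_read", "alert_queue"},
        }

        adaptations = {
            # adaptation -> set of code elements it modifies
            "reduce_sampling": {"sensor_read"},
            "swap_alert_path": {"alert_queue", "radio_driver"},
            "retune_dose_calc": {"dose_calc", "pump_rate"},
            "log_more": {"logger"},
        }

        def reuse_risk(adaptation_touches, proof_deps):
            # Fraction of the proof's dependencies disturbed by the adaptation.
            return len(adaptation_touches & proof_deps) / len(proof_deps)

        for name, touched in adaptations.items():
            total = sum(reuse_risk(touched, deps) for deps in proof_metadata.values())
            print(f"{name}: total risk {total:.2f}")
        # A wearable would select the adaptation with the lowest total risk.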