44 research outputs found

    Nfer capsule demonstrating command-line use with the LANL spec

    Nfer is a tool that implements the eponymous language for log analysis and monitoring. Users write rules that compute new information from an event stream, such as a program log, either offline or online. In addition to a command-line program, nfer exposes interfaces in Python and R and can generate monitors for embedded systems. Nfer is designed to be fast and has repeatedly been demonstrated to outperform similar tools.
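    To illustrate the kind of rule nfer evaluates, the toy Python sketch below implements a single "A before B" rule that turns matching event pairs into named intervals. The syntax, helper names and event names used here are only illustrative; they are not nfer's actual grammar or API.

        # Toy illustration of an interval-producing rule over an event stream.
        # It mimics the idea of a rule such as "boot is produced when BOOT_S
        # occurs before BOOT_E"; it is not nfer's implementation or syntax.
        from dataclasses import dataclass

        @dataclass
        class Interval:
            name: str
            begin: float
            end: float

        def before_rule(result, first, second, events):
            """events: time-ordered (name, timestamp) pairs."""
            intervals, pending = [], None
            for name, ts in events:
                if name == first:
                    pending = ts                      # remember the opening event
                elif name == second and pending is not None:
                    intervals.append(Interval(result, pending, ts))
                    pending = None                    # close the interval
            return intervals

        log = [("BOOT_S", 1.0), ("BOOT_E", 4.2), ("BOOT_S", 7.5), ("BOOT_E", 9.0)]
        print(before_rule("boot", "BOOT_S", "BOOT_E", log))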

    Parametric Verification of Weighted Systems

    This paper addresses the problem of parametric model checking for weighted transition systems. We consider transition systems labelled with linear equations over a set of parameters, and we use them to provide semantics for a parametric version of weighted CTL in which the until and next operators are themselves indexed with linear equations. The parameters turn the model-checking problem into the problem of computing a system of linear inequalities that characterizes the parameter values guaranteeing satisfiability. To address this problem, we use parametric dependency graphs (PDGs) and propose a global update function that yields an assignment to each node of a PDG. We prove that iterative application of this function reaches a fixed-point assignment of the PDG nodes and that the set of assignments forms a well-quasi-ordering, which ensures that the fixed point is found after finitely many iterations. To demonstrate the utility of the technique, we have implemented a prototype tool that computes the constraints on the parameters for model-checking problems.
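    The fixed-point computation follows the usual Kleene-style iteration of a global update function over a dependency graph. The Python sketch below shows that scheme with plain Boolean assignments (a node becomes true once some hyperedge has all of its targets true); the paper's PDG assignments range over linear inequalities rather than Booleans, so this is only a simplified illustration, not the paper's algorithm.

        # Simplified fixed-point iteration over a dependency graph. Assignments
        # are Booleans here, not the paper's parametric linear constraints.
        def fixed_point(nodes, hyperedges):
            """hyperedges maps a node to a list of target lists; a node is
            assigned True once some hyperedge has all of its targets True."""
            assignment = {n: False for n in nodes}
            changed = True
            while changed:                         # iterate the global update
                changed = False
                for n in nodes:
                    new = any(all(assignment[t] for t in targets)
                              for targets in hyperedges.get(n, []))
                    if new != assignment[n]:
                        assignment[n] = new
                        changed = True
            return assignment

        nodes = ["A", "B", "C"]
        # A depends on B and C jointly; B holds unconditionally; C depends on B.
        hyperedges = {"A": [["B", "C"]], "B": [[]], "C": [["B"]]}
        print(fixed_point(nodes, hyperedges))      # all three nodes become True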

    Artifact for "Teaching Stratego to Play Ball : Optimal Synthesis for Continuous Space MDPs"

    A zip file containing the artifact, models and scripts for reproducing the results of the paper "Teaching Stratego to Play Ball: Optimal Synthesis for Continuous Space MDPs", accepted at ATVA'19. The Artifact Evaluation Package is configured to match the provided virtual machine (DOI: 10.5281/zenodo.2759473). A guide to the experiments can be found in README.html.

    Open- and Closed-Loop Neural Network Verification using Polynomial Zonotopes

    This capsule reproduces the results presented in the paper "Open- and Closed-Loop Neural Network Verification using Polynomial Zonotopes".

    Code, Benchmarks, and Data of Faster Stackelberg Planning via Symbolic Search and Information Sharing

    The latest version of this repository is: https://gitlab.com/atorralba_planners/stackelberg-planner-sls

    Installation
    ==================
    To build the tool, navigate to the /src folder and execute:
    ./build_all

    Usage
    ==================
    ./fast-downward.py --search

    The recommended "default" configurations are:
    * For easy instances:
      "sym_stackelberg(optimal_engine=symbolic(plan_reuse_minimal_task_upper_bound=false, plan_reuse_upper_bound=true), upper_bound_pruning=false)"
    * For harder instances:
      "sym_stackelberg(optimal_engine=symbolic(plan_reuse_minimal_task_upper_bound=true, plan_reuse_upper_bound=true, force_bw_search_minimum_task_seconds=30, time_limit_seconds_minimum_task=300), upper_bound_pruning=true)"

    The difference is whether you activate upper bound pruning, which requires some pre-processing. You may control the amount of pre-processing with the time limits force_bw_search_minimum_task_seconds and time_limit_seconds_minimum_task.

    For the net-benefit planning variant use:
    ./fast-downward.py --translate-options --soft 10000 --search-options --search
    This will set the reward for each individual goal to 10000 units of cost.

    PDDL Format
    ==================
    The set of actions has to be divided into leader and follower actions. To specify this in PDDL we simply adopt the following convention:
    * Leader actions have a name that starts with fix_
    * Follower actions have a name that starts with attack_
    Note: this naming convention comes from a pentesting context where leader actions fix vulnerabilities in a network and the follower "attacks" the network by exploiting the remaining vulnerabilities.

    Benchmarks
    ==================
    The benchmarks folder contains the benchmarks used to evaluate the algorithms. We used the following nomenclature:
    * rs42: a random seed of 42 was used to select which subset of actions is available to the leader.
    * tcX: X is a number that specifies how many of the follower's actions the leader can disable.
    * -driving-: in benchmarks containing the word driving, the leader needs to move along the network in order to disable the follower's actions.

    Experiments
    ==================
    The experiments/aaai21 folder contains the scripts used to run the experiments:
    * lab_parser.py: parses the output of the planner.
    * configs: all the configurations used for the experiments.
    * create_configs.py: creates lab scripts, one for each config, and puts them into a folder.
    * run_scripts.sh: executes all lab scripts within the folder created by create_configs.py.
    * report.py: fetches all results from all runs of all configs and re-writes some attributes for the scripts; the resulting properties file should be provided to paper-tables.py and paper-tables-soft-goals.py.
    * paper-tables.py: used to generate some of the plots in the paper; the resulting properties file should be provided to coverage-report.py.
    * coverage-report.py: prints the coverage table included in the paper.
    * paper-tables-soft-goals.py: used to generate the plots in the paper that compare New vs Net benchmarks.

    The properties file provided is the one that was gathered by report.py, before being processed by paper-tables.py.
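    The planner can also be driven from Python. The sketch below simply shells out to fast-downward.py with the "easy instances" configuration quoted above; the domain and problem file names are placeholders, and the exact positional arguments may differ in your checkout.

        # Hedged sketch: run the Stackelberg planner with the recommended "easy"
        # configuration via subprocess. "domain.pddl" and "problem.pddl" are
        # placeholder file names, not files shipped with the repository.
        import subprocess

        EASY_CONFIG = (
            "sym_stackelberg(optimal_engine=symbolic("
            "plan_reuse_minimal_task_upper_bound=false, "
            "plan_reuse_upper_bound=true), upper_bound_pruning=false)"
        )

        subprocess.run(
            ["./fast-downward.py", "domain.pddl", "problem.pddl",
             "--search", EASY_CONFIG],
            check=True,
        )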

    Reproducibility Package for Extended Abstract Dependency Graphs.

    This is a reproducibility package for the STTT paper "Extended Abstract Dependency Graphs". It contains scripts, models and binaries (for Linux) to reproduce the results. The CTL folder contains everything pertaining to the CTL results that are new in the journal version. The CCS_and_WCTL folder contains the CCS and WCTL results. Further instructions are in both folders.

    Repeatability Package for "Differential Testing of Pushdown Reachability with a Formally Verified Oracle"

    Repeatability package for the paper "Differential Testing of Pushdown Reachability with a Formally Verified Oracle", accepted at FMCAD 2022. The package contains the Isabelle formalization as well as the experimental setup for the case study in the paper, including scripts, benchmarks, and the source code and executables for the different versions of the PDAAAL library studied. The experimental setup is only available for Linux, while the Isabelle formalization can be used on any platform where Isabelle can be installed. See the README.txt files for more details.

    Code and data for construction of a network architecture for Multi-Multi-Instance learning

    This tar.gz file contains a replica of the author's GitHub repository, including the data used to run the experiments in the paper. Due to copyright restrictions, two of the three citation-graph networks are not included. Please contact the author via email ([email protected]) for further information.

    The dataset holds:
    1. Data and code files used to construct a multi-multi-instance semi-synthetic dataset from the MNIST database of handwritten digits (http://yann.lecun.com/exdb/mnist/): a training set of 60,000 examples and a test set of 10,000 examples. Digits are organized in bags-of-bags of arbitrary cardinality.
    2. Data, image and code files used to construct a semi-synthetic dataset from MNIST, placing digits randomly onto background images of black pixels.
    3. Example real citation network datasets where the data can be naturally decomposed into bags-of-bags (MMI data) or bags (MI data).

    mmi.tar.gz can be uncompressed using standard compression utilities. Code files are provided in Python .py, .npy and .pyc formats, plus Linux shell executables (.sh) and .json files, openly accessible with a text editor. README and other metadata files are provided in .md (Markdown), .meta and .txt formats at various levels of the folder hierarchy. Image files are provided in the .idx3-ubyte file type, a simple format for vectors and multidimensional matrices of various numerical types.

    Installation: mmi uses the following dependencies: numpy and TensorFlow (see installation instructions: https://www.tensorflow.org/install/). To install mmi, run the install command: python setup.py install. You can also install mmi from PyPI: pip install mmi.

    Run the demo: enter the example_mnist folder and run python train_mmi_mnist.py.

    Background: In the associated paper, we study an extension of the multi-instance learning problem where examples are organized as nested bags of instances (e.g., a document could be represented as a bag of sentences, which in turn are bags of words). This framework can be useful in various scenarios, such as graph classification, image classification and translation-invariant pooling in convolutional neural networks. In order to learn from multi-multi-instance data, we introduce a special neural network layer, called the bag-layer, whose units aggregate sets of inputs of arbitrary size. We prove that the associated class of functions contains all Boolean functions over sets of sets of instances. We present empirical results on semi-synthetic data showing that this class of functions can actually be learned from data. We also present experiments on citation graph datasets where our model obtains competitive results.
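    As a rough picture of the bag-layer described above, the numpy sketch below transforms every instance in a bag with a shared affine map and ReLU and then aggregates with an element-wise max, nesting the same construction once more for bags-of-bags. The dimensions and weights are arbitrary; this is an illustration of the idea, not the code shipped in the mmi package.

        # Toy bag-layer: shared per-instance transformation followed by a
        # permutation-invariant max aggregation, nested once for bags-of-bags.
        # Weights and sizes are arbitrary; this is not the mmi implementation.
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)   # inner layer: 3 -> 5
        W2, b2 = rng.normal(size=(4, 5)), np.zeros(4)   # outer layer: 5 -> 4

        def bag_layer(bag, W, b):
            """bag: (n_instances, n_features) array -> fixed-size vector."""
            h = np.maximum(bag @ W.T + b, 0.0)          # shared map + ReLU
            return h.max(axis=0)                        # aggregate over the bag

        def bag_of_bags(outer):
            """outer: list of inner bags -> representation of the whole example."""
            inner = np.stack([bag_layer(b, W1, b1) for b in outer])
            return bag_layer(inner, W2, b2)

        example = [rng.normal(size=(4, 3)), rng.normal(size=(2, 3))]
        print(bag_of_bags(example).shape)               # (4,)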