Semiring-based Specification Approaches for Quantitative Security
Our goal is to provide semiring-based formal tools for the specification of security requirements. We quantitatively enhance the open-system approach, in which a system is only partially specified; we therefore assume the existence of an unknown, possibly malicious agent that interacts in parallel with the system. Two specification frameworks are designed along two different (but related) lines: comparing the behaviour of a system with the expected one, and checking whether the system satisfies some security requirements. First, we investigate a novel approximate behavioural equivalence for comparing the behaviour of processes, extending the Generalised Non Deducibility on Composition (GNDC) approach with scores. Second, we equip a modal logic with semiring values, so that the satisfaction of a formula specifying a requested property comes with an associated weight. Finally, we generalise the classical partial model-checking function into a quantitative partial model checking that makes explicit the necessary and sufficient conditions a system has to satisfy in order to be considered secure with respect to a fixed security/functionality threshold value.
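The semiring machinery behind this kind of quantitative specification can be sketched in a few lines. The names below (Semiring, trace_score, system_score, is_secure) are illustrative, not taken from the paper; the fuzzy semiring ([0,1], max, min, 0, 1) is one standard instantiation, where a trace is only as good as its weakest step and a system is as good as its best trace.

```python
from dataclasses import dataclass
from functools import reduce
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    """A semiring (S, plus, times, zero, one)."""
    plus: Callable[[Any, Any], Any]   # combines alternative behaviours
    times: Callable[[Any, Any], Any]  # combines sequential steps
    zero: Any                         # identity of plus (worst score)
    one: Any                          # identity of times (neutral step)

# Fuzzy semiring on [0, 1]: plus = max, times = min.
fuzzy = Semiring(plus=max, times=min, zero=0.0, one=1.0)

def trace_score(sr, weights):
    """Score of one execution trace: times over its action weights."""
    return reduce(sr.times, weights, sr.one)

def system_score(sr, traces):
    """Score of a system: plus over the scores of all its traces."""
    return reduce(sr.plus, (trace_score(sr, t) for t in traces), sr.zero)

def is_secure(sr, traces, threshold):
    """Secure w.r.t. a fixed threshold if the system score reaches it."""
    return system_score(sr, traces) >= threshold

traces = [[0.9, 0.8, 0.7], [1.0, 0.4]]
print(system_score(fuzzy, traces))    # 0.7
print(is_secure(fuzzy, traces, 0.5))  # True
```

Swapping in another semiring (e.g. costs with (min, +)) changes the metric without changing the framework, which is the point of the parametric structure.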
GraphBLAST: A High-Performance Linear Algebra-based Graph Framework on the GPU
High-performance implementations of graph algorithms are challenging to
implement on new parallel hardware such as GPUs because of three challenges:
(1) the difficulty of coming up with graph building blocks, (2) load imbalance
on parallel hardware, and (3) graph problems having low arithmetic intensity.
To address some of these challenges, GraphBLAS is an innovative, on-going
effort by the graph analytics community to propose building blocks based on
sparse linear algebra, which will allow graph algorithms to be expressed in a
performant, succinct, composable and portable manner. In this paper, we examine
the performance challenges of a linear-algebra-based approach to building graph
frameworks and describe new design principles for overcoming these bottlenecks.
Among the new design principles is exploiting input sparsity, which allows
users to write graph algorithms without specifying push and pull direction.
Exploiting output sparsity allows users to tell the backend which values of the
output in a single vectorized computation they do not want computed.
Load-balancing is an important feature for balancing work amongst parallel
workers. We describe the important load-balancing features for handling graphs
with different characteristics. The design principles described in this paper
have been implemented in "GraphBLAST", the first high-performance linear
algebra-based graph framework on NVIDIA GPUs that is open-source. The results
show that on a single GPU, GraphBLAST has on average at least an order of
magnitude speedup over previous GraphBLAS implementations SuiteSparse and GBTL,
comparable performance to the fastest GPU hardwired primitives and
shared-memory graph frameworks Ligra and Gunrock, and better performance than
any other GPU graph framework, while offering a simpler and more concise
programming model.
Comment: 50 pages, 14 figures, 14 tables
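The linear-algebra view of a graph traversal can be illustrated with a tiny sketch (ours, not GraphBLAST's API): BFS over the boolean (OR, AND) semiring, where the frontier is a sparse vector, one step is a masked sparse matrix-vector product, and the "exploit input/output sparsity" principles correspond to iterating only over frontier rows and skipping already-visited targets.

```python
def bfs_levels(adj, src, n):
    """BFS as repeated sparse matvec over the boolean semiring.
    adj: dict u -> list of out-neighbours; returns the level of each
    of the n vertices, or -1 if unreachable from src."""
    levels = {src: 0}
    frontier = {src}                 # sparse input vector
    depth = 0
    while frontier:
        depth += 1
        nxt = set()
        for u in frontier:           # input sparsity: only frontier rows
            for v in adj.get(u, ()):
                if v not in levels:  # mask: output sparsity, skip visited
                    levels[v] = depth
                    nxt.add(v)
        frontier = nxt
    return [levels.get(v, -1) for v in range(n)]

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(adj, 0, 4))  # [0, 1, 1, 2]
```

A GraphBLAS backend makes the same choice automatically: with a sparse frontier it pushes along frontier rows, and with a dense one it can pull instead, so the user never specifies the direction.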
There are Two Sides to Every Question - Controller Versus Attacker.
We investigate security enforcement mechanisms that run in parallel with a system; the aim is to check and modify the run-time behaviour of a possible attacker, in order to guarantee that the system satisfies some security policies. We model such processes in a CSP-like quantitative process algebra, where weights on actions are drawn from semirings, a parametric structure in which different metrics can be cast. The basic tools are a quantitative logic and a model-checking function. First, the behaviour of the system is removed from the parallel computation with respect to some security property to be satisfied. Second, what remains is refined into two formulas with respect to the given operator executed by a controller. The result describes what a controller has to do to prevent a given attack.
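The quantitative-logic ingredient can be sketched concretely. Below is a hypothetical evaluator (our names, not the paper's) for a diamond modality over a semiring-weighted labelled transition system, again using the fuzzy semiring (plus = max, times = min): the value of <a>phi at a state is the best, over a-transitions, of the transition weight combined with the value of phi at the target.

```python
def eval_diamond(lts, state, action, inner):
    """Value of <action> inner at state, over the fuzzy semiring:
    max over action-transitions of min(edge weight, inner at target)."""
    best = 0.0  # semiring zero
    for (a, w, target) in lts.get(state, ()):
        if a == action:
            best = max(best, min(w, inner(lts, target)))
    return best

# Tiny weighted LTS: state -> [(action, weight, next_state)]
lts = {"s0": [("login", 0.9, "s1"), ("login", 0.3, "s2")],
       "s1": [("read", 0.8, "s3")],
       "s2": [("read", 1.0, "s3")]}

true_ = lambda lts, s: 1.0  # the formula "true" has value one everywhere
inner = lambda lts, s: eval_diamond(lts, s, "read", true_)

# Value of <login><read> true at s0: the best login-then-read run.
print(eval_diamond(lts, "s0", "login", inner))  # 0.8
```

A controller would then compare such values against the policy's threshold and suppress or rewrite the attacker's actions on runs scoring below it.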
Search-Based Regular Expression Inference on a GPU
Regular expression inference (REI) is a supervised machine learning and
program synthesis problem that takes a cost metric for regular expressions, and
positive and negative examples of strings as input. It outputs a regular
expression that is precise (i.e., accepts all positive and rejects all negative
examples), and minimal with respect to the cost metric. We present a novel algorithm
for REI over arbitrary alphabets that is enumerative and trades off time for
space. Our main algorithmic idea is to implement the search space of regular
expressions succinctly as a contiguous matrix of bitvectors. Collectively, the
bitvectors represent, as characteristic sequences, all sub-languages of the
infix-closure of the union of positive and negative examples. Mathematically,
this is a semiring of (a variant of) formal power series. Infix-closure enables
bottom-up compositional construction of larger from smaller regular expressions
using the operations of our semiring. This minimises data movement and
data-dependent branching, hence maximises data-parallelism. In addition, the
infix-closure remains unchanged during the search, hence search can be staged:
first pre-compute various expensive operations, and then run the compute
intensive search process. We provide two C++ implementations, one for general
purpose CPUs and one for Nvidia GPUs (using CUDA). We benchmark both on Google
Colab Pro: the GPU implementation is on average over 1000x faster than the CPU
implementation on the hardest benchmarks.
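The core data representation can be sketched in miniature (this is our illustration, not the paper's C++/CUDA code): each sub-language of the infix-closure of the examples becomes a bitvector over an enumeration of the infixes, union is a bitwise OR, and concatenation uses a table precomputed once per benchmark, which is the staging the abstract describes.

```python
from itertools import product

examples = ["ab", "ba"]

# Universe: all infixes of the examples (incl. the empty string),
# each assigned a fixed bit position.
infixes = sorted({w[i:j] for w in examples
                  for i in range(len(w) + 1)
                  for j in range(i, len(w) + 1)},
                 key=lambda s: (len(s), s))
idx = {w: k for k, w in enumerate(infixes)}

# Staged precomputation: for each pair of bits, the bit of their
# concatenation, or -1 when it leaves the infix-closure.
concat = [[idx.get(u + v, -1) for v in infixes] for u in infixes]

def lang_union(x, y):
    """Union of two bitvector languages: bitwise OR."""
    return x | y

def lang_concat(x, y):
    """Concatenation, truncated to the infix-closure, via the table."""
    out = 0
    for i, j in product(range(len(infixes)), repeat=2):
        if (x >> i) & 1 and (y >> j) & 1 and concat[i][j] >= 0:
            out |= 1 << concat[i][j]
    return out

A = 1 << idx["a"]        # language {"a"}
B = 1 << idx["b"]        # language {"b"}
AB = lang_concat(A, B)   # language {"ab"}
print(infixes[AB.bit_length() - 1])  # "ab" is the only set bit
```

Because the table depends only on the (unchanging) infix-closure, the inner search loop reduces to branch-light bitwise operations on contiguous words, which is what makes the representation map well to a GPU.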