CodeTrolley: Hardware-Assisted Control Flow Obfuscation
Many cybersecurity attacks rely on analyzing a binary executable to find
exploitable sections of code. Code obfuscation is used to prevent attackers
from reverse engineering these executables. In this work, we focus on control
flow obfuscation - a technique that prevents attackers from statically
determining which code segments are original, and which segments are added in
to confuse attackers. We propose a RISC-V-based hardware-assisted deobfuscation
technique that deobfuscates code at runtime based on a secret safely stored in
hardware, along with an LLVM compiler extension for obfuscating binaries.
Unlike conventional tools, our work does not rely on compiling
hard-to-reverse-engineer code, but on securing a secret key. As such, it can be
seen as a lightweight alternative to on-the-fly binary decryption.
Comment: 2019 Boston Area Architecture Workshop (BARC'19)
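As a rough illustration of the idea, here is a minimal Python sketch, not the paper's LLVM pass or RISC-V hardware support: the names, the toy CFG, and the XOR masking are all hypothetical. Control-flow obfuscation of this kind hides the real successor of each basic block behind a secret that only the hardware holds, so static analysis cannot distinguish real edges from decoys.

```python
# Illustrative sketch only: key-based control-flow deobfuscation with decoy blocks.
# Not the paper's actual scheme; all names and values are hypothetical.
import random

SECRET_KEY = 0x5A  # stands in for the key safely stored in hardware

def obfuscate(cfg_edges):
    """Hide each real successor by XOR-masking it with the secret and adding a decoy."""
    obfuscated = {}
    for block, real_succ in cfg_edges.items():
        decoy = random.randint(100, 199)          # bogus successor to confuse static analysis
        obfuscated[block] = [real_succ ^ SECRET_KEY, decoy]
    return obfuscated

def run(obfuscated, entry, exit_block, key):
    """'Hardware-assisted' execution: only a core that knows the key follows real edges."""
    block = entry
    trace = [block]
    while block != exit_block:
        masked_real, _decoy = obfuscated[block]
        block = masked_real ^ key                 # deobfuscate the true successor at runtime
        trace.append(block)
    return trace

cfg = {0: 1, 1: 2, 2: 3}                          # toy CFG: 0 -> 1 -> 2 -> 3
obf = obfuscate(cfg)
print(run(obf, entry=0, exit_block=3, key=SECRET_KEY))   # [0, 1, 2, 3]
```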
Theorem and Algorithm Checking for Courses on Logic and Formal Methods
The RISC Algorithm Language (RISCAL) is a language for the formal modeling of
theories and algorithms. A RISCAL specification describes an infinite class of
models, each of which has finite size; this makes it possible to check fully
automatically, in such a model, the validity of all theorems and the correctness
of all algorithms. RISCAL thus enables us to quickly verify/falsify the specific truth
of propositions in sample instances of a model class before attempting to prove
their general truth in the whole class: the first can be achieved in a fully
automatic way while the second typically requires our assistance. RISCAL has
been mainly developed for educational purposes. To this end this paper reports
on some new enhancements of the tool: the automatic generation of checkable
verification conditions from algorithms, the visualization of the execution of
procedures and the evaluation of formulas illustrating the computation of their
results, and the generation of Web-based student exercises and assignments from
RISCAL specifications. Furthermore, we report on our first experience with
RISCAL in the teaching of courses on logic and formal methods and on further
plans to use this tool to enhance formal education.
Comment: In Proceedings ThEdu'18, arXiv:1903.1240
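The checking idea can be illustrated with a minimal sketch in plain Python, not RISCAL's own specification language; the bound N and the gcd example are hypothetical. A theorem about an algorithm is verified exhaustively on one small finite instance of the model class before a general proof is attempted.

```python
# Minimal sketch of checking a theorem on a finite model instance (not RISCAL itself).
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def is_greatest_common_divisor(a, b, g):
    # g divides both inputs, and every common divisor of a and b divides g
    divides_both = a % g == 0 and b % g == 0
    maximal = all(g % d == 0
                  for d in range(1, max(a, b) + 1)
                  if a % d == 0 and b % d == 0)
    return divides_both and maximal

N = 20  # size of the finite instance (hypothetical bound)
assert all(is_greatest_common_divisor(a, b, gcd(a, b))
           for a in range(1, N + 1) for b in range(1, N + 1))
print("theorem checked on the instance N =", N)
```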
Applications of Quantified Constraint Solving over the Reals - Bibliography
Quantified constraints over the reals appear in numerous contexts. Usually
existential quantification occurs when some parameter can be chosen by the user
of a system, and universal quantification when the exact value of a parameter is
either unknown, or when it occurs in infinitely many, similar versions. The
following is a list of application areas and publications that contain
applications for solving quantified constraints over the reals. The list is
certainly not complete, but grows as the author encounters new items.
Contributions are very welcome.
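As an illustration (not drawn from the bibliography itself), a typical quantified constraint over the reals combines both kinds of quantifiers: an existentially quantified design parameter must satisfy a constraint for all values of a universally quantified disturbance, for example

$$\exists x \in [0,2]\;\forall y \in [-1,1]:\quad x^2 + x\,y - 1 \le 0$$

This formula is true, with x = 1/2 as a witness: 1/4 + y/2 - 1 \le 0 for every y in [-1,1].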
PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform
Computing with high-dimensional (HD) vectors, also referred to as
hypervectors, is a brain-inspired alternative to computing with
scalars. Key properties of HD computing include a well-defined set of
arithmetic operations on hypervectors, generality, scalability, robustness,
fast learning, and ubiquitous parallel operations. HD computing is about
manipulating and comparing large patterns (binary hypervectors with 10,000
dimensions), which makes its efficient realization on minimalistic ultra-low-power
platforms challenging. This paper describes HD computing's acceleration and its
optimization of memory accesses and operations on a silicon prototype of the
PULPv3 4-core platform (1.5 mm², 2 mW), surpassing the state-of-the-art
classification accuracy (on average 92.4%) with a simultaneous 3.7×
end-to-end speed-up and 2× energy saving compared to its single-core
execution. We further explore the scalability of our accelerator by increasing
the number of inputs and classification window on a new generation of the PULP
architecture featuring bit-manipulation instruction extensions and a larger
number of cores (8). Together, these enable a near-ideal speed-up of 18.4×
compared to the single-core PULPv3.
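A minimal sketch of the underlying vector operations (NumPy here, purely illustrative and unrelated to the PULP implementation; the toy features and classes are made up): binary hypervectors are combined with XOR binding and majority bundling, and compared by normalized Hamming distance.

```python
# Toy binary HD computing with 10,000-dimensional hypervectors.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def random_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)      # dense binary hypervector

def bind(a, b):
    return a ^ b                                       # XOR binding

def bundle(hvs):
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)   # bitwise majority

def hamming(a, b):
    return np.count_nonzero(a != b) / D                # normalized Hamming distance

# Encode two classes as bundles of their (toy) feature hypervectors, then classify
# a query by nearest Hamming distance to the class prototypes.
features = {name: random_hv() for name in "abcdef"}
class1 = bundle([features[f] for f in "abc"])
class2 = bundle([features[f] for f in "def"])
query = features["a"]
print("class1" if hamming(query, class1) < hamming(query, class2) else "class2")
```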
Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing
The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices
Abstract State Machines 1988-1998: Commented ASM Bibliography
An annotated bibliography of papers which deal with or use Abstract State
Machines (ASMs), as of January 1998.
Comment: Also maintained as a BibTeX file at http://www.eecs.umich.edu/gasm
Solving large scale linear programming problems
The interior point method (IPM) is now well established as a computationally competitive scheme for solving very large scale linear programming problems. The leading variant of the IPM is the primal-dual predictor-corrector algorithm due to Mehrotra. The main computational effort in this algorithm is the repeated calculation and solution of a large sparse positive definite system of equations.
We describe an implementation of this algorithm for vector processors. At the heart of the implementation is a vectorized matrix multiplication and Cholesky factorization for sparse matrices.
We identify the parts where vectorization can be beneficial and discuss in detail the merits of alternative vectorization techniques. We show that the best way to utilize a vector processor is by exploiting dense computation within the sparse framework and by unrolling loop operations. We further present an extended definition of supernodes and describe an implementation based on this new approach. We show that although this approach requires more memory, it can increase the scope of dense computation substantially without adding extra operations.
Performance results on standard industrial test problems, and a comparison between an algorithm that utilizes the extended supernodes and one that utilizes standard supernodes, are presented and discussed.
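A minimal dense sketch of the kernel described above, under the usual normal-equations formulation (assuming the positive definite system has the form A D Aᵀ with a positive diagonal D; the dimensions and data are made up, and real implementations use sparse supernodal Cholesky rather than dense NumPy/SciPy):

```python
# At each IPM iteration, form the positive definite matrix A D A^T and solve it by Cholesky.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
m, n = 5, 12
A = rng.standard_normal((m, n))            # constraint matrix of the LP
d = rng.uniform(0.1, 2.0, n)               # positive diagonal scaling from the current iterate
r = rng.standard_normal(m)                 # right-hand side of the normal equations

M = A @ np.diag(d) @ A.T                   # positive definite system A D A^T
L = np.linalg.cholesky(M)                  # M = L L^T
y = solve_triangular(L, r, lower=True)     # forward substitution
dy = solve_triangular(L.T, y, lower=False) # back substitution

print(np.allclose(M @ dy, r))              # True: dy solves the normal equations
```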
A multiarchitecture parallel-processing development environment
A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems
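For illustration only (not part of the Hypercluster software): in a d-dimensional hypercube topology, the neighbours of a node are exactly the nodes whose binary ids differ from it in one bit, which is the structure underlying the distributed-memory layer described above.

```python
# Neighbours of a node in a d-dimensional hypercube: flip each of the d id bits in turn.
def hypercube_neighbors(node, dim):
    return [node ^ (1 << k) for k in range(dim)]

print(hypercube_neighbors(0b0101, 4))   # [4, 7, 1, 13] -> ids 0100, 0111, 0001, 1101
```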
Distributed Maple: parallel computer algebra in networked environments
We describe the design and use of Distributed Maple, an environment for executing parallel computer algebra programs on multiprocessors and heterogeneous clusters. The system embeds kernels of the computer algebra system Maple as computational engines into a networked coordination layer implemented in the programming language Java. On the basis of a comparatively high-level programming model, one may write parallel Maple programs that show good speedups in medium-scaled environments. We report on the use of the system for the parallelization of various functions of the algebraic geometry library CASA and demonstrate how design decisions affect the dynamic behaviour and performance of a parallel application. Numerous experimental results allow comparison of Distributed Maple with other systems for parallel computer algebra.
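The coordination pattern can be sketched generically (a Python process pool standing in for the Java coordination layer and the Maple kernels; the integer-factoring task is only a placeholder for an algebraic computation):

```python
# A scheduler hands independent tasks to worker "engines" and collects the results.
from concurrent.futures import ProcessPoolExecutor

def trial_factor(n):
    """Toy stand-in for a computer-algebra task executed on a worker engine."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

if __name__ == "__main__":
    tasks = [600851475143, 2**31 - 1, 10**9 + 7, 123456789]
    with ProcessPoolExecutor(max_workers=4) as pool:     # the "coordination layer"
        for n, fs in zip(tasks, pool.map(trial_factor, tasks)):
            print(n, "=", fs)
```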
SymPas: Symbolic Program Slicing
Program slicing is a technique for simplifying programs by focusing on
selected aspects of their behaviour. Current mainstream static slicing methods
operate on the PDG (program dependence graph) or SDG (system dependence graph),
but building these graph representations can be expensive and error-prone for
some users. In this paper we study a lightweight approach to static
program slicing, called Symbolic Program Slicing (SymPas), which works as a
dataflow analysis on LLVM (Low-Level Virtual Machine). In our SymPas approach,
slices are stored symbolically rather than recomputed by re-analysing procedures
(cf. procedure summaries). Instead of re-analysing a procedure multiple times to
find its slices for each calling context, SymPas calculates a single symbolic
(or parameterized) slice that can be instantiated at call sites, avoiding
re-analysis; it is implemented in LLVM to perform slicing on its intermediate
representation (IR). For comparison, we systematically adapt IFDS
(Interprocedural Finite Distributive Subset) analysis and the SDG-based slicing
method (SDG-IFDS) to statically slice programs at the IR level. Evaluated on open-source and
benchmark programs, our backward SymPas shows a factor-of-6 reduction in time
cost and a factor-of-4 reduction in space cost, compared to backward SDG-IFDS,
thus being more efficient. In addition, a study of slices from 66 programs,
ranging up to 336,800 IR instructions in size, shows that SymPas is highly
size-scalable.
Comment: 29 pages, 11 figures
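To make the notion of a backward slice concrete, here is a minimal sketch over straight-line code with data dependences only (plain Python, unrelated to SymPas's LLVM implementation and without its symbolic procedure slices; the toy program is made up):

```python
# Backward static slicing along data dependences over straight-line code.
# Each statement is (line, defined_var, used_vars).
program = [
    (1, "a", set()),        # a = input()
    (2, "b", set()),        # b = input()
    (3, "c", {"a"}),        # c = a + 1
    (4, "d", {"b"}),        # d = b * 2
    (5, "e", {"c", "a"}),   # e = c + a
]

def backward_slice(program, criterion_line, criterion_var):
    """Lines whose definitions may influence criterion_var at criterion_line."""
    relevant, sliced = {criterion_var}, set()
    for line, defined, used in reversed(program):
        if line > criterion_line:
            continue
        if defined in relevant:
            sliced.add(line)
            relevant = (relevant - {defined}) | used
    return sorted(sliced)

print(backward_slice(program, 5, "e"))   # [1, 3, 5]: the statement d = b * 2 is irrelevant
```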