Certified compilation for cryptography: Extended x86 instructions and constant-time verification
We present a new tool for the generation and verification of high-assurance, high-speed machine-level cryptographic implementations: a certified C compiler supporting x86 instruction extensions. We demonstrate the practical applicability of our tool by incorporating it into SUPERCOP, a toolkit for measuring the performance of cryptographic software that includes over 2000 different implementations. We show (i) that the coverage of x86 implementations in SUPERCOP increases significantly due to the added support for instruction extensions via intrinsics, and (ii) that the resulting verifiably correct implementations are much closer in performance to unverified ones. We extend our compiler with a specialized type system that acts at the pre-assembly level; this is the first constant-time verifier that can deal with extended instruction sets. We confirm that, by using instruction extensions, the performance penalty for verifiably constant-time code can be greatly reduced. This work is financed by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within the project PTDC/CCI-INF/31698/2017, and by the Norte Portugal Regional Operational Programme (NORTE 2020) under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (ERDF), and also by national funds through the FCT within project NORTE-01-0145-FEDER-028550 (REASSURE)
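The constant-time discipline that such a verifier enforces can be illustrated with a branchless selection, the pattern verifiers accept in place of a secret-dependent branch. The Python sketch below is purely illustrative and not from the paper: genuine constant-time guarantees exist only at the machine level, which is exactly why the verifier operates at the pre-assembly level.

```python
def ct_select(mask_bit: int, a: int, b: int) -> int:
    """Branchlessly select a (mask_bit == 1) or b (mask_bit == 0).

    Illustrative only: Python's arbitrary-precision integers give no
    timing guarantees; the pattern is what matters. Operands are
    treated as 32-bit values.
    """
    mask = -mask_bit & 0xFFFFFFFF                 # 0x00000000 or 0xFFFFFFFF
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)  # no data-dependent branch
```

An `if mask_bit:` branch would compute the same value, but its execution time could depend on the secret bit; the masked form executes the same instructions either way.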
Dependently-Typed Formalisation of Typed Term Graphs
We employ the dependently-typed programming language Agda2 to explore the
formalisation of untyped and typed term graphs, both directly as set-based graph
structures, via the gs-monoidal categories of Corradini and Gadducci, and as
nested let-expressions using Pouillard and Pottier's NotSoFresh library of
variable-binding abstractions. Comment: In Proceedings TERMGRAPH 2011, arXiv:1102.226
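The let-expression view of term graphs can be imitated outside a dependently-typed setting. The following Python sketch (our own illustration, not the paper's Agda formalisation) prints a term graph as nested lets, introducing one binder per shared node, so sharing in the graph becomes explicit in the expression.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """A term-graph node: an operator symbol and argument nodes.
    Sharing is by object identity, as in a set-based graph structure."""
    op: str
    args: tuple = ()

def to_let(root: Node) -> str:
    """Render a term graph as nested let-expressions, one binder per
    node that is reached more than once."""
    counts = {}
    def visit(n):
        counts[id(n)] = counts.get(id(n), 0) + 1
        if counts[id(n)] == 1:          # descend into each node only once
            for a in n.args:
                visit(a)
    visit(root)
    names, lets = {}, []
    def emit(n):
        if id(n) in names:              # already bound: reuse the variable
            return names[id(n)]
        body = n.op if not n.args else \
            f"{n.op}({', '.join(emit(a) for a in n.args)})"
        if counts[id(n)] > 1:           # shared node: introduce a binder
            name = f"x{len(lets)}"
            lets.append((name, body))
            names[id(n)] = name
            return name
        return body
    body = emit(root)
    for name, rhs in reversed(lets):
        body = f"let {name} = {rhs} in {body}"
    return body
```

For example, a graph where `g` applies the same `f(a)` node twice renders as `let x0 = f(a) in g(x0, x0)`, while a tree with no sharing needs no binders.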
Format Abstraction for Sparse Tensor Algebra Compilers
This paper shows how to build a sparse tensor algebra compiler that is
agnostic to tensor formats (data layouts). We develop an interface that
describes formats in terms of their capabilities and properties, and show how
to build a modular code generator where new formats can be added as plugins. We
then describe six implementations of the interface that compose to form the
dense, CSR/CSF, COO, DIA, ELL, and HASH tensor formats and countless variants
thereof. With these implementations at hand, our code generator can generate
code to compute any tensor algebra expression on any combination of the
aforementioned formats.
To demonstrate our technique, we have implemented it in the taco tensor
algebra compiler. Our modular code generator design makes it simple to add
support for new tensor formats, and the performance of the generated code is
competitive with hand-optimized implementations. Furthermore, by extending taco
to support a wider range of formats specialized for different application and
data characteristics, we can improve end-user application performance. For
example, if input data is provided in the COO format, our technique allows
computing a single matrix-vector multiplication directly with the data in COO,
which is up to 3.6× faster than first converting the data to CSR. Comment: Presented at OOPSLA 201
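The format-agnostic idea can be made concrete with plain-Python sparse matrix-vector kernels. These are illustrative sketches of what a COO kernel and a CSR kernel compute, not taco's actual generated code: the COO version works directly on coordinate triples, with no conversion step.

```python
def spmv_coo(rows, cols, vals, x, n_rows):
    """y = A @ x with A in COO form (parallel row/col/value lists):
    a single pass over the nonzeros, no format conversion required."""
    y = [0.0] * n_rows
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]
    return y

def spmv_csr(indptr, indices, data, x):
    """y = A @ x with A in CSR form: indptr[r]..indptr[r+1] delimits
    the nonzeros of row r within indices/data."""
    y = []
    for r in range(len(indptr) - 1):
        y.append(sum(data[k] * x[indices[k]]
                     for k in range(indptr[r], indptr[r + 1])))
    return y
```

Both compute the same product; the difference taco's format interface captures is how each layout exposes its nonzeros to the generated loop nest.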
Evaluating the performance of legacy applications on emerging parallel architectures
The gap between a supercomputer's theoretical maximum ("peak") floating-point
performance and that actually achieved by applications has grown wider
over time. Today, a typical scientific application achieves only 5-20% of any
given machine's peak processing capability, and this gap leaves room for significant
improvements in execution times.
This problem is most pronounced for modern "accelerator" architectures:
collections of hundreds of simple, low-clocked cores capable of executing the
same instruction on dozens of pieces of data simultaneously. This is a significant
change from the low number of high-clocked cores found in traditional CPUs,
and effective utilisation of accelerators typically requires extensive code and
algorithmic changes. In many cases, the best way in which to map a parallel
workload to these new architectures is unclear.
The principal focus of the work presented in this thesis is the evaluation
of emerging parallel architectures (specifically, modern CPUs, GPUs and Intel
MIC) for two benchmark codes, the LU benchmark from the NAS Parallel
Benchmark Suite and Sandia's miniMD benchmark, which exhibit complex
parallel behaviours that are representative of many scientific applications. Using
combinations of low-level intrinsic functions, OpenMP, CUDA and MPI, we
demonstrate performance improvements of up to 7x for these workloads.
We also detail a code development methodology that permits application developers
to target multiple architecture types without maintaining completely
separate implementations for each platform. Using OpenCL, we develop performance-portable
implementations of the LU and miniMD benchmarks that are
faster than the original codes, and at most 2x slower than versions highly tuned
for particular hardware.
Finally, we demonstrate the importance of evaluating architectures at scale
(as opposed to on single nodes) through performance modelling techniques,
highlighting the problems associated with strong scaling on emerging accelerator
architectures.
Fast and Clean: Auditable high-performance assembly via constraint solving
Handwritten assembly is a widely used tool in the development of high-performance cryptography: by providing full control over instruction selection, instruction scheduling, and register allocation, the highest performance can be unlocked. On the flip side, developing handwritten assembly is not only time-consuming, but the artifacts produced also tend to be difficult to review and maintain, threatening their suitability for use in practice.
In this work, we present SLOTHY (Super (Lazy) Optimization of Tricky Handwritten assemblY), a framework for the automated superoptimization of assembly with respect to instruction scheduling, register allocation, and loop optimization (software pipelining): With SLOTHY, the developer controls and focuses on algorithm and instruction selection, providing a readable “base” implementation in assembly, while SLOTHY automatically finds optimal and traceable instruction scheduling and register allocation strategies with respect to a model of the target (micro)architecture.
We demonstrate the flexibility of SLOTHY by instantiating it with models of the Cortex-M55, Cortex-M85, Cortex-A55 and Cortex-A72 microarchitectures, implementing the Armv8.1-M+Helium and AArch64+Neon architectures. We use the resulting tools to optimize three workloads: first, for Cortex-M55 and Cortex-M85, a radix-4 complex Fast Fourier Transform (FFT) in fixed-point and floating-point arithmetic, fundamental in Digital Signal Processing. Second, on Cortex-M55, Cortex-M85, Cortex-A55 and Cortex-A72, the instances of the Number Theoretic Transform (NTT) underlying CRYSTALS-Kyber and CRYSTALS-Dilithium, two recently announced winners of the NIST Post-Quantum Cryptography standardization project. Third, for Cortex-A55, the scalar multiplication for the elliptic curve key exchange X25519. The SLOTHY-optimized code matches or beats the performance of prior art in all cases, while maintaining compactness and readability.
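A miniature version of the scheduling problem SLOTHY solves can be sketched by brute force: given a dependency graph and a latency model, search the valid instruction orders for the one minimising total cycles. The model below is a hypothetical single-issue pipeline, far simpler than SLOTHY's microarchitecture models, and exhaustive search stands in for its constraint solver.

```python
from itertools import permutations

def best_schedule(deps, latency):
    """Search all instruction orders that respect `deps` (a mapping
    instruction -> set of instructions it depends on) and minimise the
    total cycle count under a toy single-issue model: the result of i
    is ready `latency[i]` cycles after i issues, and an instruction
    stalls until all its operands are ready.

    Returns (total_cycles, order). Brute force only works for tiny
    blocks; SLOTHY's constraint solving answers the same question at
    realistic scale.
    """
    best = None
    for order in permutations(deps):
        pos = {i: k for k, i in enumerate(order)}
        if any(pos[d] >= pos[i] for i in deps for d in deps[i]):
            continue                    # order violates a dependency
        cycle, ready = 0, {}
        for i in order:
            issue = max([cycle] + [ready[d] for d in deps[i]])  # stall on operands
            ready[i] = issue + latency[i]
            cycle = issue + 1
        total = max(ready.values())
        if best is None or total < best[0]:
            best = (total, order)
    return best
```

With two independent chains of a 3-cycle multiply followed by a dependent add, interleaving the chains hides the multiply latency, exactly the kind of rescheduling a superoptimizer finds automatically.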
Towards Improved Homomorphic Encryption for Privacy-Preserving Deep Learning
International Mention in the doctoral degree. Deep Learning (DL) has brought about a remarkable transformation for many fields, heralded
by some as a new technological revolution. The advent of large-scale models has increased
the demands for data and computing platforms, for which cloud computing has become
the go-to solution. However, the reach of DL and cloud computing is reduced
in privacy-sensitive areas that deal with sensitive data. These areas imperatively call for
privacy-enhancing technologies that enable responsible, ethical, and privacy-compliant
use of data in potentially hostile environments.
To this end, the cryptography community has addressed these concerns with what
is known as Privacy-Preserving Computation Techniques (PPCTs), a set of tools that
enable privacy-enhancing protocols where cleartext access to information is no longer
tenable. Of these techniques, Homomorphic Encryption (HE) stands out for its ability
to perform operations over encrypted data without compromising data confidentiality or
privacy. However, despite its promise, HE is still a relatively nascent solution with efficiency
and usability limitations. Improving the efficiency of HE has been a longstanding
challenge in cryptography, and as efficiency has improved, the complexity of the
techniques has increased, especially for non-experts.
In this thesis, we address the problem of the complexity of HE when applied to DL.
We begin by systematizing existing knowledge in the field through an in-depth analysis
of state-of-the-art for privacy-preserving deep learning, identifying key trends, research
gaps, and issues associated with current approaches. One such identified gap lies in the
necessity for using vectorized algorithms with Packed Homomorphic Encryption (PaHE),
a state-of-the-art technique to reduce the overhead of HE in complex areas. This thesis
comprehensively analyzes existing algorithms and proposes new ones for using DL with
PaHE, presenting a formal analysis and usage guidelines for their implementation.
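One staple of the vectorized algorithms used with packed homomorphic encryption is the rotate-and-add reduction, which sums all slots of a packed ciphertext in logarithmically many rotations instead of one operation per slot. The sketch below simulates this on plaintext lists, with list rotation standing in for the scheme's slot-rotation operation; it is a generic illustration of the PaHE programming style, not an algorithm taken from the thesis.

```python
def rotate(v, k):
    """Cyclic left rotation of the slot vector: the primitive that
    packed HE schemes expose (here applied to plaintext lists)."""
    return v[k:] + v[:k]

def packed_total(v):
    """Sum all slots in log2(n) rotate-and-add steps. After the loop
    every slot holds the total, as it would in the ciphertext."""
    n = len(v)                   # assumed to be a power of two
    step = 1
    while step < n:
        v = [a + b for a, b in zip(v, rotate(v, step))]
        step *= 2
    return v[0]
```

The payoff in the encrypted setting is that each step is a single (expensive) homomorphic rotation plus addition, so a vector of n slots is reduced with log2(n) such operations rather than n.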
Parameter selection for HE schemes is another recurring challenge in the literature,
given that it plays a critical role in determining not only the security of the instantiation
but also the precision and performance of the scheme. To address
this challenge, this thesis proposes a novel system combining fuzzy logic with linear
programming tasks to produce secure parametrizations based on high-level user input
arguments without requiring low-level knowledge of the underlying primitives.
Finally, this thesis describes HEFactory, a symbolic execution compiler designed to
streamline the process of producing HE code and integrating it with Python. HEFactory
implements the previous proposals presented in this thesis in an easy-to-use tool. It provides
a unique architecture that layers the challenges associated with HE and produces
simplified operations interpretable by low-level HE libraries. HEFactory significantly reduces
the overall complexity of coding DL applications with HE, resulting in an 80% length
reduction from expert-written code while maintaining equivalent accuracy and efficiency.
Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. President: María Isabel González Vasco. Secretary: David Arroyo Guardeño. Panel member: Antonis Michala
GPU fast multipole method with lambda-dynamics features
A significant and computationally most demanding part of molecular dynamics simulations is the calculation of long-range electrostatic interactions. Such interactions can be evaluated directly by the naïve pairwise summation algorithm, which is a ubiquitous showcase example for the compute power of graphics processing units (GPUs). However, pairwise summation has O(N^2) computational complexity for N interacting particles; thus, an approximation method with better scaling is required. Today, the prevalent method for such approximation in the field is particle mesh Ewald (PME). PME takes advantage of fast Fourier transforms (FFTs) to approximate the solution efficiently. However, as the underlying FFTs require all-to-all communication between ranks, PME runs into a communication bottleneck. Such communication overhead is negligible only for moderate parallelization. With increased parallelization, as needed for high-performance applications, the usage of PME becomes unprofitable. Another PME drawback is its inability to perform constant-pH simulations efficiently. In such simulations, the protonation states of a protein are allowed to change dynamically during the simulation. The description of this process requires a separate evaluation of the energies for each protonation state. This cannot be calculated efficiently with PME, as the algorithm requires a repeated FFT for each state, which leads to a linear overhead with respect to the number of states. For a fast approximation of pairwise Coulombic interactions that does not suffer from these PME drawbacks, the Fast Multipole Method (FMM) has been implemented and fully parallelized with CUDA. To ensure optimal FMM performance for diverse MD systems, multiple parallelization strategies have been developed. The algorithm has been efficiently incorporated into GROMACS and subsequently tested to determine the optimal FMM parameter set for MD simulations.
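The naïve pairwise summation mentioned above is simple to write down, which makes its O(N^2) cost concrete: every particle interacts with every other. This is a generic textbook illustration (with the Coulomb constant set to 1 and arbitrary units), not GROMACS or FMM code.

```python
def coulomb_energy(charges, positions):
    """Total electrostatic energy by direct pairwise summation:
    exact, but the double loop over unique pairs costs O(N^2),
    which is why large-scale MD uses PME or FMM approximations.
    The Coulomb constant is set to 1 (arbitrary units)."""
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):       # each unique pair once
            dx, dy, dz = (positions[i][k] - positions[j][k]
                          for k in range(3))
            r = (dx * dx + dy * dy + dz * dz) ** 0.5
            e += charges[i] * charges[j] / r
    return e
```

The all-pairs structure with no data dependencies between pairs is also what makes this kernel such a common GPU showcase, as the abstract notes.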
Finally, the FMM has been incorporated into GROMACS to allow for out-of-the-box electrostatic calculations. The performance of the single-GPU FMM implementation, tested in GROMACS 2019, achieves about a third of the highly optimized CUDA PME performance when simulating systems with uniform particle distributions. However, the FMM is expected to outperform PME at high parallelization because the FMM global communication overhead is minimal compared to that of PME. Further, the FMM has been enhanced to provide the energies of an arbitrary number of titratable sites, as needed in the constant-pH method. The extension is not yet fully optimized, but the first results show the strength of the FMM for constant-pH simulations. For a relatively large system with half a million particles and more than a hundred titratable sites, a straightforward approach to computing alternative energies requires repeating the simulation for each state of the sites. The FMM calculates all energy terms only a factor of 1.5 slower than a single simulation step. Further improvements of the GPU implementation are expected to yield even more speedup compared to the current implementation.