A Study of High Performance Multiple Precision Arithmetic on Graphics Processing Units
Multiple precision (MP) arithmetic is a core building block of a wide variety of algorithms in computational mathematics and computer science. In mathematics, MP is used in computational number theory, geometric computation, experimental mathematics, and some random matrix problems. In computer science, MP arithmetic is primarily used in cryptographic algorithms: securing communications, digital signatures, and code breaking. In most of these application areas, the factor that limits performance is the MP arithmetic. The focus of our research is to build and analyze highly optimized libraries that allow the MP operations to be offloaded from the CPU to the GPU. Our goal is to achieve an order of magnitude improvement over the CPU in three key metrics: operations per second per socket, operations per watt, and operations per second per dollar. What we find is that the SIMD design and balance of compute, cache, and bandwidth resources on the GPU are quite different from the CPU's, so libraries such as GMP cannot simply be ported to the GPU. New approaches and algorithms are required to achieve high performance and high utilization of GPU resources. Further, we find that low-level ISA differences between GPU generations mean that an approach that works well on one generation might not run well on the next.
Here we report on our progress towards MP arithmetic libraries on the GPU in four areas: (1) large integer addition, subtraction, and multiplication; (2) high performance modular multiplication and modular exponentiation (the key operations for cryptographic algorithms) across generations of GPUs; (3) high precision floating point addition, subtraction, multiplication, division, and square root; (4) parallel short division, which we prove is asymptotically optimal on EREW and CREW PRAMs.
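The limb-based carry propagation at the heart of such a library can be sketched as follows; the limb width and little-endian layout here are illustrative assumptions, not details taken from the paper:

```python
# Sketch of multiple-precision addition over fixed-width limbs, the kind of
# primitive an MP library must provide. 32-bit limbs are an assumed example.
LIMB_BITS = 32
LIMB_MASK = (1 << LIMB_BITS) - 1

def mp_add(a, b):
    """Add two equal-length little-endian limb lists; return (sum_limbs, carry_out)."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry          # at most 2*LIMB_MASK + 1, fits in LIMB_BITS+1 bits
        out.append(s & LIMB_MASK)  # low limb of the partial sum
        carry = s >> LIMB_BITS     # carry into the next limb position
    return out, carry
```

On a GPU the serial carry chain shown here is exactly what must be restructured (e.g., via carry-save or segmented scans) to keep SIMD lanes busy, which is one reason a CPU library cannot simply be ported.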
Utilizing the Double-Precision Floating-Point Computing Power of GPUs for RSA Acceleration
Implementations of asymmetric cryptographic algorithms (e.g., RSA and Elliptic Curve Cryptography) on Graphics Processing Units (GPUs) have been researched for over a decade. The basic idea of most previous contributions is to exploit the highly parallel GPU architecture and port the integer-based algorithms from general-purpose CPUs to GPUs, to offer high performance. However, the great potential cryptographic computing power of GPUs, especially that of the more powerful floating-point instructions, has not been comprehensively investigated. In this paper, we fully exploit the floating-point computing power of GPUs through several designs, including a floating-point-based Montgomery multiplication/exponentiation algorithm and a Chinese Remainder Theorem (CRT) implementation on the GPU. For practical use of the proposed algorithm, a new method is introduced to convert the input/output between octet strings and floating-point numbers, fully utilizing GPUs and further improving overall performance by about 5%. The performance of RSA-2048/3072/4096 decryption on an NVIDIA GeForce GTX TITAN reaches 42,211/12,151/5,790 operations per second, respectively, which is 13 times the performance of the previous fastest floating-point-based implementation (published in Eurocrypt 2009). The RSA-4096 decryption result exceeds the existing fastest integer-based result by 23%.
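The Montgomery multiplication this line of work builds on can be sketched in its plain integer (REDC) form; note the paper's contribution is a floating-point variant, so this is only the textbook integer baseline, with toy parameters chosen for illustration:

```python
# Textbook Montgomery reduction (REDC): computes a*b*R^{-1} mod n, where
# R = 2**r_bits and n_prime = -n^{-1} mod R. Toy parameters, not the paper's.
def montgomery_mul(a, b, n, n_prime, r_bits):
    """Return a*b*R^{-1} mod n; if a, b are in Montgomery form (x*R mod n),
    the result is the Montgomery form of the product."""
    r_mask = (1 << r_bits) - 1
    t = a * b
    m = ((t & r_mask) * n_prime) & r_mask  # m = t * n' mod R
    u = (t + m * n) >> r_bits              # t + m*n is divisible by R
    return u - n if u >= n else u

# Example with n = 97, R = 2**8: n_prime = -97^{-1} mod 256 = 95.
# 19 and 46 are the Montgomery forms of 5 and 7; the result 36 is the
# Montgomery form of 35.
```

The division by R becomes a cheap shift, which is what makes the operation attractive on hardware without fast wide division; the floating-point variant in the paper replaces the integer limb products with double-precision FMAs.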
A Scalable Correlator Architecture Based on Modular FPGA Hardware, Reuseable Gateware, and Data Packetization
A new generation of radio telescopes is achieving unprecedented levels of
sensitivity and resolution, as well as increased agility and field-of-view, by
employing high-performance digital signal processing hardware to phase and
correlate large numbers of antennas. The computational demands of these imaging
systems scale in proportion to BMN^2, where B is the signal bandwidth, M is the
number of independent beams, and N is the number of antennas. The
specifications of many new arrays lead to demands in excess of tens of PetaOps
per second.
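As a rough plausibility check of that scaling law, with purely illustrative parameters (not taken from any specific array described in the paper):

```python
# Back-of-envelope evaluation of the B*M*N^2 compute scaling quoted above.
# All values below are assumed for illustration only.
B = 1e9              # signal bandwidth, Hz (1 GHz)
M = 10               # independent beams
N = 1000             # antennas
ops_per_sample = 8   # assumed cost of one complex multiply-accumulate

demand = B * M * N**2 * ops_per_sample   # operations per second
print(f"{demand / 1e15:.0f} PetaOps/s")  # on the PetaOps-per-second scale
```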
To meet this challenge, we have developed a general purpose correlator
architecture using standard 10-Gbit Ethernet switches to pass data between
flexible hardware modules containing Field Programmable Gate Array (FPGA)
chips. These chips are programmed using open-source signal processing libraries
we have developed to be flexible, scalable, and chip-independent. This work
reduces the time and cost of implementing a wide range of signal processing
systems, with correlators foremost among them, and facilitates upgrading to new
generations of processing technology. We present several correlator
deployments, including a 16-antenna, 200-MHz bandwidth, 4-bit, full Stokes
parameter application deployed on the Precision Array for Probing the Epoch of
Reionization.
Comment: Accepted to Publications of the Astronomical Society of the Pacific, 31 pages.
Elliptic Curve Cryptography on Modern Processor Architectures
Abstract
Elliptic Curve Cryptography (ECC) has been adopted by the US National Security Agency (NSA) in Suite B as part of its "Cryptographic Modernisation Program". Additionally, it has been favoured by a host of mobile devices due to its superior performance characteristics. ECC is also the building block on which the exciting field of pairing/identity-based cryptography is based. This widespread use means that there is potentially a lot to be gained by researching efficient implementations on modern processors such as IBM's Cell Broadband Engine and Philips' next-generation smart card cores. ECC operations can be thought of as a pyramid of building blocks, from instructions on a core, through modular operations on a finite field, point addition & doubling, and elliptic curve scalar multiplication, up to application-level protocols. In this thesis we examine an implementation of these components for ECC, focusing on a range of optimising techniques for the Cell's SPU and the MIPS smart card. We show significant performance improvements that can be achieved through the adoption of ECC.
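The middle layers of that pyramid, point addition/doubling driving scalar multiplication, can be sketched over a toy curve. The curve, field, and base point below are illustrative assumptions; real ECC uses standardized curves and constant-time implementations:

```python
# Toy short-Weierstrass curve y^2 = x^3 + A*x + b over GF(P), with assumed
# parameters P = 97, A = 2, b = 3 and base point (3, 6). None represents the
# point at infinity.
P, A = 97, 2

def point_add(p1, p2):
    """Affine point addition/doubling on the toy curve."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # P + (-P) = infinity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P)  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P)         # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def scalar_mul(k, p):
    """Left-to-right double-and-add scalar multiplication k*p."""
    result = None
    for bit in bin(k)[2:]:
        result = point_add(result, result)  # double
        if bit == "1":
            result = point_add(result, p)   # add
    return result
```

The double-and-add loop is the level at which side-channel and SIMD optimisations typically apply; the field inversions via `pow(x, -1, P)` are the part an optimised implementation would replace with projective coordinates.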
TREBUCHET: Fully Homomorphic Encryption Accelerator for Deep Computation
Secure computation is of critical importance not only to the DoD, but also to
financial institutions, healthcare, and anywhere personally identifiable
information (PII) is accessed. Traditional security techniques require data to
be decrypted before performing any computation. When processed on untrusted
systems, the decrypted data is vulnerable to attacks that extract the sensitive
information. To address these vulnerabilities, Fully Homomorphic Encryption
(FHE) keeps the data encrypted during computation and secures the results, even
in these untrusted environments. However, FHE requires significantly more
computation than the equivalent unencrypted operations. To be useful, FHE
must close this computation gap substantially (to within 10x) to make encrypted
processing practical. To accomplish this ambitious goal, the TREBUCHET project
is leading research and development in FHE processing hardware to accelerate
deep computations on encrypted data, as part of the DARPA MTO Data Privacy for
Virtual Environments (DPRIVE) program. We accelerate the major secure
standardized FHE schemes (BGV, BFV, CKKS, FHEW, etc.) at >=128-bit security
while integrating with the open-source PALISADE and OpenFHE libraries currently
used in the DoD and in industry. We utilize a novel tile-based chip design with
highly parallel ALUs optimized for vectorized 128b modulo arithmetic. The
TREBUCHET coprocessor design provides a highly modular, flexible, and
extensible FHE accelerator for easy reconfiguration, deployment, integration
and application on other hardware form factors, such as System-on-Chip or
alternate chip areas.
Comment: 6 pages, 5 figures, 2 tables.
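The wide modular arithmetic such accelerators vectorize is conventionally organized as residue-number-system (RNS) channels, one word-sized modulus per lane, so that a big product becomes independent small products. A minimal sketch with toy moduli (these are not TREBUCHET's actual parameters):

```python
# Residue number system: represent x by its residues modulo pairwise-coprime
# moduli, multiply channel-wise, and reconstruct with the CRT. Toy Mersenne
# prime moduli chosen for illustration.
MODULI = (2**13 - 1, 2**17 - 1, 2**19 - 1)

def to_rns(x):
    """Decompose an integer into one residue per channel."""
    return tuple(x % q for q in MODULI)

def rns_mul(a, b):
    """Channel-wise multiplication; each lane is an independent small modmul."""
    return tuple(x * y % q for x, y, q in zip(a, b, MODULI))

def from_rns(r):
    """CRT reconstruction modulo the product of the moduli."""
    M = 1
    for q in MODULI:
        M *= q
    x = 0
    for ri, q in zip(r, MODULI):
        Mi = M // q
        x += ri * Mi * pow(Mi, -1, q)  # Garner-free CRT term
    return x % M
```

Because each channel fits a machine word, the channel multiplications map directly onto parallel ALUs, which is the property a tile of vectorized modular units exploits.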
RSA in hardware
This report presents the RSA encryption and decryption schemes and discusses several methods for expediting the required computations, specifically the modular exponentiation operation required by RSA. A hardware implementation of the CIOS (Coarsely Integrated Operand Scanning) algorithm for modular multiplication is attempted on a Xilinx Spartan-3 FPGA in the TLL-5000 development platform used at the University of Texas at Austin. The development of the hardware is discussed in detail, and some Verilog source code is provided for an implementation of modular multiplication. Some source code is also provided for an RSA executable to run on the TLL-6219 ARM-based development platform, to be used to generate test vectors.
Electrical and Computer Engineering
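The CIOS algorithm named above interleaves the multiplication and Montgomery-reduction scans one word at a time. A word-level sketch follows; the 16-bit word size is an assumption for readability, not the report's hardware word size:

```python
# CIOS (Coarsely Integrated Operand Scanning) Montgomery multiplication.
# Operands are s-word little-endian lists; n0_inv = -n[0]^{-1} mod 2**W.
W = 16
MASK = (1 << W) - 1

def cios(a, b, n, n0_inv, s):
    """Return a*b*R^{-1} mod n, with R = 2**(W*s)."""
    t = [0] * (s + 2)
    for i in range(s):
        # Multiplication scan: t += a * b[i]
        carry = 0
        for j in range(s):
            tmp = t[j] + a[j] * b[i] + carry
            t[j], carry = tmp & MASK, tmp >> W
        tmp = t[s] + carry
        t[s], t[s + 1] = tmp & MASK, tmp >> W
        # Reduction scan: add m*n so the low word cancels, then shift one word
        m = (t[0] * n0_inv) & MASK
        carry = (t[0] + m * n[0]) >> W     # low word becomes zero
        for j in range(1, s):
            tmp = t[j] + m * n[j] + carry
            t[j - 1], carry = tmp & MASK, tmp >> W
        tmp = t[s] + carry
        t[s - 1], carry = tmp & MASK, tmp >> W
        t[s] = t[s + 1] + carry
    r = sum(t[j] << (W * j) for j in range(s + 1))
    n_int = sum(n[j] << (W * j) for j in range(s))
    return r - n_int if r >= n_int else r  # single conditional subtraction
```

The appeal for an FPGA design is that each iteration touches only word-sized multiplies and adds with a short carry chain, mapping naturally onto DSP slices.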
Training deep neural networks with low precision multiplications
Multipliers are the most space- and power-hungry arithmetic operators in the
digital implementation of deep neural networks. We train a set of
state-of-the-art neural networks (Maxout networks) on three benchmark datasets:
MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats:
floating point, fixed point and dynamic fixed point. For each of those datasets
and for each of those formats, we assess the impact of the precision of the
multiplications on the final error after training. We find that very low
precision is sufficient not just for running trained networks but also for
training them. For example, it is possible to train Maxout networks with
10-bit multiplications.
Comment: 10 pages, 5 figures. Accepted as a workshop contribution at ICLR 2015.
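The fixed-point arithmetic the paper evaluates can be sketched as quantize-then-multiply; the total width and fractional-bit split below are illustrative, not the paper's exact format choices:

```python
# Signed fixed-point quantization: round to a grid of 2**-frac_bits steps and
# clip to the representable range of `total_bits` signed bits. Assumed example
# format: 10 total bits, 7 fractional bits.
def quantize(x, frac_bits, total_bits=10):
    """Round x to the nearest representable fixed-point value, with saturation."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

def fixed_mul(a, b, frac_bits=7):
    """Multiply two values as a low-precision unit would: quantize the inputs,
    multiply, and quantize the product back to the same format."""
    return quantize(quantize(a, frac_bits) * quantize(b, frac_bits), frac_bits)
```

Injecting a function like `fixed_mul` into a training loop in place of exact multiplication is the basic methodology for measuring how precision affects the final error; dynamic fixed point additionally lets `frac_bits` vary per tensor.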