The Multi-Lane Capsule Network (MLCN)
We introduce Multi-Lane Capsule Networks (MLCN), a separable and
resource-efficient organization of Capsule Networks (CapsNet) that allows
parallel processing while achieving high accuracy at reduced cost. An MLCN is
composed of a number of distinct parallel lanes, each contributing to a
dimension of the result and trained using the routing-by-agreement organization of
CapsNet. Our results indicate similar accuracy with a much reduced
number of parameters for the Fashion-MNIST and CIFAR-10 datasets. They also
indicate that MLCN outperforms the original CapsNet when using a proposed
novel configuration for the lanes. MLCN also has faster training and inference
times, being more than two-fold faster than the original CapsNet on the same
accelerator.
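The lane organization can be sketched in a few lines: each lane independently maps the input to its own slice of the final capsule dimensions, and the lane outputs are concatenated. This is only a conceptual toy under assumed shapes — the lane count, dimensions, and the tiny linear "lane" below are illustrative placeholders, not the convolutional lanes or routing of the actual MLCN.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_LANES = 4       # number of parallel lanes (assumption for this toy)
DIMS_PER_LANE = 2   # capsule dimensions each lane contributes (assumption)
INPUT_DIM = 8       # toy input size (assumption)

# One independent weight matrix per lane: because the lanes share no
# parameters, they can be trained and evaluated in parallel.
lane_weights = [rng.standard_normal((INPUT_DIM, DIMS_PER_LANE))
                for _ in range(NUM_LANES)]

def mlcn_forward(x):
    """Concatenate each lane's contribution into one capsule vector."""
    return np.concatenate([x @ w for w in lane_weights])

x = rng.standard_normal(INPUT_DIM)
capsule = mlcn_forward(x)
print(capsule.shape)  # (NUM_LANES * DIMS_PER_LANE,) = (8,)
```

The separability is the point: each lane touches only its own weights, so the per-lane work distributes across accelerators with no cross-lane communication until the final concatenation.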
MOSFHET: Optimized Software for FHE over the Torus
Homomorphic encryption is one of the most secure solutions for processing sensitive information in untrusted environments, and there have been many recent advances towards its efficient implementation for the evaluation of linear functions and approximated arithmetic. However, practical performance when evaluating arbitrary (nonlinear) functions is still a major challenge for HE schemes. The TFHE scheme [Chillotti et al., 2016] is the current state of the art for the evaluation of arbitrary functions, and in this work we focus on improving its performance. We divide this paper into two parts. First, we review and implement the main techniques proposed so far to improve performance or error behavior in TFHE. For many, this is the first practical implementation. Then, we introduce novel improvements to several of them and new approaches to implement some commonly used procedures. We also show which proposals can be suitably combined to achieve better results. We provide a single library containing all the reviewed techniques as well as our original contributions. Our implementation is up to 1.2 times faster than previous ones with a similar optimization level, and our novel techniques provide speedups of up to 2.83 times on algorithms such as the Full-Domain Functional Bootstrap (FDFB).
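The mechanism behind TFHE's evaluation of arbitrary functions, including the functional bootstrap, is a lookup table encoded as a "test polynomial" over the ring modulo X^N + 1: the message selects a negacyclic rotation of that polynomial, and the constant term of the rotated polynomial is the function value. The toy below illustrates only that indexing, entirely in the clear on small integers — no encryption, no noise, and a tiny N (real parameter sets use e.g. N = 1024) — so it is a conceptual sketch, not MOSFHET's implementation.

```python
N = 8  # toy polynomial degree (assumption; real TFHE uses much larger N)

def negacyclic_rotate(coeffs, k):
    """Compute X^k * p(X) mod X^N + 1: coefficients that wrap past
    either end flip sign, because X^N = -1 in this ring."""
    out = [0] * N
    for i, c in enumerate(coeffs):
        j = i + k
        # Python's floor division makes this sign rule work for k < 0 too.
        if (j // N) % 2 == 0:
            out[j % N] += c
        else:
            out[j % N] -= c
    return out

# Encode a lookup table for f(m) = m * m (an arbitrary example function)
# so that rotating by -m brings f(m) into the constant term.
f = lambda m: m * m
test_vector = [f(m) for m in range(N)]

def lut_via_rotation(m):
    """Evaluate f(m) by polynomial rotation, as in a bootstrapped LUT."""
    return negacyclic_rotate(test_vector, -m)[0]

print([lut_via_rotation(m) for m in range(4)])  # [0, 1, 4, 9]
```

In the actual scheme the rotation amount is an encrypted phase and the rotation happens homomorphically (blind rotation), which is exactly the expensive step whose variants, such as FDFB, the paper optimizes.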