Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption
Designing privacy-preserving deep learning models is a major challenge within
the deep learning community. Homomorphic Encryption (HE) has emerged as one of
the most promising approaches in this realm, enabling the decoupling of
knowledge between the model owner and the data owner. Despite extensive
research and application of this technology, primarily in convolutional neural
networks, incorporating HE into transformer models has been challenging because
of the difficulties in converting these models into a polynomial form. We break
new ground by introducing the first polynomial transformer, providing the first
demonstration of secure inference over HE with transformers. This includes a
transformer architecture tailored for HE, alongside a novel method for
converting operators to their polynomial equivalent. This innovation enables us
to perform secure inference on LMs with WikiText-103. It also allows us to
perform image classification with CIFAR-100 and Tiny-ImageNet. Our models yield
results comparable to traditional methods, bridging the performance gap with
transformers of similar scale and underscoring the viability of HE for
state-of-the-art applications. Finally, we assess the stability of our models
and conduct a series of ablations to quantify the contribution of each model
component.
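The core obstacle the abstract describes is that HE schemes natively evaluate only additions and multiplications, so every non-polynomial operator (e.g., GELU, softmax) must be replaced by a polynomial surrogate. The sketch below illustrates the general idea with a least-squares polynomial fit to GELU on a bounded interval; the degree, interval, and fitting method are illustrative assumptions, not the paper's actual conversion method.

```python
import math
import numpy as np

def gelu(x):
    # Exact GELU via the Gaussian error function.
    return 0.5 * x * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

# Fit a degree-8 polynomial to GELU on [-4, 4]. HE schemes evaluate
# polynomials natively, so this surrogate can run under encryption,
# provided inputs stay inside the fitted interval.
xs = np.linspace(-4.0, 4.0, 2001)
coeffs = np.polyfit(xs, gelu(xs), deg=8)
poly_gelu = np.poly1d(coeffs)

# Worst-case approximation error over the fitted interval.
max_err = np.max(np.abs(poly_gelu(xs) - gelu(xs)))
```

In practice the polynomial degree trades off multiplicative depth (and hence ciphertext noise growth) against approximation accuracy, which is why the interval must be chosen to cover the activations actually seen at inference time.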
High-Density Solid-State Memory Devices and Technologies
This Special Issue aims to examine high-density solid-state memory devices and technologies from various standpoints in an attempt to foster their continued success in the future. Considering that a broadening range of applications will likely give different types of solid-state memories their chance in the spotlight, the Special Issue is not focused on a specific storage solution but rather embraces all the most relevant solid-state memory devices and technologies currently on stage. The subjects dealt with in this Special Issue are likewise widespread, ranging from process and design issues/innovations to the experimental and theoretical analysis of device operation, and from the performance and reliability of memory devices and arrays to the exploitation of solid-state memories in pursuit of new computing paradigms.
Higher Order Polynomial Transformer for Fine-Grained Freezing of Gait Detection
Freezing of Gait (FoG) is a common symptom of Parkinson’s disease (PD), manifesting as a brief, episodic absence of, or marked reduction in, walking despite a patient’s intention to move. Clinical assessment of FoG events from manual observation by experts is both time-consuming and highly subjective. Therefore, machine-learning-based FoG identification methods would be desirable. In this article, we address this task as a fine-grained human action recognition problem based on vision inputs. A novel deep learning architecture, namely, the higher order polynomial transformer (HP-Transformer), is proposed to incorporate pose and appearance feature sequences to formulate fine-grained FoG patterns. In particular, a higher order self-attention mechanism is proposed based on higher order polynomials. To this end, linear, bilinear, and trilinear transformers are formulated in pursuit of discriminative fine-grained representations. These representations are treated as multiple streams and further fused by a cross-order fusion strategy for FoG detection. Comprehensive experiments on a large in-house dataset collected during clinical assessments demonstrate the effectiveness of the proposed method, and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.92 is achieved for detecting FoG.
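The abstract describes attention scores built from higher order polynomial terms rather than the usual single inner product. A minimal numpy sketch of that general idea follows; the specific degree-2 term and the equal mixing weights are my own illustrative choices, not the HP-Transformer's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                       # sequence length, feature dimension
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# First-order (standard) attention scores: scaled query-key inner products.
s1 = Q @ K.T / np.sqrt(d)

# A second-order term: squared interactions, one simple way to inject a
# degree-2 polynomial of the features into the score.
s2 = (Q @ K.T) ** 2 / d

# Combine the orders with illustrative mixing weights, then attend as usual.
scores = 0.5 * s1 + 0.5 * s2
out = softmax(scores) @ V
```

The appeal of such higher order terms is that they let the score capture multiplicative feature interactions that a single bilinear form cannot, at the cost of extra computation per pair of positions.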
Attention Network for Video Based Freezing of Gait Detection
Freezing of gait (FoG) is a typical symptom of Parkinson's disease (PD), manifesting as a brief, episodic absence of, or marked reduction in, walking despite the patient's intention to walk.
It is important to identify FoG events in a timely manner for clinical assessment. However, identifying FoG events requires well-trained experts and is subjective and time-consuming.
Therefore, automatic FoG identification methods are in high demand. In this study, we address this task as a human action detection problem based on vision inputs. Two novel attention-based deep learning architectures, namely the convolutional 3D attention network (C3DAN) and the higher order polynomial transformer (HP-Transformer), are proposed to investigate fine-grained FoG patterns.
The C3DAN addresses the FoG detection task by exploring appearance features in detail to learn an informative region for more effective detection. The network consists of two main parts: a Spatial Attention Network (SAN) and a 3-dimensional convolutional network (C3D). SAN aims to generate attention regions from coarse to fine, while C3D extracts discriminative features. Our proposed approach is able to localize attention regions without manual annotation and to extract discriminative features in an end-to-end way.
The HP-Transformer incorporates pose and appearance feature sequences to formulate fine-grained FoG patterns. In particular, higher order self-attention is proposed based on higher order polynomials. To this end, linear, bilinear, and trilinear transformers are formulated in pursuit of discriminative fine-grained representations. These representations are treated as multiple streams and further fused by a self-attention-based fusion strategy for FoG detection.
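The fusion step described above treats the per-order representations as multiple streams and merges them with self-attention. A tiny numpy sketch of that pattern follows, under the assumption that each stream has been pooled to a single vector; the projection matrices, the mean-pooling of the attended streams, and all dimensions are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
# Three streams (e.g., linear / bilinear / trilinear outputs), one pooled
# vector each; random values stand in for real features here.
streams = rng.standard_normal((3, d))

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

# Tiny self-attention over the stream axis: each stream attends to all
# streams, so the fused representation can weight orders by relevance.
Wq = rng.standard_normal((d, d)) / np.sqrt(d)
Wk = rng.standard_normal((d, d)) / np.sqrt(d)
Q, K = streams @ Wq, streams @ Wk
A = np.apply_along_axis(softmax, 1, Q @ K.T / np.sqrt(d))  # (3, 3) weights
fused = (A @ streams).mean(axis=0)                          # fused vector
```

Compared with simple concatenation or averaging, attention-based fusion lets the model learn which order of representation matters most for a given input.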
Comprehensive experiments on a large in-house dataset collected during clinical assessments demonstrate the effectiveness of the proposed methods. Both methods achieved promising results; in particular, the HP-Transformer achieved an AUC of 0.92 on the FoG detection task.