Inside the class of REGEX Languages
We study different possibilities of combining the concept of homomorphic replacement with regular expressions in order to investigate the class of languages given by extended regular expressions with backreferences (REGEX). We show in which regards existing and natural ways of doing this fail to reach the expressive power of REGEX. Furthermore, the complexity of the membership problem for REGEX with a bounded number of backreferences is considered.
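Backreferences are what push REGEX beyond the regular languages. A minimal sketch using Python's `re` backreference syntax (illustrative only; not the formalism studied in the paper) matches the non-regular copy language {ww : w in {a,b}*}:

```python
import re

# (?P<w>...) captures a prefix w; (?P=w) is a backreference requiring
# the remainder of the string to repeat w exactly.
square = re.compile(r"^(?P<w>[ab]*)(?P=w)$")

assert square.match("abab")        # w = "ab"
assert square.match("aabbaabb")    # w = "aabb"
assert not square.match("aba")     # odd length, cannot be a square
assert not square.match("abba")    # a palindrome, but not a square
```

No classical regular expression can recognise this language, which is one concrete sense in which backreferences add expressive power.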
Implementing Homomorphic Encryption Based Secure Feedback Control for Physical Systems
This paper is about an encryption based approach to the secure implementation
of feedback controllers for physical systems. Specifically, Paillier's
homomorphic encryption is used to digitally implement a class of linear dynamic
controllers, which includes the commonplace static gain and PID type feedback
control laws as special cases. The developed implementation is amenable to
Field Programmable Gate Array (FPGA) realization. Experimental results,
including timing analysis and resource usage characteristics for different
encryption key lengths, are presented for the realization of an inverted
pendulum controller; as this is an unstable plant, the control loop must necessarily be fast.
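Paillier's scheme is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, which is what lets a controller operate on encrypted signals. A toy sketch of the arithmetic with tiny hard-coded primes (purely illustrative and completely insecure; it shows the scheme's algebra, not the paper's FPGA implementation):

```python
import math
import random

# Toy Paillier keypair with tiny primes -- NOT secure, illustration only.
p, q = 11, 13
n = p * q                      # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # private: Carmichael function of n
mu = pow(lam, -1, n)           # private: modular inverse of lam mod n

def encrypt(m):
    # g = n + 1; c = g^m * r^n mod n^2 with random r coprime to n
    r = random.choice([x for x in range(2, n) if math.gcd(x, n) == 1])
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    # L(x) = (x - 1) // n; m = L(c^lam mod n^2) * mu mod n
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(7), encrypt(5)
assert decrypt(c1) == 7
# Additive homomorphism: E(7) * E(5) mod n^2 decrypts to 7 + 5.
assert decrypt(c1 * c2 % n2) == 12
```

In the control setting, this property allows a static gain or PID law (sums of scaled signals) to be evaluated directly on ciphertexts, provided plaintexts are kept within the modulus by suitable quantisation.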
Formalising Confluence in PVS
Confluence is a critical property of computational systems, related to determinism and non-ambiguity and thus to other relevant computational attributes of functional specifications and rewriting systems, such as termination and completion. Several criteria that guarantee confluence have been explored, and their formalisations provide further interesting information. This work discusses topics and presents personal positions and views related to the formalisation of confluence properties in the Prototype Verification System (PVS) developed at our research group.
Comment: In Proceedings DCM 2015, arXiv:1603.0053
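For a terminating rewriting system, confluence is equivalent to every term having a unique normal form. A small sketch of that characterisation on a hypothetical toy string-rewriting system (the rule set is illustrative, not taken from the paper, and termination is assumed since each rule shortens the string):

```python
# Toy string-rewriting system, assumed terminating (rules shorten strings).
RULES = [("aa", "a")]

def one_step(s):
    """All strings reachable from s by one rule application at any position."""
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def normal_forms(s):
    """Exhaustively rewrite s and collect every irreducible result."""
    seen, frontier, nfs = set(), {s}, set()
    while frontier:
        nxt = set()
        for t in frontier:
            succs = one_step(t)
            if not succs:
                nfs.add(t)      # irreducible: a normal form
            nxt |= succs - seen
        seen |= frontier
        frontier = nxt
    return nfs

# Divergent one-step reductions of "aaaa" all rejoin at the same normal form,
# as confluence (via Newman's lemma for terminating systems) demands.
assert normal_forms("aaaa") == {"a"}
```

Mechanised proofs such as those in PVS establish this kind of joinability for all terms at once, rather than testing it instance by instance as the sketch does.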
LLMs Can Understand Encrypted Prompt: Towards Privacy-Computing Friendly Transformers
Prior works have attempted to build private inference frameworks for
transformer-based large language models (LLMs) in a server-client setting,
where the server holds the model parameters and the client inputs the private
data for inference. However, these frameworks impose significant overhead when
the private inputs are forward propagated through the original LLMs. In this
paper, we show that substituting the computation- and communication-heavy
operators in the transformer architecture with privacy-computing friendly
approximations can greatly reduce the private inference costs with minor impact
on model performance. Compared to the state-of-the-art Iron (NeurIPS 2022), our
privacy-computing friendly model inference pipeline achieves a significant
acceleration in computation and an 80% reduction in communication overhead,
while retaining nearly identical accuracy.
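The idea of swapping a heavy nonlinearity for a cheaper approximation can be illustrated with GELU and the well-known sigmoid-based approximation x * sigmoid(1.702x) (this particular substitution is only an illustration of the approximation-with-minor-accuracy-loss principle, not the operator set used in the paper's pipeline):

```python
import math

def gelu(x):
    # Exact GELU via the Gaussian CDF: 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1 + math.erf(x / math.sqrt(2)))

def gelu_approx(x):
    # Cheaper stand-in: x * sigmoid(1.702 * x), a standard approximation.
    return x / (1 + math.exp(-1.702 * x))

# The approximation tracks the exact function closely across typical inputs.
max_err = max(abs(gelu(x) - gelu_approx(x))
              for x in [-3.0, -1.0, 0.0, 1.0, 3.0])
assert max_err < 0.03
```

In a private-inference setting, the payoff is that the substitute can be evaluated with far fewer expensive cryptographic operations while the model's end-to-end accuracy barely moves, which is the trade-off the abstract describes.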