A new method for format preserving encryption in high-data rate communications
In some encryption systems it is necessary to preserve the format and length of the encrypted data. This kind of encryption is called FPE (Format Preserving Encryption). Currently, only two AES (Advanced Encryption Standard) modes of operation recommended by the NIST (National Institute of Standards and Technology), FF1 and FF3, are able to implement FPE algorithms. These modes work in an electronic-codebook fashion and can be configured to encrypt databases with an arbitrary format and length. However, there are no stream cipher proposals able to implement FPE for high-data-rate information flows. The main novelty of this work is a new block cipher mode of operation that implements an FPE algorithm in a stream cipher fashion. It is called CTR-MOD and is based on a standard block cipher working in CTR (Counter) mode combined with a modulo operation. The confidentiality of this mode is analyzed in terms of the IND-CPA (Indistinguishability under Chosen Plaintext Attack) advantage of any adversary attacking it. Moreover, the encryption scheme has been implemented on an FPGA (Field Programmable Gate Array) and integrated into a Gigabit Ethernet interface to test an encrypted optical link carrying a real high-data-rate traffic flow.
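The core idea of combining a CTR-mode keystream with a modulo reduction can be sketched as follows. This is a minimal illustration, not the paper's exact construction: HMAC-SHA256 over a counter stands in for AES in CTR mode (to keep the sketch standard-library only), and the rejection-sampling step for avoiding modulo bias is an assumption about one reasonable way to map keystream bytes onto a decimal alphabet.

```python
import hmac
import hashlib

RADIX = 10  # decimal digits; the ciphertext stays in the same alphabet


def keystream_digits(key: bytes, nonce: bytes, n: int) -> list[int]:
    """Derive n digits in [0, RADIX) from a counter-mode keystream.

    HMAC-SHA256 over (nonce || counter) plays the role of the block
    cipher in CTR mode used by the paper's CTR-MOD proposal.
    """
    out: list[int] = []
    counter = 0
    limit = 256 - (256 % RADIX)  # rejection bound to avoid modulo bias
    while len(out) < n:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        for b in block:
            if b < limit:
                out.append(b % RADIX)
            if len(out) == n:
                break
        counter += 1
    return out


def encrypt_digits(key: bytes, nonce: bytes, digits: list[int]) -> list[int]:
    """Stream-cipher-style FPE: add keystream digits modulo RADIX."""
    ks = keystream_digits(key, nonce, len(digits))
    return [(d + k) % RADIX for d, k in zip(digits, ks)]


def decrypt_digits(key: bytes, nonce: bytes, digits: list[int]) -> list[int]:
    """Inverse: subtract the same keystream modulo RADIX."""
    ks = keystream_digits(key, nonce, len(digits))
    return [(d - k) % RADIX for d, k in zip(digits, ks)]
```

Note that, as in any CTR-style mode, the nonce must never repeat under the same key, and the ciphertext has exactly the same length and alphabet as the plaintext.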
A Hybrid Approach to Privacy-Preserving Federated Learning
Federated learning facilitates the collaborative training of models without the sharing of raw data. However, recent attacks demonstrate that simply maintaining data locality during training does not provide sufficient privacy guarantees. Rather, we need a federated learning system capable of preventing inference over both the messages exchanged during training and the final trained model, while ensuring the resulting model also has acceptable predictive accuracy. Existing federated learning approaches either use secure multiparty computation (SMC), which is vulnerable to inference, or differential privacy, which can lead to low accuracy given a large number of parties with relatively small amounts of data each. In this paper, we present an alternative approach that utilizes both differential privacy and SMC to balance these trade-offs. Combining differential privacy with secure multiparty computation enables us to reduce the growth of noise injection as the number of parties increases, without sacrificing privacy, while maintaining a pre-defined rate of trust. Our system is therefore a scalable approach that protects against inference threats and produces models with high accuracy. Additionally, our system can be used to train a variety of machine learning models, which we validate with experimental results on three different machine learning algorithms. Our experiments demonstrate that our approach outperforms state-of-the-art solutions.
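The noise-reduction idea behind such hybrid schemes can be sketched briefly. This is an illustration of the general technique, not this paper's protocol: pairwise cancelling masks stand in for the SMC layer, and each party adds Gaussian noise scaled by 1/sqrt(n) so the aggregate noise matches a single central-DP noise draw; all function names here are hypothetical.

```python
import math
import random


def secure_sum(updates: list[float]) -> float:
    """Toy secure aggregation via pairwise cancelling masks.

    Party i adds mask r_ij and party j adds -r_ij, so individual
    masked updates look random but the masks vanish in the sum.
    """
    n = len(updates)
    masks = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            r = random.uniform(-100.0, 100.0)
            masks[i] += r
            masks[j] -= r
    masked = [u + m for u, m in zip(updates, masks)]
    return sum(masked)  # masks cancel; only the aggregate is revealed


def noisy_update(true_value: float, sigma: float, n_parties: int) -> float:
    """Each party adds Gaussian noise with scale sigma / sqrt(n).

    The n independent noise terms sum to variance sigma**2, i.e. the
    same total noise a trusted central aggregator would add once --
    instead of the n * sigma**2 that per-party local DP would cost.
    """
    return true_value + random.gauss(0.0, sigma / math.sqrt(n_parties))
```

Because secure aggregation hides each individual masked update, no single party's contribution is exposed even though each one carries only a fraction of the total noise.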