319 research outputs found
Zero-knowledge Proof Meets Machine Learning in Verifiability: A Survey
With the rapid advancement of artificial intelligence technology, the usage
of machine learning models is gradually becoming part of our daily lives.
High-quality models rely not only on efficient optimization algorithms but also
on the training and learning processes built upon vast amounts of data and
computational power. However, in practice, due to various challenges such as
limited computational resources and data privacy concerns, users in need of
models often cannot train machine learning models locally. This has led them to
explore alternative approaches such as outsourced learning and federated
learning. While these methods address the feasibility of model training
effectively, they introduce concerns about the trustworthiness of the training
process since computations are not performed locally. Similarly, there are
trustworthiness issues associated with outsourced model inference. These two
problems can be summarized as the trustworthiness problem of model
computations: How can one verify that the results computed by other
participants are derived according to the specified algorithm, model, and input
data? To address this challenge, verifiable machine learning (VML) has emerged.
This paper presents a comprehensive survey of zero-knowledge proof-based
verifiable machine learning (ZKP-VML) technology. We first analyze the
potential verifiability issues that may exist in different machine learning
scenarios. Subsequently, we provide a formal definition of ZKP-VML. We then
conduct a detailed analysis and classification of existing works based on their
technical approaches. Finally, we discuss the key challenges and future
directions in the field of ZKP-based VML.
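The core question above, verifying that a result was computed according to the specified algorithm without redoing the work, can be illustrated with Freivalds' algorithm, a classic probabilistic check for matrix products. This is a generic textbook example of verifiable computation, not a scheme from the survey; it shows the verifier/prover cost asymmetry that ZKP-VML exploits at scale.

```python
import random

# Freivalds' algorithm: check a claimed matrix product C = A*B in
# O(n^2) time per round, instead of redoing the O(n^3) multiplication.

def freivalds_verify(A, B, C, rounds=20):
    """Accept a wrong C with probability at most 2**-rounds."""
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A*(B*r) against C*r: three matrix-vector products.
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # the claimed product is certainly wrong
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[19, 22], [43, 51]]   # one corrupted entry
```

A dishonest prover who returns `C_bad` is caught except with probability about 2**-20, while the verifier never performs a full matrix multiplication.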
Privacy-Preserving Outsourcing of Large-Scale Nonlinear Programming to the Cloud
The increasing massive data generated by various sources has given birth to
big data analytics. Solving large-scale nonlinear programming problems (NLPs)
is one important big data analytics task that has applications in many domains
such as transport and logistics. However, NLPs are usually too computationally
expensive for resource-constrained users. Fortunately, cloud computing provides
an alternative and economical service for resource-constrained users to
outsource their computation tasks to the cloud. However, one major concern with
outsourcing NLPs is the leakage of user's private information contained in NLP
formulations and results. Although much work has been done on
privacy-preserving outsourcing of computation tasks, little attention has been
paid to NLPs. In this paper, we for the first time investigate secure
outsourcing of general large-scale NLPs with nonlinear constraints. A secure
and efficient transformation scheme at the user side is proposed to protect
user's private information; at the cloud side, generalized reduced gradient
method is applied to effectively solve the transformed large-scale NLPs. The
proposed protocol is implemented on a cloud computing testbed. Experimental
evaluations demonstrate that significant time can be saved for users and the
proposed mechanism has the potential for practical use.
Comment: Ang Li and Wei Du contributed equally to this work. This work was
done when Wei Du was at the University of Arkansas. 2018 EAI International
Conference on Security and Privacy in Communication Networks (SecureComm)
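The general idea of a user-side transformation can be sketched with a toy one-dimensional problem. Everything below (the affine mask, the quadratic objective, and the function names) is an illustrative assumption, not the paper's generalized-reduced-gradient scheme for constrained NLPs: the user hides a private objective behind a random change of variables, the cloud minimizes the masked problem, and the user undoes the mask.

```python
import random

# Hypothetical sketch: the user's private problem is
#   minimize f(x) = (x - a)^2 + b,  with a, b secret.
# The user sends the cloud the masked objective
#   g(y) = s * (y - (a + r))^2 + s * b,  r, s random,
# so the cloud only ever sees the shifted optimum a + r and scale s.

def user_transform(a, b):
    r = random.uniform(-100, 100)
    s = random.uniform(1, 10)
    masked = lambda y: s * (y - (a + r)) ** 2 + s * b
    return masked, r

def cloud_solve(g, y0=0.0, lr=0.01, steps=20000):
    """Cloud side: plain gradient descent with numerical gradients."""
    y, h = y0, 1e-6
    for _ in range(steps):
        grad = (g(y + h) - g(y - h)) / (2 * h)
        y -= lr * grad
    return y

a, b = 7.0, 3.0            # user's private data
g, r = user_transform(a, b)
y_star = cloud_solve(g)    # computed without seeing a or b
x_star = y_star - r        # user recovers the true minimizer
```

The user's recovery step is a cheap inverse of the mask, while the expensive iterative solve happens entirely at the cloud on the transformed problem.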
Verifiable Encodings for Secure Homomorphic Analytics
Homomorphic encryption, which enables the execution of arithmetic operations
directly on ciphertexts, is a promising solution for protecting privacy of
cloud-delegated computations on sensitive data. However, the correctness of the
computation result is not ensured. We propose two error detection encodings and
build authenticators that enable practical client-verification of cloud-based
homomorphic computations under different trade-offs and without compromising on
the features of the encryption algorithm. Our authenticators operate on top of
trending ring learning with errors based fully homomorphic encryption schemes
over the integers. We implement our solution in VERITAS, a ready-to-use system
for verification of outsourced computations executed over encrypted data. We
show that contrary to prior work VERITAS supports verification of any
homomorphic operation and we demonstrate its practicality for various
applications, such as ride-hailing, genomic-data analysis, encrypted search,
and machine-learning training and inference.
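The flavor of such client-side verification can be sketched with a toy additively homomorphic authenticator: tags follow the same additions the server performs on the data, so the client can check an outsourced sum against an aggregated tag. The modulus, key, and function names below are an illustrative simplification, not the actual VERITAS encodings.

```python
import random

# Toy additively homomorphic MAC: tag(m) = alpha * m mod P with a
# secret key alpha. Since tags are linear in the messages, summing
# tags yields a valid tag for the summed messages.

P = 2**61 - 1                      # public prime modulus
alpha = random.randrange(1, P)     # client's secret MAC key

def tag(m):
    return (alpha * m) % P

def verify(m, t):
    return t == (alpha * m) % P

# Client uploads messages and their tags.
msgs = [12, 34, 56]
tags = [tag(m) for m in msgs]

# Server adds both messages and tags (it never learns alpha).
result = sum(msgs) % P
result_tag = sum(tags) % P
```

An honest sum passes `verify(result, result_tag)`, while any tampered result fails, because forging a matching tag would require knowing `alpha`.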
Cryptographic Tools for Privacy Preservation
Data permeates every aspect of our daily life and is the backbone of our digitalized society. Smartphones, smartwatches and many more smart devices measure, collect, modify and share data in what is known as the Internet of Things. Often, these devices do not have enough computational power or storage space and thus outsource parts of their data management to the Cloud. Outsourcing computation or storage to a third party poses natural questions regarding the security and privacy of the shared sensitive data. Intuitively, Cryptography is a toolset of primitives and protocols whose security properties are formally proven, while Privacy typically captures additional social and legislative requirements that relate more to the concept of “trust” between people, “how” data is used and “who” has access to data. This thesis separates the two concepts by introducing an abstract model that classifies data leaks into different types of breaches. Each class represents a specific requirement or goal related either to cryptography, e.g. confidentiality or integrity, or to privacy, e.g. liability, sensitive-data management and more. The thesis contains cryptographic tools designed to provide privacy guarantees for different application scenarios.
In more detail, the thesis:
(a) defines new encryption schemes that provide formal privacy guarantees, whether theoretical privacy definitions such as Differential Privacy (DP) or concrete privacy-oriented applications covered by existing regulations such as the European General Data Protection Regulation (GDPR);
(b) proposes new tools and procedures for providing verifiable-computation guarantees in concrete scenarios for post-quantum cryptography or generalisations of signature schemes;
(c) proposes a methodology for using Machine Learning (ML) to analyse the effective security and privacy of a crypto-tool and, dually, proposes a secure primitive that allows computing a specific ML algorithm in a privacy-preserving way;
(d) provides an alternative protocol for secure communication between two parties, based on the idea of communicating in a periodically timed fashion.
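One of the formal guarantees mentioned in item (a) can be shown in a few lines: the Laplace mechanism for ε-differential privacy releases a counting query with calibrated noise. The data and function names below are illustrative, not taken from the thesis.

```python
import random

# Laplace mechanism sketch: a count query has L1-sensitivity 1, so
# adding Laplace(0, 1/epsilon) noise makes its release epsilon-DP.

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(records, predicate, epsilon):
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1 / epsilon)

ages = [23, 35, 41, 29, 52, 67, 31]     # hypothetical sensitive data
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

Repeating the query yields answers centered on the true count (3 here) with noise of scale 1/ε, so any single individual's presence changes the output distribution only by a factor of e^ε.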
Survey on securing data storage in the cloud
Cloud Computing has become a well-known paradigm nowadays; many researchers and companies are embracing this fascinating technology with feverish haste. In the meantime, security and privacy challenges are brought forward as the number of cloud storage users increases rapidly. In this work, we conduct an in-depth survey of recent research on cloud storage security in association with cloud computing. After an overview of the cloud storage system and its security problems, we focus on the key security requirement triad: data integrity, data confidentiality, and availability. For each of the three security objectives, we discuss the new challenges unique to cloud storage services, summarize key issues in the current literature, examine and compare the existing and emerging approaches proposed to meet those challenges, and point out possible extensions and future research opportunities. The goal of our paper is to provide state-of-the-art knowledge to new researchers who would like to join this exciting field.
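The data-integrity requirement in the triad above is commonly met by challenge-response spot checks: the client keeps short digests of its file blocks and later challenges the server for randomly chosen blocks. The sketch below is a hypothetical illustration (block layout, salts, and names are assumptions, not a scheme from the survey).

```python
import hashlib
import random
import secrets

# Spot-check sketch: before uploading, the client records a salted
# SHA-256 digest per block; an audit re-hashes randomly chosen blocks
# returned by the server and compares against the stored digests.

def digest(salt, block):
    return hashlib.sha256(salt + block).hexdigest()

# Client side: split the file, keep only (salt, digest) per block.
file_blocks = [b"block-%d" % i for i in range(16)]
checks = []
for blk in file_blocks:
    salt = secrets.token_bytes(16)
    checks.append((salt, digest(salt, blk)))

# Server side: stores the actual blocks (possibly dishonestly).
server = list(file_blocks)

def audit(server_blocks, n_challenges=4):
    for i in random.sample(range(len(checks)), n_challenges):
        salt, d = checks[i]
        if digest(salt, server_blocks[i]) != d:
            return False
    return True
```

Random salts prevent the server from precomputing answers and discarding the data; sampling keeps audit cost low, at the price of only probabilistic detection of small corruptions.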
Public Integrity Auditing for Dynamic Data Sharing With Multiuser Modification
In cloud storage systems, data owners host their data on cloud servers and users (data consumers) can access the data from those servers. Because of this data outsourcing, however, the new paradigm of data-hosting services also introduces new security challenges, which require an independent auditing service to check the integrity of the data in the cloud. In large-scale cloud storage systems, the data may be updated dynamically, so existing remote integrity-checking methods designed for static archive data are no longer applicable. Accordingly, an efficient and secure dynamic auditing protocol is needed to convince data owners that their data are correctly stored in the cloud. In this chapter, we first present an auditing framework for cloud storage systems. We then describe a third-party auditing scheme, an efficient and privacy-preserving auditing protocol for cloud storage, which can also support dynamic data operations and batch auditing for multiple owners.
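A standard building block behind dynamic auditing of this kind is a Merkle hash tree: the owner keeps only a short root, and any single block can be verified against it with a logarithmic-size authentication path, even as blocks are updated. This is a generic sketch, not the chapter's exact third-party auditing protocol.

```python
import hashlib

# Merkle-tree sketch: leaves are block hashes; each internal node is
# the hash of its two children; the root commits to all blocks.

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Authentication path (sibling hash, is-left flag), bottom-up."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        path.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sibling, sib_is_left in path:
        node = h(sibling + node) if sib_is_left else h(node + sibling)
    return node == root

blocks = [b"b0", b"b1", b"b2", b"b3"]
root = merkle_root(blocks)
```

After a dynamic update to one block, only the O(log n) nodes on its path change, so the owner (or a third-party auditor) can refresh the root without rereading the whole file.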
Security and Privacy in the Internet of Things
The Internet of Things (IoT) is an emerging paradigm that seamlessly integrates electronic devices with sensing and computing capability into the Internet to achieve intelligent processing and optimized controlling. In a connected world built through IoT, where interconnected devices are extending to every facet of our lives, including our homes, offices, utility infrastructures and even our bodies, we are able to do things in a way that we never before imagined. However, as IoT redefines the possibilities in environment, society and economy, creating tremendous benefits, significant security and privacy concerns arise, such as personal information confidentiality and secure communication and computation. Theoretically, when everything is connected, everything is at risk. The ubiquity of connected things gives adversaries more attack vectors and more possibilities, and thus enables more catastrophic consequences from cybercrime. Therefore, it is critical to move fast to address these rising security and privacy concerns in IoT systems before severe disasters happen. In this dissertation, we mainly address the challenges in two domains: (1) how to protect IoT devices against cyberattacks; (2) how to protect sensitive data during storage, dissemination and utilization for IoT applications. In the first part, we present how to leverage anonymous communication techniques, particularly Tor, to protect the security of IoT devices. We first propose two schemes to enhance the security of smart homes by integrating Tor hidden services into the IoT gateway for users with a performance preference. Then, we propose a multipath-routing-based architecture for Tor hidden services to enhance their resistance against traffic analysis attacks, thereby improving the protection for smart home users who desire very strong security but care less about performance. In the second part of this dissertation, we explore solutions to protect the data for IoT applications.
First, we present a reliable, searchable and privacy-preserving e-healthcare system, which takes advantage of emerging cloud storage and IoT infrastructure and enables healthcare service providers (HSPs) to realize remote patient monitoring in a secure and regulatory-compliant manner. Then, we turn our attention to data analysis, one of the core components of IoT applications. We propose a cloud-assisted, privacy-preserving machine learning classification scheme over encrypted data for IoT devices. Our scheme is based on a three-party model coupled with a two-stage-decryption Paillier-based cryptosystem, which allows a cloud server to interact with machine learning service providers (MLSPs) and conduct computation-intensive classification on behalf of resource-constrained IoT devices in a privacy-preserving manner. Finally, we explore the problem of privacy-preserving targeted broadcast in IoT, and propose two multi-cloud-based outsourced-ABE (attribute-based encryption) schemes. They enable the receivers to partially outsource the computationally expensive decryption operations to the clouds, while preventing attributes from being disclosed.
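The additive homomorphism of Paillier that makes such cloud-assisted classification possible can be sketched with toy parameters: Enc(a)·Enc(b) = Enc(a+b) and Enc(a)^k = Enc(k·a) mod n², so a cloud can evaluate a linear classifier score on encrypted features. The tiny primes and the linear-score example below are illustrative only, not the paper's full three-party, two-stage-decryption scheme.

```python
import math
import random

# Toy Paillier cryptosystem (real deployments need >= 2048-bit moduli).
p, q = 293, 433                         # demo primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)     # decryption constant

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Cloud computes an encrypted linear score w . x without decrypting x:
# multiplying ciphertexts adds plaintexts; exponentiation scales them.
weights = [3, 1, 4]                     # plaintext model held by the cloud
cts = [enc(x) for x in [2, 7, 1]]       # device's encrypted features
score_ct = 1
for w, c in zip(weights, cts):
    score_ct = (score_ct * pow(c, w, n2)) % n2
```

Only the key holder can decrypt `score_ct` to the plaintext score 3·2 + 1·7 + 4·1 = 17; the cloud performs all the classification arithmetic on ciphertexts.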