Zero-knowledge Proof Meets Machine Learning in Verifiability: A Survey
With the rapid advancement of artificial intelligence technology, machine
learning models are gradually becoming part of our daily lives.
High-quality models rely not only on efficient optimization algorithms but also
on the training and learning processes built upon vast amounts of data and
computational power. However, in practice, due to various challenges such as
limited computational resources and data privacy concerns, users in need of
models often cannot train machine learning models locally. This has led them to
explore alternative approaches such as outsourced learning and federated
learning. While these methods address the feasibility of model training
effectively, they introduce concerns about the trustworthiness of the training
process since computations are not performed locally. Similarly, there are
trustworthiness issues associated with outsourced model inference. These two
problems can be summarized as the trustworthiness problem of model
computations: How can one verify that the results computed by other
participants are derived according to the specified algorithm, model, and input
data? To address this challenge, verifiable machine learning (VML) has emerged.
This paper presents a comprehensive survey of zero-knowledge proof-based
verifiable machine learning (ZKP-VML) technology. We first analyze the
potential verifiability issues that may exist in different machine learning
scenarios. Subsequently, we provide a formal definition of ZKP-VML. We then
conduct a detailed analysis and classification of existing works based on their
technical approaches. Finally, we discuss the key challenges and future
directions in the field of ZKP-based VML.
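The abstract's central question — can a client check that an outsourced computation was carried out correctly without redoing it? — has a classic, much simpler (and non-zero-knowledge) illustration in Freivalds' algorithm for verifying a matrix product. The sketch below is not from the surveyed paper; it is an assumed toy example of verifiable computation: checking a claimed product C = A·B in O(n²) time per round rather than recomputing the O(n³) product.

```python
import numpy as np

def freivalds_check(A, B, C, rounds=10, seed=0):
    """Probabilistic check that C == A @ B (Freivalds' algorithm).

    Each round multiplies by a random 0/1 vector, which costs O(n^2) --
    far cheaper than recomputing A @ B. A wrong C survives a single round
    with probability <= 1/2, so it passes all rounds with prob <= 2**-rounds.
    The seed is fixed here only to make the demo deterministic.
    """
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    for _ in range(rounds):
        x = rng.integers(0, 2, size=(n, 1))
        if not np.array_equal(A @ (B @ x), C @ x):
            return False  # mismatch found: C is certainly not A @ B
    return True

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(freivalds_check(A, B, A @ B))      # correct product passes
print(freivalds_check(A, B, A @ B + 1))  # corrupted product is rejected
```

Unlike the ZKP-VML schemes surveyed above, this check reveals everything to the verifier and only works for linear algebra; ZKP-based approaches generalize the same "verify cheaply, don't recompute" idea to arbitrary training and inference computations while also hiding the witness.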
Privacy preservation in peer-to-peer gossiping networks in presence of a passive adversary
In Web 2.0, users release more and more personal data (queries, social networks, geo-located data, ...), which creates a huge pool of information to leverage, for instance in the context of search or recommendation. In fully decentralized systems, tapping the power of this information usually involves a clustering process that relies on an exchange of personal data (such as user profiles) to compute the similarity between users. In this internship, we address the problem of computing similarity between users while preserving their privacy and without relying on a central entity, in the presence of a passive adversary.
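The clustering step mentioned above rests on a similarity measure between user profiles; a standard (non-private) baseline is cosine similarity over sparse rating vectors. The sketch below is an assumed illustration of that baseline — the point of the work is precisely to compute such a score without exchanging the profiles in the clear.

```python
import math

def cosine_similarity(profile_a, profile_b):
    """Cosine similarity between two sparse user profiles (item -> rating dicts)."""
    common = set(profile_a) & set(profile_b)
    dot = sum(profile_a[i] * profile_b[i] for i in common)
    norm_a = math.sqrt(sum(v * v for v in profile_a.values()))
    norm_b = math.sqrt(sum(v * v for v in profile_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical profiles: items each user has rated.
alice = {"film1": 5, "film2": 3, "film3": 1}
bob   = {"film1": 4, "film3": 1, "film4": 2}
print(cosine_similarity(alice, bob))
```

Computing this naively requires each peer to see the other's full profile — exactly the leakage a passive adversary exploits, and what a privacy-preserving protocol must avoid.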
Using Elimination Theory to construct Rigid Matrices
The rigidity of a matrix A for target rank r is the minimum number of entries
of A that must be changed to ensure that the rank of the altered matrix is at
most r. Since its introduction by Valiant (1977), rigidity and similar
rank-robustness functions of matrices have found numerous applications in
circuit complexity, communication complexity, and learning complexity. Almost
all n x n matrices over an infinite field have a rigidity of (n-r)^2. It is a
long-standing open question to construct infinite families of explicit matrices
even with superlinear rigidity when r = Omega(n).
In this paper, we construct an infinite family of complex matrices with the
largest possible, i.e., (n-r)^2, rigidity. The entries of an n x n matrix in
this family are distinct primitive roots of unity of orders roughly exp(n^2 log
n). To the best of our knowledge, this is the first family of concrete (but not
entirely explicit) matrices having maximal rigidity and a succinct algebraic
description.
Our construction is based on elimination theory of polynomial ideals. In
particular, we use results on the existence of polynomials in elimination
ideals with effective degree upper bounds (effective Nullstellensatz). Using
elementary algebraic geometry, we prove that the dimension of the affine
variety of matrices of rigidity at most k is exactly n^2-(n-r)^2+k. Finally, we
use elimination theory to examine whether the rigidity function is
semi-continuous.
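The definition of rigidity in the abstract can be made concrete with a brute-force computation on tiny matrices. The sketch below is an assumed illustration, not the paper's construction: it works over GF(2) (rather than the complex field of the paper) and exhaustively searches for the minimum number of entry flips that brings the rank down to the target r.

```python
import itertools
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]  # XOR row = addition over GF(2)
        rank += 1
    return rank

def rigidity_gf2(A, r):
    """Rigidity of A for target rank r over GF(2): the minimum number of
    entries to flip so that the rank drops to at most r.
    Exhaustive search over flip sets -- feasible only for tiny matrices."""
    n, m = A.shape
    if gf2_rank(A) <= r:
        return 0
    for k in range(1, n * m + 1):
        for positions in itertools.combinations(range(n * m), k):
            B = A.copy()
            for p in positions:
                B[p // m, p % m] ^= 1
            if gf2_rank(B) <= r:
                return k  # first k that works is the minimum
    return n * m

I3 = np.eye(3, dtype=int)
print(rigidity_gf2(I3, 2))  # flipping one diagonal 1 drops the rank to 2
```

Flipping a single diagonal entry of the 3x3 identity already lowers its GF(2) rank to 2, so its rigidity for r = 2 is 1; the (n-r)^2 bound quoted in the abstract is the generic value over an infinite field and need not hold over GF(2).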
Décidabilité et Complexité
Fundamental computer science is a vast subject, as witnessed by the 2,283 and 3,176 pages of the two "Handbooks" (228; 1). Covering all of computer science in a few dozen pages seemed an undertaking beyond our reach. We have therefore focused on the notion of computation, a subject that reflects the taste and passion of this chapter's authors. The notion of computation is ubiquitous and as old as mathematics itself.