Privacy-Preserving Restricted Boltzmann Machine
With the arrival of the big data era, it is predicted that distributed data mining will lead to an information technology revolution. To motivate different institutes to collaborate with each other, the crucial issue is to eliminate their concerns regarding data privacy. In this paper, we propose a privacy-preserving method for training a restricted Boltzmann machine (RBM). Using our method, the parties can obtain the RBM without revealing their private data to each other. We provide a correctness and efficiency analysis of our algorithms. Comparative experiments show that the accuracy is very close to that of the original RBM model.
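For orientation, the model being protected here is a standard RBM. The sketch below shows ordinary one-step contrastive divergence (CD-1) training on local toy data only; it is a minimal illustration of what each party would compute, and does not reproduce the paper's privacy-preserving multi-party protocol. All sizes and hyperparameters are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal RBM trained with one-step contrastive divergence (CD-1).
# This is plain, non-private training; the paper's secure protocol
# for multi-institute collaboration is not reproduced here.
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)          # visible-unit biases
b_h = np.zeros(n_hidden)           # hidden-unit biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = (rng.random((20, n_visible)) < 0.5).astype(float)  # toy binary data

for epoch in range(50):
    for v0 in data:
        p_h0 = sigmoid(v0 @ W + b_h)                      # hidden probabilities
        h0 = (rng.random(n_hidden) < p_h0).astype(float)  # sampled hidden state
        p_v1 = sigmoid(h0 @ W.T + b_v)                    # reconstruction
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # positive phase minus negative phase
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)

# mean squared reconstruction error as a rough training signal
recon_err = np.mean((data - sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)) ** 2)
```

In a privacy-preserving setting, the gradient statistics above are exactly the quantities that would have to be computed without exposing each party's raw `v0` vectors.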
Differentially Private Mixture of Generative Neural Networks
Generative models are used in a wide range of applications building on large
amounts of contextually rich information. Due to possible privacy violations of
the individuals whose data is used to train these models, however, publishing
or sharing generative models is not always viable. In this paper, we present a
novel technique for privately releasing generative models and entire
high-dimensional datasets produced by these models. We model the generator
distribution of the training data with a mixture of generative neural
networks. These are trained together and collectively learn the generator
distribution of a dataset. Data is divided into clusters using a novel
differentially private kernel k-means; each cluster is then assigned to a
separate generative neural network, such as a Restricted Boltzmann Machine
or Variational Autoencoder, which is trained only on its own cluster using
differentially private gradient descent. We evaluate our approach using the
MNIST dataset, as well as call detail records and transit datasets, showing
that it produces realistic synthetic samples, which can also be used to
accurately compute an arbitrary number of counting queries.

Comment: A shorter version of this paper appeared at the 17th IEEE International Conference on Data Mining (ICDM 2017). This is the full version, published in IEEE Transactions on Knowledge and Data Engineering (TKDE).
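The clustering step described above can be illustrated with a heavily simplified differentially private k-means: per iteration, the cluster counts and coordinate sums are perturbed with Laplace noise before centroids are recomputed. This sketch is plain (not kernel) k-means with naive sequential composition and noise scales that assume inputs clipped to [0, 1]; the paper's kernel variant and its exact privacy accounting are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_kmeans(X, k, epsilon, iters=5):
    """Simplified differentially private k-means (illustrative only).

    Assumes each coordinate of X lies in [0, 1] so that the noise
    scales below are plausible; not the paper's kernel k-means.
    """
    n, d = X.shape
    centroids = X[rng.choice(n, k, replace=False)].copy()
    labels = np.zeros(n, dtype=int)
    eps_iter = epsilon / iters       # naive sequential composition
    for _ in range(iters):
        # assign every point to its nearest centroid
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            members = X[labels == j]
            # noisy count and noisy coordinate sums
            count = len(members) + rng.laplace(0.0, 2.0 / eps_iter)
            sums = members.sum(axis=0) + rng.laplace(0.0, 2.0 * d / eps_iter, d)
            if count > 1:            # skip near-empty noisy clusters
                centroids[j] = sums / count
    return centroids, labels

X = rng.random((100, 2))             # toy data in [0, 1]^2
cents, labels = dp_kmeans(X, k=3, epsilon=2.0)
```

In the paper's pipeline, the resulting clusters would each feed a separate generative model trained with differentially private gradient descent.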
Preserving Differential Privacy in Convolutional Deep Belief Networks
The remarkable development of deep learning in medicine and healthcare domain
presents obvious privacy issues, when deep neural networks are built on users'
personal and highly sensitive data, e.g., clinical records, user profiles,
biomedical images, etc. However, only a few scientific studies on preserving
privacy in deep learning have been conducted. In this paper, we focus on
developing a private convolutional deep belief network (pCDBN), which
essentially is a convolutional deep belief network (CDBN) under differential
privacy. Our main idea of enforcing epsilon-differential privacy is to leverage
the functional mechanism to perturb the energy-based objective functions of
traditional CDBNs, rather than their results. One key contribution of this work
is that we propose the use of Chebyshev expansion to derive the approximate
polynomial representation of objective functions. Our theoretical analysis
shows that we can further derive the sensitivity and error bounds of the
approximate polynomial representation. As a result, preserving differential
privacy in CDBNs is feasible. We applied our model in a health social network,
i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for
human behavior prediction, human behavior classification, and handwriting digit
recognition tasks. Theoretical analysis and rigorous experimental evaluations
show that the pCDBN is highly effective. It significantly outperforms existing
solutions.
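The core technical move above is replacing nonlinear terms of the objective with a low-degree Chebyshev polynomial whose bounded coefficients make the functional mechanism's sensitivity analysis tractable. The sketch below shows only the approximation step, using NumPy's Chebyshev routines on the softplus term log(1 + e^x) over [-1, 1] as an example target function; the perturbation mechanism and the paper's actual CDBN energy function are not reproduced.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate a smooth objective term with a low-degree Chebyshev
# polynomial, as in the functional-mechanism approach described above.
# Target function chosen for illustration; the paper works with the
# energy-based objective of a CDBN, not this exact term.
def softplus(x):
    return np.log1p(np.exp(x))

coeffs = C.chebinterpolate(softplus, deg=3)  # degree-3 Chebyshev fit
xs = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(C.chebval(xs, coeffs) - softplus(xs)))
print(f"max approximation error on [-1, 1]: {err:.2e}")
```

The small, explicitly bounded coefficients of such an expansion are what allow the sensitivity (and hence the noise scale for epsilon-differential privacy) to be derived in closed form.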
Synthetic Observational Health Data with GANs: from slow adoption to a boom in medical research and ultimately digital twins?
After being collected for patient care, Observational Health Data (OHD) can
further benefit patient well-being by sustaining the development of health
informatics and medical research. Vast potential is unexploited because of the
fiercely private nature of patient-related data and regulations to protect it.
Generative Adversarial Networks (GANs) have recently emerged as a
groundbreaking way to learn generative models that produce realistic synthetic
data. They have revolutionized practices in multiple domains such as
self-driving cars, fraud detection, digital twin simulations in industrial
sectors, and medical imaging.
The digital twin concept could readily apply to modelling and quantifying
disease progression. In addition, GANs possess many capabilities relevant to
common problems in healthcare: lack of data, class imbalance, rare diseases,
and preserving privacy. Unlocking open access to privacy-preserving OHD could
be transformative for scientific research. In the midst of COVID-19, the
healthcare system is facing unprecedented challenges, many of which are data
related for the reasons stated above.
Considering these facts, publications concerning GANs applied to OHD seemed
to be severely lacking. To uncover the reasons for this slow adoption, we
broadly reviewed the published literature on the subject. Our findings show
that the properties of OHD were initially challenging for the existing GAN
algorithms (unlike medical imaging, for which state-of-the-art models were
directly transferable) and that the evaluation of synthetic data lacked clear
metrics.
We find more publications on the subject than expected, starting slowly in
2017, and since then at an increasing rate. The difficulties of OHD remain, and
we discuss issues relating to evaluation, consistency, benchmarking, data
modelling, and reproducibility.

Comment: 31 pages (10 in previous version), not including references and glossary, 51 in total. Inclusion of a large number of recent publications and expansion of the discussion accordingly.
Designing and Executing Security Solutions for IoT-5G Environment
The integration of the Internet of Things (IoT) with 5G networks presents a transformative approach to modern connectivity solutions, yet it introduces significant security challenges. This paper focuses on the design and implementation of robust security techniques tailored for IoT-5G systems. We commence by analyzing the unique security requirements posed by the confluence of IoT devices and 5G technology, emphasizing the need for advanced security protocols to address increased data volumes, device heterogeneity, and potential vulnerabilities. Subsequently, we propose a comprehensive security framework that includes innovative encryption methods, intrusion detection systems, and secure communication protocols specifically developed for the IoT-5G environment. Our approach integrates multi-layered security mechanisms to ensure data integrity, confidentiality, and availability across the network. The effectiveness of our proposed techniques is demonstrated through a series of simulations and real-world deployments, showcasing significant enhancements in security and resilience for IoT-5G systems. This study contributes to the field by providing a practical and scalable security solution, paving the way for secure and reliable IoT-5G integration in various applications, from smart cities to industrial automation.
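As a concrete illustration of one building block named above, message integrity for IoT telemetry can be sketched with HMAC-SHA256 from the Python standard library. The device key, field names, and message shape below are placeholders invented for the example; key distribution, encryption, and the paper's full multi-layered framework are out of scope.

```python
import hashlib
import hmac
import json

# Assumption: each device shares a pre-provisioned secret key with the
# gateway. This toy key stands in for a real provisioning scheme.
DEVICE_KEY = b"example-shared-key"

def sign_reading(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a sensor reading."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"sensor": "temp-01", "value": 21.5})
ok_before = verify_reading(msg)      # untampered message verifies
msg["payload"]["value"] = 99.9       # simulate in-transit tampering
ok_after = verify_reading(msg)       # tampering is detected
```

This covers only integrity; confidentiality would additionally require authenticated encryption, which the abstract's framework presumably layers on top.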