Compression-Aided Privacy and Inferential Separation in Machine Learning
Publication date
1 January 2025
Abstract
The rapid proliferation of Internet of Things (IoT) devices and the demand for real-time data processing have raised significant concerns about data privacy in machine learning applications. This dissertation addresses these challenges through two key approaches: inferential separation and compression-aided privacy. In inferential separation, we develop methodologies to protect sensitive inferences drawn from high-rate data streams without compromising data utility. This includes a theoretically grounded framework for protecting sensitive inferences in IoT systems, as well as Decoct-Net, a deep learning-based model designed to sanitize sensitive attributes while preserving non-sensitive information.
In the domain of compression-aided privacy, we explore techniques that remove sensitive information from computational models while maintaining their utility. These include Spectral-DP, a spectral domain perturbation method that improves the utility of differentially private learning through spectral filtering, and two theoretically rigorous approaches, Randomized Quantization with SGD (RQP-SGD) and Gaussian Sampling Quantization for Federated Learning (GSQ-FL), which achieve both privacy and communication efficiency in resource-limited environments.
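To make the quantization idea concrete, the sketch below shows a generic stochastic quantizer of the kind that quantization-based private and communication-efficient learning builds on: values are clipped to bound sensitivity, then randomly rounded to a small grid so the result is unbiased in expectation while being representable in few bits. This is a minimal illustration under assumed parameters, not the dissertation's RQP-SGD or GSQ-FL algorithms; the function name and its arguments are hypothetical.

```python
import numpy as np

def stochastic_quantize(x, levels=4, clip=1.0, rng=None):
    """Randomly round values onto a uniform grid of `levels` points in [-clip, clip].

    Stochastic rounding makes the quantizer unbiased (E[q] = clip(x)),
    and the injected randomness is the raw ingredient that
    quantization-based privacy mechanisms calibrate. Generic sketch only.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), -clip, clip)  # bound sensitivity
    step = 2 * clip / (levels - 1)                        # grid spacing
    scaled = (x + clip) / step                            # map to [0, levels-1]
    low = np.floor(scaled)
    p_up = scaled - low                                   # prob. of rounding up
    q = low + (rng.random(x.shape) < p_up)                # stochastic rounding
    return q * step - clip                                # map back to [-clip, clip]

# Example: quantize a small gradient vector to 4 levels (2 bits per entry).
g = np.array([0.3, -0.7, 0.05])
q = stochastic_quantize(g, levels=4, rng=np.random.default_rng(0))
```

Averaged over many draws, the quantized values recover the original (clipped) inputs, which is why such quantizers can replace exact gradients in SGD or federated aggregation with only variance, not bias, as the cost.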
By combining theoretical insights with empirical validation, this dissertation demonstrates how sensitive information can be effectively removed from data and models. The proposed techniques provide significant advancements in privacy-preserving machine learning, particularly in IoT and edge computing environments, without sacrificing model performance.