Paralinguistic Privacy Protection at the Edge
Voice user interfaces and digital assistants are rapidly entering our lives
and becoming singular touch points spanning our devices. These always-on
services capture and transmit our audio data to powerful cloud services for
further processing and subsequent actions. Our voices and raw audio signals
collected through these devices contain a host of sensitive paralinguistic
information that is transmitted to service providers regardless of deliberate
or false triggers. As our emotional patterns and sensitive attributes like our
identity, gender, mental well-being, are easily inferred using deep acoustic
models, we encounter a new generation of privacy risks by using these services.
One approach to mitigate the risk of paralinguistic-based privacy breaches is
to exploit a combination of cloud-based processing with privacy-preserving,
on-device paralinguistic information learning and filtering before transmitting
voice data. In this paper we introduce EDGY, a configurable, lightweight,
disentangled representation learning framework that transforms and filters
high-dimensional voice data to identify and contain sensitive attributes at the
edge prior to offloading to the cloud. We evaluate EDGY's on-device performance
and explore optimization techniques, including model quantization and knowledge
distillation, to enable private, accurate and efficient representation learning
on resource-constrained devices. Our results show that EDGY runs in tens of
milliseconds with 0.2% relative improvement in ABX score or minimal performance
penalties in learning linguistic representations from raw voice signals, using
a CPU and a single-core ARM processor without specialized hardware.
Comment: 14 pages, 7 figures. arXiv admin note: text overlap with
arXiv:2007.1506
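One of the optimization techniques the abstract names, model quantization, can be illustrated with a small self-contained sketch. The snippet below performs plain affine int8 quantization of a weight matrix with NumPy; it is a generic illustration of the technique, not EDGY's actual quantization pipeline, and the helper names are invented.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine int8 quantization: map float weights onto 8-bit integers."""
    scale = (weights.max() - weights.min()) / 255.0
    zero_point = np.round(-weights.min() / scale)
    q = np.clip(np.round(weights / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize_int8(q, s, z)
print("max abs reconstruction error:", np.abs(w - w_hat).max())  # on the order of one quantization step
```

Quantizing weights to 8 bits cuts memory and bandwidth by roughly 4x versus float32, which is the kind of saving that makes on-device inference on a single-core ARM processor practical.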
Towards A Framework for Privacy-Preserving Pedestrian Analysis
The design of pedestrian-friendly infrastructure plays a crucial role in creating sustainable transportation in urban environments. Analyzing pedestrian behavior in response to existing infrastructure is pivotal to planning, maintaining, and creating more pedestrian-friendly facilities. Many approaches have been proposed to extract such behavior by applying deep learning models to video data. Video data, however, includes a broad spectrum of privacy-sensitive information about individuals, such as their location at a given time or who they are with. Most existing models use privacy-invasive methodologies to track, detect, and analyze individual or group pedestrian behavior patterns. As a step towards privacy-preserving pedestrian analysis, this paper introduces a framework that anonymizes all pedestrians before analyzing their behavior. The proposed framework leverages recent developments in 3D wireframe reconstruction and digital in-painting to represent pedestrians as quantitative wireframes, removing their images while preserving pose, shape, and background scene context. To evaluate the proposed framework, a generic metric is introduced for each of privacy and utility. Experimental evaluation on widely used datasets shows that the proposed framework outperforms traditional and state-of-the-art image filtering approaches, achieving the best privacy-utility trade-off.
A Survey of Multimodal Information Fusion for Smart Healthcare: Mapping the Journey from Data to Wisdom
Multimodal medical data fusion has emerged as a transformative approach in
smart healthcare, enabling a comprehensive understanding of patient health and
personalized treatment plans. In this paper, a journey from data to information
to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart
healthcare. We present a comprehensive review of multimodal medical data fusion
focused on the integration of various data modalities. The review explores
different approaches such as feature selection, rule-based systems, machine
learning, deep learning, and natural language processing, for fusing and
analyzing multimodal data. This paper also highlights the challenges associated
with multimodal fusion in healthcare. By synthesizing the reviewed frameworks
and theories, it proposes a generic framework for multimodal medical data
fusion that aligns with the DIKW model. Moreover, it discusses future
directions related to the four pillars of healthcare: Predictive, Preventive,
Personalized, and Participatory approaches. The components of the comprehensive
survey presented in this paper form the foundation for more successful
implementation of multimodal fusion in smart healthcare. Our findings can guide
researchers and practitioners in leveraging the power of multimodal fusion with
the state-of-the-art approaches to revolutionize healthcare and improve patient
outcomes.
Comment: This work has been submitted to Elsevier for possible
publication. Copyright may be transferred without notice, after which this
version may no longer be accessible.
Preserving Differential Privacy in Convolutional Deep Belief Networks
The remarkable development of deep learning in medicine and healthcare domain
presents obvious privacy issues, when deep neural networks are built on users'
personal and highly sensitive data, e.g., clinical records, user profiles,
biomedical images, etc. However, only a few scientific studies on preserving
privacy in deep learning have been conducted. In this paper, we focus on
developing a private convolutional deep belief network (pCDBN), which
essentially is a convolutional deep belief network (CDBN) under differential
privacy. Our main idea of enforcing epsilon-differential privacy is to leverage
the functional mechanism to perturb the energy-based objective functions of
traditional CDBNs, rather than their results. One key contribution of this work
is that we propose the use of Chebyshev expansion to derive the approximate
polynomial representation of objective functions. Our theoretical analysis
shows that we can further derive the sensitivity and error bounds of the
approximate polynomial representation. As a result, preserving differential
privacy in CDBNs is feasible. We applied our model in a health social network,
i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for
human behavior prediction, human behavior classification, and handwriting digit
recognition tasks. Theoretical analysis and rigorous experimental evaluations
show that the pCDBN is highly effective. It significantly outperforms existing
solutions.
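The two ingredients in this approach, a Chebyshev polynomial approximation of a term in the objective function and Laplace perturbation of the resulting coefficients (the functional mechanism), can be sketched generically. The snippet below approximates the softplus term log(1 + e^x), which is common in energy-based objectives, and perturbs the fitted coefficients; the degree, sensitivity bound, and epsilon are illustrative placeholders, not values derived in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def softplus(x):
    """log(1 + e^x), a smooth term typical of energy-based objectives."""
    return np.log1p(np.exp(x))

# Fit a low-degree Chebyshev polynomial on [-1, 1], sampling at Chebyshev
# nodes; this makes the objective's coefficients explicit so they can be
# perturbed rather than the trained model or its outputs.
degree = 6
xs = np.cos(np.pi * (np.arange(200) + 0.5) / 200)  # Chebyshev nodes
coeffs = C.chebfit(xs, softplus(xs), degree)

# Functional mechanism (sketch): add Laplace noise scaled by sensitivity /
# epsilon to each polynomial coefficient. The sensitivity bound here is a
# placeholder, not the bound derived in the paper.
epsilon = 1.0
sensitivity = 2.0 * np.abs(coeffs).sum()
rng = np.random.default_rng(0)
noisy_coeffs = coeffs + rng.laplace(scale=sensitivity / epsilon, size=coeffs.shape)

x = np.linspace(-1, 1, 5)
print("polynomial approximation error:", np.abs(C.chebval(x, coeffs) - softplus(x)).max())
```

Because the noise is injected into the objective's coefficients once, any model trained on the perturbed objective is differentially private "for free", with the approximation error of the polynomial expansion as the price.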
Are Diffusion Models Vulnerable to Membership Inference Attacks?
Diffusion-based generative models have shown great potential for image
synthesis, but there is a lack of research on the security and privacy risks
they may pose. In this paper, we investigate the vulnerability of diffusion
models to Membership Inference Attacks (MIAs), a common privacy concern. Our
results indicate that existing MIAs designed for GANs or VAEs are largely
ineffective on diffusion models, either due to inapplicable scenarios (e.g.,
requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer
distances between synthetic samples and member samples). To address this gap,
we propose Step-wise Error Comparing Membership Inference (SecMI), a
query-based MIA that infers memberships by assessing the matching of forward
process posterior estimation at each timestep. SecMI follows the common
overfitting assumption in MIA where member samples normally have smaller
estimation errors, compared with hold-out samples. We consider both the
standard diffusion models, e.g., DDPM, and the text-to-image diffusion models,
e.g., Latent Diffusion Models and Stable Diffusion. Experimental results
demonstrate that our method precisely infers membership with high
confidence in both scenarios across multiple datasets.
Code is available at https://github.com/jinhaoduan/SecMI.
Comment: To appear in ICML 202
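The step-wise idea, accumulating per-timestep noise-prediction error and flagging samples with small error as members, can be shown with a toy simulation. All scales, counts, and the thresholding rule below are invented for illustration; this is the overfitting intuition behind the attack, not SecMI's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10  # number of diffusion timesteps in this toy

def stepwise_score(residual_scale):
    """Sum of squared per-timestep noise-prediction errors for one sample.
    residual_scale stands in for how well the model fits the sample:
    small for members (overfit), larger for hold-out samples."""
    errors = residual_scale * rng.standard_normal(T)
    return np.sum(errors ** 2)

# Simulate scores: members get smaller per-step residuals than hold-outs.
member_scores = np.array([stepwise_score(0.1) for _ in range(200)])
holdout_scores = np.array([stepwise_score(0.5) for _ in range(200)])

# Infer membership by thresholding the accumulated error.
threshold = np.median(np.concatenate([member_scores, holdout_scores]))
tpr = np.mean(member_scores < threshold)   # members flagged as members
fpr = np.mean(holdout_scores < threshold)  # hold-outs wrongly flagged
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")
```

With a real diffusion model, the per-timestep error would come from comparing the model's predicted noise against the noise actually added in the forward process; the thresholding logic stays the same.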