Approximate Thumbnail Preserving Encryption
Thumbnail preserving encryption (TPE) was suggested by Wright et al. as a way to balance privacy and usability for online image sharing. The idea is to encrypt a plaintext image into a ciphertext image that has roughly the same thumbnail and retains the original image format. TPE thus allows users to take advantage of much of the functionality of online photo management tools while still providing some level of privacy against the service provider.
In this work we present three new approximate TPE schemes. In our schemes, ciphertexts and plaintexts have perceptually similar, but not identical, thumbnails. Our constructions are the first TPE schemes designed to work well with JPEG compression. In addition, we show that they have provable security guarantees that characterize precisely what information about the plaintext is leaked by the ciphertext image.
We empirically evaluate our schemes on the similarity of plaintext and ciphertext thumbnails, the increase in file size under JPEG compression, and the preservation of perceptual image hashes, among other aspects. We also show how approximate TPE can be an effective tool to thwart inference attacks by machine-learning image classifiers, which have been shown to be effective against other image obfuscation techniques.
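The schemes in the abstract are approximate and JPEG-aware, which is beyond a short sketch. The core idea of thumbnail preservation, however, can be illustrated with the simplest exact variant: if a thumbnail is the per-block average of the image, then any keyed permutation of pixels within each block scrambles the full-resolution content while leaving the thumbnail untouched. The sketch below is illustrative only (function names and parameters are hypothetical, and it is not the paper's construction):

```python
import random

def encrypt_blocks(image, block=4, key=42):
    """Toy thumbnail-preserving scrambler: shuffle pixel values within
    each block x block tile using a keyed PRNG.  Block averages -- i.e.
    the thumbnail -- are preserved exactly, while the full-resolution
    content is permuted."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    rng = random.Random(key)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            coords = [(y, x) for y in range(by, min(by + block, h))
                             for x in range(bx, min(bx + block, w))]
            vals = [image[y][x] for (y, x) in coords]
            rng.shuffle(vals)  # keyed permutation within the tile
            for (y, x), v in zip(coords, vals):
                out[y][x] = v
    return out

def thumbnail(image, block=4):
    """Thumbnail = per-block mean of pixel values."""
    h, w = len(image), len(image[0])
    thumb = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            coords = [(y, x) for y in range(by, min(by + block, h))
                             for x in range(bx, min(bx + block, w))]
            row.append(sum(image[y][x] for (y, x) in coords) / len(coords))
        thumb.append(row)
    return thumb
```

Because shuffling a tile cannot change its multiset of values, `thumbnail(encrypt_blocks(img))` equals `thumbnail(img)` exactly; the paper's approximate schemes deliberately relax this equality to interact well with JPEG compression.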
Two Sides of a Coin: Adversarial-Based Image Privacy and Defending Against Adversarial Perturbations for Robust CNNs
The emergence of highly accurate Convolutional Neural Networks (CNNs) capable of processing large datasets has led to their popularity in many applications, including safety- and security-sensitive ones (e.g., disease recognition, self-driving cars). Despite their high accuracy, CNNs have been found to be susceptible to adversarial noise added to benign examples, and to out-distribution samples that are confidently classified into in-distribution classes. The applications of CNNs in surveillance services therefore demand secure and robust CNNs. On the other hand, despite their benefits to surveillance applications, CNNs pose a privacy threat because they can perform face recognition at large scale. Coupled with the availability of large image datasets on online social networks and at image storage providers, this threat is serious. The emergence of Super-Resolution Convolutional Neural Networks (SRCNNs), which improve image resolution for face recognition classifiers, further exacerbates it. In this dissertation, we address both of these problems. We first propose taking advantage of CNNs' vulnerability to adversarial perturbations by adding adversarial noise to images to fool CNNs, thereby protecting the privacy of images in a cloud image storage setting. We propose and evaluate two adversarial-based protection methods: (i) a semantic perturbation-based method called k-Randomized Transparent Image Overlays (k-RTIO), and (ii) a learning-based method called Universal Ensemble Perturbation (UEP). These methods can thwart unknown (i.e., black-box) face recognition models while requiring low computational resources. We then evaluate the practicality of adversarial perturbations learned for CNNs on SRCNNs and show that adversarial perturbations are transparent to SRCNNs.
In the last part of the dissertation, we propose mechanisms to make CNNs robust against adversarial and out-distribution examples by rejecting suspicious inputs. In particular, we propose an Augmented CNN (A-CNN) with an extra class that is trained on limited out-distribution samples, which can improve CNNs' resiliency against adversarial examples. Further, to protect pre-trained, highly accurate CNNs, post-processing methods that analyze the output of intermediate layers of CNNs to distinguish in- from out-distribution inputs have attracted attention. We propose using adversarial profiles, i.e. perturbations that misclassify samples of a source class (but not of other classes) to a target class, as a post-processing step to detect out-distribution examples.
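The "transparent overlay" idea behind k-RTIO can be illustrated with a minimal alpha-blend: a keyed PRNG selects k overlay patterns, and their average is blended onto the image with a small opacity, so each pixel moves by a bounded amount and the image stays recognisable to humans. This is a simplified sketch with hypothetical names, not the dissertation's actual construction (which, among other things, tiles and randomizes overlays per block):

```python
import random

def overlay_protect(image, overlays, k=2, alpha=0.3, key=7):
    """Illustrative randomized transparent overlay (inspired by, but not
    identical to, k-RTIO): pick k overlays with a keyed PRNG, average
    them, and alpha-blend the result onto the image.  Each pixel moves
    by at most alpha * 255, so the protected image remains visually
    recognisable while its classifier input is perturbed."""
    rng = random.Random(key)           # the key makes protection reproducible
    chosen = rng.sample(overlays, k)   # keyed choice of k overlay patterns
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            mix = sum(o[y][x] for o in chosen) / k
            row.append((1 - alpha) * image[y][x] + alpha * mix)
        out.append(row)
    return out
```

The per-pixel bound follows directly: with overlay values in [0, 255], the change is |alpha * (mix - pixel)| <= alpha * 255, which is the knob trading human-perceived quality against classifier disruption.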
Interaction analytics for automatic assessment of communication quality in primary care
Effective doctor-patient communication is a crucial element of health care, influencing patients’ personal and medical outcomes following the interview. The set of skills used in interpersonal interaction is complex, involving verbal and non-verbal behaviour. Precise attributes of good non-verbal behaviour are difficult to characterise, but models and studies offer insight into relevant factors. In this PhD, I studied how the attributes of non-verbal behaviour can be automatically extracted and assessed, focusing on the turn-taking patterns and prosody of patient-clinician dialogues.
I described clinician-patient communication and the tools and methods used to train and assess communication during the consultation. I then reviewed the literature on existing efforts to automate assessment, depicting an emerging domain focused on the semantic content of the exchange, with a lack of investigation into interaction dynamics, notably the structure of turns and prosody.
To undertake the study of these aspects, I first planned the collection of data. I underlined the need for a system that meets the requirements of sensitive data collection regarding data quality and security. I went on to design a secure system that records participants’ speech as well as the body posture of the clinician. I provided an open-source implementation and supported its use by the scientific community.
I investigated the automatic extraction and analysis of some non-verbal components of clinician-patient communication on an existing corpus of GP consultations. I outlined different patterns in the clinician-patient interaction and further developed explanations of known consulting behaviours, such as the general imbalance of the doctor-patient interaction and differences in the control of the conversation.
I compared behaviours present in face-to-face, telephone, and video consultations, finding overall similarities alongside noticeable differences in patterns of overlapping speech and switching behaviour.
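Turn-taking features of the kind discussed here can be computed directly from diarised speech segments. The sketch below is a hypothetical, minimal illustration (not the thesis's methodology): given (speaker, start, end) segments, it derives per-speaker talk time, the number of speaker switches, and total overlapping-speech duration.

```python
def turn_metrics(segments):
    """Simple turn-taking statistics from (speaker, start, end) speech
    segments, times in seconds.  Returns per-speaker talk time, the
    number of speaker switches in chronological order, and the total
    duration of overlapping speech between different speakers."""
    segments = sorted(segments, key=lambda s: s[1])  # chronological order
    talk = {}
    switches = 0
    prev_speaker = None
    for spk, start, end in segments:
        talk[spk] = talk.get(spk, 0.0) + (end - start)
        if prev_speaker is not None and spk != prev_speaker:
            switches += 1
        prev_speaker = spk
    # Pairwise temporal intersection between segments of different speakers
    overlap = 0.0
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            s1, a1, b1 = segments[i]
            s2, a2, b2 = segments[j]
            if s1 != s2:
                overlap += max(0.0, min(b1, b2) - max(a1, a2))
    return talk, switches, overlap
```

Talk-time ratios give a direct handle on conversational imbalance, while switch and overlap counts are the kind of metric on which face-to-face, telephone, and video consultations can be compared.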
I further studied non-verbal signals by analysing prosodic features of speech, investigating differences in participants’ behaviour and relations between prosodic features and the assessment of clinician-patient communication. While limited in their interpretative power on the explored dataset, these signals nonetheless provide additional metrics to identify and characterise variations in the non-verbal behaviour of the participants.
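One of the simplest prosodic features in this family is short-time energy. As a hedged illustration (not the thesis's feature set, and omitting pitch, which needs a proper F0 estimator), the sketch below slides a window over a waveform and returns the root-mean-square amplitude of each frame:

```python
import math

def frame_energy(samples, frame_len=160, hop=80):
    """Short-time RMS energy, a basic prosodic feature: slide a window
    of frame_len samples over the waveform with a hop of hop samples
    and compute the root-mean-square amplitude of each frame.  At a
    16 kHz sampling rate the defaults give 10 ms frames every 5 ms."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        frames.append(rms)
    return frames
```

Summary statistics of such per-frame contours (mean, variance, range per speaker turn) are the sort of additional metric that can then be related to communication-quality ratings.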
Analysing clinician-patient communication is difficult even for human experts, and automating that process in this work has been particularly challenging. I demonstrated the capacity of automated processing of non-verbal behaviours to analyse clinician-patient communication. I outlined the ability to explore new aspects, namely interaction dynamics, and to objectively describe how patients and clinicians interact. I further explained known aspects, such as clinician dominance, in more detail. I also provided a methodology to characterise participants’ turn-taking behaviour and speech prosody for the objective appraisal of the quality of non-verbal communication. This methodology is aimed at further use in research and education.