The Convergence of Machine Learning and Communications
The areas of machine learning and communication technology are converging.
Today's communications systems generate a huge amount of traffic data, which
can help to significantly enhance the design and management of networks and
communication components when combined with advanced machine learning methods.
Furthermore, recently developed end-to-end training procedures offer new ways
to jointly optimize the components of a communication system. In many
emerging application fields of communication technology, e.g., smart cities or
the Internet of Things, machine learning methods are also of central
importance. This
paper gives an overview of the use of machine learning in different areas of
communications and discusses two exemplary applications in wireless
networking. Furthermore, it identifies promising future research topics and
discusses their potential impact.
Comment: 8 pages, 4 figures
Privacy-Preserving SVM Computing by Using Random Unitary Transformation
A privacy-preserving Support Vector Machine (SVM) computing scheme is
proposed in this paper. Cloud computing has been spreading into many fields.
However, cloud computing poses serious issues for end users, such as the
unauthorized use and leakage of data, and privacy compromise. We focus on
templates protected by using a random unitary transformation, and consider some
properties of the protected templates for secure SVM computing, where templates
mean features extracted from data. The proposed scheme enables us not only to
protect templates, but also to have the same performance as that of unprotected
templates under some useful kernel functions. Moreover, it can be directly
carried out by using well-known SVM algorithms, without preparing any
algorithms specialized for secure SVM computing. In the experiments, the
proposed scheme is applied to a face-based authentication algorithm with SVM
classifiers to confirm its effectiveness.
Comment: To appear in ISPACS 201
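The scheme's key property, that kernel evaluations are unchanged when templates are multiplied by a random unitary matrix, can be checked directly: inner products and Euclidean distances (and hence linear and RBF kernels) are invariant. A minimal numpy sketch (the dimension and data are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                                    # hypothetical template dimension
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Random orthogonal (real unitary) matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
xp, yp = Q @ x, Q @ y                     # protected templates

# Inner products and distances are preserved, so linear and RBF kernel
# values computed on protected templates match those on the raw ones.
print(np.allclose(x @ y, xp @ yp))                                  # True
print(np.allclose(np.linalg.norm(x - y), np.linalg.norm(xp - yp)))  # True
```

This is why a standard SVM solver can be run unmodified on the protected templates.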
Convolutional Neural Networks with Transformed Input based on Robust Tensor Network Decomposition
Tensor network decomposition, which originated in quantum physics as a way to
model entangled many-particle quantum systems, has turned out to be a
promising mathematical technique for efficiently representing and processing
big data in a parsimonious manner. In this study, we show that tensor networks
can systematically partition structured data, e.g. color images, for
distributed storage and communication in a privacy-preserving manner.
Empirical results show that neighbouring subtensors, whose information is
stored implicitly in tensor network formats, cannot be identified for data
reconstruction. This technique complements existing encryption and
randomization techniques, which store an explicit data representation in one
place and are highly susceptible to adversarial attacks such as side-channel
attacks and de-anonymization. Furthermore, we propose a theory for adversarial
examples that mislead convolutional neural networks into misclassification
using subspace analysis based on singular value decomposition
(SVD). The theory is extended to analyze higher-order tensors using
tensor-train SVD (TT-SVD); it helps to explain the level of susceptibility of
different datasets to adversarial attacks, the structural similarity of
different adversarial attacks including global and localized attacks, and the
efficacy of different adversarial defenses based on input transformation. An
efficient and adaptive algorithm based on robust TT-SVD is then developed to
detect strong and static adversarial attacks.
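As a rough illustration of the TT-SVD machinery the abstract refers to, the sketch below decomposes a tensor into tensor-train cores by sequential SVDs and contracts them back. The function names and the simple fixed-rank truncation are my own; they are not the paper's adaptive detection algorithm.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a tensor into tensor-train (TT) cores via sequential SVDs."""
    shape = tensor.shape
    cores, rank, mat = [], 1, tensor
    for n in shape[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, n, r))
        mat = s[:r, None] * vt[:r]        # carry the remainder forward
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

t = np.random.default_rng(1).standard_normal((4, 5, 6))
cores = tt_svd(t, max_rank=10)            # ranks are not truncated here
print(np.allclose(tt_reconstruct(cores), t))  # True: exact reconstruction
```

With a smaller `max_rank` the reconstruction becomes approximate, which is the lever that subspace analyses of adversarial perturbations exploit.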
2P-DNN : Privacy-Preserving Deep Neural Networks Based on Homomorphic Cryptosystem
Machine Learning as a Service (MLaaS) platforms, such as Microsoft Azure and
Amazon AWS, offer effective DNN models for completing machine learning tasks
on behalf of small businesses and individuals with limited data and computing
power. However, this raises the issue that user privacy is exposed to the
MLaaS server, since users need to upload their sensitive data to it. To
preserve their privacy, users can encrypt their data before uploading it, but
this makes it difficult to run a DNN model, which is not designed to operate
in the ciphertext domain. In this paper, using the Paillier homomorphic
cryptosystem, we present a new privacy-preserving deep neural network model,
which we call 2P-DNN. This model can fulfill the machine learning task in the
ciphertext domain. By using 2P-DNN, MLaaS is able to provide a
privacy-preserving machine learning service for users. We build our 2P-DNN
model based on LeNet-5 and test it with the encrypted MNIST dataset. The
classification accuracy is more than 97%, which is close to the accuracy of
LeNet-5 running on the plaintext MNIST dataset and higher than that of other
existing privacy-preserving machine learning models.
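The property that makes Paillier suitable for ciphertext-domain arithmetic is that multiplying ciphertexts adds the underlying plaintexts. A toy pure-Python demonstration (tiny demo primes, chosen here for illustration only and in no way secure):

```python
import math
import random

# Toy Paillier cryptosystem parameters (insecure demo primes).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)            # valid because L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

a, b = 1234, 5678
c = (encrypt(a) * encrypt(b)) % n2   # homomorphic addition of plaintexts
print(decrypt(c))                    # 6912
```

Additions (and scalar multiplications, via `pow(c, k, n2)`) are thus possible under encryption, which is what lets linear layers of a network be evaluated on ciphertexts.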
Desensitized RDCA Subspaces for Compressive Privacy in Machine Learning
The quest for better data analysis and artificial intelligence has led to
more and more data being collected and stored. As a consequence, more data are
exposed to malicious entities. This paper examines the problem of privacy in
machine learning for classification. We utilize the Ridge Discriminant
Component Analysis (RDCA) to desensitize data with respect to a privacy label.
Based on five experiments, we show that desensitization by RDCA can effectively
protect privacy (i.e. low accuracy on the privacy label) with small loss in
utility. On the HAR and CMU Faces datasets, the use of desensitized data
results in random-guess-level accuracies on the privacy label, at the cost of
average drops of 5.14% and 0.04% in the utility accuracies. For the Semeion
Handwritten Digit dataset, accuracies on the privacy-sensitive digits are
almost zero, while the accuracies on the utility-relevant digits drop by 7.53%
on average. This presents a promising solution to the problem of privacy in
machine learning for classification.
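The abstract does not reproduce the RDCA formulation, but the underlying idea can be sketched in simplified form: project the data onto the orthogonal complement of the direction most discriminative for the privacy label. The sketch uses a plain Fisher-style mean-difference direction, whereas RDCA additionally applies ridge-regularized whitening.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the privacy label shifts samples along a known direction.
d, n = 10, 200
priv = rng.integers(0, 2, size=n)        # binary privacy label
X = rng.standard_normal((n, d))
X[:, 0] += 3.0 * priv                    # privacy leaks via axis 0

# Fisher-style discriminant direction for the privacy label.
w = X[priv == 1].mean(axis=0) - X[priv == 0].mean(axis=0)
w /= np.linalg.norm(w)

# Desensitize: remove the component along w from every sample.
X_desens = X - np.outer(X @ w, w)

print(np.allclose(X_desens @ w, 0))      # True: privacy direction removed
```

A classifier trained on `X_desens` can no longer exploit that direction for the privacy label, while variation in the remaining subspace is untouched.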
Ratio Utility and Cost Analysis for Privacy Preserving Subspace Projection
With a rapidly increasing number of devices connected to the internet, big
data has been applied to various domains of human life. Nevertheless, it has
also opened new avenues for breaching users' privacy. Hence, it is essential
to develop techniques that enable data owners to privatize their data while
keeping it useful for the intended applications. Existing methods, however, do
not offer enough flexibility for controlling the utility-privacy trade-off and
may incur unfavorable results when privacy requirements are high. To tackle
these drawbacks, we propose a compressive-privacy based method, namely RUCA
(Ratio Utility and Cost Analysis), which can not only maximize performance for
a privacy-insensitive classification task but also minimize the ability of any
classifier to infer private information from the data. Experimental results on
Census and Human Activity Recognition data sets demonstrate that RUCA
significantly outperforms existing privacy preserving data projection
techniques for a wide range of privacy pricings.
Comment: Submitted to ICASSP 201
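A RUCA-like criterion can be sketched as a generalized eigenproblem: maximize between-class scatter for the utility label relative to between-class scatter for the privacy label plus a ridge cost term. The exact weighting, the `rho` parameter, and the function names below are my assumptions, not the paper's formulation.

```python
import numpy as np

def scatter_between(X, y):
    """Between-class scatter matrix for labels y."""
    mu = X.mean(axis=0)
    S = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c]
        d = Xc.mean(axis=0) - mu
        S += len(Xc) * np.outer(d, d)
    return S

def ratio_projection(X, y_util, y_priv, k, rho=1.0):
    """Top-k directions maximizing utility scatter relative to privacy
    scatter plus a ridge cost term (a RUCA-like ratio criterion)."""
    Su = scatter_between(X, y_util)
    Sp = scatter_between(X, y_priv) + rho * np.eye(X.shape[1])
    vals, vecs = np.linalg.eig(np.linalg.inv(Sp) @ Su)  # Su v = lam Sp v
    order = np.argsort(vals.real)[::-1]
    return vecs.real[:, order[:k]]

# Toy check: utility varies along axis 0, privacy along axis 1.
rng = np.random.default_rng(0)
y_util = rng.integers(0, 2, 300)
y_priv = rng.integers(0, 2, 300)
X = rng.standard_normal((300, 5))
X[:, 0] += 4.0 * y_util
X[:, 1] += 4.0 * y_priv
v = ratio_projection(X, y_util, y_priv, k=1)[:, 0]
print(abs(v[0]) > abs(v[1]))  # True: keeps the utility axis, not the privacy axis
```

Raising `rho` shifts the trade-off toward suppressing privacy leakage at the expense of utility, which is the flexibility the abstract emphasizes.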
Image Privacy Prediction Using Deep Neural Networks
Images today are increasingly shared online on social networking sites such
as Facebook, Flickr, Foursquare, and Instagram. Although current social
networking sites allow users to change their privacy preferences, this is
often a cumbersome task for the vast majority of users on the Web, who face
difficulties in assigning and managing privacy settings. Thus, automatically
predicting images' privacy to warn users about private or sensitive content
before uploading these images on social networking sites has become a necessity
in our current interconnected world.
In this paper, we explore learning models to automatically predict
appropriate images' privacy as private or public using carefully identified
image-specific features. We study deep visual semantic features that are
derived from various layers of Convolutional Neural Networks (CNNs) as well as
textual features such as user tags and deep tags generated from deep CNNs.
Particularly, we extract deep (visual and tag) features from four pre-trained
CNN architectures for object recognition, i.e., AlexNet, GoogLeNet, VGG-16, and
ResNet, and compare their performance for image privacy prediction. Results of
our experiments on a Flickr dataset of over thirty thousand images show that
the learning models trained on features extracted from ResNet outperform the
state-of-the-art models for image privacy prediction. We further investigate
the combination of user tags and deep tags derived from CNN architectures using
two settings: (1) SVM on the bag-of-tags features; and (2) text-based CNN. Our
results show that even though the models trained on the visual features perform
better than those trained on the tag features, the combination of deep visual
features with image tags shows improvements in performance over the individual
feature sets.
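The bag-of-tags setting can be illustrated with a few invented image tags. To keep the sketch self-contained, a nearest-centroid classifier stands in for the paper's SVM; the tags and labels below are made up.

```python
import numpy as np

# Invented user/deep tags for six hypothetical images.
tagged = [
    (["beach", "family", "kids"],         "private"),
    (["birthday", "home", "family"],      "private"),
    (["passport", "document"],            "private"),
    (["sunset", "landscape", "sky"],      "public"),
    (["architecture", "city", "sky"],     "public"),
    (["mountain", "landscape", "hiking"], "public"),
]

vocab = sorted({t for tags, _ in tagged for t in tags})
index = {t: i for i, t in enumerate(vocab)}

def bag_of_tags(tags):
    """Encode a tag list as a bag-of-tags count vector."""
    v = np.zeros(len(vocab))
    for t in tags:
        if t in index:
            v[index[t]] += 1
    return v

X = np.array([bag_of_tags(tags) for tags, _ in tagged])
y = np.array([label for _, label in tagged])

# Nearest-centroid stand-in for an SVM on bag-of-tags features.
centroids = {c: X[y == c].mean(axis=0) for c in ("private", "public")}

def predict(tags):
    v = bag_of_tags(tags)
    return min(centroids, key=lambda c: np.linalg.norm(v - centroids[c]))

print(predict(["family", "home"]))       # private
print(predict(["landscape", "sunset"]))  # public
```

In the paper's pipeline the same vectors would feed an SVM, optionally concatenated with CNN visual features.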
Holistic Collaborative Privacy Framework for Users' Privacy in Social Recommender Service
The current business model for existing recommender services is centered
around the availability of users' personal data at their side whereas consumers
have to trust that the recommender service providers will not use their data in
a malicious way. With the increasing number of cases for privacy breaches,
different countries and corporations have issued privacy laws and regulations
to define the best practices for the protection of personal information. The
data protection directive 95/46/EC and the privacy principles established by
the Organization for Economic Cooperation and Development (OECD) are examples
of such regulation frameworks. In this paper, we assert that utilizing
third-party recommender services to generate accurate referrals is feasible
while preserving the privacy of the users' sensitive information, which
resides in the clear only on their own devices. As a result, each user who
benefits from the third-party recommender service has absolute control over
what to release from his/her own preferences. To support this claim, we
propose a collaborative privacy middleware that executes a two-stage
concealment process within a distributed data collection protocol.
Additionally, the proposed solution complies with one of the common privacy
regulation frameworks for fair information practice, namely the OECD privacy
principles, in a natural and functional way. The approach presented in this
paper is easily integrated into the current business model, as it is
implemented using a middleware that runs at the end-user's side and utilizes
the social nature of content distribution services to implement a topological
data collection protocol.
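The abstract does not specify the two-stage concealment process itself; the sketch below shows one classic local-concealment primitive, randomized response, in which each user perturbs a preference bit locally and the service can still debias the aggregate. All data and parameters here are invented for illustration.

```python
import random

def randomized_response(value, p_truth=0.75, domain=(0, 1)):
    """Report the true value with probability p_truth, otherwise report
    a uniformly random value from the domain (local concealment)."""
    if random.random() < p_truth:
        return value
    return random.choice(domain)

random.seed(42)
truth = [1] * 600 + [0] * 400                  # invented true preferences
reports = [randomized_response(v) for v in truth]

# The service sees only concealed bits, but can debias the aggregate:
# E[report] = p * theta + (1 - p) * 0.5, so solve for theta.
p, u = 0.75, 0.5
observed = sum(reports) / len(reports)
estimate = (observed - (1 - p) * u) / p
print(round(estimate, 2))                      # close to the true 0.6
```

Individual reports are deniable, yet population-level statistics, which are what a recommender needs, remain recoverable.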
Efficient Context Management and Personalized User Recommendations in a Smart Social TV environment
With the emergence of Smart TV and related interconnected devices, second
screen solutions have rapidly appeared to provide more content for end-users
and enrich their TV experience. Given the various data and sources involved
(videos, actors, social media, and online databases), the aforementioned
market poses great challenges concerning user context management and the
sophisticated recommendations that can be addressed to end-users. This paper
presents an
innovative Context Management model and a related first and second screen
recommendation service, based on a user-item graph analysis as well as
collaborative filtering techniques in the context of a Dynamic Social & Media
Content Syndication (SAM) platform. The model evaluation provided is based on
datasets collected online, presenting a comparative analysis concerning
efficiency and effectiveness of the current approach, and illustrating its
added value.
Comment: In GECON2016, 13th International Conference on Economics of Grids,
Clouds, Systems, and Services, September 20-22, 2016, Harokopio University,
Athens, Greece
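A minimal item-based collaborative filtering step of the kind such a recommendation service builds on can be sketched with cosine similarity over a toy user-item matrix (all ratings invented):

```python
import numpy as np

# Invented ratings (rows: users, cols: first/second-screen content items).
R = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)

def recommend(user, k=1):
    """Score unseen items by similarity-weighted ratings of seen items."""
    seen = R[user] > 0
    scores = S[:, seen] @ R[user, seen]
    scores[seen] = -np.inf               # never re-recommend seen items
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # item 2 is the top unseen recommendation for user 0
```

A graph-based variant, as in the paper's user-item graph analysis, would propagate these scores over the bipartite user-item graph instead of using a single similarity hop.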
On Face Segmentation, Face Swapping, and Face Perception
We show that even when face images are unconstrained and arbitrarily paired,
face swapping between them is actually quite simple. To this end, we make the
following contributions. (a) Instead of tailoring systems for face
segmentation, as others previously proposed, we show that a standard fully
convolutional network (FCN) can achieve remarkably fast and accurate
segmentations, provided that it is trained on a rich enough example set. For
this purpose, we describe novel data collection and generation routines which
provide challenging segmented face examples. (b) We use our segmentations to
enable robust face swapping under unprecedented conditions. (c) Unlike previous
work, our swapping is robust enough to allow for extensive quantitative tests.
To this end, we use the Labeled Faces in the Wild (LFW) benchmark and measure
the effect of intra- and inter-subject face swapping on recognition. We show
that our intra-subject swapped faces remain as recognizable as their sources,
testifying to the effectiveness of our method. In line with well-known
perceptual studies, we show that better face swapping produces less
recognizable inter-subject results. This is the first time this effect has
been demonstrated quantitatively for machine vision systems.
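Once a segmentation mask is available, the compositing step of a face swap reduces to mask-weighted blending. A toy sketch with random "images" (a real pipeline would obtain the mask from the FCN and align the faces first):

```python
import numpy as np

def composite(source, target, mask):
    """Blend a segmented source face into a target image.
    mask: float array in [0, 1], 1 where the (aligned) source face is."""
    return mask[..., None] * source + (1.0 - mask[..., None]) * target

# Toy 4x4 RGB images and a hard segmentation mask (invented data).
rng = np.random.default_rng(0)
src = rng.random((4, 4, 3))
dst = rng.random((4, 4, 3))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                     # "face" region

out = composite(src, dst, mask)
print(np.allclose(out[1:3, 1:3], src[1:3, 1:3]))  # True: face from source
print(np.allclose(out[0, 0], dst[0, 0]))          # True: background from target
```

Soft (fractional) mask values near the boundary give the feathered seams that make swaps perceptually convincing.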