Using Hover to Compromise the Confidentiality of User Input on Android
We show that the new hover (floating touch) technology, available in a number
of today's smartphone models, can be abused by any Android application running
with a common SYSTEM_ALERT_WINDOW permission to record all touchscreen input
into other applications. Leveraging this attack, a malicious application
running on the system can profile the user's behavior, capture sensitive
input such as passwords and PINs, and record all of the user's social
interactions. To evaluate our attack, we implemented Hoover, a proof-of-concept
malicious application that runs in the system background and records all input
to foreground applications. We evaluated Hoover with 40 users, across two
different Android devices and two input methods, stylus and finger. For finger
input, Hoover estimated the positions of users' clicks to within 100 pixels and
inferred keyboard input with 79% accuracy. It captured stylus input even more
accurately, estimating users' clicks to within 2 pixels and inferring keyboard
input with 98% accuracy. We discuss ways of mitigating this attack and show
that mitigation cannot be achieved by simply restricting access to permissions
or by imposing additional cognitive load on users, since this would
significantly constrain the intended use of the hover technology.

Comment: 11 pages
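The abstract leaves the inference step implicit; as a rough sketch of the idea (my own assumption, not the paper's implementation), a click position can be estimated from the hover samples that a transparent overlay records around the moment of a touch:

```python
# Hypothetical illustration of the inference step, NOT the Hoover code:
# given hover samples (x, y, t) recorded by an overlay window and the time
# at which a click occurred, estimate where the click landed. The window
# length is an assumed parameter.

from dataclasses import dataclass

@dataclass
class HoverSample:
    x: float  # screen x coordinate, pixels
    y: float  # screen y coordinate, pixels
    t: float  # timestamp, seconds

def estimate_click(trace, click_time, window=0.15):
    """Average the hover positions seen just after the click, when the
    finger or stylus lifts off and briefly re-enters hover range near
    the touched point."""
    near = [s for s in trace if click_time <= s.t <= click_time + window]
    if not near:
        return None
    return (sum(s.x for s in near) / len(near),
            sum(s.y for s in near) / len(near))

# Example: two hover samples recorded right after a tap near (540, 960).
trace = [HoverSample(538, 962, 1.02), HoverSample(542, 958, 1.05)]
print(estimate_click(trace, click_time=1.0))  # -> (540.0, 960.0)
```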
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
With the widespread use of machine learning (ML) techniques, ML as a service
has become increasingly popular. In this setting, an ML model resides on a
server and users can query it with their data via an API. However, if the
user's input is sensitive, sending it to the server is undesirable and
sometimes not even legally possible. Equally, the service provider does not
want to share the model by sending it to the client, as doing so would
endanger its intellectual property and pay-per-query business model.
In this paper, we propose MLCapsule, a guarded offline deployment of machine
learning as a service. MLCapsule executes the model locally on the user's side
and therefore the data never leaves the client. Meanwhile, MLCapsule offers the
service provider the same level of control over, and security of, its model as
commonly used server-side execution. In addition, MLCapsule is applicable to
offline applications that require local execution. Beyond protecting against
direct model access, we couple the secure offline deployment with defenses
against advanced attacks on machine learning models such as model stealing,
reverse engineering, and membership inference.
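The abstract describes an architecture rather than an algorithm; the toy Python sketch below only illustrates the interface contract (the model ships encrypted, is usable solely through a guarded entry point, and queries are metered locally to preserve pay-per-query). The class, the XOR-style "cipher", and the quota logic are illustrative stand-ins; the actual deployment relies on hardware-isolated execution, which a plain Python sketch cannot provide.

```python
# A minimal sketch of a guarded offline deployment, NOT the MLCapsule
# implementation. All names and the toy cipher below are assumptions.

import hashlib

class Capsule:
    def __init__(self, encrypted_weights: bytes, key: bytes, quota: int):
        self._blob = encrypted_weights  # model never stored in the clear
        self._key = key                 # in reality held inside an enclave
        self._remaining = quota         # enforces the pay-per-query model

    def _decrypt(self) -> bytes:
        # Toy XOR stream "cipher" for illustration only.
        stream = hashlib.sha256(self._key).digest()
        return bytes(b ^ stream[i % len(stream)]
                     for i, b in enumerate(self._blob))

    def predict(self, x: float) -> float:
        if self._remaining <= 0:
            raise PermissionError("query quota exhausted")
        self._remaining -= 1
        w = int.from_bytes(self._decrypt(), "big")  # decrypt inside the guard
        return w * x                                # stand-in "model"

# The client queries locally; its input x never leaves the device.
key = b"provider-secret"
stream = hashlib.sha256(key).digest()
blob = bytes(b ^ stream[i % len(stream)]
             for i, b in enumerate((3).to_bytes(4, "big")))
capsule = Capsule(blob, key, quota=2)
print(capsule.predict(2.0))  # -> 6.0
```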
Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts
We introduce an adversarial method for producing high-recall explanations of
neural text classifier decisions. Building on an existing architecture for
extractive explanations via hard attention, we add an adversarial layer which
scans the residual of the attention for remaining predictive signal. Motivated
by the important domain of detecting personal attacks in social media comments,
we additionally demonstrate the importance of manually setting a semantically
appropriate 'default' behavior for the model by explicitly manipulating its
bias term. We develop a validation set of human-annotated personal attacks to
evaluate the impact of these changes.

Comment: Accepted to EMNLP 2018. Code and data available at
https://github.com/shcarton/rcn
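As a minimal PyTorch sketch of the adversarial-residual idea, under simplifying assumptions of my own (straight-through hard attention, bag-of-embeddings classifiers; this is not the authors' architecture or training code): an extractor masks tokens, a classifier predicts from the masked rationale, and an adversary tries to predict from the residual; the extractor is penalized whenever the adversary still finds signal there.

```python
import torch
import torch.nn as nn

class RationaleModel(nn.Module):
    def __init__(self, vocab_size, dim, n_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.gate = nn.Linear(dim, 1)          # per-token extraction score
        self.clf = nn.Linear(dim, n_classes)   # classifies the rationale
        self.adv = nn.Linear(dim, n_classes)   # adversary scans the residual

    def forward(self, tokens):
        e = self.emb(tokens)                     # (batch, seq, dim)
        p = torch.sigmoid(self.gate(e))          # soft extraction probabilities
        z = (p > 0.5).float() + p - p.detach()   # hard mask, straight-through grad
        rationale = (e * z).mean(dim=1)          # extracted tokens
        residual = (e * (1 - z)).mean(dim=1)     # everything left unextracted
        return self.clf(rationale), self.adv(residual)

model = RationaleModel(vocab_size=1000, dim=32, n_classes=2)
main_params = [p for n, p in model.named_parameters() if not n.startswith("adv")]
opt_main = torch.optim.Adam(main_params)
opt_adv = torch.optim.Adam(model.adv.parameters())
ce = nn.CrossEntropyLoss()

x = torch.randint(0, 1000, (4, 20))  # toy batch of token ids
y = torch.randint(0, 2, (4,))        # toy labels

# Alternate updates: the adversary learns to recover the label from the
# residual; then the extractor/classifier learn to classify from the
# rationale while driving the adversary's loss up, pushing all predictive
# signal into the rationale (hence high recall).
_, adv_logits = model(x)
adv_loss = ce(adv_logits, y)
opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

pred_logits, adv_logits = model(x)
main_loss = ce(pred_logits, y) - 0.5 * ce(adv_logits, y)
opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```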
Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers
In this paper, we present a black-box attack against API call based machine
learning malware classifiers, focusing on generating adversarial sequences
combining API calls and static features (e.g., printable strings) that will be
misclassified by the classifier without affecting the malware's functionality. We
show that this attack is effective against many classifiers due to the
transferability principle between RNN variants, feed-forward DNNs, and
traditional machine learning classifiers such as SVMs. We also implement
GADGET, a software framework that uses the proposed attack to convert any
malware binary into a binary undetected by malware classifiers, without access
to the malware's source code.

Comment: Accepted as a conference paper at RAID 2018
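An illustrative sketch of the core evasion loop, not the GADGET framework itself: perturb an API call sequence with functionality-preserving no-op calls until a locally trained surrogate classifier's malware score drops below threshold, then rely on transferability against the actual black-box classifier. The `surrogate_score` function and the no-op list are hypothetical stand-ins.

```python
import random

NO_OPS = ["GetTickCount", "Sleep", "GetCurrentProcessId"]  # assumed benign no-ops

def surrogate_score(api_seq):
    """Hypothetical stand-in for a locally trained surrogate classifier;
    here it simply scores the fraction of 'suspicious' calls."""
    suspicious = {"WriteProcessMemory", "CreateRemoteThread"}
    return sum(c in suspicious for c in api_seq) / len(api_seq)

def evade(api_seq, threshold=0.2, max_inserts=100, seed=0):
    """Greedily insert no-op calls at random positions until the surrogate
    score falls below the detection threshold; the perturbed sequence is
    then submitted to the real black-box classifier."""
    rng = random.Random(seed)
    seq = list(api_seq)
    for _ in range(max_inserts):
        if surrogate_score(seq) < threshold:
            return seq
        seq.insert(rng.randrange(len(seq) + 1), rng.choice(NO_OPS))
    return seq

malicious = ["OpenProcess", "WriteProcessMemory", "CreateRemoteThread"]
print(surrogate_score(malicious))  # ~0.67: flagged
adv = evade(malicious)
print(surrogate_score(adv))        # < 0.2: evades the surrogate
```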