Analyzing Android Browser Apps for file:// Vulnerabilities
Securing browsers on mobile devices is challenging, because browser apps
usually provide browsing services to other apps on the same device. A
malicious app installed on a device can potentially obtain sensitive
information through a browser app. In this paper, we identify four types of
attacks in Android, collectively known as FileCross, that exploit
vulnerable file:// URL handling to obtain users' private files, such as cookies, bookmarks,
and browsing histories. We design an automated system to dynamically test 115
browser apps collected from Google Play and find that 64 of them are vulnerable
to the attacks. Among them are the popular Firefox, Baidu and Maxthon browsers,
and the more application-specific ones, including UC Browser HD for tablet
users, Wikipedia Browser, and Kids Safe Browser. A detailed analysis of these
browsers further shows that 26 browsers (23%) expose their browsing interfaces
unintentionally. In response to our reports, the developers concerned promptly
patched their browsers by forbidding file:// access to private file zones,
disabling JavaScript execution in file:// URLs, or even blocking external
file:// URLs. We employ the same system to validate the ten patches received
from the developers and find that one still fails to block the vulnerability.
Comment: The paper has been accepted by ISC'14 as a regular paper (see
https://daoyuan14.github.io/). This is a Technical Report version for
reference.
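The patches described above share one idea: before a browser app loads a file:// URL, check whether the normalized path reaches into its private file zone. A minimal sketch of that check follows, written in Python purely for illustration; the zone path, function name, and normalization logic are assumptions, not the actual patch code from any of the audited browsers.

```python
from urllib.parse import urlparse, unquote
from pathlib import PurePosixPath

# Hypothetical private zone a browser app might protect: on Android,
# /data/data/<package>/ typically holds cookies, bookmarks, and histories.
PRIVATE_ZONES = [PurePosixPath("/data/data/com.example.browser")]

def is_blocked_file_url(url: str) -> bool:
    """Return True if a file:// URL reaches into a private file zone.

    Mirrors the 'forbid file:// access to private file zones' patch idea:
    decode percent-escapes and collapse '..' segments before checking,
    since FileCross-style attacks can rely on such path tricks.
    """
    parts = urlparse(url)
    if parts.scheme != "file":
        return False  # only file:// URLs are subject to this check
    raw = unquote(parts.path)
    resolved = []
    for seg in raw.split("/"):
        if seg in ("", "."):
            continue
        if seg == "..":
            if resolved:
                resolved.pop()
        else:
            resolved.append(seg)
    path = PurePosixPath("/" + "/".join(resolved))
    return any(path == zone or zone in path.parents for zone in PRIVATE_ZONES)
```

Under this sketch, a URL such as `file:///sdcard/../data/data/com.example.browser/app_webview/Cookies` is rejected even though its raw prefix looks like public storage, while ordinary file:// pages outside the private zone load normally.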
Towards understanding Android system vulnerabilities: Techniques and insights
National Research Foundation (NRF) Singapore
On the Feasibility of Specialized Ability Stealing for Large Language Code Models
Recent progress in large language code models (LLCMs) has led to a dramatic
surge in their use for software development. Nevertheless, it is widely known
that training a well-performing LLCM requires substantial human effort for
data collection and high-quality annotation. Additionally, the training
dataset may be proprietary (or only partially open to the public), and the
training process is often conducted on a large-scale cluster of GPUs at high
cost. Inspired by the recent success of imitation attacks in stealing computer
vision and natural language models, this work launches the first imitation
attack on LLCMs: by querying a target LLCM with carefully designed queries and
collecting the outputs, the adversary can train an imitation model whose
behavior closely matches that of the target LLCM. We systematically
investigate the effectiveness of launching imitation attacks under different
query schemes and different LLCM tasks. We also design novel methods to polish
the LLCM outputs, resulting in an effective imitation training process. We
summarize our findings and provide lessons harvested in this study that can
help better depict the attack surface of LLCMs. Our research contributes to
the growing body of knowledge on imitation attacks and defenses in deep neural
models, particularly in the domain of code-related tasks.
Comment: 11 pages
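The query-collect-polish loop described above can be sketched in a few lines. This is a schematic Python illustration only: the target model is stubbed out (in the attack setting it would be a black-box API), and the polishing step is a placeholder for the paper's more involved output-cleaning methods. All names here are assumptions for illustration.

```python
def target_llcm(prompt: str) -> str:
    """Stand-in for the black-box target model. In a real attack the
    adversary only observes API responses, not this function."""
    return f"def solve():  # completion for: {prompt}"

def polish(output: str) -> str:
    """Placeholder for output polishing; here we merely strip
    whitespace. The paper's actual methods are more sophisticated."""
    return output.strip()

def collect_imitation_data(queries, budget):
    """Query the target within a fixed budget and pair each query with
    its polished output, yielding the imitation training dataset."""
    dataset = []
    for prompt in queries[:budget]:
        dataset.append((prompt, polish(target_llcm(prompt))))
    return dataset
```

The resulting (query, output) pairs would then be used to fine-tune the imitation model; the query budget reflects the practical cost limit of repeatedly calling the target API.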