Active Learning of Points-To Specifications
When analyzing programs, large libraries pose significant challenges to
static points-to analysis. A popular solution is to have a human analyst
provide points-to specifications that summarize relevant behaviors of library
code, which can substantially improve precision and handle missing code such as
native code. We propose ATLAS, a tool that automatically infers points-to
specifications. ATLAS synthesizes unit tests that exercise the library code,
and then infers points-to specifications based on observations from these
executions. ATLAS automatically infers specifications for the Java standard
library, and produces better results for a client static information flow
analysis on a benchmark of 46 Android apps compared to using existing
handwritten specifications.
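To illustrate the kind of inference the abstract describes, here is a minimal, hypothetical sketch (all names invented, and written in JavaScript rather than ATLAS's Java setting): a synthesized unit test exercises an opaque library function, the observed aliasing between inputs and the return value is recorded, and a points-to specification is emitted from that observation.

```javascript
// Hypothetical sketch of test-driven points-to specification inference.
// An opaque "library" function whose source the analysis pretends not to see.
function libraryGet(container) {
  return container.items[0];
}

// Synthesized unit test: build inputs from distinguishable abstract objects.
const obj = { tag: "o1" };            // abstract object passed into the library
const container = { items: [obj] };   // wrapper reachable from the argument

// Execute the test and observe which input object the result aliases.
const result = libraryGet(container);

// Infer a specification from the observation: if the returned reference is
// identical to an object reachable from an argument, record that flow;
// otherwise treat the return value as a fresh allocation.
const spec = result === obj
  ? "ret(libraryGet) may-alias arg0.items[*]"
  : "ret(libraryGet) fresh";

console.log(spec); // → "ret(libraryGet) may-alias arg0.items[*]"
```

A client points-to analysis could then consume such specifications in place of analyzing the library body, which is the role the abstract describes for the inferred Java standard library specifications.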
XSS Vulnerabilities in Cloud-Application Add-Ons
Cloud-application add-ons are microservices that extend the functionality of
the core applications. Many application vendors have opened their APIs for
third-party developers and created marketplaces for add-ons (also add-ins or
apps). This is a relatively new phenomenon, and its effects on application
security have not been widely studied. It seems likely that some of the add-ons
have lower code quality than the core applications themselves and, thus, may
bring in security vulnerabilities. We found that many such add-ons are
vulnerable to cross-site scripting (XSS). The attacker can take advantage of
the document-sharing and messaging features of the cloud applications to send
malicious input to them. The vulnerable add-ons then execute client-side
JavaScript from the carefully crafted malicious input. In a major analysis
effort, we systematically studied 300 add-ons for three popular application
suites, namely Microsoft Office Online, G Suite and Shopify, and discovered a
significant percentage of vulnerable add-ons in each marketplace. We present
the results of this study, as well as analyze the add-on architectures to
understand how the XSS vulnerabilities can be exploited and how the threat can
be mitigated.
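The vulnerability class described above can be sketched in a few lines (the rendering functions and payload here are illustrative, not taken from any studied add-on): an add-on that concatenates shared-document text into HTML executes attacker-supplied markup, while one that escapes the text first renders it inertly.

```javascript
// Vulnerable rendering: attacker-controlled shared text becomes live markup.
function renderUnsafe(sharedText) {
  return "<div class='note'>" + sharedText + "</div>";
}

// Standard HTML escaping of the five significant characters.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;")
          .replace(/>/g, "&gt;").replace(/"/g, "&quot;")
          .replace(/'/g, "&#39;");
}

// Mitigated rendering: the same text is displayed, but cannot execute.
function renderSafe(sharedText) {
  return "<div class='note'>" + escapeHtml(sharedText) + "</div>";
}

// A typical stored-XSS payload delivered via document sharing or messaging.
const payload = "<img src=x onerror=alert(document.cookie)>";

console.log(renderUnsafe(payload).includes("onerror")); // true: script-bearing markup survives
console.log(renderSafe(payload).includes("<img"));      // false: the payload is inert text
```

In a browser context, assigning the safe output via `textContent` instead of `innerHTML` achieves the same effect without manual escaping.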
The Most Dangerous Code in your Browser
Browser extensions are ubiquitous.
Yet, in today's browsers, extensions are the most dangerous code to user
privacy.
Extensions are third-party code, like web applications, but run with
elevated privileges.
Even worse, existing browser extension systems give users a false sense
of security by considering extensions to be more trustworthy than web
applications.
This is because the user typically has to explicitly grant the extension
a series of permissions it requests, e.g., to access the current tab
or a particular website.
Unfortunately, extension developers do not request minimal privileges
and users have become desensitized to install-time warnings.
Furthermore, permissions offered by popular browsers are very broad and
vague.
For example, over 71% of the top-500 Chrome extensions can trivially
leak the user's data from any site.
In this paper, we argue for a new extension system design, based on
mandatory access control, that protects the user's privacy from
malicious extensions.
A system employing this design can enable a range of common extensions
to be considered safe, i.e., they do not require user
permissions and can be guaranteed not to leak information,
while allowing the user to share information when desired.
Importantly, such a design can make permission requests a rarity and
thus more meaningful.
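The mandatory-access-control idea argued for above can be sketched as follows (the `label`/`send` API is invented for illustration, not the paper's actual design): data an extension reads from a site carries a label recording its origin, and the system blocks any outbound flow the label does not permit, with no permission prompt involved.

```javascript
// Attach a confidentiality label (the source origin) to a value.
function label(origin, value) {
  return { origin: origin, value: value };
}

// MAC check: labeled data may flow only back to the origin it came from.
// A real system would also support user-directed declassification for
// deliberate sharing, as the abstract notes.
function send(labeled, destOrigin) {
  if (labeled.origin !== destOrigin) {
    throw new Error("MAC violation: " + labeled.origin + " -> " + destOrigin);
  }
  return "sent to " + destOrigin;
}

const pageData = label("https://bank.example", "account balance");

console.log(send(pageData, "https://bank.example")); // allowed: same origin

try {
  send(pageData, "https://evil.example");            // blocked: cross-origin leak
} catch (e) {
  console.log(e.message);
}
```

Under such a regime, an extension that only transforms page content locally needs no permissions at all, which is what makes the remaining, genuine permission requests meaningful.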
IFC Inside: Retrofitting Languages with Dynamic Information Flow Control
Many important security problems in JavaScript, such as
browser extension security, untrusted JavaScript libraries and safe integration
of mutually distrustful websites (mash-ups), may be effectively
addressed using an efficient implementation of information flow control
(IFC). Unfortunately, existing fine-grained approaches to JavaScript IFC
require modifications to the language semantics and its engine, a non-goal
for browser applications. In this work, we take the ideas of coarse-grained
dynamic IFC and provide the theoretical foundation for a language-based
approach that can be applied to any programming language for which external
effects can be controlled. We then apply this formalism to server and
client-side JavaScript, show how it generalizes to the C programming
language, and connect it to the Haskell LIO system. Our methodology
offers design principles for the construction of information flow control
systems when isolation can easily be achieved, as well as compositional
proofs for optimized concrete implementations of these systems, by relating
them to their isolated variants.
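As a rough intuition for coarse-grained dynamic IFC in the LIO style referenced above (this is an illustrative toy, not the paper's formalism): the whole computation carries a single "current label" that rises as secrets are read, and every external effect is checked against that label.

```javascript
// Two-point security lattice: public ⊑ secret.
const levels = { public: 0, secret: 1 };

let currentLabel = "public";   // floating label of the whole computation

// Reading a labeled value raises the current label to the join.
function read(labeledValue) {
  if (levels[labeledValue.label] > levels[currentLabel]) {
    currentLabel = labeledValue.label;
  }
  return labeledValue.value;
}

// External effects are permitted only if the current label flows to the channel.
function output(channelLabel, value) {
  if (levels[currentLabel] > levels[channelLabel]) {
    throw new Error("IFC violation: current label cannot flow to " + channelLabel);
  }
  return value;
}

const secret = { label: "secret", value: 42 };

output("public", "hello");     // ok: nothing secret has been read yet
read(secret);                  // current label rises to "secret"
try {
  output("public", "leak");    // blocked: secret computation, public channel
} catch (e) {
  console.log(e.message);
}
```

Because only reads and external effects are instrumented, the language semantics inside the computation are untouched, which is precisely why the coarse-grained approach avoids the engine modifications that fine-grained JavaScript IFC requires.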
Verification Condition Generation for Permission Logics with Abstraction Functions
Abstract predicates are the primary abstraction mechanism for program logics based on access permissions, such as separation logic and implicit dynamic frames. In addition to abstract predicates, it is often useful to also support classical abstraction functions, for instance, to encode side-effect free methods of the program and use them in specifications. However, combining abstract predicates and abstraction functions in a verification condition generator leads to subtle interactions, which complicate reasoning about heap modifications. Such complications may compromise soundness or cause divergence of the prover in the context of automated verification. In this paper, we present an encoding of abstract predicates and abstraction functions in the verification condition generator Boogie. Our encoding is sound and handles recursion in a way that is suitable for automatic verification using SMT solvers. It is implemented in the automatic verifier Chalice.