Policy Enforcement with Proactive Libraries
Software libraries implement APIs that deliver reusable functionalities. To
correctly use these functionalities, software applications must satisfy certain
correctness policies, for instance policies about the order in which API
methods can be invoked and the values that can be passed as parameters. If
these policies are violated, applications may produce misbehaviors and failures
at runtime. Although this problem is general, applications that incorrectly use
API methods are more frequent in certain contexts. For instance, Android
provides a rich and rapidly evolving set of APIs that might be used incorrectly
by app developers who often implement and publish faulty apps in the
marketplaces. To mitigate this problem, we introduce the novel notion of
proactive library, which augments classic libraries with the capability of
proactively detecting and healing misuses at runtime. Proactive libraries
blend libraries with multiple proactive modules that collect data, check the
correctness policies of the libraries, and heal executions as soon as the
violation of a correctness policy is detected. The proactive modules can be
activated or deactivated at runtime by the users and can be implemented without
requiring any change to the original library and any knowledge about the
applications that may use the library. We evaluated proactive libraries in the
context of the Android ecosystem. Results show that proactive libraries can
automatically overcome several problems related to bad resource usage at the
cost of a small overhead.
Comment: O. Riganelli, D. Micucci and L. Mariani, "Policy Enforcement with
Proactive Libraries," 2017 IEEE/ACM 12th International Symposium on Software
Engineering for Adaptive and Self-Managing Systems (SEAMS), Buenos Aires,
Argentina, 2017, pp. 182-19
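The core idea of a proactive module can be sketched as a wrapper that checks a call-order policy before each library call and heals a violation on the app's behalf. The `Camera` class and the acquire-before-shoot policy below are purely illustrative assumptions, not the paper's actual API or enforcement mechanism.

```python
# Hypothetical sketch of a "proactive module": a wrapper that enforces a
# call-order policy (acquire before use) on a library class and heals
# violations instead of letting them crash the app.

class Camera:
    """Stand-in for a library resource with a call-order policy."""
    def __init__(self):
        self.open = False
    def acquire(self):
        self.open = True
    def shoot(self):
        if not self.open:
            raise RuntimeError("camera not acquired")
        return "frame"
    def release(self):
        self.open = False

class ProactiveCamera:
    """Wraps Camera; checks the policy before each call and heals misuses."""
    def __init__(self, camera):
        self._camera = camera
        self.healed = []                      # log of healed violations
    def shoot(self):
        if not self._camera.open:             # policy violation detected
            self._camera.acquire()            # heal: acquire on the app's behalf
            self.healed.append("auto-acquire before shoot")
        return self._camera.shoot()
    def __getattr__(self, name):
        return getattr(self._camera, name)    # delegate everything else

cam = ProactiveCamera(Camera())
frame = cam.shoot()                           # misuse: shoot before acquire
```

Note that the wrapper needs no change to `Camera` itself and no knowledge of the calling application, matching the deployment constraint the abstract describes.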
Large-Scale Analysis of Framework-Specific Exceptions in Android Apps
Mobile apps have become ubiquitous. For app developers, it is a key priority
to ensure their apps' correctness and reliability. However, many apps still
suffer from occasional to frequent crashes, weakening their competitive edge.
Large-scale, deep analyses of the characteristics of real-world app crashes can
provide useful insights to guide developers, or help improve testing and
analysis tools. However, such studies do not exist -- this paper fills this
gap. Over a four-month long effort, we have collected 16,245 unique exception
traces from 2,486 open-source Android apps, and observed that
framework-specific exceptions account for the majority of these crashes. We
then extensively investigated the 8,243 framework-specific exceptions (which
took six person-months): (1) identifying their characteristics (e.g.,
manifestation locations, common fault categories), (2) evaluating their
manifestation via state-of-the-art bug detection techniques, and (3) reviewing
their fixes. Besides the insights they provide, these findings motivate and
enable follow-up research on mobile apps, such as bug detection, fault
localization and patch generation. In addition, to demonstrate the utility of
our findings, we have optimized Stoat, a dynamic testing tool, and implemented
ExLocator, an exception localization tool, for Android apps. Stoat is able to
quickly uncover three previously-unknown, confirmed/fixed crashes in Gmail and
Google+; ExLocator is capable of precisely locating the root causes of
identified exceptions in real-world apps. Our substantial dataset is made
publicly available to share with and benefit the community.
Comment: ICSE'18: the 40th International Conference on Software Engineering
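A first-cut way to separate framework-specific from app-specific exceptions is to look at the package of the frame that raised the exception. This heuristic is an assumption for illustration, not the paper's actual classification methodology, and the package prefixes are examples.

```python
# Illustrative heuristic (not the paper's exact methodology) for splitting
# Android crash traces into framework-specific vs. app-specific exceptions,
# based on the package prefix of the innermost (raising) stack frame.

FRAMEWORK_PREFIXES = ("android.", "com.android.", "androidx.", "java.")

def classify_trace(trace):
    """trace: list of frames, innermost first, e.g. 'android.view.View.measure'."""
    top = trace[0]
    return "framework" if top.startswith(FRAMEWORK_PREFIXES) else "app"

traces = [
    ["android.view.ViewGroup.addView", "com.example.app.MainActivity.onCreate"],
    ["com.example.app.Parser.parse", "com.example.app.MainActivity.onResume"],
]
labels = [classify_trace(t) for t in traces]
```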
Dependability where the mobile world meets the enterprise world
As we move toward increasingly larger scales of computing, the complexity of systems and networks has increased manifold, leading to massive failures of cloud providers (Amazon Cloudfront, November 2014) and geographically localized outages of cellular services (T-Mobile, June 2014). In this dissertation, we investigate the dependability aspects of two of the most prevalent computing platforms today: smartphones and cloud computing. These two seemingly disparate platforms are part of a cohesive story: they interact to provide end-to-end services that are increasingly delivered over mobile platforms, examples being iCloud, Google Drive, and their smartphone counterparts iPhone and Android.
In one of the early works on characterizing failures in dominant mobile OSes, we analyzed the bug repositories of Android and Symbian and found similarities in their failure modes [ISSRE2010]. We also presented a classification of root causes and quantified the impact of the ease of customizing smartphones on system reliability. Our evaluation of Inter-Component Communication in Android [DSN2012] shows an alarming number of exception handling errors, where a phone may be crashed by passing it malformed component invocation messages, even from unprivileged applications. In this work, we also suggest language extensions that can mitigate these problems.
Mobile applications today are increasingly being used to interact with enterprise-class web services, commonly hosted in virtualized environments. Virtualization suffers from the problem of imperfect performance isolation, where contention for low-level hardware resources can impact application performance. Through a set of rigorous experiments in a private cloud testbed and in EC2, we show that interference-induced performance degradation is a reality. Our experiments have also shown that the optimal configuration settings for web servers change during such phases of interference.
Based on this observation, we design and implement the IC2 engine, which can mitigate the effects of interference by reconfiguring web server parameters [MW2014]. We further improve IC2 by incorporating it into a two-level configuration engine, named ICE, for managing web server clusters [ICAC2015]. Our evaluations show that, compared to an interference-agnostic configuration, IC2 can improve the response time of web servers by up to 40%, while ICE can improve response time by up to 94% during phases of interference.
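The reconfiguration idea behind IC2 can be sketched as a controller that detects interference from a degraded performance signal and swaps in an interference-time configuration. The parameter names (`MaxClients`, `KeepaliveTimeout`), the threshold, and the two fixed configurations below are invented for illustration; the real engine derives configurations from measurements rather than hard-coding them.

```python
# Minimal sketch of interference-aware web server reconfiguration in the
# spirit of IC2. Configurations and the degradation threshold are invented;
# the point is the detect-then-reconfigure control loop, not the values.

NORMAL_CONFIG = {"MaxClients": 256, "KeepaliveTimeout": 5}
INTERFERENCE_CONFIG = {"MaxClients": 64, "KeepaliveTimeout": 1}

def choose_config(response_time_ms, baseline_ms, threshold=1.5):
    """Switch configs when response time degrades past threshold x baseline."""
    if response_time_ms > threshold * baseline_ms:   # interference suspected
        return INTERFERENCE_CONFIG
    return NORMAL_CONFIG

cfg = choose_config(response_time_ms=180, baseline_ms=60)  # 3x baseline: degraded
```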
Efficiency Matters: Speeding Up Automated Testing with GUI Rendering Inference
Due to the importance of Android app quality assurance, many automated GUI
testing tools have been developed. Although the test algorithms have been
improved, the impact of GUI rendering has been overlooked. On the one hand,
setting a long waiting time to execute events on fully rendered GUIs slows down
the testing process. On the other hand, setting a short waiting time will cause
the events to execute on partially rendered GUIs, which negatively affects the
testing effectiveness. An optimal waiting time should strike a balance between
effectiveness and efficiency. We propose AdaT, a lightweight image-based
approach to dynamically adjust the inter-event time based on GUI rendering
state. Given the real-time streaming on the GUI, AdaT presents a deep learning
model to infer the rendering state, and synchronizes with the testing tool to
schedule the next event when the GUI is fully rendered. The evaluations
demonstrate the accuracy, efficiency, and effectiveness of our approach. We
also integrate our approach with the existing automated testing tool to
demonstrate the usefulness of AdaT in covering more activities and executing
more events on fully rendered GUIs.
Comment: Proceedings of the 45th International Conference on Software
Engineering
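The adaptive inter-event timing that AdaT introduces can be sketched as a polling loop: instead of a fixed delay, the test driver repeatedly queries a rendering-state classifier and fires the next event as soon as the GUI is fully rendered, falling back to a timeout. The classifier below is a stub that stands in for AdaT's deep learning model on the screenshot stream; the timing values are illustrative.

```python
# Sketch of AdaT's scheduling idea: dispatch the next test event as soon as
# the GUI is fully rendered (or a timeout expires), rather than after a
# fixed delay. is_fully_rendered stands in for the learned classifier.

def wait_until_rendered(is_fully_rendered, poll_ms=50, timeout_ms=5000):
    """Return the simulated milliseconds waited before the event can fire."""
    elapsed = 0
    while elapsed < timeout_ms:
        if is_fully_rendered(elapsed):       # classifier says: fully rendered
            return elapsed
        elapsed += poll_ms                   # keep polling the screen stream
    return timeout_ms                        # fall back to the maximum wait

# Stub: pretend the GUI finishes rendering 120 ms after the last event.
elapsed = wait_until_rendered(lambda t: t >= 120)
```

With a fixed 5-second delay the driver would waste most of that time here; with a fixed short delay it would fire on a partially rendered screen, which is exactly the trade-off the abstract describes.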
Quire: Lightweight Provenance for Smart Phone Operating Systems
Smartphone apps often run with full privileges to access the network and
sensitive local resources, making it difficult for remote systems to have any
trust in the provenance of network connections they receive. Even within the
phone, different apps with different privileges can communicate with one
another, allowing one app to trick another into improperly exercising its
privileges (a Confused Deputy attack). In Quire, we engineered two new security
mechanisms into Android to address these issues. First, we track the call chain
of IPCs, allowing an app to choose between operating with the diminished
privileges of its callers and acting explicitly on its own behalf. Second, a lightweight
signature scheme allows any app to create a signed statement that can be
verified anywhere inside the phone. Both of these mechanisms are reflected in
network RPCs, allowing remote systems visibility into the state of the phone
when an RPC is made. We demonstrate the usefulness of Quire with two example
applications. We built an advertising service, running distinctly from the app
which wants to display ads, which can validate clicks passed to it from its
host. We also built a payment service, allowing an app to issue a request which
the payment service validates with the user. An app cannot forge a payment
request by directly connecting to the remote server, nor can the local payment
service tamper with the request.
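The two mechanisms can be sketched conceptually: an IPC carries its call chain, so a callee can compute the diminished privileges (the intersection over everyone on the chain), and an OS-held per-app key lets any app mint a statement that can be verified elsewhere on the device. All names and the use of HMAC here are illustrative assumptions, not Quire's actual data structures or signature scheme.

```python
# Conceptual sketch of Quire's two mechanisms (names invented):
# (1) IPC calls carry their call chain; a callee may act with the reduced
#     privileges of everyone on the chain (a Confused Deputy defense);
# (2) an OS-keyed MAC lets an app make a statement verifiable anywhere
#     on the phone.

import hmac
import hashlib

def effective_privileges(call_chain, privileges):
    """Diminished privileges: intersection over every app on the call chain."""
    result = privileges[call_chain[0]]
    for app in call_chain[1:]:
        result = result & privileges[app]
    return result

privileges = {"game": {"NETWORK"}, "ad_lib": {"NETWORK", "LOCATION"}}
reduced = effective_privileges(["game", "ad_lib"], privileges)

# Signed statement: the OS holds a per-app key; any component can ask the
# OS to verify the tag, so the statement survives crossing app boundaries.
key = b"per-app-secret-held-by-os"
statement = b"click:ad-42"
tag = hmac.new(key, statement, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(
    tag, hmac.new(key, statement, hashlib.sha256).hexdigest())
```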
How Effective are Smart Contract Analysis Tools? Evaluating Smart Contract Static Analysis Tools Using Bug Injection
Security attacks targeting smart contracts have been on the rise, which have
led to financial loss and erosion of trust. Therefore, it is important to
enable developers to discover security vulnerabilities in smart contracts
before deployment. A number of static analysis tools have been developed for
finding security bugs in smart contracts. However, despite the numerous
bug-finding tools, there is no systematic approach to evaluate the proposed
tools and gauge their effectiveness. This paper proposes SolidiFI, an automated
and systematic approach for evaluating smart contract static analysis tools.
SolidiFI is based on injecting bugs (i.e., code defects) into all potential
locations in a smart contract to introduce targeted security vulnerabilities.
SolidiFI then checks the generated buggy contract using the static analysis
tools, and identifies the bugs that the tools are unable to detect
(false-negatives) along with identifying the bugs reported as false-positives.
SolidiFI is used to evaluate six widely-used static analysis tools, namely,
Oyente, Securify, Mythril, SmartCheck, Manticore and Slither, using a set of 50
contracts injected with 9369 distinct bugs. It finds several instances of bugs
that are not detected by the evaluated tools despite their claims of being able
to detect such bugs, and all the tools report many false positives.
Comment: ISSTA 202
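The evaluation scheme can be sketched as three steps: inject a known-buggy snippet at candidate locations, record where each bug landed, and diff a tool's reported lines against the injected ones to expose false negatives. The `tx.origin` snippet and toy contract below are illustrative assumptions, not SolidiFI's actual bug corpus or injection engine.

```python
# Sketch of SolidiFI's evaluation idea: inject a known vulnerability snippet
# at candidate locations in a contract, track the injected line numbers, then
# compare against a tool's report to find false negatives.

BUG_SNIPPET = "require(tx.origin == owner);"   # classic tx.origin misuse

def inject(contract_lines, locations):
    """Insert BUG_SNIPPET after each 1-based location; return source + bug lines."""
    out, bug_lines = [], []
    for i, line in enumerate(contract_lines, start=1):
        out.append(line)
        if i in locations:
            out.append(BUG_SNIPPET)
            bug_lines.append(len(out))        # line number in the new source
    return out, bug_lines

def false_negatives(bug_lines, reported_lines):
    """Injected bugs the tool failed to flag."""
    return sorted(set(bug_lines) - set(reported_lines))

contract = ["contract Wallet {", "  function pay() public {", "  }", "}"]
source, bugs = inject(contract, locations={2})
missed = false_negatives(bugs, reported_lines=[])  # tool reported nothing
```

Because every injected bug's location is known by construction, any report the tool makes outside `bugs` can likewise be flagged as a false positive.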