
    Negative Results on Mining Crypto-API Usage Rules in Android Apps

    Android app developers recurrently use crypto-APIs to provide data security to app users. Unfortunately, misusing these APIs creates only an illusion of security and can even expose apps to systematic attacks. It is thus necessary to provide developers with a statically-enforceable list of specifications of crypto-API usage rules. On the one hand, such rules cannot be written manually, as the process does not scale to all available APIs. On the other hand, a classical mining approach based on common usage patterns is not relevant in Android, given that a large share of usages include mistakes. In this work, building on the assumption that “developers update API usage instances to fix misuses”, we mine a large dataset of updates within about 40 000 real-world app lineages to infer API usage rules. Eventually, our investigations yield negative results on our assumption that API usage updates tend to correct misuses. It appears that updates that fix misuses may be unintentional: the same misuse patterns are quickly re-introduced by subsequent updates.
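As a rough illustration of what a statically-enforceable usage rule can look like, the sketch below scans (decompiled) app source for two well-known misuse patterns. The rules, helper names, and snippet are our own hypothetical examples, not rules produced by the paper, which aims precisely to infer such rules automatically rather than hand-write them:

```python
import re

# Hypothetical, hand-written misuse rules; the paper's goal is to *mine* such
# rules at scale instead of writing them manually like this.
MISUSE_RULES = [
    (re.compile(r'Cipher\.getInstance\("(?:AES|DES)(?:/ECB[^"]*)?"\)'),
     "ECB mode (explicit or default)"),
    (re.compile(r'new SecureRandom\(\s*".*"\.getBytes\(\)'),
     "constant seed for SecureRandom"),
]

def find_crypto_misuses(source: str):
    """Return (line_number, description) pairs for matched misuse patterns."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in MISUSE_RULES:
            if pattern.search(line):
                hits.append((lineno, description))
    return hits

snippet = '''
Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");  // misuse: ECB leaks plaintext patterns
Cipher ok = Cipher.getInstance("AES/GCM/NoPadding");    // fine
'''
```

A real checker would of course work on bytecode or an intermediate representation rather than regex-matching source lines.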

    Mining Android Crash Fixes in the Absence of Issue- and Change-Tracking Systems

    Android apps are prone to crashes. These often arise from the misuse of Android framework APIs and are hard to debug, since the official Android documentation does not thoroughly discuss potential exceptions. Recently, the program repair community has also started to investigate the possibility of fixing crashes automatically. Current results, however, apply to limited example cases. In both repair scenarios, the main issue is the need for more example data to drive the fix process, given the high cost in time and effort of collecting and identifying fix examples. In this work, we propose a scalable approach, CraftDroid, to mine crash fixes by leveraging a set of 28 thousand carefully reconstructed app lineages from app markets, without the need for app source code or issue reports. We developed a replicative testing approach that locates fixes among app versions that output different runtime logs for the exact same test inputs. Overall, we have mined 104 relevant crash fixes and further abstracted 17 fine-grained fix templates that are demonstrated to be effective for patching crashed apks. Finally, we release ReCBench, a benchmark consisting of 200 crashed apks and the crash replication scripts, which the community can explore for evaluating patches generated for crash-inducing bugs.
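The replicative-testing idea can be sketched roughly as follows: run the exact same scripted inputs on consecutive versions of an app lineage, and flag a candidate fix wherever a crash signature present in one version's runtime log disappears in the next. All names and the log format below are our simplified assumptions, not CraftDroid's actual implementation:

```python
import re

# Toy crash-signature extractor over logcat-style output (simplified format).
CRASH_RE = re.compile(r"FATAL EXCEPTION.*?\n\s*(\S+Exception[^\n]*)", re.DOTALL)

def crash_signatures(logcat: str) -> set:
    """Collect exception signatures from a runtime log."""
    return set(CRASH_RE.findall(logcat))

def locate_fixes(lineage_logs):
    """lineage_logs: list of (version, logcat) pairs in release order,
    all produced with the exact same test inputs. Returns
    (old_version, new_version, signature) triples where a crash vanished."""
    fixes = []
    for (v_old, log_old), (v_new, log_new) in zip(lineage_logs, lineage_logs[1:]):
        for sig in crash_signatures(log_old) - crash_signatures(log_new):
            fixes.append((v_old, v_new, sig))
    return fixes

log_old = "I/app: start\nFATAL EXCEPTION: main\n java.lang.NullPointerException: null view\n"
log_new = "I/app: start\nI/app: done\n"
```

Holding the test inputs fixed is what makes the log difference attributable to a code change rather than to input nondeterminism.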

    App Review Driven Collaborative Bug Finding

    Software development teams generally welcome any effort to expose bugs in their code base. In this work, we build on the hypothesis that mobile apps from the same category (e.g., two web browser apps) may be affected by similar bugs during their evolution. It is therefore possible to transfer the experience of one historical app to quickly find bugs in its new counterparts. This has been referred to as collaborative bug finding in the literature. Our novelty is that we guide the bug finding process by considering that existing bugs have been hinted at within app reviews. Concretely, we design the BugRMSys approach to recommend bug reports for a target app by matching historical bug reports from apps in the same category with user reviews of the target app. We experimentally show that this approach enables us to quickly expose and report dozens of bugs for targeted apps such as Brave (a web browser app). BugRMSys's implementation relies on DistilBERT to produce natural language text embeddings. Our pipeline considers similarities between bug reports and app reviews to identify relevant bugs. We then rely on the app review, as well as potential reproduction steps in the historical bug report (from a same-category app), to reproduce the bugs. Overall, after applying BugRMSys to six popular apps, we were able to identify, reproduce and report 20 new bugs: among these, 9 reports have already been triaged, 6 were confirmed, and 4 have been fixed by official development teams.
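A toy version of the matching step might look like this. The real pipeline embeds text with DistilBERT; this sketch substitutes bag-of-words vectors purely to illustrate how historical bug reports are ranked against a target-app review (function names are ours, not BugRMSys's):

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; a stand-in for a real embedding model's input prep."""
    return re.findall(r"[a-z]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_bug_reports(review: str, historical_reports):
    """Rank same-category bug reports by similarity to a target-app review."""
    rv = Counter(tokenize(review))
    scored = [(cosine(rv, Counter(tokenize(r))), r) for r in historical_reports]
    return [r for score, r in sorted(scored, reverse=True)]
```

With dense embeddings instead of term counts, the same cosine ranking captures paraphrases (e.g. "crashes" vs "force closes") that bag-of-words misses.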

    Anchor: Locating Android Framework-specific Crashing Faults

    Android framework-specific app crashes are hard to debug. Indeed, the callback-based, event-driven mechanism of Android challenges crash localization techniques developed for traditional Java programs. The key challenge stems from the fact that the buggy code location may not even be listed within the stack trace. For example, our empirical study on 500 framework-specific crashes from an open benchmark has revealed that 37 percent of the crash types are related to bugs that lie outside the stack traces. Moreover, Android programs are a mixture of code and extra-code artifacts such as the Manifest file. The fact that any artifact can lead to failures in the app execution creates the need to position the localization target beyond the code realm. In this paper, we propose Anchor, a two-phase tool that suggests suspicious bug locations. Anchor specializes in finding crash-inducing bugs outside the stack trace. Anchor is lightweight and source code independent, since it only requires the crash message and the apk file to locate the fault. Experimental results, collected via cross-validation and in-the-wild dataset evaluation, show that Anchor is effective in locating Android framework-specific crashing faults.
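A loose sketch of such a two-phase strategy is shown below; it is our own simplification for illustration, not Anchor's actual algorithm. Phase 1 keeps developer frames found in the trace; phase 2 falls back to scoring app classes outside the trace against the crash message:

```python
# Frames starting with these prefixes are treated as framework code, not app code.
FRAMEWORK_PREFIXES = ("android.", "androidx.", "java.", "kotlin.", "com.android.")

def is_developer_frame(frame: str) -> bool:
    return not frame.startswith(FRAMEWORK_PREFIXES)

def phase1_candidates(stack_frames):
    """Phase 1: developer frames present in the trace, crash-nearest first."""
    return [f for f in stack_frames if is_developer_frame(f)]

def phase2_candidates(crash_message, app_classes):
    """Phase 2 (bug outside the trace): rank app classes by keyword overlap
    with the crash message (a crude proxy for a learned relevance score)."""
    words = set(crash_message.lower().replace(".", " ").split())
    return sorted(app_classes,
                  key=lambda c: -len(words & set(c.lower().split("."))))

def locate(crash_message, stack_frames, app_classes):
    """Fall back to phase 2 only when no developer frame appears in the trace."""
    return phase1_candidates(stack_frames) or phase2_candidates(crash_message, app_classes)

framework_only = ["android.view.View.performClick", "android.os.Looper.loop"]
```

The fallback structure mirrors the study's key finding: when the trace contains no developer frame at all, the search space must widen to the whole app, including extra-code artifacts.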

    A First Look at Security Risks of Android TV Apps

    In this paper, we present to the community the first preliminary study on the security risks of Android TV apps. To the best of our knowledge, despite the various efforts that have been put into analyzing Android apps, our community has not explored the TV versions of Android apps, and there is hence no publicly available dataset containing Android TV apps. To this end, we start by collecting a large set of Android TV apps from the official Google Play store. We then experimentally examine those apps from four security aspects: VirusTotal scans, requested permissions, security flaws, and privacy leaks. Our experimental results reveal that, like Android smartphone apps, Android TV apps can come with various security issues. We hence argue that our community should pay more attention to analyzing Android TV apps.
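One of the four aspects, requested permissions, can be illustrated with a minimal sketch: extract the permissions declared in a decoded AndroidManifest.xml and flag the dangerous ones. The dangerous-permission list here is a small illustrative subset of Android's official list, and the helper names are ours:

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"

# Illustrative subset; Android defines many more dangerous-level permissions.
DANGEROUS = {
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
}

def requested_permissions(manifest_xml: str):
    """List android:name values of all <uses-permission> elements."""
    root = ET.fromstring(manifest_xml)
    return [e.get(f"{{{ANDROID_NS}}}name") for e in root.iter("uses-permission")]

def dangerous_permissions(manifest_xml: str):
    return sorted(set(requested_permissions(manifest_xml)) & DANGEROUS)

manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.RECORD_AUDIO"/>
</manifest>"""
```

In practice the manifest inside an apk is binary-encoded and must first be decoded (e.g. with an apk decoding tool) before such XML parsing applies.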

    Should You Consider Adware as Malware in Your Study?

    Empirical validations of research approaches eventually require a curated ground truth. In studies related to Android malware, such a ground truth is built by leveraging Anti-Virus (AV) scanning reports, which are often provided for free through online services such as VirusTotal. Unfortunately, these reports do not offer precise information for appropriately and uniquely assigning classes to samples in app datasets: AV engines indeed have no consensus on the information specified in labels. Furthermore, labels often mix information related to families, types, etc. In particular, the notion of “adware” is currently blurry when it comes to maliciousness. There is thus a need to thoroughly investigate cases where adware samples can actually be associated with malware (e.g., because they are tagged as adware but could be considered as malware as well). In this work, we present a large-scale analytical study of Android adware samples to quantify to what extent “adware should be considered as malware”. Our analysis is based on the AndroZoo repository of 5 million apps with associated AV labels, and leverages a state-of-the-art label harmonization tool to infer the malicious type of each app before confronting it with the ad families that each adware app is associated with. We found that all adware families include samples that are actually known to implement specific malicious behavior types, and up to 50% of the samples in an ad family could be flagged as malicious. Overall, the study demonstrates that adware is not necessarily benign.
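The core measurement can be sketched as follows, assuming each sample carries an ad family name and a (possibly empty) set of harmonized malicious types; the sample data below is invented for illustration:

```python
from collections import defaultdict

def malicious_share_per_family(samples):
    """samples: iterable of (ad_family, set_of_malicious_types).
    Returns, per ad family, the fraction of its samples that also carry
    at least one harmonized malicious type."""
    total = defaultdict(int)
    flagged = defaultdict(int)
    for family, mal_types in samples:
        total[family] += 1
        if mal_types:
            flagged[family] += 1
    return {family: flagged[family] / total[family] for family in total}

# Invented toy dataset: family names and type labels are illustrative only.
samples = [
    ("adfamily_a", {"trojan"}),
    ("adfamily_a", set()),
    ("adfamily_b", set()),
    ("adfamily_b", {"spyware", "trojan"}),
    ("adfamily_b", set()),
]
```

The real study computes this over millions of AndroZoo samples after running a label harmonization tool over raw AV labels; the counting itself is this simple.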

    Automated Testing of Android Apps: A Systematic Literature Review

    Automated testing of Android apps is essential for app users, app developers and market maintainer communities alike. Given the widespread adoption of Android and the specificities of its development model, the literature has proposed various testing approaches for ensuring that not only functional requirements but also non-functional requirements are satisfied. In this paper, we aim at providing a clear overview of the state-of-the-art works around the topic of Android app testing, in an attempt to highlight the main trends, pinpoint the main methodologies applied, and enumerate the challenges faced by Android testing approaches, as well as the directions where community effort is still needed. To this end, we conduct a Systematic Literature Review (SLR) during which we eventually identified 103 relevant research papers published in leading conferences and journals until 2016. Our thorough examination of the relevant literature has led to several findings and highlighted the challenges that Android testing researchers should strive to address in the future. We further propose a few concrete research directions where testing approaches are needed to solve recurrent issues in app updates, the continuous increase of app sizes, and the fragmentation of the Android ecosystem.

    Taming Android App Crashes

    App crashes constitute an important deterrent to app adoption in the Android ecosystem. Yet, Android app developers are challenged by the limitations of test automation tools when trying to ensure that released apps are free from crashes. In recent years, researchers have proposed various automation approaches in the literature. Unfortunately, the practical value of these approaches has not yet been confirmed by practitioner adoption. Furthermore, existing approaches target a variety of test needs that are relevant to different sets of problems, without being specific to app crashes. Resolving app crashes implies a chain of actions starting with their reproduction, followed by the associated fault localization, before any repair can be attempted. Each action, however, is challenged by the specificities of Android. In particular, some specific mechanisms (e.g., callback methods, multiple entry points, etc.) of Android apps require Android-tailored crash-inducing bug locators. Therefore, to tame Android app crashes, practitioners need automation tools adapted to the challenges these crashes pose. In this respect, a number of building blocks must be designed to deliver a comprehensive toolbox. First, the community lacks well-defined, large-scale datasets of reproducible real-world app crashes that would enable the inference of valuable insights and facilitate experimental validations of literature approaches. Second, although bug localization from crash information is relatively mature in the realm of Java, state-of-the-art techniques are generally ineffective for Android apps due to the specificities of the Android system. Third, given the recurrence of crashes and the substantial burden they impose on practitioners, there is a need for methods and techniques to accelerate fixing, for example towards Automated Program Repair (APR). Finally, the above chain of actions is for curative purposes.
Indeed, this "reproduction, localization, and repair" chain aims at correcting bugs in released apps. Preventive approaches, i.e., approaches that help developers reduce the likelihood of releasing crashing apps, are still absent. In the Android ecosystem, developers are challenged by the lack of detailed documentation about the complex Android framework APIs they use to develop their apps. For example, developers need support for precisely identifying which exceptions may be triggered by which APIs. Such support can further alleviate the fact that the conditions under which these exceptions are triggered are often not documented. In this context, the present dissertation aims to tame Android crashes by contributing the following four building blocks:

Systematic literature review on automated app testing approaches: We aim at providing a clear overview of the state-of-the-art works around the topic of Android app testing, in an attempt to highlight the main trends, pinpoint the main methodologies applied, and enumerate the challenges faced by Android testing approaches, as well as the directions where community effort is still needed. To this end, we conduct a Systematic Literature Review (SLR) during which we eventually identified 103 relevant research papers published in leading conferences and journals until 2016. Our thorough examination of the relevant literature has led to several findings and highlighted the challenges that Android testing researchers should strive to address in the future. We further propose a few concrete research directions where testing approaches are needed to solve recurrent issues in app updates, the continuous increase of app sizes, and the fragmentation of the Android ecosystem.

Locating Android app crash-inducing bugs: We perform an empirical study on 500 framework-specific crashes from an open benchmark. This study reveals that 37 percent of the crash types are related to bugs that lie outside the crash stack traces. Moreover, Android programs are a mixture of code and extra-code artifacts such as the Manifest file. The fact that any artifact can lead to failures in the app execution creates the need to position the localization target beyond the code realm. We propose ANCHOR, a two-phase tool that suggests suspicious bug locations. ANCHOR specializes in finding crash-inducing bugs outside the stack trace. ANCHOR is lightweight and source code independent, since it only requires the crash message and the apk file to locate the fault. Experimental results, collected via cross-validation and in-the-wild dataset evaluation, show that ANCHOR is effective in locating Android framework-specific crashing faults.

Mining Android app crash fix templates: We propose a scalable approach, CraftDroid, to mine crash fixes by leveraging a set of 28 thousand carefully reconstructed app lineages from app markets, without the need for app source code or issue reports. We develop a replicative testing approach that locates fixes among app versions that output different runtime logs for the exact same test inputs. Overall, we have mined 104 relevant crash fixes and further abstracted 17 fine-grained fix templates that are demonstrated to be effective for patching crashed apks. Finally, we release ReCBench, a benchmark consisting of 200 crashed apks and the crash replication scripts, which the community can explore for evaluating patches generated for crash-inducing bugs.

Documenting framework APIs' unchecked exceptions: We propose Afuera, an automated tool that profiles Android framework APIs and provides information on when they can potentially trigger unchecked exceptions. Afuera relies on a static-analysis approach and a dedicated algorithm to examine the entire Android framework. With Afuera, we confirmed that 26 739 unique unchecked exception instances may be triggered by invoking 5 467 (24%) of the Android framework APIs. Afuera further analyzes the Android framework to determine which parameter(s) of an API method can potentially cause an unchecked exception to be triggered. To that end, Afuera relies on fully automated instrumentation and taint analysis techniques. We ran Afuera on 50 randomly sampled APIs to demonstrate its effectiveness. Evaluation results suggest that Afuera has a perfect true positive rate; however, it suffers from false negatives due to the limitations of state-of-the-art taint analysis techniques.
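The underlying idea of the exception-profiling step can be sketched as a reachability walk over the framework call graph, collecting every unchecked exception throwable from an API entry point. The method names and graph below are hypothetical toy data, not Afuera's actual analysis, which works on the real framework bytecode:

```python
def reachable_unchecked_exceptions(api, call_graph, throws):
    """Collect unchecked exceptions reachable from an API entry point.
    call_graph: method -> list of callee methods.
    throws:     method -> exception names thrown directly in that method."""
    seen, stack, found = set(), [api], set()
    while stack:
        method = stack.pop()
        if method in seen:
            continue
        seen.add(method)
        found |= set(throws.get(method, ()))
        stack.extend(call_graph.get(method, ()))
    return found

# Hypothetical mini call graph: the public API delegates to an internal check
# that throws an unchecked exception on bad input.
call_graph = {
    "Widget.setLabel": ["Widget.checkInput"],
    "Widget.checkInput": [],
}
throws = {"Widget.checkInput": ["java.lang.NullPointerException"]}
```

The hard parts Afuera tackles on top of this idea, such as resolving virtual dispatch and tracing which parameter reaches the throwing condition, are what require its instrumentation and taint analysis machinery.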