
    Android security: analysis and applications

    The Android mobile system is home to millions of apps that offer a wide range of functionalities. Users rely on Android apps in various facets of daily life, including critical, e.g., medical, settings. Generally, users trust that apps perform their stated purpose safely and accurately. However, despite the platform’s efforts to maintain a safe environment, apps routinely manage to evade scrutiny. This dissertation analyzes Android app behavior and reveals several weaknesses: lapses in device authentication schemes, deceptive practices such as apps covering their traces, and behavioral and descriptive inaccuracies in medical apps. Examining a large corpus of applications reveals that suspicious behavior is often the result of lax oversight and can occur without an explicit intent to harm users. Nevertheless, flawed app behavior is present, and it is especially problematic in apps that perform critical tasks. Additionally, manufacturers’ and app developers’ claims often do not mirror actual functionality, as we reveal in our study of LG’s Knock Code authentication scheme, and as evidenced by the removal of medical apps from Google Play due to overstated functionality claims. This dissertation makes the following contributions: (1) quantifying the security of LG’s Knock Code authentication method; (2) defining the deceptive practices of self-hiding app behavior found in popular apps; (3) verifying abuses of device administrator features; (4) characterizing the medical app landscape on Google Play; (5) detailing the claimed behaviors and conditions of medical apps using ICD codes and app descriptions; (6) verifying errors in medical score calculator app implementations; and (7) discerning how medical apps should be regulated under applicable regulatory frameworks, based on their behavior and the data they acquire from users.
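
    To make contribution (1) concrete, below is a back-of-the-envelope keyspace estimate for a tap-grid scheme like Knock Code. The grid size and allowed code lengths are illustrative assumptions (a 2x2 grid, 6 to 8 taps), not figures taken from the dissertation; the dissertation's actual contribution concerns human-chosen codes, whose effective security is far weaker than the raw keyspace suggests.

    # Back-of-the-envelope keyspace estimate for a tap-grid authentication
    # scheme such as LG's Knock Code. Grid size and length bounds are
    # illustrative assumptions, not values from the dissertation.

    GRID_REGIONS = 4           # assumed 2x2 tap grid
    MIN_TAPS, MAX_TAPS = 6, 8  # assumed allowed code lengths

    def keyspace(regions: int, min_len: int, max_len: int) -> int:
        """Number of distinct codes: sum of regions**k over allowed lengths."""
        return sum(regions ** k for k in range(min_len, max_len + 1))

    if __name__ == "__main__":
        total = keyspace(GRID_REGIONS, MIN_TAPS, MAX_TAPS)
        print(f"Theoretical keyspace: {total}")  # 86016 under these assumptions
        # For comparison, a 4-digit PIN has 10**4 = 10000 codes; raw keyspace
        # says little once human-chosen codes cluster around popular patterns.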

    After Over-Privileged Permissions: Using Technology and Design to Create Legal Compliance

    Consumers in the mobile ecosystem can putatively protect their privacy through application permissions. However, this requires device owners to understand permissions and their privacy implications. Yet few consumers appreciate the nature of permissions within the mobile ecosystem, often failing to notice the permissions that change when an app is updated. Even more concerning is the lack of understanding of the wide use of third-party libraries, most of which are installed with automatic permissions, that is, permissions that must be granted for the application to function appropriately. Unsurprisingly, many of these third-party permissions violate consumers’ privacy expectations and thereby become “over-privileged” from the user’s perspective. The result is a gap between the privacy practices of the private sector and what the public sector deems appropriate. Despite the growing attention given to privacy in the mobile ecosystem, legal literature has largely ignored the implications of mobile permissions. This article seeks to address that omission by analyzing the impact of mobile permissions and the privacy harms experienced by consumers of mobile applications. The authors call for a review of industry self-regulation and of the overreliance on simple notice and consent. Instead, the authors set out a plan for greater attention to socio-technical solutions, focusing on better privacy protections and technology embedded within the automatic-permission-based application ecosystem.
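
    As a concrete illustration of where these permissions surface, the sketch below enumerates the permissions an app requests by parsing a decoded (plain-text) AndroidManifest.xml, e.g. one produced by apktool; the file path is a placeholder. Note that third-party library permissions are merged into the final manifest at build time, so the merged manifest of a released APK already includes them.

    # Minimal sketch: list the permissions an Android app requests by parsing
    # a decoded (plain-text) AndroidManifest.xml, e.g. as produced by apktool.
    # The file path below is a placeholder.
    import xml.etree.ElementTree as ET

    ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

    def requested_permissions(manifest_path: str) -> list[str]:
        """Return the android:name value of every <uses-permission> element."""
        root = ET.parse(manifest_path).getroot()
        return [el.get(ANDROID_NS + "name", "") for el in root.iter("uses-permission")]

    if __name__ == "__main__":
        for perm in requested_permissions("AndroidManifest.xml"):
            print(perm)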

    Relationship Between Evidence Requirements, User Expectations, and Actual Experiences: Usability Evaluation of the Twazon Arabic Weight Loss App

    Acknowledgments: This research project was supported by a grant from the Research Center of the Female Scientific and Medical Colleges, Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia. Peer reviewed. Publisher PDF.

    Explainable AI for Android Malware Detection: Towards Understanding Why the Models Perform So Well?

    Machine learning (ML)-based Android malware detection has been one of the most popular research topics in the mobile security community. An increasing number of studies have demonstrated that machine learning is an effective and promising approach for malware detection, and some have even claimed that their models achieve 99% detection accuracy, leaving little room for further improvement. However, numerous prior studies have suggested that unrealistic experimental designs introduce substantial biases, resulting in over-optimistic performance. Unlike previous research that examined the detection performance of ML classifiers to locate the causes, this study employs Explainable AI (XAI) approaches to explore what ML-based models learn during training, inspecting and interpreting why ML-based malware classifiers perform so well under unrealistic experimental settings. We discover that temporal sample inconsistency in the training dataset produces over-optimistic classification performance (up to 99% F1 score and accuracy). Importantly, our results indicate that ML models classify malware based on temporal differences between malware and benign samples, rather than on actual malicious behaviors. Our evaluation also confirms that unrealistic experimental designs lead not only to unrealistic detection performance but also to poor reliability, posing a significant obstacle to real-world applications. These findings suggest that XAI approaches should be used to help practitioners and researchers better understand how AI/ML models for malware detection work, rather than focusing solely on accuracy improvement. Comment: Accepted by the 33rd IEEE International Symposium on Software Reliability Engineering (ISSRE 2022).
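
    The temporal-inconsistency finding has a simple practical counterpart: split samples by timestamp rather than at random, so the classifier cannot exploit dataset age. The sketch below is a generic illustration; the field names and cutoff are assumptions, not artifacts from the paper.

    # Minimal sketch of a temporally consistent train/test split: train only on
    # samples observed before a cutoff, test only on later ones. Field names
    # and the cutoff are illustrative assumptions, not taken from the paper.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Sample:
        sha256: str        # app identifier
        first_seen: date   # e.g. dex compile date or market appearance date
        is_malware: bool

    def temporal_split(samples: list[Sample], cutoff: date):
        """Everything observed before `cutoff` trains; the rest tests."""
        train = [s for s in samples if s.first_seen < cutoff]
        test = [s for s in samples if s.first_seen >= cutoff]
        return train, test

    # A random split can mix, say, 2020 malware with 2015 goodware in training,
    # letting a classifier key on dataset age rather than malicious behavior.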

    The Privacy Paradox - investigating people's attitude towards privacy in a time of COVID-19

    The advent of digital technologies used to deal with the COVID-19 global pandemic has raised serious concerns around privacy and security. Despite these concerns and the potential risk of data misuse, including third-party use, countries around the world have pushed the use and proliferation of contact-tracing applications. However, the success of these applications relies on their adoption and use. The well-known phenomenon referred to as the privacy paradox is the discrepancy between users’ expressed privacy concerns and their actual behaviour when it comes to protecting their privacy. In this context, this paper presents a study investigating the privacy paradox during a global pandemic. A national survey was conducted and the data analysed to examine people’s privacy risk perception. The results show inconsistencies between people’s privacy concerns and their actual behaviour, reflected in a shift in their attitude towards sharing their mobile data during a global pandemic. The study also compiles a list of recommendations for policymakers.