
    The Dark Side(-Channel) of Mobile Devices: A Survey on Network Traffic Analysis

    In recent years, mobile devices (e.g., smartphones and tablets) have met with increasing commercial success and have become a fundamental element of everyday life for billions of people around the world. Mobile devices are used not only for traditional communication activities (e.g., voice calls and messages) but also for more advanced tasks made possible by an enormous number of multi-purpose applications (e.g., finance, gaming, and shopping). As a result, these devices generate significant network traffic (a considerable share of the overall Internet traffic). For this reason, the research community has been investigating security and privacy issues related to the network traffic generated by mobile devices, which can be analyzed to obtain information useful for a variety of goals (ranging from device security and network optimization to fine-grained user profiling). In this paper, we review the works that have contributed to the state of the art of network traffic analysis targeting mobile devices. In particular, we present a systematic classification of the works in the literature according to three criteria: (i) the goal of the analysis; (ii) the point where the network traffic is captured; and (iii) the targeted mobile platforms. The capture points we consider include Wi-Fi access points, software simulation, and instrumentation inside real mobile devices or emulators. For the surveyed works, we review and compare analysis techniques, validation methods, and achieved results. We also discuss possible countermeasures, challenges, and directions for future research on mobile traffic analysis and other emerging domains (e.g., the Internet of Things). We believe our survey will serve as a reference work for researchers and practitioners in this field. Comment: 55 pages
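
    The survey distinguishes where traffic is captured (e.g., at a Wi-Fi access point versus on the device itself). As a rough illustration of the access-point scenario only, the following sketch filters sniffed frames down to a single mobile device with Scapy; the interface name and MAC address are placeholders, not values from the survey.

        # Minimal sketch (assumptions: the monitoring host sees the device's traffic
        # on "wlan0"; TARGET_MAC is a placeholder for the monitored device's MAC).
        from scapy.all import sniff
        from scapy.layers.l2 import Ether

        TARGET_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder

        def handle(pkt):
            """Log coarse per-packet features (time, size) for later flow analysis."""
            if pkt.haslayer(Ether) and TARGET_MAC in (pkt[Ether].src, pkt[Ether].dst):
                print(pkt.time, len(pkt), pkt.summary())

        # Capture 100 packets; a real study would write a pcap and extract flow features.
        sniff(iface="wlan0", prn=handle, count=100)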

    Analysis and evaluation of SafeDroid v2.0, a framework for detecting malicious Android applications

    Android smartphones have become a vital component of the daily routine of millions of people, running a plethora of applications available in official and alternative marketplaces. Although there are many security mechanisms to scan and filter malicious applications, malware still manages to reach the devices of many end users. In this paper, we introduce the SafeDroid v2.0 framework, a flexible, robust, and versatile open-source solution for statically analysing Android applications based on machine learning techniques. The main goal of our work, besides the automated production of prediction and classification models with maximum accuracy scores and minimal negative errors, is to offer an out-of-the-box framework that Android security researchers can employ to experiment efficiently and find effective solutions: SafeDroid v2.0 makes it possible to test many different combinations of machine learning classifiers, with a high degree of freedom and flexibility in the choice of features to consider, such as dataset balance and dataset selection. The framework also provides a server for generating experiment reports and an Android application for verifying the produced models in real-life scenarios. An extensive campaign of experiments is also presented to show how competitive solutions can be found efficiently: the results confirm that SafeDroid v2.0 can achieve very good performance, even with highly unbalanced dataset inputs, and always with very limited overhead.
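
    SafeDroid v2.0 is organised around testing many classifier and feature combinations. As a hedged sketch of that general workflow (not the framework's actual code; the classifiers, feature matrix, and scoring metric below are illustrative), one could cross-validate several scikit-learn models over a matrix of static features:

        # Illustrative sketch (not SafeDroid's implementation): compare classifiers
        # on a binary benign/malicious dataset of static Android features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(500, 120))  # placeholder: permission/API-call flags
        y = rng.integers(0, 2, size=500)         # placeholder: 1 = malware, 0 = benign

        classifiers = {
            "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
            "SVM (RBF)": SVC(kernel="rbf"),
            "kNN": KNeighborsClassifier(n_neighbors=5),
        }

        for name, clf in classifiers.items():
            scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
            print(f"{name}: mean F1 = {scores.mean():.3f}")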

    SkillVet: Automated Traceability Analysis of Amazon Alexa Skills

    Third-party software components, or skills, are essential to Smart Personal Assistants (SPA). The number of skills has grown rapidly, shaped by a fast-changing environment with no clear business model. Skills can access personal information, and this may pose a risk to users. However, there is little information about how this ecosystem works, let alone tools that can facilitate its study. In this paper, we present the largest systematic measurement of the Amazon Alexa skill ecosystem to date. We study developers' practices in this ecosystem, including how they collect and justify the need for sensitive information, by designing a methodology to identify over-privileged skills with broken privacy policies. We collect 199,295 Alexa skills and uncover that around 43% of the skills (and 50% of the developers) that request these permissions follow bad privacy practices, including (partially) broken data-permission traceability. To perform this kind of analysis at scale, we present SkillVet, which leverages machine learning and natural language processing techniques and generates high-accuracy prediction sets. We report a number of concerning practices, including how developers can bypass Alexa's permission system through account linking and conversational skills, and offer recommendations on how to improve transparency, privacy, and security. As a result of the responsible disclosure we conducted, 13% of the reported issues no longer pose a threat as of submission time. Comment: 17 pages, 8 figures
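
    At the heart of the analysis is a traceability check: does the privacy policy actually account for each data permission the skill requests? A purely illustrative sketch of such a check follows (the keyword map, permission names, and policy text are assumptions, not SkillVet's model, which relies on machine learning and NLP):

        # Illustrative traceability check (not SkillVet itself): flag requested
        # permissions whose data type is never mentioned in the privacy policy text.
        POLICY_KEYWORDS = {  # hypothetical permission -> keyword map
            "device address": ["address", "location"],
            "email address": ["email"],
            "phone number": ["phone", "mobile number"],
        }

        def broken_traceability(requested_permissions, policy_text):
            """Return permissions with no supporting mention in the policy."""
            text = policy_text.lower()
            return [perm for perm in requested_permissions
                    if not any(kw in text for kw in POLICY_KEYWORDS.get(perm, []))]

        # Example with made-up inputs:
        policy = "We collect your email to send order confirmations."
        print(broken_traceability(["email address", "device address"], policy))
        # -> ['device address']  (requested but never justified in the policy)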

    Android security: analysis and applications

    The Android mobile system is home to millions of apps that offer a wide range of functionalities. Users rely on Android apps in various facets of daily life, including critical (e.g., medical) settings. Generally, users trust that apps perform their stated purpose safely and accurately. However, despite the platform's efforts to maintain a safe environment, apps routinely manage to evade scrutiny. This dissertation analyzes Android app behavior and reveals several weaknesses: lapses in device authentication schemes, deceptive practices such as apps covering their traces, and behavioral and descriptive inaccuracies in medical apps. Examining a large corpus of applications reveals that suspicious behavior is often the result of lax oversight and can occur without an explicit intent to harm users. Nevertheless, flawed app behavior is present and is especially problematic in apps that perform critical tasks. Additionally, manufacturers' and app developers' claims often do not mirror actual functionality, as we reveal in our study of LG's Knock Code authentication scheme and as evidenced by the removal of medical apps from Google Play due to overstated functionality claims. This dissertation makes the following contributions: (1) quantifying the security of LG's Knock Code authentication method; (2) defining the deceptive self-hiding behavior found in popular apps; (3) verifying abuses of device administrator features; (4) characterizing the medical app landscape on Google Play; (5) detailing the claimed behaviors and conditions of medical apps using ICD codes and app descriptions; (6) verifying errors in medical score calculator app implementations; and (7) discerning how medical apps should be regulated within the jurisdiction of regulatory frameworks, based on their behavior and the data they acquire from users.
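
    For contribution (1), the strength of a knock-based code can be bounded by its theoretical keyspace. As a back-of-the-envelope illustration only (the 2x2 grid and the 6-10 tap length range are assumed parameters for this example, not figures taken from the dissertation):

        # Illustrative keyspace estimate under assumed parameters (not the
        # dissertation's measured figures): a 2x2 grid gives 4 choices per tap.
        GRID_POSITIONS = 4          # assumed 2x2 grid
        MIN_LEN, MAX_LEN = 6, 10    # assumed allowed code lengths

        keyspace = sum(GRID_POSITIONS ** n for n in range(MIN_LEN, MAX_LEN + 1))
        print(keyspace)  # 1,396,736 possible codes under these assumptions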

    Detecting Repackaged Android Applications Using Perceptual Hashing

    The last decade has seen steady growth in Android's market-share dominance and the emergence of hundreds of thousands of apps available to the public. Because Android applications are easy to reverse engineer, repackaged malicious apps that clone existing code have become a severe problem in the marketplace. This research proposes a novel repackaging detection system based on perceptual hashes of vetted Android apps and their associated dynamic user interface (UI) behavior. Results show that an average-hash approach produces 88% accuracy (with low false-negative and false-positive rates) on a sample set of 4,878 Android apps, including 2,151 repackaged apps. The approach is the first dynamic method proposed in the research community that uses image-based hashing techniques, with performance comparable to other known dynamic approaches and the potential for practical implementation at scale for new applications entering the Android market.
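
    The detector rests on average-hash (aHash) comparisons of UI screenshots: visually near-identical screens yield hashes within a small Hamming distance. A minimal sketch of that general technique follows (using Pillow; the image paths and distance threshold are illustrative, not the paper's parameters):

        # Minimal average-hash (aHash) sketch; paths and threshold are illustrative.
        from PIL import Image

        def average_hash(path, hash_size=8):
            """Downscale to hash_size x hash_size grayscale; each bit = pixel >= mean."""
            img = Image.open(path).convert("L").resize((hash_size, hash_size))
            pixels = list(img.getdata())
            mean = sum(pixels) / len(pixels)
            return [1 if p >= mean else 0 for p in pixels]

        def hamming_distance(h1, h2):
            return sum(b1 != b2 for b1, b2 in zip(h1, h2))

        # Two screenshots of the "same" UI screen should be close in hash space.
        h_vetted = average_hash("vetted_app_screen.png")
        h_candidate = average_hash("candidate_app_screen.png")
        if hamming_distance(h_vetted, h_candidate) <= 5:  # illustrative threshold
            print("UI screens look near-identical: possible repackaging")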

    Security, Privacy and Steganographic Analysis of FaceApp and TikTok

    Article originally published in the International Journal of Computer Science and Security.
    Smartphone applications (apps) can be addictive for users due to their uniqueness, ease of use, trendiness, and growing popularity. The addition of Artificial Intelligence (AI) to their functionality has rapidly gained popularity with smartphone users. Over the years, very few smartphone apps have gained immense popularity as quickly as FaceApp and TikTok. FaceApp boasts of using AI to transform photos of human faces with its powerful facial recognition capabilities. The ensuing backlash against FaceApp has driven a number of similar yet lesser-known clones into the top ranks of the app stores. TikTok offers editing and sharing of short video clips, making them charming, funny, cringe-inducing, and addictive to the younger generation. FaceApp and TikTok have been targets of the media, privacy watchdogs, and governments over worries about privacy, ethnicity filters, data misuse, anti-forensics, and security. In this paper, the authors forensically review the FaceApp and TikTok apps from the Android Play Store for their data ownership, data management, privacy concerns, steganographic use, and overall security posture.