Measuring third party tracker power across web and mobile
Third-party networks collect vast amounts of data about users via web sites
and mobile applications. Consolidations among tracker companies can
significantly increase their individual tracking capabilities, prompting
scrutiny by competition regulators. Traditional measures of market share, based
on revenue or sales, fail to represent the tracking capability of a tracker,
especially if it spans both web and mobile. This paper proposes a new approach
to measure the concentration of tracking capability, based on the reach of a
tracker on popular websites and apps. Our results reveal that tracker
prominence and parent-subsidiary relationships have a significant impact on
accurately measuring concentration.
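As a rough illustration of the reach-based idea (with made-up tracker names, reach figures, and ownership links, not the paper's data), concentration can be computed from per-tracker reach after folding subsidiaries into their parents:

```python
# Hypothetical sketch: reach-based concentration with parent consolidation.
# All tracker names and reach figures below are invented for illustration.

# Fraction of popular sites/apps on which each tracker is present ("reach")
reach = {"trackerA": 0.60, "trackerB": 0.30, "subA1": 0.20, "trackerC": 0.10}

# Parent-subsidiary relationships consolidate tracking capability
parent_of = {"subA1": "trackerA"}

def consolidated_shares(reach, parent_of):
    """Merge each subsidiary's reach into its parent, then normalize."""
    merged = {}
    for tracker, r in reach.items():
        owner = parent_of.get(tracker, tracker)
        merged[owner] = merged.get(owner, 0.0) + r
    total = sum(merged.values())
    return {t: r / total for t, r in merged.items()}

def hhi(shares):
    # Herfindahl-Hirschman Index: sum of squared percentage shares,
    # a standard concentration measure used by competition regulators
    return sum((100 * s) ** 2 for s in shares.values())

shares = consolidated_shares(reach, parent_of)
print(round(hhi(shares)))
```

Merging the subsidiary roughly doubles trackerA's share, which is exactly the kind of consolidation effect a revenue-based measure would miss.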
Designing a comprehensive security framework for smartphones and mobile devices
This work investigates issues and challenges of cyber security, specifically malware targeting mobile devices. Recent advances in technology have provided mobile devices with high CPU power, large storage, broad bandwidth and integrated peripherals such as Bluetooth, Wi-Fi, and 3G/4G, making them popular computing and communication devices. Mobile malware has been targeting mobile devices more than ever and appears to be shifting from its traditional host, the personal computer, to these more vulnerable victims. In this study, we mainly focus on malware for Android-based mobile devices. We analyze and discuss related malware and identify its trends and challenges. We also present a comprehensive security solution that addresses the threat of malware.
"And all the pieces matter..." Hybrid Testing Methods for Android App's Privacy Analysis
Smartphones have become inherent to the everyday life of billions of people worldwide, and they
are used to perform activities such as gaming, interacting with our peers or working. While extremely
useful, smartphone apps also have drawbacks, as they can affect the security and privacy of users.
Android devices hold a lot of personal data from users, including their social circles (e.g., contacts),
usage patterns (e.g., app usage and visited websites) and their physical location. Like in most software
products, Android apps often include third-party code (Software Development Kits or SDKs) to
add functionality without the need to develop it in-house. Android apps and third-party
components embedded in them are often interested in accessing such data, as the online ecosystem
is dominated by data-driven business models and revenue streams like advertising.
The research community has developed many methods and techniques for analyzing the privacy
and security risks of mobile apps, mostly relying on two techniques: static code analysis and dynamic
runtime analysis. Static analysis analyzes the code and other resources of an app to detect potential
app behaviors. While this makes static analysis easier to scale, it has other drawbacks such as
missing app behaviors when developers obfuscate the app's code to avoid scrutiny. Furthermore,
since static analysis only shows potential app behavior, this needs to be confirmed as it can also
report false positives due to dead or legacy code. Dynamic analysis analyzes the apps at runtime to
provide actual evidence of their behavior. However, these techniques are harder to scale as they need
to be run on an instrumented device to collect runtime data. Similarly, there is a need to stimulate
the app, simulating real inputs to exercise as many code paths as possible. While some
automatic techniques exist to generate synthetic inputs, they have been shown to be insufficient.
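The trade-off between the two techniques can be illustrated with a deliberately simplified sketch (the code snippet, API names, and trace below are invented, and real static analysis parses bytecode rather than matching strings):

```python
# Toy contrast between the two techniques (not any real analysis tool):
# the static pass flags every sensitive call present in the code text,
# while the dynamic pass only records what actually executed.

app_source = """
void onCreate() { maybeTrack(); }
void maybeTrack() { loc = getLastKnownLocation(); }
void legacy() { id = getDeviceId(); }  // dead code: never called
"""

SENSITIVE_APIS = ["getLastKnownLocation", "getDeviceId"]

# Static analysis: every sensitive call in the code is a *potential*
# behavior, including calls hidden in dead or legacy code (false positives)
static_findings = [api for api in SENSITIVE_APIS if api in app_source]

# Dynamic analysis: only calls observed in a (simulated) runtime trace;
# coverage depends on how well the app was exercised
runtime_trace = ["onCreate", "maybeTrack", "getLastKnownLocation"]
dynamic_findings = [api for api in SENSITIVE_APIS if api in runtime_trace]

print(static_findings)   # both APIs flagged, one only via dead code
print(dynamic_findings)  # only the call that actually ran
```

The static pass over-approximates and the dynamic pass under-approximates, which is why the thesis combines them.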
In this thesis, we explore the benefits of combining static and dynamic analysis techniques to
complement each other and reduce their limitations. While most previous work has often relied on
using these techniques in isolation, we combine their strengths in different and novel ways that allow
us to further study different privacy issues on the Android ecosystem. Namely, we demonstrate the
potential of combining these complementary methods to study three inter-related issues:
• A regulatory analysis of parental control apps. We use a novel methodology that relies on
easy-to-scale static analysis techniques to pinpoint potential privacy issues and violations of
current legislation by Android apps and their embedded SDKs. We rely on the results from our
static analysis to inform the way in which we manually exercise the apps, maximizing our ability
to obtain real evidence of these misbehaviors. We study 46 publicly available apps and find
instances of data collection and sharing without consent and insecure network transmissions
containing personal data. We also see that these apps fail to properly disclose these practices
in their privacy policy.
• A security analysis of unauthorized access to permission-protected data without user consent.
We use a novel technique that combines the strengths of static and dynamic analysis, by
first comparing the data sent by applications at runtime with the permissions granted to each
app in order to find instances of potential unauthorized access to permission-protected data.
Once we have discovered the apps that are accessing personal data without permission, we
statically analyze their code to discover the covert and side channels used by apps and SDKs to circumvent the permission system. This methodology allows us to discover apps using
the MAC address as a surrogate for location data, two SDKs using the external storage as a
covert channel to share unique identifiers, and an app using picture metadata to gain unauthorized
access to location data.
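The first, dynamic step of this pipeline can be sketched as a simple cross-check (the permission mapping and app behavior below are invented for illustration):

```python
# Illustrative sketch of the runtime-vs-permissions cross-check; the data
# types, permission mapping, and app behavior are invented for the example.

# Which Android permission guards each data type (simplified mapping)
PERMISSION_FOR = {
    "location": "ACCESS_FINE_LOCATION",
    "mac_address": "ACCESS_WIFI_STATE",
    "imei": "READ_PHONE_STATE",
}

def unauthorized(granted, transmitted):
    """Data types observed on the wire without the permission guarding them."""
    return [d for d in transmitted if PERMISSION_FOR.get(d) not in granted]

# A hypothetical app holding only INTERNET is seen sending the MAC address
# (usable as a location surrogate) and the IMEI: candidates for the covert
# and side channels that a follow-up static pass would then pinpoint in code.
suspicious = unauthorized({"INTERNET"}, ["mac_address", "imei"])
print(suspicious)
```

Only apps flagged by this runtime check need the more expensive static follow-up, which is what makes the combination scale.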
• A novel SDK detection methodology that relies on obtaining signals observed both in the app's
code and static resources and during its runtime behavior. We then rely on a tree structure
together with a confidence-based system to accurately detect SDK presence without the need
for any a priori knowledge, and with the ability to discern whether a given SDK is part of legacy
or dead code. We prove that this novel methodology can discover third-party SDKs with more
accuracy than state-of-the-art tools both on a set of purpose-built ground-truth apps and on a
dataset of 5k publicly available apps.
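A heavily simplified sketch of the idea (invented package names, weights, and threshold; flat prefix matching stands in for the actual tree structure):

```python
# Hypothetical sketch of combined static/runtime SDK scoring. The package
# prefixes, weights, and threshold are invented, and flat prefix matching
# stands in for the tree structure used by the actual methodology.

SDK_PREFIXES = {
    "com.adnet.sdk": "AdNet",     # hypothetical advertising SDK
    "com.crashkit": "CrashKit",   # hypothetical crash-reporting SDK
}

def detect_sdks(static_classes, runtime_classes, threshold=0.5):
    detected = {}
    for prefix, sdk in SDK_PREFIXES.items():
        in_code = any(c.startswith(prefix) for c in static_classes)
        at_runtime = any(c.startswith(prefix) for c in runtime_classes)
        # static-only evidence may just be dead/legacy code, so it is
        # weighted below the detection threshold on its own
        score = 0.4 * in_code + 0.6 * at_runtime
        if score >= threshold:
            detected[sdk] = score
    return detected

static_classes = ["com.adnet.sdk.Banner", "com.crashkit.Reporter", "com.app.Main"]
runtime_classes = ["com.app.Main", "com.adnet.sdk.Banner"]
# CrashKit appears in the code but never runs: likely legacy/dead code
print(detect_sdks(static_classes, runtime_classes))
```

Combining the two signal sources is what lets the method separate SDKs that are merely bundled from SDKs that actually execute.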
With these three case studies, we are able to highlight the benefits of combining static and dynamic
analysis techniques for the study of the privacy and security guarantees and risks of Android
apps and third-party SDKs. The use of these techniques in isolation would not have allowed us to
deeply investigate these privacy issues, as we would lack the ability to provide real evidence of potential
breaches of legislation, to pinpoint the specific ways in which apps leverage covert and side
channels to break Android's permission system, or to adapt to an ever-changing
ecosystem of Android third-party companies.

The works presented in this thesis were partially funded within the framework of the following projects
and grants:
• European Union's Horizon 2020 Innovation Action program (Grant Agreement No. 786741,
SMOOTH Project, and Grant Agreement No. 101021377, TRUST AWARE Project).
• Spanish Government ODIO NºPID2019-111429RB-C21/PID2019-111429RBC22.
• The Spanish Data Protection Agency (AEPD)
• AppCensus Inc.

This work has been supported by IMDEA Networks Institute. Programa de Doctorado en Ingeniería Telemática, Universidad Carlos III de Madrid. Presidente: Srdjan Matic. Secretario: Guillermo Suárez-Tangil. Vocal: Ben Stoc
Improving Android app security and privacy with developers
Existing research has uncovered many security vulnerabilities in Android applications (apps) caused by inexperienced and unmotivated developers. In particular, the lack of tool support makes it hard for developers to avoid common security and privacy problems in Android apps. As a result, this leads to apps with security vulnerabilities that expose end users to a multitude of attacks. This thesis presents a line of work that studies and supports Android developers in writing more secure code. We first studied to which extent tool support can help developers create more secure applications. To this end, we developed and evaluated an Android Studio extension that identifies common security problems of Android apps and provides developers with suggestions for more secure alternatives. Subsequently, we focused on the issue of outdated third-party libraries in apps, which is also the root cause of a variety of security vulnerabilities. We therefore analyzed all popular third-party libraries in the Android ecosystem and provided developers with feedback and guidance, in the form of tool support in their development environment, to fix such security problems. In the second part of this thesis, we empirically studied and measured the impact of user reviews on the evolution of app security and privacy. We built a review classifier to identify security- and privacy-related reviews and performed regression analysis to measure their impact on the evolution of security and privacy in Android apps. Based on our results, we propose several suggestions to improve the security and privacy of Android apps by leveraging user feedback to create incentives for developers to improve their apps.
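The review-classification step can be sketched with a minimal keyword heuristic (the term list and sample reviews are invented; the thesis's classifier is a trained model, not this heuristic):

```python
import re

# Terms whose presence flags a review as security/privacy related
# (an invented list; a real classifier would be trained on labeled reviews)
SECURITY_PRIVACY_TERMS = {"privacy", "permission", "tracking", "data", "secure"}

def is_security_privacy_review(text):
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & SECURITY_PRIVACY_TERMS)

reviews = [
    "Great game, love the graphics",
    "Why does this app need my location permission?",
    "Too many ads tracking me everywhere",
]
flagged = [r for r in reviews if is_security_privacy_review(r)]
# the share of such reviews per app version is then a candidate regressor
# for measuring the app's security/privacy evolution
print(len(flagged))
```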
Understanding and measuring privacy violations in Android apps
Increasing data collection and tracking of consumers by today's online services is becoming a major problem for individuals' rights. It raises a serious question about whether such data collection can be legally justified under legislation around the globe. Unfortunately, the community lacks insight into such violations in the mobile ecosystem. In this dissertation, we approach these problems by presenting a line of work that provides a comprehensive understanding of privacy violations in Android apps in the wild and automatically measures such violations at scale. First, we build an automated tool that detects unexpected data access based on user perception when interacting with the apps' user interface. Subsequently, we perform a large-scale study on Android apps to understand how prevalent violations of GDPR's explicit consent requirement are in the wild. Finally, since no study has yet systematically analyzed the consent notices currently implemented in mobile apps and whether they conform to GDPR, we propose a mostly automated and scalable approach to identify the current practices of implemented consent notices. We then develop an automatic tool that detects data sent out to the Internet under different consent conditions. Our results show the urgent need for more transparent user interface designs to better inform users of data access, and call for new tools to support app developers in this endeavor.
On the Security and Privacy Challenges in Android-based Environments
In the last decade, we have faced the rise of mobile devices as a fundamental tool in our everyday life.
Currently, there are over 6 billion smartphones, and 72% of them are Android devices.
The functionality of smartphones is enriched by mobile apps, through which users can perform operations that in the past were possible only on desktop or laptop computers.
Moreover, users heavily rely on smartphones to store even their most privacy-sensitive information.
However, apps often do not satisfy all minimum security requirements and can be targeted to indirectly attack other devices managed by or connected to them (e.g., IoT nodes), which may perform sensitive operations such as running health checks, controlling a smart car, or opening a smart lock.
This thesis discusses some research activities carried out to enhance the security and privacy of mobile apps by i) proposing novel techniques to detect and mitigate security vulnerabilities and privacy issues, and ii) defining techniques devoted to the security evaluation of apps interacting with complex environments (e.g., mobile-IoT-Cloud).
In the first part of this thesis, I focused on the security and privacy of Mobile Apps. Due to the widespread adoption of mobile apps, it is relatively straightforward for researchers or users to quickly retrieve the app that matches their tastes, as Google provides a reliable search engine. However, it is likewise almost impossible to select apps according to a security footprint (e.g., all apps that enforce SSL pinning).
To overcome this limitation, I present APPregator, a platform that allows users to select apps according to a specific security footprint.
This tool aims to implement state-of-the-art static and dynamic analysis techniques for mobile apps and provide security researchers and analysts with a tool that makes it possible to search for mobile applications under specific functional or security requirements.
Regarding the security status of apps, I studied a particular class of mobile apps: hybrid apps, which combine web technologies with native technologies (i.e., Java or Kotlin). In this context, I studied a vulnerability that affects only hybrid apps: Frame Confusion.
This vulnerability, despite being discovered several years ago, is still very widespread.
I proposed a methodology implemented in FCDroid that exploits static and dynamic analysis techniques to detect and trigger the vulnerability automatically.
The results of an extensive analysis carried out through FCDroid on a set of the most downloaded apps from the Google Play Store prove that 6.63% (i.e., 1637/24675) of hybrid apps are potentially vulnerable to Frame Confusion.
A side effect of the analysis carried out through APPregator suggested that very few apps may have a privacy policy, even though the Google Play Store imposes strict rules about it, contained in the Google Play Privacy Guidelines.
To empirically verify if that was the case, I proposed a methodology based on the combination of static analysis, dynamic analysis, and machine learning techniques.
The proposed methodology verifies whether each app contains a privacy policy compliant with the Google Play Privacy Guidelines, and if the app accesses privacy-sensitive information only upon the acceptance of the policy by the user.
I then implemented the methodology in a tool, 3PDroid, and evaluated a number of recent and most downloaded Android apps in the Google Play Store.
Experimental results suggest that over 95% of apps access sensitive user privacy information, but only a negligible subset (~1%) fully complies with the Google Play Privacy Guidelines.
Furthermore, the obtained results also suggest that user privacy can be put at risk by mobile apps that keep collecting a plethora of information regarding the user's and the device's behavior by relying on third-party analytics libraries.
However, collecting and using such data raises several privacy concerns, mainly because the end user - i.e., the actual data owner - is out of the loop in this collection process. The existing privacy-enhancing solutions that have emerged in recent years follow an "all or nothing" approach, leaving the user the sole option to accept or completely deny access to privacy-related data.
To overcome the current state-of-the-art limitations, I proposed a data anonymization methodology, called MobHide, that provides a compromise between the usefulness and privacy of the data collected and gives the user complete control over the sharing process.
To evaluate the methodology, I implemented it in a prototype called HideDroid and tested it on the 4,500 most-used Android apps in the Google Play Store between November 2020 and January 2021.
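As a hedged illustration of the utility/privacy compromise such an approach targets (this is generic randomized response, not MobHide's actual algorithm; the event names and probabilities are invented):

```python
import random

def randomized_response(true_value, domain, p_truth=0.75, rng=random):
    """Report the true value with probability p_truth, else a random one."""
    if rng.random() < p_truth:
        return true_value
    return rng.choice(domain)

# Hypothetical analytics events an app might report to a third-party library
EVENTS = ["app_open", "level_complete", "purchase", "crash"]

rng = random.Random(42)  # fixed seed for a reproducible demonstration
reports = [randomized_response("purchase", EVENTS, 0.75, rng) for _ in range(1000)]

# "purchase" stays the most frequent report, so aggregate statistics remain
# useful, while any single report gives the user plausible deniability
print(max(set(reports), key=reports.count))
```

Tuning `p_truth` is the knob that trades data usefulness against privacy, mirroring the compromise the methodology aims for.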
In the second part of this thesis, I extended privacy and security considerations outside the boundary of the single mobile device.
In particular, I focused on two scenarios.
The first consists of an IoT device and a mobile app that integrate closely to perform specific actions.
From a security standpoint, this leads to a novel and unprecedented attack surface.
To deal with such threats, applying state-of-the-art security analysis techniques on each paradigm can be insufficient.
I claimed that novel analysis methodologies able to systematically analyze the ecosystem as a whole must be put forward.
To this aim, I introduced the idea of APPIoTTe, a novel approach to the security testing of Mobile-IoT hybrid ecosystems, as well as some notes on its implementation working on Android (Mobile) and Android Things (IoT) applications.
The second scenario is composed of an IoT device widespread in the Smart Home environment: the Smart Speaker.
Smart speakers are used to retrieve information, interact with other devices, and command various IoT nodes. To this aim, smart speakers typically take advantage of cloud architectures: the user's vocal commands are sampled, sent through the Internet to be processed, and transmitted back for local execution, e.g., to activate an IoT device.
Unfortunately, even if privacy and security are enforced through state-of-the-art encryption mechanisms, the features of the encrypted traffic, such as the throughput, the size of protocol data units, or the IP addresses, can leak critical information about the users' habits.
From this perspective, I showcase this kind of risk by exploiting machine learning techniques to develop black-box models that classify traffic and automatically implement privacy-leaking attacks.
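A toy version of such a traffic-analysis attack (fabricated feature values and command labels; the thesis's models are black-box ML classifiers, here replaced by a simple nearest-centroid rule):

```python
# Toy illustration of the traffic-analysis risk: even with payloads encrypted,
# coarse features such as mean packet size and packet rate can separate voice
# commands. All feature values and labels below are fabricated.

# (mean_packet_size_bytes, packets_per_second) samples per voice command
training = {
    "turn on lights": [(120, 8), (130, 9), (125, 8)],
    "play music":     [(900, 30), (880, 28), (910, 31)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {label: centroid(pts) for label, pts in training.items()}

def classify(sample):
    # nearest-centroid rule on the two observable traffic features
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda lbl: dist(sample, CENTROIDS[lbl]))

print(classify((128, 9)))
```

Even this crude rule recovers the command class from metadata alone, which is the leak the encrypted channel does not prevent.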