16,175 research outputs found
Demystifying Privacy Policy of Third-Party Libraries in Mobile Apps
The privacy of personal information has received significant attention in
mobile software. Although previous researchers have designed some methods to
identify the conflict between app behavior and privacy policies, little is
known about investigating regulation requirements for third-party libraries
(TPLs). Regulators have enacted multiple regulations governing the use of
personal information by TPLs (e.g., the "California Consumer Privacy Act"
requires businesses to clearly notify consumers whether they share consumers'
data with third parties). However, it remains challenging to analyze the legality
of TPLs for three reasons: 1) TPLs are mainly published on public
repositories instead of app markets (e.g., Google Play). The public repositories
do not perform privacy compliance analysis for each TPL. 2) TPLs provide only
independent functions or function sequences; they cannot run on their own,
which limits the applicability of dynamic analysis. 3) Since not all
the functions of TPLs are related to user privacy, we must locate the functions
of TPLs that access/process personal information before performing privacy
compliance analysis. To overcome the above challenges, in this paper, we
propose an automated system named ATPChecker to analyze whether Android
TPLs meet privacy-related regulations. Our findings remind developers to
be mindful of TPL usage when developing apps or writing privacy policies, to
avoid violating regulations
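The first step the abstract describes, locating TPL functions that access or process personal information, can be illustrated with a minimal static scan. The API list, method names, and matching logic below are hypothetical simplifications for illustration, not ATPChecker's actual technique:

```python
# Minimal sketch: flag TPL methods that call privacy-sensitive Android APIs.
# The API names here are real Android calls, but the mapping and the
# example method inventory are illustrative, not ATPChecker's approach.
SENSITIVE_APIS = {
    "getDeviceId": "device identifier",
    "getLastKnownLocation": "location",
    "getAccountsByType": "account data",
}

def find_privacy_functions(tpl_methods):
    """tpl_methods: dict mapping method name -> list of called API names.
    Returns only the methods that touch a privacy-sensitive API."""
    hits = {}
    for method, calls in tpl_methods.items():
        matched = [SENSITIVE_APIS[c] for c in calls if c in SENSITIVE_APIS]
        if matched:
            hits[method] = matched
    return hits

# Hypothetical TPL method inventory, e.g. extracted from decompiled code.
methods = {
    "com.ads.Sdk.init": ["getDeviceId", "loadConfig"],
    "com.ads.Sdk.render": ["drawFrame"],
}
print(find_privacy_functions(methods))  # {'com.ads.Sdk.init': ['device identifier']}
```

Only the methods flagged this way would then need privacy compliance analysis, which is the filtering the abstract motivates.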
JavaScript Dead Code Identification, Elimination, and Empirical Assessment
Web apps are built by using a combination of HTML, CSS, and JavaScript. While
building modern web apps, it is common practice to use third-party
libraries and frameworks, so as to improve developers' productivity and code
quality. Alongside these benefits, the adoption of such libraries results in
the introduction of JavaScript dead code, i.e., code implementing unused
functionalities. The cost of downloading and parsing dead code adds to the
loading time and resource usage of web apps. The goal of our
study is two-fold. First, we present Lacuna, an approach for automatically
detecting and eliminating JavaScript dead code from web apps. The proposed
approach supports both static and dynamic analyses, is extensible, and can be
applied to any JavaScript code base without imposing constraints on the coding
style or on the use of specific JavaScript constructs. Second, by leveraging
Lacuna we conduct an experiment to empirically evaluate the run-time overhead
of JavaScript dead code in terms of energy consumption, performance, network
usage, and resource usage in the context of mobile web apps. We applied Lacuna
four times on 30 mobile web apps independently developed by third-party
developers, each time eliminating dead code according to a different
optimization level provided by Lacuna. Afterward, each different version of the
web app is executed on an Android device, while collecting measures to assess
the potential run-time overhead caused by dead code. Experimental results,
among others, highlight that the removal of JavaScript dead code has a positive
impact on the loading time of mobile web apps, while significantly reducing the
number of bytes transferred over the network
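Dead-code detection of the kind Lacuna performs is typically framed as a reachability problem over a call graph: anything not reachable from the app's entry points is a candidate for elimination. A minimal sketch, with a hypothetical call graph rather than Lacuna's actual analysis:

```python
# Sketch of dead-code identification via call-graph reachability.
# The graph and entry points are invented for illustration.
def reachable(call_graph, entry_points):
    """Return the set of functions reachable from the entry points."""
    seen, stack = set(), list(entry_points)
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(call_graph.get(fn, []))
    return seen

call_graph = {
    "main": ["render", "track"],
    "render": ["layout"],
    "unusedHelper": ["layout"],  # never called from an entry point
}
live = reachable(call_graph, ["main"])
dead = set(call_graph) - live
print(sorted(dead))  # ['unusedHelper']
```

In practice the hard part, which the paper addresses by combining static and dynamic analyses, is building an accurate call graph for dynamic JavaScript in the first place.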
Navigating the data avalanche: towards supporting developers in developing privacy-friendly children's apps
This paper critically examines the intersection of privacy concerns in children's apps and the support required by developers
to effectively address these concerns. Third-party libraries and software development kits (SDKs) are widely used in mobile
app development; however, these libraries are known to pose significant data privacy risks to users. Recent
research has shown that app developers for children are particularly struggling with the lack of support in navigating the
complex market of third-party SDKs. The support needed for developers to build privacy-friendly apps is largely understudied.
Motivated by the needs of developers and an empirical analysis of 137 'expert-approved' children's apps, we designed
DataAvalanche.io, a web-based tool to support app developers in navigating the privacy and legal implications associated
with common third-party SDKs on the market. Through semi-structured interviews with 12 app developers for children, we
demonstrate that app developers largely perceive the transparency supported by our tool positively. However, they raised
several barriers, including the challenges of adopting privacy-friendly alternatives and the struggle to safeguard their own
legal interests when facing the imbalance of power in the app market. We contribute to our understanding of the open
challenges and barriers faced by app developers in creating privacy-friendly apps for children and provide critical future
design and policy directions
Understanding and supporting app developers towards designing privacy-friendly apps for children
The integration of digital technology in contemporary society has led to children being exposed to and using mobile devices at younger ages. These devices have become an integral part of their daily routines and experiences, playing a crucial role in their socialisation and development. However, the use of these devices is not without drawbacks. The underlying infrastructure of many of the apps available on such devices heavily relies on a vast and intricate data-driven ecosystem. Mobile app developers and numerous third-party and fourth-party entities depend on the collection, sharing, transmission, and analysis of personal data, including that of children. The breach of privacy resulting from this extensive data tracking is prevalent and has detrimental effects on children, including the loss of autonomy and trust.
In this thesis, we investigate this problem from the perspective of app developers. We begin by conducting a critical examination of the privacy landscape of popular children's apps in the UK market. In conjunction with a systematic literature review, we develop a research-driven method for evaluating privacy practices in mobile applications. By applying this methodology to a dataset of 137 'expert-approved' children's apps, we reveal that these apps extensively tracked children's data, while providing insufficient user-facing support for children to manage and negotiate these privacy behaviours.
This finding raises the crucial question of barriers to designing privacy-friendly mobile apps for children. To explore this issue, we first conduct a mixed-method study with developers of children's apps, comprising 134 surveys and 20 interviews. Our findings show that while the developers are invested in the best interests of children, they encounter difficulties in navigating the complex data-driven ecosystem, understanding the behaviour of third-party libraries and trackers, as well as the pressure to monetise their apps through privacy-friendly alternatives.
In light of these findings, we adopt a Research through Design approach to elicit latent needs from children's app developers, using a set of 12 ideas, generated through a workshop with design experts, aimed at addressing the identified challenges. These ideas are evaluated with a sample of 20 children's app developers to uncover a set of latent requirements for support, including a demand for increased transparency regarding third-party libraries and easy-to-adopt compliance checking against regulatory guidelines.
Utilising the requirements gathered from the developers, we develop a web-based application that aims to provide transparency about the privacy behaviours of commonly used SDKs and third-party libraries for app developers. We ask a sample of 12 children's app developers to evaluate how features in our application may incentivise developers to consider privacy-friendly alternatives to commonly used SDKs, how they may plan to use it in their development practices, and how it may be improved in the future.
The research in this thesis casts a crucial new perspective upon the current state of privacy in the mobile ecosystem, through carefully-designed observations and attempts to disrupt existing practices of app developers for children. Through this journey, we contribute to the HCI research community and related designers and regulatory bodies with fresh and original insights into the design and development of privacy-friendly mobile applications for children
Hardware and software fingerprinting of mobile devices
This dissertation presents novel and practical algorithms to identify the software and hardware components on mobile devices. In particular, we make significant contributions in two challenging areas: library fingerprinting, to identify third-party software libraries, and device fingerprinting, to identify individual hardware components. Our work has significant implications for the privacy and security of mobile platforms.
Software-based library fingerprinting can be used to detect vulnerable libraries and uncover large-scale data collection activities. We develop a novel Android library fingerprinting tool, LibID, to reliably identify specific versions of in-app third-party libraries. LibID is more effective against code obfuscation than prior art. When comparing LibID with other tools in identifying the correct library version using obfuscated F-Droid apps, LibID achieves an F1 score of more than 0.5 in all cases, while prior work scores below 0.25. We also demonstrate the utility of LibID by detecting the use of a vulnerable version of the OkHttp library in nearly 10% of the 3,958 popular apps on the Google Play Store.
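The core idea of version-level library fingerprinting can be sketched as hashing name-independent features of a library's code. Using method descriptors rather than class or method names gives some resilience to identifier-renaming obfuscation; this is a deliberate simplification of what LibID actually does, and the descriptors below are invented examples:

```python
import hashlib

def library_fingerprint(method_descriptors):
    """Hash the sorted set of name-independent method descriptors.
    A crude stand-in for real library fingerprinting: descriptors
    survive identifier renaming, so the hash tolerates that class
    of obfuscation (unlike hashing class/method names)."""
    canonical = "\n".join(sorted(method_descriptors))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two hypothetical library versions: v2 drops one method,
# so its fingerprint differs and the versions are distinguishable.
v1 = {"(II)I", "(Ljava/lang/String;)V", "()Z"}
v2 = {"(II)I", "(Ljava/lang/String;)V"}
print(library_fingerprint(v1) != library_fingerprint(v2))  # True
```

Real tools must additionally handle dead-code removal, method inlining, and partial inclusion of libraries, which is where the F1 gap reported above comes from.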
Hardware-based device fingerprinting allows apps and websites to invade user privacy by tracking user activity online as the user moves between apps or websites. In particular, we present a new type of device fingerprinting attack, the factory calibration fingerprinting attack, that recovers embedded per-device factory calibration data from motion sensors in a smartphone. We investigate the calibration behaviour of each sensor and show that the calibration fingerprint is fast to generate, does not change over time or after a factory reset, and can be obtained without any special user permissions.
We estimate the entropy of the calibration fingerprint and find the fingerprint is very likely to be globally unique for iOS devices (~67 bits of entropy for iPhone 6S) and recent Google Pixel devices (~57 bits of entropy for Pixel 4/4 XL). By comparison, the fingerprint generated by previous work has at most 13 bits of entropy. Following our disclosures, Apple deployed a fix in iOS 12.2 and Google in Android 11.
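The entropy figures above can be related to fingerprint components with a simple upper-bound calculation. The sketch below assumes independent, uniformly distributed components, which real calibration data will not satisfy exactly; the cardinalities are hypothetical, not the paper's measurements:

```python
import math

def fingerprint_entropy_bits(component_cardinalities):
    """Upper-bound entropy (in bits) of a fingerprint whose components
    are independent and uniform over the given numbers of distinct
    values -- a simplification of an empirical entropy estimate."""
    return sum(math.log2(n) for n in component_cardinalities)

# Hypothetical example: three sensor axes, each calibrated to one of
# ~2^20 distinct values, would bound the fingerprint at about 60 bits.
print(round(fingerprint_entropy_bits([2**20, 2**20, 2**20])))  # 60
```

Roughly 57 to 67 bits, as reported for recent devices, is far beyond the ~33 bits needed to distinguish every smartphone in use, which is why the fingerprint is "very likely to be globally unique."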
Both code obfuscation and factory calibration help to hide software and hardware idiosyncrasies from third parties, but this dissertation demonstrates that reliable software and hardware fingerprints can still be generated given sufficient knowledge and a suitable approach. Our work has significant practical implications and can be used to improve platform security and protect user privacy.
Funding: China Scholarship Council, The Boeing Company, Microsoft Research
Measuring third party tracker power across web and mobile
Third-party networks collect vast amounts of data about users via web sites
and mobile applications. Consolidations among tracker companies can
significantly increase their individual tracking capabilities, prompting
scrutiny by competition regulators. Traditional measures of market share, based
on revenue or sales, fail to represent the tracking capability of a tracker,
especially if it spans both web and mobile. This paper proposes a new approach
to measure the concentration of tracking capability, based on the reach of a
tracker on popular websites and apps. Our results reveal that tracker
prominence and parent-subsidiary relationships have a significant impact on
accurately measuring concentration
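A reach-based concentration measure of the kind the abstract proposes can be sketched as follows: merge each tracker into its parent company, then compute the fraction of observed sites and apps each owner reaches. The presence data and ownership map below are invented for illustration, not the paper's dataset or exact metric:

```python
# Sketch: reach share per tracker owner, merging subsidiaries into parents.
def reach_shares(presence, parents=None):
    """presence: tracker -> set of sites/apps it appears on.
    parents: optional map from subsidiary tracker to parent company."""
    parents = parents or {}
    merged = {}
    for tracker, sites in presence.items():
        owner = parents.get(tracker, tracker)
        merged.setdefault(owner, set()).update(sites)
    universe = set().union(*presence.values())
    return {owner: len(sites) / len(universe) for owner, sites in merged.items()}

presence = {
    "doubleclick": {"a.com", "b.com", "app1"},
    "admob": {"b.com", "app2"},
    "other": {"c.com"},
}
# Merging two hypothetical subsidiaries under one parent raises its reach.
shares = reach_shares(presence, parents={"doubleclick": "google", "admob": "google"})
print(shares["google"])  # 0.8
```

Note how the merged owner reaches 4 of the 5 observed properties, whereas revenue-based market shares would miss that the two subsidiaries' tracking footprints overlap across web and mobile.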
Characterizing Location-based Mobile Tracking in Mobile Ad Networks
Mobile apps nowadays are often packaged with third-party ad libraries to
monetize user data
- …