
    On the Security and Privacy Challenges in Android-based Environments

    In the last decade, we have witnessed the rise of mobile devices as a fundamental tool in everyday life. There are currently over 6 billion smartphones, and 72% of them are Android devices. The functionality of smartphones is enriched by mobile apps, through which users can perform operations that in the past were possible only on desktop or laptop computers. Moreover, users heavily rely on them to store even their most privacy-sensitive information. However, apps often fail to satisfy minimum security requirements and can be targeted to indirectly attack other devices they manage or connect to (e.g., IoT nodes) that perform sensitive operations such as monitoring health, controlling a smart car, or opening a smart lock. This thesis discusses research activities carried out to enhance the security and privacy of mobile apps by i) proposing novel techniques to detect and mitigate security vulnerabilities and privacy issues, and ii) defining techniques for the security evaluation of apps interacting with complex environments (e.g., mobile-IoT-cloud).

    The first part of this thesis focuses on the security and privacy of mobile apps. Thanks to the widespread adoption of mobile apps and Google's reliable search engine, it is relatively straightforward for researchers or users to quickly retrieve an app that matches their tastes; it is, however, almost impossible to select apps according to a security footprint (e.g., all apps that enforce SSL pinning). To overcome this limitation, I present APPregator, a platform that allows users to select apps according to a specific security footprint. The tool implements state-of-the-art static and dynamic analysis techniques for mobile apps and provides security researchers and analysts with the ability to search for mobile applications under specific functional or security requirements. Regarding the security status of apps, I studied a particular class of mobile apps: hybrid apps, which combine web technologies with native technologies (i.e., Java or Kotlin). In this context, I studied a vulnerability that affects only hybrid apps: Frame Confusion. Despite being discovered several years ago, this vulnerability is still very widespread. I proposed a methodology, implemented in FCDroid, that exploits static and dynamic analysis techniques to detect and trigger the vulnerability automatically. The results of an extensive analysis carried out with FCDroid on a set of the most downloaded apps from the Google Play Store show that 6.63% (i.e., 1637/24675) of hybrid apps are potentially vulnerable to Frame Confusion.

    A side effect of the analysis carried out through APPregator was the observation that very few apps appear to have a privacy policy, even though the Google Play Store imposes strict rules on this in the Google Play Privacy Guidelines. To verify empirically whether that is the case, I proposed a methodology based on a combination of static analysis, dynamic analysis, and machine learning techniques. The methodology verifies whether each app contains a privacy policy compliant with the Google Play Privacy Guidelines, and whether the app accesses privacy-sensitive information only after the user has accepted the policy. I implemented the methodology in a tool, 3PDroid, and evaluated a set of recent and highly downloaded Android apps from the Google Play Store. Experimental results suggest that over 95% of apps access sensitive user privacy information, but only a negligible subset (~1%) fully complies with the Google Play Privacy Guidelines. Furthermore, the results also suggest that user privacy can be put at risk by mobile apps that keep collecting a plethora of information about user and device behavior through third-party analytics libraries. Collecting and using such data raises several privacy concerns, mainly because the end user - i.e., the actual data owner - is kept out of the loop in this collection process. The privacy-enhancing solutions that have emerged in recent years follow an "all or nothing" approach, leaving the user the sole option to accept or completely deny access to privacy-related data. To overcome these limitations, I proposed a data anonymization methodology, called MobHide, that provides a compromise between the usefulness and the privacy of the collected data and gives the user complete control over the sharing process. To evaluate the methodology, I implemented it in a prototype called HideDroid and tested it on the 4,500 most-used Android apps in the Google Play Store between November 2020 and January 2021.

    In the second part of this thesis, I extend privacy and security considerations beyond the boundary of a single mobile device, focusing on two scenarios. The first consists of an IoT device and a mobile app whose integration enables specific actions; from a security standpoint, this leads to a novel and unprecedented attack surface. To deal with such threats, applying state-of-the-art security analysis techniques to each paradigm in isolation can be insufficient. I argued that novel analysis methodologies able to systematically analyze the ecosystem as a whole must be put forward. To this aim, I introduced APPIoTTe, a novel approach to the security testing of mobile-IoT hybrid ecosystems, together with notes on its implementation for Android (mobile) and Android Things (IoT) applications. The second scenario involves an IoT device widespread in the smart-home environment: the smart speaker. Smart speakers are used to retrieve information, interact with other devices, and command various IoT nodes. To do so, they typically rely on cloud architectures: the user's voice commands are sampled, sent over the Internet for processing, and transmitted back for local execution, e.g., to activate an IoT device. Unfortunately, even when privacy and security are enforced through state-of-the-art encryption mechanisms, features of the encrypted traffic, such as throughput, the size of protocol data units, or IP addresses, can leak critical information about users' habits. I showcase this kind of risk by exploiting machine learning techniques to develop black-box models that classify traffic and automatically implement privacy-leaking attacks.
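    For context, the Frame Confusion vulnerability discussed above stems from how hybrid apps expose native code to the web layer: an object registered with addJavascriptInterface is reachable from JavaScript in every frame loaded in the WebView, including untrusted third-party iframes. The Kotlin sketch below only illustrates this pattern; it is not code from FCDroid, and the class, method, and URL names are hypothetical.

```kotlin
import android.annotation.SuppressLint
import android.webkit.JavascriptInterface
import android.webkit.WebView

// Hypothetical native bridge exposed to the web layer of a hybrid app.
class DeviceBridge {
    // Once the bridge is attached, any JavaScript running in the WebView --
    // including scripts inside third-party iframes -- can invoke this method.
    @JavascriptInterface
    fun getDeviceId(): String = "example-device-id"
}

@SuppressLint("SetJavaScriptEnabled")
fun attachBridge(webView: WebView) {
    webView.settings.javaScriptEnabled = true
    // The interface is registered for the whole WebView, not per frame: a page
    // embedding untrusted iframes therefore shares its native bridge with them,
    // which is the root cause of Frame Confusion.
    webView.addJavascriptInterface(DeviceBridge(), "DeviceBridge")
    webView.loadUrl("https://app.example.com/index.html") // trusted main page
}
```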

    Towards privacy-compliant mobile computing

    Sophisticated mobile computing, sensing, and recording devices like smartphones, smartwatches, and wearable cameras are carried by their users virtually around the clock, blurring the distinction between the online and offline worlds. While these devices enable transformative new applications and services, they also introduce entirely new threats to users' privacy, because they can capture a complete record of the user's location, online and offline activities, and social encounters, including an audiovisual record. Such a record of users' personal information is highly sensitive and is subject to numerous privacy risks. In this thesis, we have investigated and built systems to mitigate two such risks: 1) privacy risks due to ubiquitous digital capture, where bystanders may inadvertently be captured in photos and videos recorded by other nearby users, and 2) privacy risks to users' personal information introduced by a popular class of apps called 'mobile social apps'. We present two systems, called I-Pic and EnCore, built to mitigate these two risks. Both systems aim to put users back in control of what personal information is collected and shared, while still enabling innovative new applications. We built working prototypes of both systems and evaluated them through actual user deployments. Overall, we demonstrate that it is possible to achieve privacy-compliant digital capture and to build privacy-compliant mobile social apps while preserving their intended functionality and ease of use. Furthermore, we explore how the two solutions can be merged into a powerful combination, one that could enable novel workflows for specifying privacy preferences in image capture that do not currently exist.

    DEVELOPING A METADATA REPOSITORY FOR DISTRIBUTED FILE ANNOTATION AND SHARING

    Research data is being generated and modified at an accelerating rate, and iterations and derivations are being crafted almost as quickly. With this increase comes a growing need to track metadata about the data being generated. Where did this dataset originate? What exactly do the column headers mean? Who was the original publisher? Do I have the latest version of the data? These are only a few of the questions that arise. As data is shared second- or third-hand, or via alternative methods such as physical media or cloud-based storage, the veracity of the implicit metadata becomes circumstantial. This research quantified and contrasted existing file metadata management solutions, showing their inadequacy for solving the problem stated above and highlighting the need for a new solution. The system subsequently established and developed by this research was designed to allow arbitrary file metadata definitions across file systems in a collaborative manner, while facilitating platform independence and easy adoption.
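    As a rough illustration of the kind of record such a repository could track, the Kotlin sketch below models a single file-metadata entry covering the questions raised above (origin, column semantics, publisher, version, provenance). The field names are hypothetical and are not taken from the system described here.

```kotlin
// Hypothetical file-metadata record; field names are illustrative only.
data class FileMetadata(
    val fileId: String,                       // identifier independent of path or copy, e.g. a content hash
    val origin: String,                       // where the dataset originated
    val publisher: String,                    // original publisher of the data
    val version: Int,                         // incremented on each iteration or derivation
    val derivedFrom: String? = null,          // fileId of the parent dataset, if any
    val columnDescriptions: Map<String, String> = emptyMap() // what each column header means
)

fun main() {
    val record = FileMetadata(
        fileId = "sha256:9f2c...",            // placeholder content hash
        origin = "https://data.example.org/survey.csv",
        publisher = "Example Research Group",
        version = 3,
        derivedFrom = "sha256:1a7b...",
        columnDescriptions = mapOf("hb_a1c" to "glycated haemoglobin, percent")
    )
    println(record)
}
```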

    Privacy in the Smart City - Applications, Technologies, Challenges and Solutions

    Many modern cities strive to integrate information technology into every aspect of city life to create so-called smart cities. Smart cities rely on a large number of application areas and technologies to realize complex interactions between citizens, third parties, and city departments. This overwhelming complexity is one reason why holistic privacy protection only rarely enters the picture. A lack of privacy can result in discrimination and social sorting, creating a fundamentally unequal society. To prevent this, we believe that a better understanding of smart cities and their privacy implications is needed. We therefore systematize the application areas, enabling technologies, privacy types, attackers, and data sources for the attacks, giving structure to the fuzzy term "smart city". Based on our taxonomies, we describe existing privacy-enhancing technologies, review the state of the art in real cities around the world, and discuss promising future research directions. Our survey can serve as a reference guide, contributing to the development of privacy-friendly smart cities.

    The Data Journalism Handbook


    Research Methods for the Digital Humanities

    In holistic Digital Humanities studies of information infrastructure, we cannot rely solely on a selection of techniques drawn from various disciplines. In addition to selecting our research methods pragmatically, for their relative efficacy at answering a part of a research question, we must also attend to the way in which those methods complement or contradict one another. In my study of West African network backbone infrastructure, I use the tools of different humanities, social sciences, and computer science disciplines depending not only on the type of information they help glean, but also on how they can build upon one another as I move through the phases of the study. Just as the architecture of information infrastructure includes discrete "layers" of machines, processes, human activity, and concepts, so too does the study of that architecture allow for multiple layers of abstraction and assumption, each a useful part of a unified, interdisciplinary approach.

    Data-Driven, Personalized Usable Privacy

    We live in an "inverse-privacy" world, where service providers derive insights from users' data that the users themselves do not even know about. This has been fueled by advancements in machine learning technologies, which have allowed providers to go beyond the superficial analysis of users' transactions to the deep inspection of users' content. Users have been facing several problems in coping with this widening information discrepancy. Although the interfaces of apps and websites are generally equipped with privacy indicators (e.g., permissions, policies, ...), this has not been enough to counter the discrepancy. We identify three gaps that have hindered the effectiveness and usability of privacy indicators:
    - Scale Adaptation: The scale at which service providers collect data has been growing on multiple fronts. Users, on the other hand, have limited time, effort, and technological resources to cope with this scale.
    - Risk Communication: Although providers use privacy indicators to announce what and (less often) why they need particular pieces of information, they rarely convey what can potentially be inferred from this data. Without this knowledge, users are less equipped to make informed decisions when they sign in to a site or install an application.
    - Language Complexity: The information practices of service providers are buried in complex, long privacy policies. Users generally do not have the time, and sometimes the skills, to decipher such policies, even when they are interested in particular parts of them.
    In this thesis, we approach usable privacy from a data perspective. Instead of static privacy interfaces that are obscure, recurring, or unreadable, we develop techniques that bridge the understanding gap between users and service providers. Towards that, we make the following contributions:
    - Crowdsourced, data-driven privacy decision-making: To combat the growing scale of data exposure, we consider the context of files uploaded to cloud services. We propose C3P, a framework for automatically assessing the sensitivity of files, thus enabling real-time, fine-grained policy enforcement on top of unstructured data.
    - Data-driven app privacy indicators: We introduce PrivySeal, a new paradigm of dynamic, personalized app privacy indicators that bridge the risk-understanding gap between users and providers. Through PrivySeal's online platform, we also study the emerging problem of interdependent privacy in the context of cloud apps and provide a usable privacy indicator to mitigate it.
    - Automated question answering about privacy practices: We introduce PriBot, the first automated question-answering system for privacy policies, which allows users to pose questions about the privacy practices of any company in their own language. Through a user study, we show its effectiveness at achieving high accuracy and relevance for users, thus narrowing the complexity gap in navigating privacy policies.
    A core aim of this thesis is paving the way for a future where privacy indicators are not bound to a specific medium or pre-scripted wording. We design and develop techniques that enable privacy to be communicated effectively in an interface that is approachable to the user. To that end, we go beyond textual interfaces to enable dynamic, visual, and hands-free privacy interfaces that are fit for the variety of emerging technologies.

    Measuring and Mitigating Security and Privacy Issues on Android Applications

    Over time, the increasing popularity of the Android operating system (OS) has resulted in its user base surging past 1 billion unique devices. As a result, cybercriminals and other non-criminal actors are attracted to the OS because of the amount of user information they can access. Investigating security and privacy issues in the Android ecosystem, previous work has shown that malevolent actors can steal users' sensitive personal information over the network, via malicious applications, or through vulnerability exploits, presenting proofs of concept or evidence of such exploits. Due to the ever-changing nature of the Android ecosystem and the arms race involved in detecting and mitigating malicious applications, it is important to continuously examine the ecosystem for security and privacy issues. This thesis presents research contributions in this space and is divided into two parts. The first part focuses on measuring and mitigating vulnerabilities in applications caused by poor implementation of security and privacy protocols. In particular, we investigate the implementation of SSL/TLS certificate validation logic, and of properties such as ephemerality, anonymity, and end-to-end encryption. We show that, despite increased awareness of SSL/TLS implementation vulnerabilities among application developers, these vulnerabilities are still present in popular applications, allowing malicious actors to steal users' information. To help developers mitigate them, we provide recommendations such as enabling SSL/TLS pinning and using the same certificate validation logic in their test and development environments. The second part of this thesis focuses on the detection of malicious applications that compromise users' security and privacy, the detection performance of different program analysis approaches, and the influence of different input generators on detection performance during dynamic analysis. We present a novel method for detecting malicious applications that is less susceptible than previous methods to the evolution of the Android ecosystem (i.e., changes in the Android framework as API calls are added or removed in new releases) and of malware (i.e., changes in techniques to evade detection). Overall, this thesis contributes to knowledge about Android apps with respect to the discovery of vulnerabilities that lead to a loss of users' security and privacy, and the design of robust Android malware detection tools. It highlights the need for continual evaluation of apps as the ecosystem changes, in order to detect and prevent vulnerabilities and malware that compromise users' security and privacy.
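    As a concrete illustration of the pinning recommendation above, the Kotlin sketch below pins a server's public key with OkHttp's CertificatePinner. This is a minimal sketch, not the configuration evaluated in the thesis: the host name and the SHA-256 pin are placeholders, and a real deployment would pin the hashes of its own certificate chain (plus a backup pin for key rotation).

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    // Placeholder host and pin; replace with the base64 SHA-256 hash of your
    // server certificate's public key (and add a backup pin for rotation).
    val pinner = CertificatePinner.Builder()
        .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
        .build()

    val client = OkHttpClient.Builder()
        .certificatePinner(pinner)
        .build()

    val request = Request.Builder()
        .url("https://api.example.com/v1/ping")
        .build()

    // The call fails with an SSLPeerUnverifiedException if the presented
    // certificate chain does not match any pinned public-key hash.
    client.newCall(request).execute().use { response ->
        println("HTTP ${response.code}")
    }
}
```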

    Mobilizing the Past for a Digital Future: The Potential of Digital Archaeology

    Mobilizing the Past is a collection of 20 articles that explore the use and impact of mobile digital technology in archaeological field practice. The detailed case studies presented in this volume range from drones in the Andes to iPads at Pompeii, digital workflows in the American Southwest, and examples of how bespoke, DIY, and commercial software provide solutions and pose novel challenges for field archaeologists. The range of projects and contexts ensures that Mobilizing the Past for a Digital Future is far more than a state-of-the-field manual or technical handbook. Instead, the contributors embrace the growing spirit of critique present in digital archaeology. This critical edge, backed by real projects, systems, and experiences, gives the book lasting value both as a glimpse into present practices and as a record of the anxieties and enthusiasm associated with the most recent generation of mobile digital tools. This book emerged from a workshop funded by the National Endowment for the Humanities, held in 2015 at Wentworth Institute of Technology in Boston. The workshop brought together over 20 leading practitioners of digital archaeology in the U.S. for a weekend of conversation. The papers in this volume reflect the discussions at that workshop, with significant additional content. Starting with an expansive introduction and concluding with a series of reflective papers, this volume illustrates how tablets, connectivity, sophisticated software, and powerful computers have transformed field practices and offer potential for a radically transformed discipline.