    Crowd-ML: A Privacy-Preserving Learning Framework for a Crowd of Smart Devices

    Smart devices with built-in sensors, computational capabilities, and network connectivity have become increasingly pervasive. Crowds of smart devices offer opportunities to collectively sense and perform computing tasks at an unprecedented scale. This paper presents Crowd-ML, a privacy-preserving machine learning framework for a crowd of smart devices that can solve a wide range of learning problems over crowdsensing data with differential privacy guarantees. Crowd-ML endows a crowdsensing system with the ability to learn classifiers or predictors online from crowdsensing data privately, with minimal computational overhead on devices and servers, making the framework suitable for practical, large-scale deployment. We analyze the performance and scalability of Crowd-ML and implement the system with off-the-shelf smartphones as a proof of concept. We demonstrate the advantages of Crowd-ML with real and simulated experiments under various conditions.
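    A minimal sketch of the general idea behind this kind of device-side, differentially private learning is shown below: each device clips its local gradient, perturbs it with calibrated noise before reporting it, and the server only ever aggregates the noisy reports. The logistic-regression model, clipping bound, and Laplace noise scale are loose illustrative assumptions, not the exact mechanism used by Crowd-ML.

```python
import numpy as np

def local_private_gradient(w, x, y, clip=1.0, epsilon=0.5):
    """One device's logistic-regression gradient, clipped and perturbed
    with Laplace noise before it ever leaves the device (illustrative only)."""
    pred = 1.0 / (1.0 + np.exp(-x @ w))                 # sigmoid prediction
    grad = (pred - y) * x                               # per-example gradient
    grad /= max(1.0, np.linalg.norm(grad) / clip)       # bound the sensitivity
    noise = np.random.laplace(scale=clip / epsilon, size=grad.shape)
    return grad + noise                                 # only this is reported

def server_step(w, noisy_grads, lr=0.1):
    """Server averages the devices' noisy gradients and takes one SGD step."""
    return w - lr * np.mean(noisy_grads, axis=0)

# Toy round: 100 devices, 5 features each (synthetic data for illustration).
rng = np.random.default_rng(0)
w = np.zeros(5)
X, Y = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)
w = server_step(w, [local_private_gradient(w, X[i], Y[i]) for i in range(100)])
print(w)
```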

    Emotions in context: examining pervasive affective sensing systems, applications, and analyses

    Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behavior by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems, namely “sensing”, “analysis”, and “application”. Sensing covers the different sensing modalities used in existing real-time affective applications; analysis explores different approaches to emotion recognition and visualization based on the types of data collected; and application examines the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and concludes by outlining some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.
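    As a generic illustration of the “analysis” step, the sketch below maps hand-picked features from a window of sensor data to a discrete affective state with an off-the-shelf classifier. The feature names, values, and affect labels are invented for illustration and are not drawn from any system covered by the survey.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features per sensing window:
# [mean heart rate (bpm), electrodermal activity (uS), accelerometer energy]
X_train = np.array([
    [62.0, 0.8, 0.1],   # calm
    [60.0, 0.7, 0.2],   # calm
    [95.0, 4.2, 0.3],   # stressed
    [98.0, 4.5, 0.4],   # stressed
    [78.0, 2.0, 1.5],   # excited
    [80.0, 2.3, 1.8],   # excited
])
y_train = ["calm", "calm", "stressed", "stressed", "excited", "excited"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

new_window = np.array([[92.0, 3.9, 0.2]])   # features from a fresh window
print(clf.predict(new_window))              # e.g. ['stressed']
```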

    Shopping For Privacy: How Technology in Brick-and-Mortar Retail Stores Poses Privacy Risks for Shoppers

    As technology continues to advance rapidly, the American legal system has failed to protect individual shoppers from the technology deployed in retail stores, which poses significant privacy risks but does not violate the law. In particular, I examine the technologies in use in many brick-and-mortar stores today, many of which the average shopper does not even know exist. This Article criticizes these technologies, suggesting that many, if not all, of them are of questionable legality and take advantage of their status in a legal gray zone. Because the American judicial system cannot adequately protect the individual shopper from these questionable privacy practices, I call upon the Federal Trade Commission, the de facto privacy regulator in the United States, to increase its policing of physical retail stores to protect shoppers from further harm.

    On the Security and Privacy Challenges in Android-based Environments

    In the last decade, we have witnessed the rise of mobile devices as a fundamental tool in our everyday life. There are currently more than 6 billion smartphones in use, and 72% of them are Android devices. The functionality of smartphones is enriched by mobile apps, through which users can perform operations that were previously possible only on desktop or laptop computers. Moreover, users heavily rely on smartphones to store even their most privacy-sensitive information. However, apps often do not satisfy minimum security requirements and can be targeted to indirectly attack other devices managed by or connected to them (e.g., IoT nodes) that perform sensitive operations such as health monitoring, controlling a smart car, or opening a smart lock. This thesis discusses research activities carried out to enhance the security and privacy of mobile apps by i) proposing novel techniques to detect and mitigate security vulnerabilities and privacy issues, and ii) defining techniques for the security evaluation of apps interacting with complex environments (e.g., mobile-IoT-cloud).

    In the first part of the thesis, I focus on the security and privacy of mobile apps. Due to the widespread adoption of mobile apps, it is relatively straightforward for researchers or users to quickly retrieve an app that matches their tastes, since Google provides a reliable search engine. However, it is almost impossible to select apps according to a security footprint (e.g., all apps that enforce SSL pinning). To overcome this limitation, I present APPregator, a platform that allows users to select apps according to a specific security footprint. APPregator implements state-of-the-art static and dynamic analysis techniques for mobile apps and gives security researchers and analysts the ability to search for mobile applications that meet specific functional or security requirements. Regarding the security status of apps, I studied a particular class of mobile apps: hybrid apps, which combine web technologies with native code (i.e., Java or Kotlin). In this context, I analyzed a vulnerability that affects only hybrid apps: Frame Confusion. Despite being discovered several years ago, this vulnerability is still very widespread. I proposed a methodology, implemented in FCDroid, that exploits static and dynamic analysis techniques to detect and trigger the vulnerability automatically. The results of an extensive analysis carried out with FCDroid on a set of the most downloaded apps from the Google Play Store show that 6.63% (i.e., 1637/24675) of hybrid apps are potentially vulnerable to Frame Confusion. A side effect of the analysis carried out with APPregator was the finding that very few apps appear to have a privacy policy, even though the Google Play Store imposes strict rules on this in the Google Play Privacy Guidelines. To verify empirically whether that is the case, I proposed a methodology based on a combination of static analysis, dynamic analysis, and machine learning techniques. The methodology verifies whether each app contains a privacy policy compliant with the Google Play Privacy Guidelines, and whether the app accesses privacy-sensitive information only after the user has accepted the policy. I implemented the methodology in a tool, 3PDroid, and evaluated it on a set of recent and widely downloaded Android apps from the Google Play Store.
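    To make the kind of check that 3PDroid automates more concrete, the toy sketch below scans the text extracted from an app's screens or linked web pages for privacy-policy language. The function name, keyword list, and threshold are hypothetical illustrations, not 3PDroid's actual classifier, which combines static analysis, dynamic analysis, and machine learning.

```python
# Toy keyword heuristic (hypothetical): real tools such as 3PDroid use trained
# classifiers over the text rendered by the app, not a fixed keyword list.
POLICY_KEYWORDS = [
    "privacy policy", "personal data", "data we collect",
    "third parties", "your consent", "data retention",
]

def looks_like_privacy_policy(page_text: str, min_hits: int = 3) -> bool:
    """Return True if a screen's or page's text resembles a privacy policy."""
    text = page_text.lower()
    hits = sum(1 for keyword in POLICY_KEYWORDS if keyword in text)
    return hits >= min_hits

sample = ("Privacy Policy: we describe the personal data we collect, our data "
          "retention periods, and when we share data with third parties.")
print(looks_like_privacy_policy(sample))   # True
```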
    The experimental results of the 3PDroid evaluation suggest that over 95% of the apps access privacy-sensitive user information, but only a negligible fraction of them (~1%) fully complies with the Google Play Privacy Guidelines. Furthermore, the results suggest that user privacy can be put at risk by mobile apps that keep collecting a plethora of information about user and device behavior through third-party analytics libraries. Collecting and using such data raises several privacy concerns, mainly because the end user, i.e., the actual data owner, is out of the loop in the collection process. The privacy-enhancing solutions that have emerged in recent years follow an "all or nothing" approach, leaving the user only the option to accept or completely deny access to privacy-related data. To overcome these limitations, I proposed a data anonymization methodology, called MobHide, that provides a compromise between the usefulness and privacy of the collected data and gives the user complete control over the sharing process. To evaluate the methodology, I implemented it in a prototype called HideDroid and tested it on the 4,500 most-used Android apps in the Google Play Store between November 2020 and January 2021.

    In the second part of the thesis, I extend privacy and security considerations beyond the boundary of the single mobile device, focusing on two scenarios. The first involves an IoT device and a mobile app that are tightly integrated to carry out specific actions. From a security standpoint, this integration creates a novel and unprecedented attack surface, and applying state-of-the-art security analysis techniques to each paradigm in isolation can be insufficient. I argue that novel analysis methodologies able to systematically analyze the ecosystem as a whole must be put forward. To this end, I introduce APPIoTTe, a novel approach to the security testing of mobile-IoT hybrid ecosystems, together with notes on its implementation for Android (mobile) and Android Things (IoT) applications. The second scenario involves an IoT device that is widespread in the smart-home environment: the smart speaker. Smart speakers are used to retrieve information, interact with other devices, and command various IoT nodes. To do so, they typically rely on cloud architectures: the user's voice commands are sampled, sent over the Internet for processing, and the results are transmitted back for local execution, e.g., to activate an IoT device. Unfortunately, even if privacy and security are enforced through state-of-the-art encryption mechanisms, features of the encrypted traffic, such as the throughput, the size of protocol data units, or the IP addresses, can leak critical information about the users' habits. To showcase this kind of risk, I exploit machine learning techniques to build black-box models that classify traffic and implement privacy-leaking attacks automatically.
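    The traffic-analysis risk described above can be illustrated with a small sketch: even when payloads are encrypted, side-channel features such as byte counts and packet statistics per exchange can train a classifier that guesses which kind of command was issued. The features, labels, and numbers below are invented for illustration and do not come from the thesis or its dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-exchange features extracted from *encrypted* smart-speaker
# traffic: [bytes uplink, bytes downlink, number of packets, mean packet size]
X = np.array([
    [ 4200, 18000,  42, 530],   # "weather" query
    [ 4100, 17500,  40, 540],
    [ 9800, 52000, 120, 515],   # music playback start
    [10100, 50500, 118, 510],
    [ 1500,  3000,  12, 375],   # smart-plug on/off command
    [ 1600,  3200,  13, 370],
])
y = ["weather", "weather", "music", "music", "plug", "plug"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# An eavesdropper on the local network observes a new encrypted exchange:
observed = np.array([[1550, 3100, 12, 372]])
print(model.predict(observed))   # likely ['plug'] -- the command class leaks
```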