9 research outputs found

    Location Based Secure Communication in Mobile Sensor Network

    Mobile crowd sensing (MCS) is an emerging paradigm that combines the power of the crowd with the sensing capabilities of mobile devices such as smartphones and wearable devices. MCS lets users acquire local information from their surrounding environment through the sensors of their mobile devices, and it is applied in many areas, such as healthcare, transportation, and environmental monitoring, that help improve people's quality of life. However, MCS faces two major problems: user privacy and data trustworthiness. We first discuss the MCS architecture along with its characteristics and its advantages over wireless sensor networks, and we conclude with recent and future trends as well as our own efforts in MCS.
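    As a concrete illustration of the data-collection step described above, the following is a minimal sketch of how a crowd-sensing client might package a sensor reading together with its location before uploading it to an MCS back end. The report format, field names, and pseudonymous identifier are assumptions for illustration, not part of any system discussed in the abstract.

```python
# Illustrative only: a hypothetical MCS sensing report, not the format of
# any specific platform mentioned above.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SensingReport:
    device_id: str        # pseudonymous identifier for the contributing device
    sensor: str           # e.g. "temperature", "noise", "accelerometer"
    value: float
    latitude: float
    longitude: float
    timestamp: float

def build_report(device_id: str, sensor: str, value: float,
                 lat: float, lon: float) -> str:
    """Serialize one crowd-sensed measurement for upload to an MCS back end."""
    report = SensingReport(device_id, sensor, value, lat, lon, time.time())
    return json.dumps(asdict(report))

if __name__ == "__main__":
    print(build_report("anon-7f3a", "temperature", 21.4, 48.8566, 2.3522))
```

    Note that a report like this carries both a location and a device identifier, which is exactly why the abstract flags user privacy and data trustworthiness as the two central problems.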

    Design and Implementation of a Distributed SNS Using Online Storage

    Most of the SNSs (Social Networking Services) that have become widespread in recent years concentrate user content in the hands of a single operator, which accumulates more personal information than necessary and raises privacy concerns. To address this problem, distributed SNSs have been proposed that spread the social graph and user content across multiple servers. For example, VIS stores data on servers managed by users, and the servers complement each other's content to form a distributed SNS. However, because the servers in VIS are operated on a volunteer basis, service sustainability is a problem. Meanwhile, online storage services that synchronize and share content among multiple devices via servers on the Internet are widely used; because users pay the provider according to the storage capacity they use, such services are sustainable. Frenzy is a distributed SNS built on the online storage service Dropbox: it realizes an SNS by treating files synchronized through Dropbox as SNS content. However, Frenzy does not allow communication with anyone other than friends, and defining a friend relationship requires synchronization and sharing to be configured on Dropbox in advance. As a result, the set of people one can interact with is fixed, and the SNS benefit of expanding communities among people is lost. This work proposes a distributed SNS using online storage that solves these problems. To let a user obtain information about friends of friends, we define a folder structure that is treated as the SNS, realizing a mechanism by which the online storage service automatically shares friends-of-friends content. In addition, to allow friends to be added from within the SNS, we define friend-relationship semantics on folders and extend the behavior of the online storage so that sharing settings are configured according to those semantics. Evaluation shows that the proposed system is an SNS that provides privacy, sustainability, and extensibility of connections.
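    The following is a minimal sketch of the folder-based friend model described above: each user's posts live in synced folders, direct friendship corresponds to mutual folder sharing, and re-sharing propagates friends-of-friends content. The class names, folder semantics, and propagation rule here are assumptions for illustration, not the paper's exact specification.

```python
# A toy model of friend and friends-of-friends sharing over synced folders.
# Names and the propagation rule are illustrative assumptions only.
from __future__ import annotations

class User:
    def __init__(self, name: str):
        self.name = name
        self.posts: list[str] = []        # contents of the user's "posts" folder
        self.friends: set[User] = set()   # folders shared directly with friends

    def add_friend(self, other: "User") -> None:
        # Adding a friend corresponds to configuring mutual folder sharing
        # on the online storage service.
        self.friends.add(other)
        other.friends.add(self)

    def visible_posts(self) -> dict[str, list[str]]:
        """Posts synced to this user: own, friends', and friends-of-friends'."""
        visible = {self.name: list(self.posts)}
        for friend in self.friends:
            visible[friend.name] = list(friend.posts)
            # A re-shared friends-of-friends folder lets second-degree
            # content reach this user automatically.
            for fof in friend.friends:
                if fof is not self:
                    visible.setdefault(fof.name, list(fof.posts))
        return visible

if __name__ == "__main__":
    alice, bob, carol = User("alice"), User("bob"), User("carol")
    alice.add_friend(bob)
    bob.add_friend(carol)
    carol.posts.append("hello from carol")
    print(alice.visible_posts())   # carol's post reaches alice via bob
```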

    PEPSI: Privacy-Enhanced Participatory Sensing Infrastructure.

    Participatory Sensing combines the ubiquity of mobile phones with the sensing capabilities of Wireless Sensor Networks. It targets pervasive collection of information, e.g., temperature, traffic conditions, or health-related data. As users produce measurements from their mobile devices, voluntary participation becomes essential. However, a number of privacy concerns -- due to the personal information conveyed by data reports -- hinder large-scale deployment of participatory sensing applications. Prior work on privacy protection for participatory sensing has often relied on unrealistic assumptions and offered no provably secure guarantees. The goal of this project is to introduce PEPSI: a Privacy-Enhanced Participatory Sensing Infrastructure. We explore realistic architectural assumptions and a minimal set of (formal) privacy requirements, aiming at protecting the privacy of both data producers and consumers. We design a solution that attains privacy guarantees with provable security at very low additional computational cost and almost no extra communication overhead.
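    The sketch below illustrates only the *goal* stated in the abstract, namely that the infrastructure can route reports from producers to interested consumers without reading them. It is not the PEPSI construction itself: key distribution is assumed to happen out of band, and the cryptography is replaced by toy standard-library primitives purely for illustration.

```python
# Toy illustration of blinded matching plus opaque payloads. NOT the actual
# PEPSI protocol; keys and primitives are simplified assumptions.
import hashlib
import hmac

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'encryption' with a SHA-256 keystream (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def blind_label(query_key: bytes, label: str) -> bytes:
    """Producers and consumers derive the same opaque tag for a query type."""
    return hmac.new(query_key, label.encode(), hashlib.sha256).digest()

# Producer side: the report is tagged and hidden before upload.
query_key = b"assumed-shared-by-a-registration-authority"
data_key = b"assumed-shared-between-producer-and-consumer"
tag = blind_label(query_key, "temperature/Paris")
ciphertext = keystream_xor(data_key, b"21.4 C at 12:00")

# Service provider: matches tags only; never sees the label or the plaintext.
subscriptions = {blind_label(query_key, "temperature/Paris"): "consumer-42"}
if tag in subscriptions:
    delivered = ciphertext   # forwarded as an opaque blob

# Consumer side: recovers the report with the data key.
print(keystream_xor(data_key, delivered).decode())
```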

    IP address multiplexing for VEEs


    Virtual Individual Servers as Privacy-Preserving Proxies for Mobile Devices

    People increasingly generate content on their mobile devices and upload it to third-party services such as Facebook and Google Latitude for sharing and backup purposes. Although these services are convenient and useful, their use has important privacy implications due to their centralized nature and their acquisition of rights to user-contributed content. This paper argues that people's interests would be better served by uploading their data to a machine that they themselves own and control. We term these machines Virtual Individual Servers (VISs) because our preferred instantiation is a virtual machine running in a highly available utility computing infrastructure. By using VISs, people can better protect their privacy because they retain ownership of their data and remain in control over the software and policies that determine what data is shared with whom. This paper also describes a range of applications of VIS proxies. It then presents our initial implementation and evaluation of one of these applications, a decentralized framework for mobile social services based on VISs. Our experience so far suggests that building such applications on top of the VIS concept is feasible and desirable.
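    The following is a minimal sketch of the control model described above: data stays on the user's own server, and a user-controlled policy decides which items are released to which requester. The policy representation and class names are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch of a VIS-style, owner-controlled sharing policy check.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class VISPolicy:
    # Maps a data category (e.g. "location", "photos") to the set of
    # principals the owner allows to read it.
    allowed: dict[str, set[str]] = field(default_factory=dict)

    def permits(self, requester: str, category: str) -> bool:
        return requester in self.allowed.get(category, set())

@dataclass
class VirtualIndividualServer:
    owner: str
    policy: VISPolicy
    store: dict[str, list[str]] = field(default_factory=dict)

    def put(self, category: str, item: str) -> None:
        self.store.setdefault(category, []).append(item)

    def get(self, requester: str, category: str) -> list[str]:
        # Data never leaves the owner's server unless the owner's policy allows it.
        if requester == self.owner or self.policy.permits(requester, category):
            return list(self.store.get(category, []))
        return []

if __name__ == "__main__":
    vis = VirtualIndividualServer("alice", VISPolicy({"location": {"bob"}}))
    vis.put("location", "48.8566,2.3522")
    print(vis.get("bob", "location"))      # allowed by alice's policy
    print(vis.get("mallory", "location"))  # denied: empty result
```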

    Personal Data Management in the Internet of Things

    Due to a sharp decrease in hardware costs and shrinking form factors, networked sensors have become ubiquitous. Today, a variety of sensors are embedded into smartphones, tablets, and personal wearable devices, and are commonly installed in homes and buildings. Sensors are used to collect data about people in their proximity, referred to as users. The collection of such networked sensors is commonly referred to as the Internet of Things. Although sensor data enables a wide range of applications from security, to efficiency, to healthcare, this data can be used to reveal unwarranted private information about users. It is therefore imperative to preserve data privacy while providing users with a wide variety of applications to process their personal data. Unfortunately, most existing systems do not meet these goals. Users are either forced to release their data to third parties, such as application developers, thus giving up data privacy in exchange for using data-driven applications, or are limited to a fixed set of applications, such as those provided by the sensor manufacturer. To avoid this trade-off, users may choose to host their data and applications on their personal devices, but this requires them to maintain data backups and ensure application performance. What is needed, therefore, is a system that gives users flexibility in their choice of data-driven applications while preserving their data privacy, without burdening them with the need to back up their data or to provide computational resources for their applications. We propose a software architecture that leverages a user's personal virtual execution environment (VEE) to host data-driven applications. This dissertation describes the key software techniques and mechanisms necessary to enable this architecture. First, we provide a proof-of-concept implementation of our proposed architecture and demonstrate a privacy-preserving ecosystem of applications that process users' energy data as a case study. Second, we present a data management system (called Bolt) that provides applications with efficient storage and retrieval of time-series data, and guarantees the confidentiality and integrity of stored data. We then present a methodology to provision large numbers of personal VEEs on a single physical machine, and demonstrate its use with LinuX Containers (LXC). We conclude by outlining the design of an abstract framework to allow users to balance data privacy and application utility.
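    As a generic illustration of the storage guarantees the abstract attributes to Bolt (efficient time-series retrieval with confidentiality and integrity), the sketch below shows an append-only log whose records are integrity-checked on every range query. It is not Bolt's actual API or on-disk format; class and method names are hypothetical, and encryption of the payload is only indicated in a comment.

```python
# Generic sketch of an integrity-checked time-series log with range queries.
# Not Bolt's API; names and format are illustrative assumptions.
import hashlib
import hmac
import json

class TimeSeriesLog:
    def __init__(self, mac_key: bytes):
        self._mac_key = mac_key
        self._chunks = []   # list of (payload, mac) pairs

    def append(self, timestamp: float, value: float) -> None:
        payload = json.dumps({"t": timestamp, "v": value}).encode()
        mac = hmac.new(self._mac_key, payload, hashlib.sha256).digest()
        # In a real system the payload would also be encrypted before being
        # handed to untrusted storage; omitted here for brevity.
        self._chunks.append((payload, mac))

    def query(self, start: float, end: float) -> list:
        """Return records with start <= t <= end, rejecting tampered chunks."""
        results = []
        for payload, mac in self._chunks:
            expected = hmac.new(self._mac_key, payload, hashlib.sha256).digest()
            if not hmac.compare_digest(mac, expected):
                raise ValueError("integrity check failed")
            record = json.loads(payload)
            if start <= record["t"] <= end:
                results.append(record)
        return results

if __name__ == "__main__":
    log = TimeSeriesLog(mac_key=b"owner-held-secret")
    log.append(1.0, 230.0)   # e.g. instantaneous power draw in watts
    log.append(2.0, 235.5)
    print(log.query(0.0, 1.5))
```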

    Privacy Manager: A Framework for Privacy Protection in Interactions Between Learners

    The continuous evolution of learning needs toward greater effectiveness and personalization has fostered the emergence of new tools and dimensions aimed at making learning accessible to everyone and adapted to today's technological and social contexts. This evolution has given rise to what is called online social learning, which emphasizes interaction between learners. Such interaction brings learners many benefits: establishing connections, sharing personal experiences, and receiving assistance that can improve their learning. However, the amount of personal information that learners disclose during these interactions raises serious privacy risks, such as identity theft and cyberbullying, which can have severe consequences. Despite these concerns, privacy as an individual right is an ideal that is hardly recognized in today's social context: the conceptualization of privacy as a core of sensitive data to protect from external intrusions has given way to a view centered on negotiating the disclosure of that data, where the risks come essentially from learners' own self-disclosing behavior. The challenge for social learning environments is therefore to guarantee a maximal level of interaction for learners while preserving their privacy. To the best of our knowledge, most innovations in these environments have focused on interaction techniques and the integration of new social tools, without considering privacy as a necessary factor for creating an environment favorable to learning. In this work, we propose a privacy framework, which we call the privacy manager, responsible for protecting a learner's personal data and privacy during interactions with co-learners. Viewing interaction as a strategy for seeking and requesting peer help in informal learning contexts, we analyze it as a cognitive activity involving contextual factors, other learners, and socio-emotional aspects. The main goal of this thesis is thus to revisit peer-assistance processes by providing the tools needed to find a trade-off between the advantages of interaction, mainly obtaining peer feedback, and its disadvantages, particularly data disclosure and privacy risks. This is done at three levels. The first level considers contextual and social aspects of the interaction, such as trust between learners, their competency, and the emotions that triggered the need to interact, in order to help learners interact with appropriate peers. The second level estimates the risks of disclosure and supports the decision of whether to disclose. The third level detects any disclosure of personal data in learners' interactions using machine learning techniques and semantic analysis.
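    As a simple stand-in for the third protection level described above, the sketch below flags personal-data disclosures in a learner's message before it is sent. The thesis relies on machine learning and semantic analysis; the regular-expression patterns and category names here are only an illustrative simplification.

```python
# Illustrative stand-in for disclosure detection; the real approach uses
# machine learning and semantic analysis, not these toy patterns.
import re

DISCLOSURE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s.-]{7,}\d"),
    "address_hint": re.compile(r"\b\d+\s+(?:rue|street|avenue|ave\.?)\b", re.I),
}

def detect_disclosures(message: str) -> list:
    """Return the categories of personal data detected in the message."""
    return [name for name, pattern in DISCLOSURE_PATTERNS.items()
            if pattern.search(message)]

if __name__ == "__main__":
    msg = ("Thanks for the help! Write me at jane.doe@example.com "
           "or call +33 6 12 34 56 78.")
    print(detect_disclosures(msg))   # ['email', 'phone']
```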