7 research outputs found

    Privacy and Online Social Networks: A Systematic Literature Review of Concerns, Preservation, and Policies

    Background: Social media usage is one of the most popular online activities, but it raises privacy concerns because of how these social networking sites handle personal data. Prior literature has aimed to identify users’ privacy concerns as well as user behavior associated with privacy mitigation strategies and policies. However, OSN users continue to divulge private information online, and privacy remains an issue. Accordingly, this review presents extant research on the topic and highlights potential research gaps. Method: The paper presents a systematic literature review covering the period 2006-2021, in which 33 full papers exploring privacy concerns in online social networks (OSN), users’ behavior associated with privacy preservation strategies, and OSN privacy policies were examined. Results: The findings indicate that users are concerned about identity theft, the disclosure of sensitive information by third-party applications and through data leakage, and the degree of control they have over their data. Strategies such as encryption, authentication, and privacy settings configuration can be used to address these concerns. Users generally do not leverage the privacy settings available to them or read privacy policies, but will opt to share information based on the benefits to be derived from OSNs. Conclusion: OSN users have specific privacy concerns stemming primarily from the way personal data are handled. Different preservation strategies are available to OSN users. Privacy policies are provided to inform users, but they are at times difficult to read and understand, and studies show they have no direct effect on the behavior of OSN users. Further research is needed to elucidate the relationship between the relative effectiveness of different privacy preservation strategies and the privacy concerns exhibited by users. Extending the research to comparatively assess different social media sites could improve awareness of the true influence of privacy policies on user behavior.

    SoSharP: Recommending Sharing Policies in Multiuser Privacy Scenarios

    Making Decisions about Self-Disclosure in Online Social Networks

    This paper explores privacy calculus decision-making processes in online social networks (OSN). A content analysis method is applied to data obtained from face-to-face interviews and an online survey with open-ended questions involving 96 OSN users from different countries. The factors users consider before self-disclosing are explored, and the perceived benefits and risks of using OSNs and their impact on self-disclosure are identified. We determine that the perceived risks of OSN usage hinder self-disclosure. It is not clear, however, whether the perceived benefits offset the impact of these risks on self-disclosure behavior. Taken as a whole, the findings do not support privacy calculus in OSN settings.
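
    To make the privacy-calculus framing concrete, a minimal sketch follows; the factor names, weights, and ratings are illustrative assumptions, not values from the paper. It models disclosure intention as a weighted trade-off between perceived benefits and perceived risks:

```python
# Minimal, illustrative privacy-calculus model: disclosure intention as a
# weighted trade-off between perceived benefits and perceived risks.
# All factor names, weights, and ratings are assumptions for illustration,
# not values reported in the paper.

def disclosure_intention(benefits: dict[str, float],
                         risks: dict[str, float],
                         benefit_weight: float = 1.0,
                         risk_weight: float = 1.0) -> float:
    """Return a score > 0 when perceived benefits outweigh perceived risks."""
    total_benefit = sum(benefits.values())
    total_risk = sum(risks.values())
    return benefit_weight * total_benefit - risk_weight * total_risk


# Hypothetical respondent ratings on a 1-5 scale.
benefits = {"social_connection": 4.0, "self_presentation": 3.5, "enjoyment": 4.5}
risks = {"identity_theft": 4.0, "unwanted_audience": 3.0, "data_misuse": 4.5}

score = disclosure_intention(benefits, risks)
print("discloses" if score > 0 else "withholds", round(score, 2))
```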

    SmarPer: Context-Aware and Automatic Runtime-Permissions for Mobile Devices

    Permission systems are the main defense that mobile platforms, such as Android and iOS, offer to users to protect their private data from prying apps. However, due to the tension between usability and control, such systems have several limitations that often force users to overshare sensitive data. We address some of these limitations with SmarPer, an advanced permission mechanism for Android. To address the rigidity of current permission systems and their poor matching of users’ privacy preferences, SmarPer relies on contextual information and machine learning methods to predict permission decisions at runtime. Note that the goal of SmarPer is to mimic users’ decisions, not to make privacy-preserving decisions per se. Using our SmarPer implementation, we collected 8,521 runtime permission decisions from 41 participants in real conditions. With this unique dataset, we show that an efficient Bayesian linear regression model yields a mean correct classification rate of 80% (±3%). This represents a mean relative reduction of approximately 50% in the number of incorrect decisions compared with a user-defined static permission policy, i.e., the model used in current permission systems. SmarPer also addresses the suboptimal trade-off between privacy and utility: instead of offering only “allow” or “deny” decisions, SmarPer provides an “obfuscate” option with which users can still obtain utility by revealing partial information to apps. We implemented obfuscation techniques in SmarPer for different data types and evaluated them during our data collection campaign. Our results show that 73% of the participants found obfuscation useful, and it accounted for almost a third of the total number of decisions. In short, we are the first to show, using a large dataset of real in situ permission decisions, that it is possible to learn users’ unique decision patterns at runtime using contextual information while supporting data obfuscation; this is an important step towards automating the management of permissions on smartphones.
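
    As a rough illustration of the kind of context-aware prediction SmarPer performs, the sketch below trains a classifier that maps contextual features of a permission request to an allow/obfuscate/deny decision. The feature set and data are invented, and plain logistic regression stands in for the paper’s Bayesian linear regression:

```python
# Illustrative sketch of context-aware runtime permission prediction in the
# spirit of SmarPer. The features, data, and the use of logistic regression
# (as a stand-in for the paper's Bayesian linear regression) are assumptions
# for illustration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical runtime permission decisions collected from one user.
data = pd.DataFrame({
    "app_category": ["social", "game", "navigation", "social", "game"],
    "permission":   ["location", "contacts", "location", "contacts", "location"],
    "time_of_day":  ["evening", "night", "morning", "evening", "night"],
    "foreground":   [True, False, True, True, False],
    "decision":     ["allow", "deny", "allow", "obfuscate", "deny"],
})

features = ["app_category", "permission", "time_of_day", "foreground"]
model = Pipeline([
    ("encode", ColumnTransformer(
        [("onehot", OneHotEncoder(handle_unknown="ignore"), features)])),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(data[features], data["decision"])

# Predict the decision for a new runtime request.
request = pd.DataFrame([{"app_category": "game", "permission": "location",
                         "time_of_day": "night", "foreground": False}])
print(model.predict(request)[0])
```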

    Privacy Manager: A Framework for Protecting Privacy in Learner Interactions

    The emergence of social tools and their integration in learning contexts has fostered interaction and collaboration among learners. Considering social interaction has several advantages for learners, mainly establishing new connections, sharing personal experiences, and receiving assistance that may improve learning. However, the amount of personal information that learners disclose in these interactions raises several privacy risks, such as identity theft and cyberbullying, which may lead to serious consequences. Despite the concerns raised, privacy as a fundamental human right is hardly recognized in today’s social context. Indeed, the conceptualization of privacy as a set of sensitive data to protect from external intrusions is no longer effective in the new social context, where the risks come essentially from the self-disclosing behaviors of the learners themselves. With that in mind, the main challenge for social learning environments is to promote social interaction between learners while preserving their privacy. To the best of our knowledge, innovations in social learning environments have focused only on the integration of new social tools, without any consideration of privacy as a necessary factor in establishing a favorable learning environment. In fact, integrating social interaction to maintain learners’ engagement and motivation is as necessary as preserving privacy in order to promote learning. We therefore propose, in this research, a privacy framework, which we call the privacy manager, that aims to preserve learners’ privacy during their interactions. Considering social interaction as a strategy for seeking and requesting peers’ help in informal learning contexts, we analyze learners’ interaction as a cognitive activity involving contextual, social, and emotional factors. Hence, our main goal is to consider all these factors in order to find a trade-off between the advantages of interaction, mainly obtaining peer feedback, and its disadvantages, particularly data disclosure and privacy risks. This is done on three levels: the first level helps learners interact with appropriate peers, considering their learning competency and their trustworthiness; the second level quantifies potential disclosure risks and supports decisions about data disclosure; the third level analyzes learners’ interactions in order to detect and discard any disclosure of personal data, using machine learning techniques and semantic analysis.
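
    The third protection level, detecting personal data disclosure, could look roughly like the sketch below. It substitutes simple patterns and keyword cues for the machine learning and semantic analysis described in the thesis, so every pattern and category shown is an assumption for illustration:

```python
# Minimal, illustrative detector for personal-data disclosure in learner
# messages. The thesis describes machine learning and semantic analysis; this
# sketch uses simple regular expressions and keyword cues as a stand-in, so
# all patterns and categories below are illustrative assumptions.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b"),
}
KEYWORD_CUES = {
    "home_address": ["i live at", "my address is"],
    "health": ["my diagnosis", "i was diagnosed"],
}

def detect_disclosures(message: str) -> list[str]:
    """Return the categories of personal data detected in a message."""
    found = [name for name, rx in PATTERNS.items() if rx.search(message)]
    lowered = message.lower()
    found += [name for name, cues in KEYWORD_CUES.items()
              if any(cue in lowered for cue in cues)]
    return found

print(detect_disclosures("You can reach me at alice@example.org, I live at 12 Oak St."))
# -> ['email', 'home_address']
```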

    Predicting Privacy Behavior on Online Social Networks

    Online Social Networks (OSNs) have come to play an increasingly important role in our social lives, and their inherent privacy problems have become a major concern for users. Can we assist consumers in their privacy decision-making practices, for example by predicting their preferences and giving them personalized advice? To accomplish this, we need to study the factors that affect users’ privacy decision-making practices. In this paper, we comprehensively investigate these factors in light of two common OSN scenarios: the case where other users request access to the user’s information, and the case where the user shares this information voluntarily. Using a real-life dataset from Google+ and three location-sharing datasets, we identify behavioral analogs to psychological variables that are known to affect users’ disclosure behavior: the trustworthiness of the requester/information audience, the sharing tendency of the receiver/information holder, the sensitivity of the requested/shared information, the appropriateness of the request/sharing activity, as well as some contextual information. We also explore how these factors affect privacy decision making. Based on these factors, we build a privacy decision-making prediction model that can be used to give users personalized advice regarding their privacy decision-making practices.
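
    A minimal sketch of the kind of prediction model the abstract describes appears below; the features mirror the behavioral analogs listed above, but the data, the 0-1 scaling, and the choice of a random forest are illustrative assumptions rather than the paper’s actual model or datasets:

```python
# Illustrative sketch of a privacy decision-making prediction model:
# behavioral proxies for trustworthiness, sharing tendency, sensitivity, and
# appropriateness predict an allow/deny decision. The data, scaling, and the
# random-forest choice are assumptions, not the paper's actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: requester_trustworthiness, owner_sharing_tendency,
#          information_sensitivity, request_appropriateness (all scaled 0-1).
X = np.array([
    [0.9, 0.8, 0.2, 0.9],
    [0.2, 0.3, 0.9, 0.1],
    [0.7, 0.6, 0.4, 0.8],
    [0.3, 0.4, 0.8, 0.2],
    [0.8, 0.9, 0.1, 0.7],
    [0.1, 0.2, 0.9, 0.3],
])
y = np.array(["allow", "deny", "allow", "deny", "allow", "deny"])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new access request; the predicted label and class probabilities
# could be surfaced to the user as personalized advice.
request = np.array([[0.6, 0.5, 0.7, 0.4]])
print(model.predict(request)[0], model.predict_proba(request)[0])
```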