    Scramble! Your Social Network Data

    Resolving Multi-party Privacy Conflicts in Social Media

    Items shared through Social Media may affect more than one user's privacy --- e.g., photos that depict multiple users, comments that mention multiple users, or events to which multiple users are invited. The lack of multi-party privacy management support in current mainstream Social Media infrastructures leaves users unable to appropriately control with whom these items are actually shared. Computational mechanisms that merge the privacy preferences of multiple users into a single policy for an item can help solve this problem. However, merging multiple users' privacy preferences is not an easy task, because privacy preferences may conflict, so methods to resolve conflicts are needed. Moreover, these methods need to consider how users would actually reach an agreement about a solution to the conflict, in order to propose solutions that are acceptable to all of the users affected by the item to be shared. Current approaches are either too demanding or only consider fixed ways of aggregating privacy preferences. In this paper, we propose the first computational mechanism to resolve conflicts for multi-party privacy management in Social Media that is able to adapt to different situations by modelling the concessions that users make to reach a solution to the conflicts. We also present results of a user study in which our proposed mechanism outperformed other existing approaches in terms of how many times each approach matched users' behaviour.
    Comment: Authors' version of the paper accepted for publication at IEEE Transactions on Knowledge and Data Engineering.
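
    To make the multi-party setting concrete, here is a minimal, hypothetical sketch (not the paper's actual mechanism): each co-owner of an item votes to grant or deny access per potential viewer, and conflicting votes are settled by a simple concession threshold. All names and the threshold rule are illustrative assumptions.

```python
# Toy multi-party privacy conflict resolution: each co-owner votes
# 'grant' or 'deny' per viewer; conflicts are resolved by checking
# whether enough co-owners would concede to sharing.

def resolve_conflicts(preferences, concession_threshold=0.5):
    """preferences: dict mapping viewer -> list of co-owner votes
    ('grant' or 'deny'). Returns one merged policy per viewer."""
    policy = {}
    for viewer, votes in preferences.items():
        grants = votes.count("grant")
        if grants == len(votes):        # unanimous grant: no conflict
            policy[viewer] = "grant"
        elif grants == 0:               # unanimous deny: no conflict
            policy[viewer] = "deny"
        else:
            # Conflict: grant only if the share of 'grant' votes
            # exceeds the concession threshold.
            ratio = grants / len(votes)
            policy[viewer] = "grant" if ratio > concession_threshold else "deny"
    return policy

prefs = {"alice": ["grant", "grant", "deny"],
         "bob":   ["deny", "deny", "deny"]}
print(resolve_conflicts(prefs))  # {'alice': 'grant', 'bob': 'deny'}
```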

    Access Control in Social Networks: A Reachability-Based Approach

    Nowadays, social networks are attracting more and more users. These subscribers may share personal and sensitive information with a large number of possibly unknown other users, in a network that is in constant evolution. This raises the need to give users more control over the distribution of their shared content, which can be accessed by a community far wider than they may expect. Our concern is to devise and enforce an appropriate access control model for online social networks that enables users to specify their privacy preferences in an expressive way and that scales well over both small and large social graphs (i.e., regardless of the size of the social graph). In this paper, we propose an access control model for online social networks based on connection characteristics between users, in an extended sense that includes indirect connections. This model provides conditional access to shared resources based on reachability constraints between the owner and the requester of a piece of information. We then describe the work we have done to scale access control enforcement over large social graphs. This paper describes PhD work carried out at Télécom ParisTech under the guidance of Talel Abdessalem.
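
    As a rough illustration of a reachability constraint (this is a toy sketch, not the model from the paper), access can be granted only when the owner reaches the requester within a maximum distance using edges of an allowed relationship type. Graph layout, relation names, and the depth bound are assumptions for the example.

```python
# Toy reachability-based access check: breadth-first search from the
# owner, bounded by max_depth, following only allowed edge types.

from collections import deque

def can_access(graph, owner, requester, max_depth, allowed_types):
    """graph: dict node -> list of (neighbour, relation_type) edges."""
    seen = {owner}
    frontier = deque([(owner, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if node == requester:
            return True
        if depth == max_depth:
            continue  # do not expand beyond the distance bound
        for neighbour, rel in graph.get(node, []):
            if rel in allowed_types and neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return False

g = {"owner":   [("friend1", "friend")],
     "friend1": [("friend2", "friend"), ("colleague1", "work")]}
print(can_access(g, "owner", "friend2", 2, {"friend"}))      # True
print(can_access(g, "owner", "colleague1", 2, {"friend"}))   # False
```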

    Detecting privacy preferences from online social footprints: a literature review

    Providing personalized content can be of great value to both users and vendors. However, effective personalization hinges on collecting large amounts of personal data about users. With the exponential growth of activity on social networking websites, these sites have become a prominent platform for gathering and analyzing such information. Even though a considerable number of social media users make their data publicly available, previous studies have revealed a dichotomy between privacy-related intentions and behaviours. Users often face difficulties specifying privacy policies that are consistent with their actual privacy concerns and attitudes, and simply follow the default permissive privacy settings. Therefore, despite the availability of data, it is imperative to develop and employ algorithms that automatically predict users' privacy preferences for personalization purposes. In this document, we review prior studies that tackle this challenging task and make use of users' online social footprints to discover their desired privacy settings.
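
    As a toy illustration of the prediction task surveyed here, a classifier can map footprint features to an observed privacy setting. The features, labels, and model choice below are assumptions for the sake of the example; the reviewed studies use a variety of richer signals and models.

```python
# Hypothetical sketch: predict a user's privacy preference from
# simple footprint features (public profile fields, posting rate,
# network size). Assumes scikit-learn is available.

from sklearn.tree import DecisionTreeClassifier

# Each row: [public profile fields, posts per week, friend count]
footprints = [
    [10, 20, 900],   # very active, open profile
    [2,   1,  80],   # sparse footprint
    [8,  15, 600],
    [1,   0,  40],
]
# Observed settings: 1 = permissive, 0 = restrictive
settings = [1, 0, 1, 0]

model = DecisionTreeClassifier().fit(footprints, settings)
print(model.predict([[9, 18, 700]]))  # likely permissive -> [1]
```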

    Ethical aspects of doctoral-research advising in the emerging African information society

    This paper discusses the ethical aspects of doctoral-research advising in the emerging African information society from an African perspective. It addresses the following research questions: What is the status of information ethics in Africa? What theoretical frameworks are available to illuminate the ethical dimension of the emerging African information society? To what extent are ethical aspects of the emerging African information society integrated into doctoral research advising in library and information science in Africa? What are the roles and obligations of the supervisor and supervisee in doctoral research? How is information and communication technology (ICT) being used to enhance doctoral-research advising? The paper is underpinned by various ethical theoretical models, such as the Trust Model, Hayward Power Relations, classical and contemporary ethical traditions, and game theory. It relies upon a literature survey to address the research problems. Results reveal, among other things, the milestones achieved by African scholars in promoting information ethics through curriculum development and research. However, there is a need for the evolving information society to take cognizance of African cultural contexts. The results also reveal that supervisor–supervisee relationships are constrained. The ethical dimension of the emerging African information society should be infused into the doctoral-research process to improve the supervisor–supervisee relationship. This should be supported by responsible use of ICT, taking into account African cultural contexts and African values to facilitate the doctoral-advising process. All this should be buttressed by an enabling policy framework at the institutional level to promote harmony and productivity in doctoral research.

    Implicit Contextual Integrity in Online Social Networks

    Many real incidents demonstrate that users of Online Social Networks need mechanisms that help them manage their interactions by increasing their awareness of the different contexts that coexist in Online Social Networks, preventing them from exchanging inappropriate information in those contexts, and preventing the dissemination of sensitive information from some contexts to others. Contextual Integrity is a privacy theory that conceptualises the appropriateness of information sharing based on the contexts in which this information is to be shared. Computational models of Contextual Integrity assume the existence of well-defined contexts, in which individuals enact pre-defined roles and information sharing is governed by an explicit set of norms. However, contexts in Online Social Networks are known to be implicit, unknown a priori, and ever changing; users' relationships are constantly evolving; and the information sharing norms are implicit. This makes current Contextual Integrity models unsuitable for Online Social Networks. In this paper, we propose the first computational model of Implicit Contextual Integrity, presenting an information model for Implicit Contextual Integrity as well as a so-called Information Assistant Agent that uses the information model to learn implicit contexts, relationships, and information sharing norms in order to help users avoid inappropriate information exchanges and undesired information disseminations. Through an experimental evaluation, we validate the properties of the proposed model. In particular, Information Assistant Agents are shown to: (i) infer the information sharing norms even if only a small proportion of the users follow the norms and in the presence of malicious users; (ii) help reduce the exchange of inappropriate information and the dissemination of sensitive information with only a partial view of the system and of the information received and sent by their users; and (iii) minimise the burden on users by not raising unnecessary alerts.
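
    To give a flavour of the norm-inference idea (a toy version only; the paper's agent and information model are far richer), an assistant agent might tally whether peers share or withhold each kind of information in each context, infer a majority norm, and alert only when an intended action contradicts that norm. All class and method names here are illustrative assumptions.

```python
# Toy norm-inference sketch: infer an implicit sharing norm per
# (context, information type) pair from observed peer behaviour,
# then flag norm-violating shares.

from collections import defaultdict

class InfoAssistantAgent:
    def __init__(self):
        # (context, info_type) -> [share_count, withhold_count]
        self.observations = defaultdict(lambda: [0, 0])

    def observe(self, context, info_type, shared):
        self.observations[(context, info_type)][0 if shared else 1] += 1

    def norm(self, context, info_type):
        share, withhold = self.observations[(context, info_type)]
        return "share" if share >= withhold else "withhold"

    def alert(self, context, info_type, about_to_share):
        """Raise an alert only when the action contradicts the norm."""
        return about_to_share and self.norm(context, info_type) == "withhold"

agent = InfoAssistantAgent()
for _ in range(8):
    agent.observe("work", "health", shared=False)
agent.observe("work", "health", shared=True)  # one norm-violating peer
print(agent.alert("work", "health", about_to_share=True))  # True
```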