12 research outputs found

    Model Checking Social Network Models

    A social network service is a platform to build social relations among people sharing similar interests and activities. The underlying structure of a social network service is the social graph, where nodes represent users and arcs represent the users' social links and other kinds of connections. One important concern in social networks is privacy: what others are (not) allowed to know about us. The "logic of knowledge" (epistemic logic) is thus a good formalism to define, and reason about, privacy policies. In this paper we consider the problem of verifying knowledge properties over social network models (SNMs), that is, social graphs enriched with knowledge bases containing the information that the users know. More concretely, our contributions are: i) we prove that the model checking problem for epistemic properties over SNMs is decidable; ii) we prove that a number of properties of knowledge that are sound w.r.t. Kripke models are also sound w.r.t. SNMs; iii) we give a satisfaction-preserving encoding of SNMs into canonical Kripke models, and we also characterise which Kripke models may be translated into SNMs; iv) we show that, for SNMs, the model checking problem is cheaper than the one based on standard Kripke models. Finally, we have developed a proof-of-concept implementation of the model-checking algorithm for SNMs. (Comment: In Proceedings GandALF 2017, arXiv:1709.0176)
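The core idea — a social graph whose nodes carry knowledge bases, over which epistemic formulas such as "user u knows fact f" are checked — can be sketched in a few lines. This is a hedged toy illustration, not the paper's algorithm: the class and method names (`SNM`, `K`) and the atomic-fact representation are assumptions made for this example.

```python
# Toy sketch of checking an epistemic property over a social network
# model (SNM): each user node carries a knowledge base (a set of
# atomic facts), and edges are social links. Names are illustrative.

class SNM:
    def __init__(self):
        self.kb = {}        # user -> set of atomic facts that user knows
        self.links = set()  # undirected social-graph edges

    def add_user(self, u, facts):
        self.kb[u] = set(facts)

    def add_link(self, u, v):
        self.links.add(frozenset((u, v)))

    def K(self, u, fact):
        """K_u fact: does user u know the atomic fact?"""
        return fact in self.kb[u]

m = SNM()
m.add_user("alice", {"alice_location"})
m.add_user("bob", set())
m.add_link("alice", "bob")

# Privacy property: Bob must not know Alice's location.
print(m.K("bob", "alice_location"))    # → False
print(m.K("alice", "alice_location"))  # → True
```

Checking knowledge directly against per-user knowledge bases, instead of exploring the possible-worlds structure of a full Kripke model, is what makes a cheaper model checking procedure plausible.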

    Modeling of Personalized Privacy Disclosure Behavior: A Formal Method Approach

    In order to create user-centric and personalized privacy management tools, the underlying models must account for individual users' privacy expectations, preferences, and their ability to control their information sharing activities. Existing studies of users' privacy behavior modeling attempt to frame the problem from a request's perspective, which lacks the crucial involvement of the information owner and results in limited or no control over policy management. Moreover, very few of them take into consideration the aspects of correctness, explainability, usability, and acceptance of the methodologies for each user of the system. In this paper, we present a methodology to formally model, validate, and verify personalized privacy disclosure behavior based on the analysis of the user's situational decision-making process. We use a model checking tool named UPPAAL to represent users' self-reported privacy disclosure behavior by an extended form of finite state automata (FSA), and perform reachability analysis for the verification of privacy properties through computation tree logic (CTL) formulas. We also describe practical use cases of the methodology, depicting the potential of formal techniques for the design and development of user-centric behavioral modeling. Through extensive experimental results, this paper contributes several insights to the area of formal methods and user-tailored privacy behavior modeling.
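The reachability analysis the abstract mentions — verifying a CTL formula of the shape EF goal ("some execution eventually reaches a goal state") over a finite state automaton — reduces to graph search. The sketch below shows that reduction on an invented disclosure-behavior FSA; the state names and transitions are hypothetical, not taken from the paper, and real UPPAAL models are timed automata with far richer features.

```python
# Minimal sketch of the reachability check behind a CTL property
# EF goal on a finite state automaton, via breadth-first search.
from collections import deque

def ef_reachable(transitions, start, goal_states):
    """Return True iff some state in goal_states is reachable from start."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s in goal_states:
            return True
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

# Hypothetical disclosure-behavior FSA: states are privacy-relevant
# situations in the user's decision process.
fsa = {
    "idle": ["request_received"],
    "request_received": ["disclosed", "denied"],
    "denied": ["idle"],
}

# EF disclosed: can a disclosure ever happen from the initial state?
print(ef_reachable(fsa, "idle", {"disclosed"}))  # → True
```

Safety properties are checked the same way in negated form: "forbidden disclosure never happens" (AG not forbidden) holds exactly when the forbidden states are unreachable.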

    Enhancing Key Digital Literacy Skills: Information Privacy, Information Security, and Copyright/Intellectual Property

    Key Messages

    Background: Knowledge and skills in the areas of information security, information privacy, and copyright/intellectual property rights and protection are of key importance for organizational and individual success in an evolving society and labour market in which information is a core resource. Organizations require skilled and knowledgeable professionals who understand the risks and responsibilities related to the management of information privacy, information security, and copyright/intellectual property. Professionals with this expertise can assist organizations to ensure that they and their employees meet requirements for the privacy and security of information in their care and control, and that neither the organization nor its employees contravene copyright provisions in their use of information. Failure to meet any of these responsibilities can expose the organization to reputational harm, legal action, and/or financial loss.

    Context: Inadequate or inappropriate information management practices of individual employees are at the root of organizational vulnerabilities with respect to information privacy, information security, and information ownership issues. Users demonstrate inadequate skills and knowledge coupled with inappropriate practices in these areas, and similar gaps at the organizational level are also widely documented. National and international regulatory frameworks governing information privacy, information security, and copyright/intellectual property are complex and in constant flux, placing an additional burden on organizations to keep abreast of their regulatory and legal responsibilities. Governance and risk management related to information privacy, security, and ownership are critical to many job categories, including the emerging areas of information and knowledge management. There is an increasing need for skilled and knowledgeable individuals to fill organizational roles related to information management, with particular growth in these areas within the past 10 years. Our analysis of current job postings in Ontario supports the demand for skills and knowledge in these areas.

    Key Competencies: We have developed a set of key competencies across a range of areas that responds to these needs by providing a blueprint for the training of information managers prepared for leadership and strategic positions. These competencies are identified in the full report. Competency areas include: conceptual foundations; risk assessment; tools and techniques for threat responses; communications; contract negotiation and compliance; evaluation and assessment; human resources management; organizational knowledge management; planning; policy awareness and compliance; policy development; and project management.

    Belief revision, non-monotonic reasoning and secrecy for epistemic agents

    Software agents are increasingly used to handle the information of individuals and companies. They also exchange this information with other software agents and humans. This raises the need for sophisticated methods for such agents to represent information, to change it, to reason with it, and to protect it. We consider these needs for communicating autonomous agents with incomplete information in a partially observable, dynamic environment. The protection of secret information requires the agents to consider the information held by other agents, and the possible inferences those agents can draw. Further, they have to keep track of this information, and they have to anticipate the effects of their actions. In the setting we consider, the preservation of secrecy is not always possible. Consequently, an agent has to be able to evaluate and minimize the degree of violation of secrecy. Incomplete information calls for non-monotonic logics, which allow drawing tentative conclusions. A dynamic environment calls for operators that change the information of the agent when new information is received. We develop a general framework of agents that represent their information by logical knowledge representation formalisms, with the aim of integrating and combining methods for non-monotonic reasoning, for belief change, and for the protection of secret information. For the integration of belief change theory, we develop new change operators that make use of non-monotonic logic in the change process, and new operators for non-monotonic formalisms. We formally prove their adherence to the quality standards taken and adapted from belief revision theory. Based on the resulting framework we develop a formal framework for secrecy-aware agents that meet the requirements described above. We consider different settings for secrecy and analyze requirements to preserve secrecy.
For the protection of secrecy we elaborate on change operations and the evaluation of actions with respect to secrecy, both declaratively and by providing constructive approaches. We formally prove the adherence of the constructions to the declarative specifications. Further, we develop concrete agent instances of our framework building on and extending the well-known BDI agent model. We build complete secrecy-aware agents that use our extended BDI model and answer set programming for knowledge representation and reasoning. For the implementation of our agents we developed Angerona, a Java multiagent development framework. It provides a general framework for developing epistemic agents and implements most of the approaches presented in this thesis.
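The "revision = retract the contradiction, then add the new information" idea that belief revision theory builds on (the Levi identity) can be illustrated on the simplest possible belief base, a set of propositional literals. This is a deliberately naive sketch for intuition only; the thesis's operators work over full non-monotonic formalisms, and the function names here are invented.

```python
# Toy belief-change operator on a belief base of propositional
# literals ("p", "-p", ...): revising by a new literal first retracts
# its negation (to keep the base consistent), then adds it -- a naive
# instance of "revision = contraction followed by expansion".

def neg(lit):
    """Negation of a literal: p <-> -p."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def revise(base, lit):
    """Revise a belief base (set of literals) by lit, preserving consistency."""
    return (base - {neg(lit)}) | {lit}

beliefs = {"p", "q"}
beliefs = revise(beliefs, "-p")  # new evidence contradicts p
print(sorted(beliefs))           # → ['-p', 'q']
```

Even this toy version shows the success and consistency postulates at work: the new belief is always accepted, and the result never contains both a literal and its negation.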

    A dynamic logic for privacy compliance

    Knowledge-based privacy policies are more declarative than traditional action-based ones, because they specify only what is permitted or forbidden to know, and leave the derivation of the permitted actions to a security monitor. This inference problem is already non-trivial with a static privacy policy, and becomes challenging when privacy policies can change over time. We therefore introduce a dynamic modal logic that allows not only reasoning about permitted and forbidden knowledge to derive the permitted actions, but also explicitly representing the declarative privacy policies together with their dynamics. The logic can be used to check both regulatory and behavioral compliance, respectively by checking that the permissions and obligations set up by the security monitor of an organization are not in conflict with the privacy policies, and by checking that these obligations are indeed enforced.
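The inference problem a security monitor faces — deciding whether an action is permitted, given a policy phrased in terms of forbidden *derivable* knowledge rather than forbidden actions — can be sketched with forward chaining over Horn-style rules that model the recipient's inference power. All rules, facts, and function names below are invented for illustration and are not the paper's logic.

```python
# Sketch of a knowledge-based security monitor: a disclosure is
# permitted iff, after the recipient combines it with what they
# already know and closes under their inference rules, no forbidden
# fact becomes derivable.

def closure(facts, rules):
    """Forward-chain Horn rules (body_set, head) to a fixpoint."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return known

def permitted(disclosure, already_known, rules, forbidden):
    """Permit the action iff it lets the recipient derive no forbidden fact."""
    return not (closure(already_known | {disclosure}, rules) & forbidden)

# Hypothetical quasi-identifier rule: zipcode + birthdate reveal identity.
rules = [({"zipcode", "birthdate"}, "identity")]
forbidden = {"identity"}

print(permitted("zipcode", set(), rules, forbidden))          # → True
print(permitted("birthdate", {"zipcode"}, rules, forbidden))  # → False
```

Note how the same action ("disclose birthdate") flips from permitted to forbidden depending on what the recipient already knows — exactly why deriving permitted actions from a knowledge-level policy is non-trivial even before the policy itself starts changing.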