56 research outputs found

    TURP: Managing Trust for Regulating Privacy in Internet of Things

    Internet of Things (IoT) applications, such as smart home or ambient assisted living systems, promise useful services to end users. Most of these services rely heavily on sharing and aggregating information among devices, which often raises privacy concerns. Contrary to traditional systems, where each user's privacy is managed through well-defined policies, the scale, dynamism, and heterogeneity of IoT systems make it impossible to specify privacy policies for all possible situations. Instead, this paper argues that IoT devices must reason about privacy on their own, depending on the norms, the context, and the trust among entities. We present a technique in which an IoT device collects information from others and evaluates the trustworthiness of the information sources to decide whether sharing information with others is suitable. We demonstrate the applicability of the technique in an IoT pilot study.
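The abstract describes a device that weighs reports about a requester by how much it trusts each reporting source before deciding to share. A minimal sketch of that idea, with all names and the threshold assumed for illustration (not taken from the paper):

```python
def aggregate_trust(reports):
    """Combine third-party trust reports in [0, 1], weighting each report
    by how much we trust the device that produced it."""
    num = sum(source_trust * value for source_trust, value in reports)
    den = sum(source_trust for source_trust, _ in reports)
    return num / den if den else 0.5  # no evidence -> neutral prior

def should_share(reports, threshold=0.7):
    """Share only if the weighted trust estimate clears a (context-dependent) threshold."""
    return aggregate_trust(reports) >= threshold

# Each report: (trust in the reporting device, its rating of the requester).
reports = [(0.9, 0.8), (0.5, 0.4), (0.8, 0.9)]
decision = should_share(reports)
```

In the actual paper the threshold would depend on norms and context; here it is a fixed parameter purely to keep the sketch self-contained.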

    GOSU: Computing GOal SUpport with commitments in multiagent systems

    Goal-based agent architectures have been among the most effective architectures for designing agents. In such architectures, the state of the agent as well as its goal set are represented explicitly. The agent then uses its set of actions to reach the goals in its goal set. In multiagent systems, however, an agent often cannot reach a goal using its own actions alone and needs other agents to act as well. Commitments have been used successfully to regulate these interactions between agents. This paper proposes a framework and an environment for agents to manage the relations between their commitments and goals. More specifically, we provide an algorithm called GOSU to compute whether a given set of commitments can be used to achieve a particular goal. We describe how GOSU can be implemented using the Reactive Event Calculus and demonstrate its capabilities over a case study.
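At its core, checking whether commitments support a goal can be viewed as forward chaining: treat each commitment as "if the antecedent holds, the consequent will be brought about," and see whether the goal becomes reachable. The sketch below is a deliberately simplified propositional stand-in for GOSU (the paper uses the Reactive Event Calculus; the representation here is assumed):

```python
def goal_supported(state, commitments, goal):
    """Fixed-point computation: repeatedly apply commitments whose
    antecedents hold until no new facts are derived, then test the goal.

    state: set of facts currently true
    commitments: list of (antecedent_facts, consequent_facts) pairs
    goal: set of facts the agent wants to achieve
    """
    facts = set(state)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in commitments:
            if antecedent <= facts and not consequent <= facts:
                facts |= consequent
                changed = True
    return goal <= facts

# A buyer's payment triggers shipping, which triggers delivery.
commitments = [({"paid"}, {"shipped"}), ({"shipped"}, {"delivered"})]
supported = goal_supported({"paid"}, commitments, {"delivered"})
```

This ignores commitment life cycles (conditional, detached, violated), which the event-calculus formulation handles; it only illustrates the reachability question GOSU answers.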

    Purposes of and Principles for ABMs in Policy Development: A Proposal

    We propose three additional ABM purposes and a set of principles to effectively influence policy development processes. The purposes focus on the value of the modelling process itself in policy development. This differs from the usual focus on the purpose of a simulation. The proposal is based on requirements for the ABM process in a policy development context and on a case study.

    Heuristic-based approaches for CP-nets in negotiation

    CP-nets have proven to be an effective representation for capturing preferences. However, their use in multiagent negotiation is not straightforward. The main reason is that CP-nets capture a partial ordering of preferences, whereas negotiating agents are required to compare any two outcomes based on requests and offers. This makes it necessary for agents to generate total orders from their CP-nets. We have previously proposed a heuristic to generate total orders from a given CP-net. This paper proposes another heuristic based on Borda count, applies it in negotiation, and compares its performance with the previous heuristic.
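A Borda-style way to turn a partial order into a total order is to score each outcome by how many other outcomes it dominates (after closing the dominance relation transitively) and sort by score. This sketch is a generic illustration of that idea, not the specific heuristic from the paper:

```python
def borda_total_order(outcomes, dominates):
    """outcomes: list of outcomes; dominates: set of (better, worse) pairs
    from the CP-net's induced partial order. Returns outcomes sorted by a
    Borda-like score (number of outcomes each one dominates)."""
    # Transitive closure of the dominance relation.
    pairs = set(dominates)
    changed = True
    while changed:
        changed = False
        for a, b in list(pairs):
            for c, d in list(pairs):
                if b == c and (a, d) not in pairs:
                    pairs.add((a, d))
                    changed = True
    score = {o: sum(1 for a, _ in pairs if a == o) for o in outcomes}
    return sorted(outcomes, key=lambda o: score[o], reverse=True)

# Toy outcomes over two binary variables; dominance pairs are assumed.
order = borda_total_order(
    ["ab", "aB", "Ab", "AB"],
    {("ab", "aB"), ("aB", "AB"), ("ab", "Ab")},
)
```

Incomparable outcomes end up with tied scores, so some tie-breaking rule is still needed; that is exactly where such heuristics differ.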

    Uncertainty-Aware Personal Assistant for Making Personalized Privacy Decisions

    Many software systems, such as online social networks, enable users to share information about themselves. Although the action of sharing is simple, it requires an elaborate thought process on privacy: what to share, with whom to share, and for what purposes. Thinking about these for each piece of content to be shared is tedious. Recent approaches to tackle this problem build personal assistants that can help users by learning what is private over time and recommending privacy labels such as private or public for individual content that a user considers sharing. However, privacy is inherently ambiguous and highly personal. Existing approaches to recommend privacy decisions do not address these aspects of privacy sufficiently. Ideally, a personal assistant should be able to adjust its recommendation based on a given user, considering that user's privacy understanding. Moreover, the personal assistant should be able to assess when its recommendation would be uncertain and let the user make the decision on her own. Accordingly, this article proposes a personal assistant that uses evidential deep learning to classify content based on its privacy label. An important characteristic of the personal assistant is that it can model the uncertainty in its decisions explicitly, determine that it does not know the answer, and refrain from making a recommendation when its uncertainty is high. By factoring in the user's own understanding of privacy, such as risk factors or own labels, the personal assistant can personalize its recommendations per user. We evaluate our proposed personal assistant on a well-known dataset. Our results show that our personal assistant can accurately identify uncertain cases, personalize its recommendations to its user's needs, and thus help users preserve their privacy.
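Evidential deep learning models typically output non-negative evidence per class, read as parameters of a Dirichlet distribution; in the subjective-logic reading, the leftover mass u = K/S quantifies "I don't know." A small sketch of that uncertainty computation and the resulting delegate-or-recommend rule (the threshold and label names are assumed, not from the article):

```python
def dirichlet_uncertainty(evidence):
    """evidence: non-negative evidence e_k per class from the network.
    alpha_k = e_k + 1, S = sum(alpha); belief_k = e_k / S; u = K / S."""
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    beliefs = [e / S for e in evidence]
    return beliefs, K / S

def recommend(evidence, labels=("private", "public"), u_max=0.5):
    """Recommend the highest-belief label, or return None to delegate
    the decision to the user when uncertainty is too high."""
    beliefs, u = dirichlet_uncertainty(evidence)
    if u > u_max:
        return None
    return labels[max(range(len(beliefs)), key=beliefs.__getitem__)]

confident = recommend([9.0, 1.0])   # strong evidence for "private"
unsure = recommend([0.2, 0.3])      # little evidence either way -> delegate
```

Note how weak total evidence, not just a close vote between classes, drives u upward; that is what lets the assistant say "I don't know" rather than guess.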

    Explain to Me: Towards Understanding Privacy Decisions.

    Privacy assistants help users manage their privacy online. Their tasks can vary from detecting privacy violations to recommending sharing actions for content that the user intends to share. Recent work on these tasks is promising and shows that privacy assistants can tackle them successfully. However, for such privacy assistants to be adopted by users, it is important that these assistants can explain their decisions. Accordingly, this paper develops a methodology to create explanations of privacy decisions. The methodology is based on identifying important topics in a domain of interest, providing explanation schemes for decisions, and generating explanations automatically. We apply the proposed methodology to a real-world privacy data set, which contains images labeled as private or public, to explain the labels. We evaluate our approach in a user study that shows which factors make explanations useful to users.

    A Framework for Formal Modeling and Analysis of Organizations

    A new formal, role-based framework for modeling and analyzing both real-world and artificial organizations is introduced. It exploits static and dynamic properties of the organizational model and includes the (frequently ignored) environment. The transition is described from a generic framework of an organization to its deployed model and to the actual agent allocation. For verification and validation of the proposed model, a set of dedicated techniques is introduced. Moreover, where most computational models can handle only two- or three-layered organizational structures, our framework can handle an arbitrary number of organizational layers. Hence, real-world organizations can be modeled and analyzed, as illustrated by a case study within the DEAL project line. © Springer Science+Business Media, LLC 2007

    Properties of Referral Networks: Emergence of Authority and Trust

    Developing, maintaining, and disseminating trust in open environments is crucial. We develop a decentralized approach to trust in the context of service location. Service providers and consumers are modeled as autonomous agents participating in a multiagent system that functions as a referral network. When a service is requested, an agent may provide the requested service or give a referral to another agent. The agents can judge the quality of the service obtained. Importantly, the agents can adaptively select their neighbors, decide with whom to interact, and choose how to give referrals. The agents' actions lead to the evolution of the referral network. We study the emergent properties of referral networks, especially those dealing with their quality, efficiency, and structure. We first show how the exchange of referrals affects locating service providers, then identify undesirable network structures and show under which conditions these network structures emerge. When agents refer and change neighbors in certain specific ways, based on link structure, some agents become substantially more popular or authoritative than others. These asymmetric distributions of popularity and authoritativeness resemble those seen o
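The basic service-location mechanism described above — query an agent, receive either the service or a referral, and follow the chain — can be sketched in a few lines. The data structures here are assumed for illustration; the actual model also involves quality judgments and neighbor adaptation:

```python
def locate(provider_quality, referrals, start, query_limit=5):
    """Follow a referral chain from `start` until a provider is found.

    provider_quality: {agent: service quality} for agents that provide the service
    referrals: {agent: agent it refers the consumer to}
    Returns (provider, quality), or (None, 0.0) on failure/cycle/limit.
    """
    agent, seen = start, set()
    for _ in range(query_limit):
        if agent in seen:          # referral cycle: an undesirable structure
            break
        seen.add(agent)
        if agent in provider_quality:
            return agent, provider_quality[agent]
        agent = referrals.get(agent)
        if agent is None:          # dead end: no further referral
            break
    return None, 0.0

# "a" refers to "b", who refers to "c", who actually provides the service.
found = locate({"c": 0.9}, {"a": "b", "b": "c"}, "a")
```

Cycles and dead ends in this sketch correspond to the undesirable network structures the paper analyzes; in the full model, agents rewire their neighbor sets to avoid them.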

    PANOLA: A Personal Assistant for Supporting Users in Preserving Privacy

    Privacy is the right of individuals to keep personal information to themselves. When individuals use online systems, they should be given the right to decide what information they would like to share and what to keep private. When a piece of information pertains only to a single individual, preserving privacy is possible by providing the right access options to the user. However, when a piece of information pertains to multiple individuals, such as a picture of a group of friends or a collaboratively edited document, deciding how to share this information and with whom is challenging. The problem becomes more difficult when the individuals who are affected by the information have different, possibly conflicting privacy constraints. Resolving this problem requires a mechanism that takes the relevant individuals' concerns into account to decide on the privacy configuration of the information. Because these decisions need to be made frequently (i.e., for each piece of shared content), the mechanism should be automated. This article presents a personal assistant to help end users manage the privacy of their content. When some content that belongs to multiple users is about to be shared, the personal assistants of the users employ an auction-based privacy mechanism to regulate the privacy of the content. To do so, each personal assistant learns the preferences of its user over time and produces bids accordingly. Our proposed personal assistant is capable of assisting users with different personas and thus ensures that people benefit from it as much as they need. Our evaluations over multiagent simulations with online social network content show that our proposed personal assistant enables privacy-respecting content sharing.
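The auction step for co-owned content can be illustrated with a very simple rule: each assistant bids for its user's preferred privacy setting, and the setting with the largest total bid wins. This is a toy stand-in for PANOLA's mechanism (the bid values and settings below are invented for the example):

```python
def decide_privacy(bids):
    """bids: {user: (preferred_setting, bid_amount)}, where bid_amount
    reflects how strongly that user's assistant values its preference.
    Returns the setting with the largest total bid across co-owners."""
    totals = {}
    for setting, amount in bids.values():
        totals[setting] = totals.get(setting, 0.0) + amount
    return max(totals, key=totals.get)

# Three co-owners of one photo; each assistant bids from learned preferences.
bids = {
    "alice": ("friends-only", 0.8),
    "bob":   ("public",       0.3),
    "carol": ("friends-only", 0.4),
}
winner = decide_privacy(bids)
```

A real mechanism would also need budgets and incentive considerations so that an assistant cannot win every auction by always bidding its maximum; the learning component determines how high each assistant should bid per content item.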