Norm Monitoring under Partial Action Observability
In the context of using norms for controlling multi-agent systems, a vitally
important question that has not yet been addressed in the literature is the
development of mechanisms for monitoring norm compliance under partial action
observability. This paper proposes the reconstruction of unobserved actions to
tackle this problem. In particular, we formalise the problem of reconstructing
unobserved actions, and propose an information model and algorithms for
monitoring norms under partial action observability using two different
processes for reconstructing unobserved actions. Our evaluation shows that
reconstructing unobserved actions significantly increases the number of norm
violations and fulfilments detected.
Comment: Accepted at IEEE Transactions on Cybernetics
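As a rough illustration of the idea, the sketch below reconstructs unobserved actions whose preconditions held and whose effects explain an observed state change, then checks observed plus reconstructed actions against a prohibition norm. This is a minimal sketch assuming a STRIPS-style action model; all names (Action, reconstruct_actions, monitor, PROHIBITED) are our own illustrative assumptions, not the paper's information model or algorithms.

```python
# Illustrative sketch only: reconstruct unobserved actions from observed
# state changes, then monitor norms over observed + reconstructed actions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before execution
    effects: frozenset        # facts brought about by execution

# A prohibition norm, modelled here as facts that should never hold.
PROHIBITED = frozenset({"speeding"})

def reconstruct_actions(prev_state, curr_state, repertoire):
    """Return unobserved actions that could explain the state change."""
    delta = curr_state - prev_state
    return [a for a in repertoire
            if a.preconditions <= prev_state and a.effects & delta]

def monitor(prev_state, curr_state, observed, repertoire):
    """Detect norm violations among observed and reconstructed actions."""
    candidates = list(observed) + reconstruct_actions(prev_state,
                                                      curr_state, repertoire)
    return [a.name for a in candidates if a.effects & PROHIBITED]

# Example: the monitor never saw "speed_up", but the new state reveals it.
speed_up = Action("speed_up", frozenset({"on_road"}), frozenset({"speeding"}))
print(monitor(frozenset({"on_road"}), frozenset({"on_road", "speeding"}),
              observed=[], repertoire=[speed_up]))  # ['speed_up']
```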
Resolving Multi-party Privacy Conflicts in Social Media
Items shared through Social Media may affect more than one user's privacy:
for example, photos that depict multiple users, comments that mention
multiple users, or events to which multiple users are invited. The lack of
multi-party privacy management support in current mainstream Social Media
infrastructures leaves users unable to appropriately control with whom these
items are actually shared. Computational mechanisms that are able to merge the privacy
preferences of multiple users into a single policy for an item can help solve
this problem. However, merging multiple users' privacy preferences is not an
easy task, because privacy preferences may conflict, so methods to resolve
conflicts are needed. Moreover, these methods need to consider how users
would actually reach an agreement about a solution to the conflict, so that
the solutions they propose are acceptable to all of the users affected by
the item to be shared. Current approaches are either too demanding or only consider
fixed ways of aggregating privacy preferences. In this paper, we propose the
first computational mechanism to resolve conflicts for multi-party privacy
management in Social Media that is able to adapt to different situations by
modelling the concessions that users make to reach a solution to the conflicts.
We also present results of a user study in which our proposed mechanism
outperformed other existing approaches in terms of how many times each approach
matched users' behaviour.
Comment: Authors' version of the paper accepted for publication at IEEE Transactions on Knowledge and Data Engineering, 201
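To make the concession idea concrete, here is a minimal, hypothetical sketch, not the paper's mechanism: each viewer-level conflict is settled by the majority action only if every dissenting user would concede for that viewer; otherwise it falls back to the restrictive action. The names and the majority-with-concessions rule are our own assumptions.

```python
# Hypothetical sketch of multi-party conflict resolution via concessions.
def resolve(preferences, concessions):
    """preferences: {user: {viewer: "grant" or "deny"}}.
    concessions: {user: set of viewers for whom that user would accept
    the majority decision even against their own preference}."""
    viewers = {v for prefs in preferences.values() for v in prefs}
    policy = {}
    for viewer in sorted(viewers):
        votes = [prefs.get(viewer, "deny") for prefs in preferences.values()]
        counts = {a: votes.count(a) for a in set(votes)}
        # Deterministic majority; ties fall to the restrictive action.
        majority = max(counts, key=lambda a: (counts[a], a == "deny"))
        dissenters = [u for u, prefs in preferences.items()
                      if prefs.get(viewer, "deny") != majority]
        # Adopt the majority action only if every dissenter concedes.
        policy[viewer] = (majority
                          if all(viewer in concessions.get(u, set())
                                 for u in dissenters)
                          else "deny")
    return policy

# Example: alice and bob outvote dave, who concedes for viewer "carol".
prefs = {"alice": {"carol": "grant"},
         "bob":   {"carol": "grant"},
         "dave":  {"carol": "deny"}}
print(resolve(prefs, {"dave": {"carol"}}))  # {'carol': 'grant'}
```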
StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection
Over the years, most research towards defenses against adversarial attacks on
machine learning models has been in the image recognition domain. The malware
detection domain has received less attention despite its importance. Moreover,
most work exploring these defenses has focused on several methods but with
no strategy for applying them. In this paper, we introduce StratDef, a
strategic defense system based on a moving target defense approach. We overcome
challenges related to the systematic construction, selection, and strategic use
of models to maximize adversarial robustness. StratDef dynamically and
strategically chooses the best models to increase the uncertainty for the
attacker while minimizing critical aspects in the adversarial ML domain, like
attack transferability. We provide the first comprehensive evaluation of
defenses against adversarial attacks on machine learning for malware detection,
where our threat model explores different levels of threat, attacker knowledge,
capabilities, and attack intensities. We show that StratDef performs better
than other defenses even when facing the peak adversarial threat. We also show
that, of the existing defenses, only a few adversarially trained models
provide substantially better protection than simply using vanilla models,
but these are still outperformed by StratDef.
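A toy sketch of the moving-target idea follows; it is our own illustration, not StratDef's actual model-selection or strategy-construction procedure, and the class, weights, and interface are invented. Each query is served by a model sampled from a strategy, so an attacker probing the system faces a shifting target.

```python
# Toy moving-target defense: answer each query with a model sampled from
# a strategy (a probability distribution over candidate models).
import random

class MovingTargetDefense:
    def __init__(self, models, weights, seed=None):
        self.models = models    # candidate classifiers, each with .predict()
        self.weights = weights  # strategy: per-model selection probabilities
        self.rng = random.Random(seed)

    def predict(self, sample):
        # Sampling the responding model per query raises the attacker's
        # uncertainty and limits attack transferability across queries.
        model = self.rng.choices(self.models, weights=self.weights, k=1)[0]
        return model.predict(sample)
```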
Privacy policy negotiation in social media
Social media involve many shared items, such as photos, which may concern more than one user. The challenge is that users’ individual privacy preferences for the same item may conflict, so an approach that simply merges the users’ privacy preferences in some way may produce unsatisfactory results. Previous proposals to deal with the problem were either time-consuming or did not consider compromises to solve these conflicts (e.g., they considered only unilaterally imposed approaches). We propose a negotiation mechanism for users to agree on a compromise for these conflicts. The second challenge we address in this article relates to the exponential complexity of such a negotiation mechanism. To address this, we propose heuristics that reduce the complexity of the negotiation mechanism, and we show, through an extensive experimental evaluation comparing the performance of the negotiation mechanism with and without these heuristics, that substantial benefits can be derived from their use. Moreover, we show that one such heuristic makes the negotiation mechanism produce near-optimal results fast enough to be used in actual social media infrastructures.
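To see where the exponential complexity comes from, consider this hypothetical contrast, which is not the article's algorithm: exhaustive negotiation searches every joint outcome, growing as the number of actions raised to the number of conflicts, while a per-conflict heuristic settles each conflict independently in linear time. The action set and utility interface are invented for illustration.

```python
# Hypothetical illustration of exponential vs. heuristic negotiation.
from itertools import product

ACTIONS = ("grant", "deny")

def exhaustive(conflicts, utility):
    """Search all len(ACTIONS) ** len(conflicts) joint outcomes."""
    return max(product(ACTIONS, repeat=len(conflicts)),
               key=lambda outcome: utility(conflicts, outcome))

def per_conflict_heuristic(conflicts, utility):
    """Settle each conflict independently: linear in len(conflicts)."""
    return tuple(max(ACTIONS, key=lambda a: utility([c], (a,)))
                 for c in conflicts)

def utility(conflicts, outcome):
    """Toy utility: weight earned when the chosen action matches the
    conflict's preferred action."""
    return sum(w for (_, pref, w), a in zip(conflicts, outcome) if a == pref)

# Example: two conflicts, each with a preferred action and a weight.
conflicts = [("photo1", "deny", 3), ("photo2", "grant", 1)]
print(exhaustive(conflicts, utility))              # ('deny', 'grant')
print(per_conflict_heuristic(conflicts, utility))  # ('deny', 'grant')
```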
Social computing privacy and online relationships
Social computing has revolutionized interpersonal communication. It has introduced social relationships that people can use to communicate with the vast spectrum of their contacts. However, the major Online Social Networks (OSNs) have been found to fall short of appropriately accommodating these relationships in their privacy controls, which leads to undesirable consequences for users. This paper highlights some of the shortcomings of the OSNs with respect to their handling of social relationships and enumerates numerous challenges that need to be overcome in order to provide users with a truly social experience.
Towards implicit contextual integrity
Many real incidents demonstrate that users of Online Social Networks need mechanisms that help them manage their interactions: mechanisms that increase their awareness of the different contexts that coexist in Online Social Networks, prevent them from exchanging inappropriate information in those contexts, and stop sensitive information from being disseminated from some contexts to others. Contextual Integrity is a privacy theory that expresses the appropriateness of information sharing based on the contexts in which this information is to be shared. Computational models of Contextual Integrity assume the existence of well-defined contexts, in which individuals enact pre-defined roles and information sharing is governed by an explicit set of norms. However, contexts in Online Social Networks are known to be implicit, unknown a priori, and ever changing; users’ relationships are constantly evolving; and the norms for information sharing are implicit. This makes current Contextual Integrity models unsuitable for Online Social Networks. This position paper highlights the limitations of current research in tackling the problem of exchanging inappropriate information and the undesired dissemination of information, and it outlines the desiderata for a new vision that we call Implicit Contextual Integrity.
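For contrast with the implicit setting argued for here, a minimal sketch of what an explicit Contextual Integrity check presupposes: known contexts, pre-defined roles, and an explicit norm set, none of which exist a priori in an Online Social Network. The norm structure and all names below are our own illustrative assumptions, not a model from the paper.

```python
# Sketch of an *explicit* Contextual Integrity check, to make concrete
# what implicit CI would instead have to infer.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    context: str        # e.g. "workplace"
    sender_role: str    # role the sender enacts in that context
    receiver_role: str  # role the receiver enacts in that context
    info_type: str      # e.g. "health"
    allowed: bool

def appropriate(norms, flow):
    """A flow (context, sender_role, receiver_role, info_type) is
    appropriate only if some explicit norm permits it."""
    return any(n.allowed and
               (n.context, n.sender_role,
                n.receiver_role, n.info_type) == flow
               for n in norms)

norms = [Norm("workplace", "employee", "manager", "project", True)]
print(appropriate(norms, ("workplace", "employee", "manager", "health")))
# False: no explicit norm permits sharing health data in this context.
```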
How socially aware are social media privacy controls?
Social media sites are key mediators of online communication. Yet the privacy controls for these sites are not fully socially aware, even when privacy management is known to be fundamental to successful social relationships.