
    Tag detection for preventing unauthorized face image processing

    A new technology is proposed as a solution to the problem of unintentional face detection and recognition in pictures, allowing the individuals who appear in them to express their privacy preferences through the use of different tags. Existing methods for face de-identification have mostly been ad hoc solutions that offer only an absolute, binary treatment of privacy, such as pixelation or a bar mask. As social networks and their user bases grow, our privacy preferences may become more complex, rendering these absolute binary solutions obsolete. The proposed technology overcomes this problem by embedding information in a tag placed close to the face without being disruptive. Through a decoding method, the tag provides the preferences to be applied to the images in further processing stages.
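    The abstract leaves the tag's encoding unspecified, so the following is only a minimal sketch under assumed details (the preference names and both function names are hypothetical): privacy preferences are packed into a small bitfield that a machine-readable tag placed near the face could carry, and the decoder recovers the flags before any further face processing is applied.

    ```python
    # Hypothetical preference flags; the paper does not specify these.
    PREFS = ["allow_detection", "allow_recognition", "allow_tagging", "allow_sharing"]

    def encode_preferences(prefs: dict) -> int:
        """Pack boolean preferences into a small integer payload for the tag."""
        payload = 0
        for bit, name in enumerate(PREFS):
            if prefs.get(name, False):
                payload |= 1 << bit
        return payload

    def decode_preferences(payload: int) -> dict:
        """Recover the preference flags from a decoded tag payload."""
        return {name: bool(payload >> bit & 1) for bit, name in enumerate(PREFS)}

    if __name__ == "__main__":
        wanted = {"allow_detection": True, "allow_recognition": False,
                  "allow_tagging": False, "allow_sharing": True}
        payload = encode_preferences(wanted)
        assert decode_preferences(payload) == wanted
        print(f"tag payload: {payload:04b}")
    ```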

    Privacy in Public and the contextual conditions of agency

    Current technology and surveillance practices make behaviors traceable to persons in unprecedented ways. This causes a loss of anonymity and of many privacy measures relied on in the past. These de facto privacy losses are seen by many as problematic for individual psychology, intimate relations and democratic practices such as free speech and free assembly. I share most of these concerns but propose that an even more fundamental problem might be that our very ability to act as autonomous and purposive agents relies on some degree of privacy, perhaps particularly as we act in public and semi-public spaces. I suggest that basic issues concerning action choices have been left largely unexplored, due to a series of problematic theoretical assumptions at the heart of privacy debates. One such assumption has to do with the influential conceptualization of privacy as pertaining to personal intimate facts belonging to a private sphere, as opposed to a public sphere of public facts. As Helen Nissenbaum has pointed out, the notion of privacy in public sounds almost like an oxymoron given this traditional private-public dichotomy. I discuss her important attempt to defend privacy in public through her concept of ‘contextual integrity.’ Context is crucial, but Nissenbaum’s descriptive notion of existing norms seems to fall short of a solution. I here agree with Joel Reidenberg’s recent worries regarding any approach that relies on ‘reasonable expectations.’ The problem is that in many current contexts we have no such expectations. Our contexts have already lost their integrity, so to speak. By way of a functional and more biologically inspired account, I analyze the relational and contextual dynamics of both privacy needs and harms. Through an understanding of action choice as situated, and of options and capabilities as relational, a more consequence-oriented notion of privacy begins to appear. I suggest that privacy needs, harms and protections are relational. Privacy might have less to do with seclusion and absolute transactional control than hitherto thought. It might instead hinge on capacities to limit the social consequences of our actions through knowing and shaping our perceptible agency and social contexts of action. To act with intent we generally need the ability to conceal during exposure. If this analysis is correct, then relational privacy is an important condition for autonomous, purposive and responsible agency, particularly in public space. Overall, this chapter offers a first stab at a reconceptualization of our privacy needs as relational to contexts of action. In terms of ‘rights to privacy’ this means that we should expand our view from the regulation and protection of the information of individuals to questions about the kinds of contexts we are creating. I am here particularly interested in what I call ‘unbounded contexts’, i.e. cases of context collapse, hidden audiences and even unknowable future agents.

    The Washroom Game

    This article analyses a game where players sequentially choose either to become insiders and pick one of finitely many locations, or to remain outsiders. They will only become insiders if a minimum distance to the nearest player can be assured; their secondary objective is to maximise the minimal distance to other players. This is illustrated by considering the strategic behaviour of men choosing from a set of urinals in a public lavatory. Besides very similar situations (e.g. the settling of residents in a newly developed area, the selection of food patches by foraging animals, or choosing seats in waiting rooms or lanes in a swimming pool), the game might also be relevant to the problem of placing billboards to catch the attention of passers-by, and to similar economic situations. In the non-cooperative equilibrium, all insiders behave as if they cooperated with each other and minimised the total number of insiders. It is shown that strategic behaviour leads to an equilibrium with substantial underutilization of the available locations; increasing the number of locations tends to decrease utilization. The removal of some locations, creating gaps, can increase not only relative utilization but even absolute maximum capacity.
    Keywords: efficient design of facilities; location games; privacy concerns; strategic entry prevention; unfriendly seating arrangement; urinal problem
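    As a toy illustration of the strategic underutilization described above (my own simplified simulation, not the authors' formal model), the sketch below lets players arrive one by one: a player enters only if some free location keeps a minimum gap to every insider, and then picks a location maximising the minimal distance to the others.

    ```python
    # Sequential location choice with a hard minimum gap (min_gap=2 forbids
    # adjacent locations, as in the urinal illustration).

    def play(n_locations: int, min_gap: int = 2) -> list:
        occupied = []
        while True:
            # locations that still respect the minimum gap to every insider
            feasible = [x for x in range(n_locations)
                        if all(abs(x - y) >= min_gap for y in occupied)]
            if not feasible:
                return sorted(occupied)  # everyone else stays an outsider
            # secondary objective: maximise the minimal distance to the others
            occupied.append(max(feasible, key=lambda x: min(
                (abs(x - y) for y in occupied), default=n_locations)))

    for n in range(3, 10):
        insiders = play(n)
        print(f"{n} locations -> insiders at {insiders} ({len(insiders)}/{n} used)")
    ```

    With seven locations this greedy play seats insiders at 0, 3 and 6, whereas coordinated seating at 0, 2, 4 and 6 would fit four, illustrating how sequential self-interested choices leave capacity unused.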

    Approximating Hereditary Discrepancy via Small Width Ellipsoids

    The discrepancy of a hypergraph is the minimum attainable value, over two-colorings of its vertices, of the maximum absolute imbalance of any hyperedge. The hereditary discrepancy of a hypergraph, defined as the maximum discrepancy of a restriction of the hypergraph to a subset of its vertices, is a measure of its complexity. Lovasz, Spencer and Vesztergombi (1986) related the natural extension of this quantity to matrices to rounding algorithms for linear programs, and gave a determinant-based lower bound on the hereditary discrepancy. Matousek (2011) showed that this bound is tight up to a polylogarithmic factor, leaving open the question of actually computing this bound. Recent work by Nikolov, Talwar and Zhang (2013) showed a polynomial-time $\tilde{O}(\log^3 n)$-approximation to hereditary discrepancy, as a by-product of their work in differential privacy. In this paper, we give a direct and simple $O(\log^{3/2} n)$-approximation algorithm for this problem. We show that, up to this approximation factor, the hereditary discrepancy of a matrix $A$ is characterized by the optimal value of a simple geometric convex program that seeks to minimize the largest $\ell_\infty$ norm of any point in an ellipsoid containing the columns of $A$. This characterization promises to be a useful tool in discrepancy theory.
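    To make the two quantities concrete, here is a brute-force check of the definitions on a tiny incidence matrix (exponential time, toy instances only; the paper's point is a polynomial-time approximation, which this sketch does not attempt):

    ```python
    from itertools import combinations, product

    def disc(rows):
        """Discrepancy: min over +/-1 colourings of the max absolute row imbalance."""
        n = len(rows[0])
        return min(max(abs(sum(a * s for a, s in zip(row, signs))) for row in rows)
                   for signs in product((-1, 1), repeat=n))

    def herdisc(rows):
        """Hereditary discrepancy: max discrepancy over all column restrictions."""
        n = len(rows[0])
        return max(disc([[row[j] for j in cols] for row in rows])
                   for k in range(1, n + 1)
                   for cols in combinations(range(n), k))

    # incidence matrix of a small hypergraph: 3 hyperedges over 4 vertices
    A = [[1, 1, 0, 0],
         [0, 1, 1, 0],
         [1, 1, 1, 1]]
    print(disc(A), herdisc(A))  # disc is 0, but herdisc is 1: restrictions matter
    ```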

    Fixed-Budget Differentially Private Best Arm Identification

    We study best arm identification (BAI) in linear bandits in the fixed-budget regime under differential privacy constraints, when the arm rewards are supported on the unit interval. Given a finite budget $T$ and a privacy parameter $\varepsilon > 0$, the goal is to minimise the error probability in finding the arm with the largest mean after $T$ sampling rounds, subject to the constraint that the policy of the decision maker satisfies a certain $\varepsilon$-differential privacy ($\varepsilon$-DP) constraint. We construct a policy satisfying the $\varepsilon$-DP constraint (called DP-BAI) by proposing the principle of maximum absolute determinants, and derive an upper bound on its error probability. Furthermore, we derive a minimax lower bound on the error probability, and demonstrate that the lower and the upper bounds decay exponentially in $T$, with exponents in the two bounds matching order-wise in (a) the sub-optimality gaps of the arms, (b) $\varepsilon$, and (c) the problem complexity, which is expressible as the sum of two terms: one characterising the complexity of standard fixed-budget BAI (without privacy constraints), and the other accounting for the $\varepsilon$-DP constraint. Additionally, we present some auxiliary results that contribute to the derivation of the lower bound on the error probability. These results, we posit, may be of independent interest and could prove instrumental in proving lower bounds on error probabilities in several other bandit problems. Whereas prior works provide results for BAI in the fixed-budget regime without privacy constraints or in the fixed-confidence regime with privacy constraints, our work fills the gap in the literature by providing results for BAI in the fixed-budget regime under the $\varepsilon$-DP constraint.
    Comment: Accepted to ICLR 202
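    The paper's DP-BAI policy rests on the maximum-absolute-determinants principle, which is not reproduced here. Purely to illustrate the problem setting, the sketch below runs the simplest fixed-budget baseline: pull every arm equally often and report the arm with the largest Laplace-noised empirical mean. With rewards in [0, 1], changing one reward moves an arm's empirical mean over m pulls by at most 1/m, so Laplace noise of scale 1/(m * eps) per released mean suffices for eps-DP.

    ```python
    import math
    import random

    def dp_uniform_bai(means, budget, eps, rng):
        """Pull each arm budget//k times; return the argmax of eps-DP noisy means."""
        k = len(means)
        m = budget // k
        noisy = []
        for mu in means:
            # Bernoulli rewards keep the support inside [0, 1], as in the setting above
            mean = sum(rng.random() < mu for _ in range(m)) / m
            # Laplace(0, 1/(m*eps)) noise via inverse-CDF sampling
            u = rng.random() - 0.5
            noise = -(1.0 / (m * eps)) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
            noisy.append(mean + noise)
        return max(range(k), key=noisy.__getitem__)

    hits = sum(dp_uniform_bai([0.9, 0.5, 0.4], budget=600, eps=1.0,
                              rng=random.Random(seed)) == 0 for seed in range(200))
    print(f"best arm identified in {hits}/200 runs")
    ```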

    An Adaptive Mechanism for Accurate Query Answering under Differential Privacy

    We propose a novel mechanism for answering sets of counting queries under differential privacy. Given a workload of counting queries, the mechanism automatically selects a different set of "strategy" queries to answer privately, using those answers to derive answers to the workload. The main algorithm proposed in this paper approximates the optimal strategy for any workload of linear counting queries. With no cost to the privacy guarantee, the mechanism improves significantly on prior approaches and achieves near-optimal error for many workloads when applied under $(\epsilon, \delta)$-differential privacy. The result is an adaptive mechanism that can help users achieve good utility without requiring them to reason carefully about the best formulation of their task.
    Comment: VLDB 2012. arXiv admin note: substantial text overlap with arXiv:1103.136
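    The paper's contribution is the adaptive choice of strategy, which is not reproduced here. The sketch below only illustrates the strategy-matrix template it builds on, with the strategy A fixed by hand and pure epsilon-DP Laplace noise in place of the paper's (epsilon, delta) analysis: answer the strategy queries with noise calibrated to the L1 sensitivity of A, then derive workload answers by least squares.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.array([10., 20., 30., 40.])           # true counts, one per cell

    W = np.array([[1, 1, 0, 0],                  # workload: range queries
                  [0, 0, 1, 1],
                  [1, 1, 1, 1]], dtype=float)
    A = np.array([[1, 1, 0, 0],                  # hand-picked strategy queries
                  [0, 0, 1, 1]], dtype=float)

    eps = 1.0
    sensitivity = np.abs(A).sum(axis=0).max()    # L1 sensitivity: max column sum
    y = A @ x + rng.laplace(scale=sensitivity / eps, size=A.shape[0])

    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # consistent estimate of the counts
    print("noisy workload answers:", W @ x_hat)
    print("true workload answers: ", W @ x)
    ```

    Because every workload answer is derived from the two noisy strategy answers, the third (sum) query inherits their noise rather than consuming extra privacy budget, which is the intuition behind choosing the strategy well.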