
    How to Balance Privacy and Money through Pricing Mechanism in Personal Data Market

    A personal data market is a platform with three participants: data owners (individuals), data buyers, and a market maker. Data owners who provide personal data are compensated according to their privacy loss. Data buyers submit a query and pay for the result according to their desired accuracy. The market maker coordinates between owners and buyers. This framework has previously been studied on the basis of differential privacy. However, the previous study assumes that data owners can accept any level of privacy loss and that data buyers can transact without regard to a financial budget. In this paper, we propose a practical personal data trading framework that strikes a balance between money and privacy. To gain insight into user preferences, we first conducted an online survey on attitudes toward privacy and interest in personal data trading. Second, we identify five key principles of the personal data market that are important for designing a reasonable trading framework and pricing mechanism. Third, we propose a trading framework for personal data that provides an overview of how the data is traded. Fourth, we propose a balanced pricing mechanism that computes the query price for data buyers and the compensation for data owners (whose data are utilized) as a function of their privacy loss. The main goal is to ensure fair trading for both parties. Finally, we conduct an experiment to evaluate the output of our proposed pricing mechanism in comparison with previously proposed mechanisms.
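    The abstract does not spell out the pricing formulas, but the way both the query price and the owners' compensation can depend on privacy loss is easy to illustrate with a minimal sketch under the standard Laplace mechanism of differential privacy. Everything below is an assumption for illustration: the function names, the market-maker margin, and the owners' self-declared unit prices for privacy loss are hypothetical, not the paper's mechanism.

```python
import math

def epsilon_for_variance(sensitivity: float, target_variance: float) -> float:
    """Laplace mechanism: noise with scale b = sensitivity / epsilon has
    variance 2 * b**2, so a buyer's accuracy target determines epsilon."""
    b = math.sqrt(target_variance / 2.0)
    return sensitivity / b

def query_price(epsilon: float, unit_prices: list, margin: float = 0.05) -> float:
    """Hypothetical balanced price: the total compensation owed to the
    data owners plus a market-maker margin. Owner i is paid c_i * epsilon,
    where c_i is a self-declared unit price per unit of privacy loss."""
    compensation = sum(c * epsilon for c in unit_prices)
    return compensation * (1.0 + margin)

# A buyer requests a count query (sensitivity 1) with variance at most 8.
eps = epsilon_for_variance(sensitivity=1.0, target_variance=8.0)   # eps = 0.5
print(query_price(eps, unit_prices=[1.0, 2.0, 0.5]))               # 1.8375
```

    In the paper's setting one would additionally cap each owner's cumulative privacy loss and reject queries that exceed the buyer's budget; those checks are omitted from this sketch.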

    Finding Top-k Dominance on Incomplete Big Data Using Map-Reduce Framework

    Incomplete data is a major kind of multi-dimensional dataset, with missing values randomly distributed across its dimensions. Retrieving information from this type of dataset becomes very difficult once it grows large, and finding top-k dominant values in it is a challenging procedure. Several algorithms exist to improve this process, but most are efficient only on small-scale incomplete data. One algorithm that makes top-k dominating (TKD) queries feasible is the Bitmap Index Guided (BIG) algorithm. BIG substantially improves performance on incomplete data, but it was neither designed for nor capable of finding top-k dominant values in incomplete big data. Other algorithms, such as the Skyband Based and Upper Bound Based algorithms, have been proposed to answer TKD queries, but their performance is also questionable. These earlier algorithms were among the first attempts to apply TKD queries to incomplete data, yet all of them performed poorly or were not compatible with incomplete data. This thesis proposes the MapReduced Enhanced Bitmap Index Guided Algorithm (MRBIG) to address these issues. MRBIG uses the MapReduce framework, with multiple computing nodes, to speed up top-k dominance queries on huge incomplete datasets: the framework splits the work across several nodes that independently and simultaneously work toward the result. In answering TKD queries, this method achieves up to twice the processing speed of previously presented algorithms.
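    As a rough, hypothetical illustration of the map/reduce split described above (not the MRBIG algorithm itself, and without the bitmap index that gives BIG its efficiency), the following brute-force sketch counts, in parallelizable map tasks, how many records each object dominates under the usual dominance rule for incomplete data, then merges the partial counts in a reduce step:

```python
from collections import Counter
from functools import reduce

# Each record is a tuple; None marks a missing value.
def dominates(p, q):
    """p dominates q on the dimensions where both are observed:
    p is >= q everywhere and > q at least once (larger is better)."""
    shared = [(a, b) for a, b in zip(p, q) if a is not None and b is not None]
    return bool(shared) and all(a >= b for a, b in shared) \
        and any(a > b for a, b in shared)

def map_partition(partition, dataset):
    """Map task: count, for every object in the full dataset, how many
    records in this partition it dominates."""
    counts = Counter()
    for i, p in enumerate(dataset):
        counts[i] = sum(dominates(p, q) for q in partition)
    return counts

def reduce_counts(c1, c2):
    """Reduce task: merge partial dominance counts."""
    c1.update(c2)
    return c1

data = [(3, None, 5), (1, 2, None), (None, 1, 4), (2, 2, 2)]
partitions = [data[:2], data[2:]]           # would live on separate nodes
partials = [map_partition(part, data) for part in partitions]
totals = reduce(reduce_counts, partials)
print(totals.most_common(2))                # TKD result for k = 2
```

    The real algorithm replaces the quadratic dominance test with bitmap-index lookups; this sketch only shows how the counting decomposes across independent nodes.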

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks, and there is growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned in running physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies of executing workflows both in the cloud and on dedicated resources. Comment: 15 pages, 9 figures.
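    The abstract does not describe the provisioning machinery, but as a minimal sketch of acquiring Elastic Compute Cloud capacity programmatically, here is how one might request spot instances with boto3. The bid, instance count, AMI ID, and instance type are placeholders, not HEPCloud's configuration:

```python
import boto3

# All values below are illustrative placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.request_spot_instances(
    SpotPrice="0.10",               # maximum bid, USD per instance-hour
    InstanceCount=100,              # burst capacity for a physics workflow
    LaunchSpecification={
        "ImageId": "ami-00000000",  # worker-node image (placeholder)
        "InstanceType": "c4.xlarge",
    },
)
request_ids = [r["SpotInstanceRequestId"]
               for r in response["SpotInstanceRequests"]]
print(request_ids)
```

    Spot pricing is one place where the cloud-versus-dedicated economics mentioned above become concrete: capacity is cheaper than on-demand but can be reclaimed, so workflows must tolerate preemption.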

    A Unifying Hierarchy of Valuations with Complements and Substitutes

    We introduce a new hierarchy over monotone set functions, which we refer to as MPH (Maximum over Positive Hypergraphs). Levels of the hierarchy correspond to the degree of complementarity in a given function. The highest level, MPH-m (where m is the total number of items), captures all monotone functions. The lowest level, MPH-1, captures all monotone submodular functions and, more generally, the class of functions known as XOS. Every monotone function that has a positive hypergraph representation of rank k (in the sense defined by Abraham, Babaioff, Dughmi and Roughgarden [EC 2012]) is in MPH-k. Every monotone function of supermodular degree k (in the sense defined by Feige and Izsak [ITCS 2013]) is in MPH-(k+1). In both cases, the converse direction does not hold, even in an approximate sense. We present additional results that demonstrate the expressive power of MPH-k. One can obtain good approximation ratios for some natural optimization problems, provided that the functions involved lie in low levels of the MPH hierarchy. We present two such applications. One shows that the maximum welfare problem can be approximated within a ratio of k+1 if all players hold valuation functions in MPH-k. The other is an upper bound of 2k on the price of anarchy of simultaneous first price auctions. Membership in MPH-k can be shown to involve two requirements: one is monotonicity, and the other is a certain requirement that we refer to as PLE (Positive Lower Envelope). Removing the monotonicity requirement yields the PLE hierarchy over all non-negative set functions (whether monotone or not), which can be fertile ground for further research.
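    To make the definition concrete: an MPH-k function is a pointwise maximum of positive-hypergraph functions whose hyperedges contain at most k items (with k = 1 recovering XOS as a maximum over non-negative additive functions). The following small sketch, with a valuation invented purely for this example, evaluates such a function on a bundle:

```python
def mph_value(bundle, clauses):
    """Evaluate an MPH-k valuation: the maximum, over clauses, of the
    total weight of hyperedges fully contained in the bundle. Each clause
    maps a hyperedge (frozenset of items) to a weight >= 0; the rank k of
    the function is the largest hyperedge size that appears."""
    s = set(bundle)
    return max(
        sum(w for edge, w in clause.items() if edge <= s)
        for clause in clauses
    )

# Illustrative MPH-2 valuation over items {a, b, c}: the pair {a, b} is
# complementary (worth 3 only together), while a and c are substitutes
# valued additively in the second clause.
clauses = [
    {frozenset({"a", "b"}): 3.0},                      # rank-2 hyperedge
    {frozenset({"a"}): 1.0, frozenset({"c"}): 2.0},    # additive clause
]
print(mph_value({"a", "b"}, clauses))   # 3.0
print(mph_value({"a", "c"}, clauses))   # 3.0 (1 + 2 from the 2nd clause)
print(mph_value({"b", "c"}, clauses))   # 2.0
```

    Taking the maximum over clauses is what pushes the class beyond additive functions while keeping each clause's complementarity bounded by its hyperedge rank.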

    Complexity Theory, Game Theory, and Economics: The Barbados Lectures

    This document collects the lecture notes from my mini-course "Complexity Theory, Game Theory, and Economics," taught at the Bellairs Research Institute of McGill University, Holetown, Barbados, February 19-23, 2017, as the 29th McGill Invitational Workshop on Computational Complexity. The goal of this mini-course is twofold: (i) to explain how complexity theory has helped illuminate several barriers in economics and game theory; and (ii) to illustrate how game-theoretic questions have led to new and interesting complexity theory, including several recent breakthroughs. It consists of two five-lecture sequences: the Solar Lectures, focusing on the communication and computational complexity of computing equilibria; and the Lunar Lectures, focusing on applications of complexity theory in game theory and economics. No background in game theory is assumed. Comment: Revised v2 (December 2019) corrects some errors in v1 and adds some recent citations; revised v3 corrects a few typos in v2.