30 research outputs found

    Audit quality and information asymmetry between traders

    No full text
In this study, we investigate the association between audit quality and information asymmetry between informed and uninformed traders. We employ three proxies for information asymmetry - absolute price differences, absolute volatility differences, and absolute differences in the long/short ratio of trades - between US stock and options markets, and represent audit quality through the appointment of Big N and industry specialist auditors. For a sample of 4,062 firm-years from 2002 to 2005, our results indicate that the appointment of Big N and industry specialist auditors is associated with lower information asymmetry measures. Our results are consistent with audit quality playing a role in the quality of financial reporting information, which flows through to the allocation of information among traders. © 2011 AFAANZ

    Differentially private spatial crowdsourcing

    No full text
In recent years, the popularity of mobile devices has made spatial crowdsourcing (SC) a novel mode for performing complicated projects: workers perform tasks at specified locations in return for rewards offered by requesters. Existing methods ensure the efficiency of their systems by submitting workers’ exact locations to a centralized server for task assignment, which can lead to privacy violations. Implementing crowdsourcing applications while preserving the privacy of workers’ locations is therefore a key issue. During task assignment and task reporting, workers and requesters are usually required to reveal their locations to potentially untrustworthy entities such as the SC-server, other workers and other requesters, or the server may collect and release the location data of workers and requesters for further analysis, leading to possible privacy breaches. In recent years there have been a number of proposals to provide privacy-preserving capability for SC applications, such as allowing the release of spatial datasets while preserving privacy. This chapter first surveys current attempts to solve the location privacy problem in SC, and then presents a novel method for reward-based SC with a differential privacy guarantee. A reward allocation mechanism is proposed that adjusts each piece of the reward for a task using the distribution of the workers’ locations. Experimental results show that the optimized-reward method is efficient for spatial crowdsourcing applications.
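The abstract does not spell out the reward allocation mechanism, so the following is only a hypothetical sketch of the general idea: perturb per-region worker counts with Laplace noise (so the server never sees exact worker distributions) and offer larger rewards where workers appear scarce. The region names, `base_reward` parameter and inverse-density rule are all illustrative assumptions, not the book's actual algorithm.

```python
import random

def noisy_region_rewards(worker_counts, base_reward, epsilon):
    """Hypothetical sketch, not the book's exact mechanism: perturb each
    region's worker count with Laplace noise (sampled as the difference of
    two exponentials), then offer a larger per-task reward in regions where
    fewer workers appear to be available."""
    rewards = {}
    for region, count in worker_counts.items():
        # Lap(1/epsilon) noise on a count query of sensitivity 1.
        noisy = count + random.expovariate(epsilon) - random.expovariate(epsilon)
        noisy = max(noisy, 1.0)  # guard against tiny or negative noisy counts
        rewards[region] = base_reward / noisy
    return rewards

# Illustrative usage: region "B" looks under-served, so it draws a higher reward.
rewards = noisy_region_rewards({"A": 10, "B": 2}, base_reward=100.0, epsilon=1.0)
```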

    Differentially private data publishing: Non-interactive setting

    No full text
This chapter presents the non-interactive setting in data publishing, including batch query publishing, contingency table publishing and synthetic dataset publishing. In the non-interactive setting, all queries are given to the curator at one time. The key challenge for non-interactive publishing is measuring sensitivity: correlation between queries can dramatically increase it. Two possible remedies are presented: one decomposes the correlation between batch queries, and the other publishes a synthetic dataset under the constraint of differential privacy to answer the proposed queries. Related methods are presented in the synthetic dataset publishing sections.
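To make the sensitivity problem concrete, here is a minimal sketch of batch publishing with the Laplace mechanism. The key point is that the noise scale is driven by the L1 sensitivity of the whole batch: if one record can shift every one of k correlated counting queries by 1, the sensitivity is k, not 1. The specific queries and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_batch(true_answers, l1_sensitivity, epsilon):
    """Answer a batch of queries under epsilon-differential privacy.

    The per-query noise scale is l1_sensitivity / epsilon, where the L1
    sensitivity is the maximum total change across ALL answers when one
    record is added or removed. Correlated queries inflate this value,
    which is the non-interactive setting's central difficulty."""
    scale = l1_sensitivity / epsilon
    return true_answers + rng.laplace(0.0, scale, size=len(true_answers))

# Hypothetical batch: 5 counting queries that each touch every record,
# so a single record can shift every answer by 1 and the L1 sensitivity is 5.
true = np.array([120.0, 80.0, 60.0, 45.0, 30.0])
noisy = laplace_batch(true, l1_sensitivity=5.0, epsilon=1.0)
```

Decomposing the correlation between the queries (the first remedy the chapter mentions) would let the same batch be answered with a smaller effective sensitivity and hence less noise.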

    Differential Privacy and Applications

    No full text

    Differentially private data publishing: Interactive setting

    No full text
Interactive settings operate on various types of input data, including transactions, histograms, streams and graph datasets. This chapter discusses publishing scenarios involving these types of input data. In the interactive setting, the privacy mechanism receives a user’s query and replies with a noisy answer to preserve privacy. The traditional Laplace mechanism can only answer a number of queries sublinear in n, which is insufficient in many scenarios. Different mechanisms are discussed to address this essential weakness.
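The query limit can be seen in a minimal sketch of an interactive curator (an illustrative toy, not the book's construction): each Laplace-noised answer spends part of a fixed privacy budget, and once the budget is exhausted the curator must refuse further queries.

```python
import random

class InteractiveCurator:
    """Toy interactive curator: answers counting queries with Laplace noise
    and tracks a total privacy budget via sequential composition."""

    def __init__(self, data, total_epsilon):
        self.data = data
        self.remaining = total_epsilon

    def count(self, predicate, epsilon):
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon
        true_answer = sum(1 for x in self.data if predicate(x))
        # A counting query has sensitivity 1, so the Laplace noise scale is
        # 1/epsilon; the difference of two Exp(epsilon) draws is Lap(1/epsilon).
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_answer + noise
```

With a fixed total budget, only a limited number of usefully accurate answers can be issued before `count` starts refusing, which is the weakness the chapter's alternative mechanisms target.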

    Differentially location privacy

    No full text
The Global Positioning System (GPS) module has become a de facto standard in cell phones and many other mobile devices in recent years, driving the boom in location-based services (LBSs), which provide a variety of information services based on location data. Because LBS providers require permission to collect and access users’ personal location data, severe privacy concerns arise at the same time. Effective privacy preservation is therefore foremost for LBS applications. This chapter presents three methods that apply differential privacy to achieve location privacy for LBSs: the geo-indistinguishability method, the synthetic differentially private trajectory publishing method, and the hierarchical location data publishing method, with an emphasis on the last one. The core of the hierarchical location data publishing method is a private location release algorithm called PriLocation. Three private operations - private location clustering, cluster weight perturbation and private location selection - are used by the algorithm to ensure that no individual in the released dataset can be re-identified by an adversary.
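Of the three methods, geo-indistinguishability has a particularly compact core that can be sketched: report a location perturbed by planar Laplace noise, whose density decays exponentially with distance from the true point. A standard way to sample it is a uniform angle plus a Gamma(2, 1/epsilon)-distributed radius (the sum of two exponentials). The sketch below treats coordinates as a flat plane in arbitrary units, ignoring the latitude/longitude geometry a real implementation must handle.

```python
import math
import random

def planar_laplace(x, y, epsilon):
    """Sketch of planar Laplace noise for geo-indistinguishability:
    uniform angle, radius ~ Gamma(2, 1/epsilon) sampled as the sum of
    two Exp(epsilon) draws. Coordinates are treated as planar units;
    mapping to real latitude/longitude is out of scope here."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.expovariate(epsilon) + random.expovariate(epsilon)
    return x + r * math.cos(theta), y + r * math.sin(theta)

# Illustrative usage: report a perturbed point instead of the true one.
reported = planar_laplace(48.85, 2.35, epsilon=0.1)
```

Smaller `epsilon` yields larger expected displacement (mean radius 2/epsilon) and hence stronger indistinguishability within a given radius.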

    Differentially private data publishing: Settings and mechanisms

    No full text
Differentially private data publishing aims to release aggregate information to the public without disclosing any individual’s record. Two settings, interactive and non-interactive, are involved in this publishing scenario. In the interactive setting, a query cannot be issued until the answer to the previous query has been published. In the non-interactive setting, all queries are given to the curator at one time, so the curator can provide answers with full knowledge of the query set. This chapter focuses on the interactive setting in data publishing.

    Privacy preserving for tagging recommender systems

    No full text
Tagging recommender systems let users annotate resources with personalized tags, making it easy to find suitable tags for a resource. They combine the automation of traditional recommender systems with the flexibility of tagging systems. A large collection of data has been generated by social network web sites with tagging recommender systems over the last few years, yet the issue of privacy in the recommendation process has generally been overlooked. An adversary with background information may re-identify a particular user in a tagging dataset and obtain the user’s historical tagging records. Compared to general recommender systems, the privacy problem in tagging recommender systems is more complicated due to their unique structure and semantic content. This chapter focuses on dataset release for tagging recommender systems and utilizes differential privacy to prevent the leakage of private information when the dataset is released. A private tagging release algorithm is presented that provides comprehensive privacy-preserving capability for individuals while maximizing the utility of the released dataset. The algorithm offers a tailored differential privacy mechanism that optimizes recommendation performance at a fixed level of privacy.

    Future directions and conclusion

    No full text
While the previous chapters provide a thorough description of differential privacy and present several real-world applications, many interesting and promising issues remain unexplored. The development of social networks provides great opportunities for privacy-preserving research but also presents a challenge in effectively utilizing the large volume of data. Other topics still need to be considered in differential privacy; we discuss a few directions, including adaptive data analysis, personalized privacy, multiparty computation, differentially private mechanism design, private genetic data, local differential privacy and learning model publishing.

    Differentially private deep learning

    No full text
In recent years, deep learning has rapidly become one of the most successful approaches to machine learning. Its essential idea is to apply a multi-layer structure to extract complex features from high-dimensional data and use those features to build models. However, deep learning models are susceptible to several types of attacks. For example, a centralized collection of photos, speech and video clips from millions of individuals can pose privacy risks when shared with others, and learning models themselves can disclose sensitive information. Integrating differential privacy into deep learning raises two challenges: high sensitivity and a limited privacy budget. This chapter first presents the traditional Laplace method and illustrates its limitations, and then presents the private SGD method, the deep private auto-encoder algorithm and distributed private SGD. Each focuses on a particular deep learning algorithm and deals with the two challenges in a different way. Finally, this chapter lists several popular datasets that can be used in differentially private deep learning.
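The two challenges show up directly in the private SGD recipe (in the style of Abadi et al.'s DP-SGD, not necessarily the chapter's exact variant): clip each per-example gradient to bound the sensitivity of one training example, then add Gaussian noise calibrated to the clipping norm. The sketch below works on plain NumPy arrays; the parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr):
    """One differentially private SGD step in the DP-SGD style:
    clip each per-example gradient (bounding sensitivity), average,
    and add Gaussian noise scaled to the clipping norm."""
    clipped = []
    for g in per_example_grads:
        norm = max(np.linalg.norm(g), 1e-12)  # guard against zero gradients
        clipped.append(g * min(1.0, clip_norm / norm))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

Clipping controls the high-sensitivity challenge; the limited-budget challenge appears because every noisy step consumes privacy, so the total epsilon must be accounted for across all training iterations.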