    Emerging privacy challenges and approaches in CAV systems

    The growth of Internet-connected devices, Internet-enabled services and Internet of Things systems continues at a rapid pace, and their application to transport systems is heralded as game-changing. Numerous developing CAV (Connected and Autonomous Vehicle) functions, such as traffic planning, optimisation, management, safety-critical and cooperative autonomous driving applications, rely on data from various sources. The efficacy of these functions is highly dependent on the dimensionality, amount and accuracy of the data being shared. It holds, in general, that the greater the amount of data available, the greater the efficacy of the function. However, much of this data is privacy-sensitive, including personal, commercial and research data. Location data and its correlation with identity and temporal data can help infer other personal information, such as home/work locations, age, job, behavioural features, habits and social relationships. This work categorises the emerging privacy challenges and solutions for CAV systems and identifies the knowledge gaps for future research that will minimise and mitigate privacy concerns without hampering the efficacy of the functions.
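
    To make the inference risk concrete, here is a minimal sketch of the kind of attack the abstract describes: guessing home and work locations from nothing but a timestamped location trace. The coordinates, grid resolution and hour thresholds below are illustrative assumptions, not taken from the paper.

from collections import Counter
from datetime import datetime

# Each record: (ISO timestamp, latitude, longitude) -- a hypothetical trace.
trace = [
    ("2023-05-01T23:30:00", 51.501, -0.142),
    ("2023-05-02T03:10:00", 51.501, -0.142),
    ("2023-05-02T10:15:00", 51.515, -0.098),
    ("2023-05-02T14:40:00", 51.515, -0.098),
    ("2023-05-03T00:05:00", 51.501, -0.142),
]

def round_cell(lat, lon, places=3):
    """Bucket coordinates into a coarse grid cell (roughly 100 m at 3 decimals)."""
    return (round(lat, places), round(lon, places))

night, work = Counter(), Counter()
for ts, lat, lon in trace:
    hour = datetime.fromisoformat(ts).hour
    cell = round_cell(lat, lon)
    if hour >= 22 or hour < 6:   # night-time dwell -> likely home
        night[cell] += 1
    elif 9 <= hour < 17:         # business hours -> likely workplace
        work[cell] += 1

print("inferred home:", night.most_common(1)[0][0])
print("inferred work:", work.most_common(1)[0][0])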

    Algorithms that Remember: Model Inversion Attacks and Data Protection Law

    Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature around 'model inversion' and 'membership inference' attacks, which indicates that the process of turning training data into machine-learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
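
    As a concrete illustration of the attack class the paper surveys, the sketch below shows a simple confidence-threshold membership inference test: models are typically more confident on their training examples, so high confidence on the true label is evidence of membership. The Beta-distributed confidences and the 0.7 threshold are hypothetical stand-ins, not the paper's experiment.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example confidences an attacker observed by querying a
# model: members (training data) skew higher than non-members (unseen data).
member_conf = rng.beta(8, 2, size=1000)
nonmember_conf = rng.beta(4, 4, size=1000)

def infer_membership(confidence, threshold=0.7):
    """Guess 'member' when the model's confidence on the true label is high."""
    return confidence >= threshold

guesses_m = infer_membership(member_conf)
guesses_n = infer_membership(nonmember_conf)
# Balanced attack accuracy: members flagged plus non-members cleared.
accuracy = 0.5 * (guesses_m.mean() + (1 - guesses_n.mean()))
print(f"attack accuracy vs. 0.5 random baseline: {accuracy:.2f}")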

    How to Balance Privacy and Money through Pricing Mechanism in Personal Data Market

    A personal data market is a platform with three participants: data owners (individuals), data buyers and a market maker. Data owners who provide personal data are compensated according to their privacy loss. Data buyers can submit a query and pay for the result according to their desired accuracy. The market maker coordinates between data owners and buyers. This framework has previously been studied on the basis of differential privacy. However, the previous study assumes that data owners can accept any level of privacy loss and that data buyers can conduct transactions without regard to a financial budget. In this paper, we propose a practical personal data trading framework that strikes a balance between money and privacy. To gain insight into user preferences, we first conducted an online survey on attitudes toward privacy and interest in personal data trading. Second, we identify five key principles of a personal data market, which are important for designing a reasonable trading framework and pricing mechanism. Third, we propose a reasonable trading framework for personal data that provides an overview of how the data is traded. Fourth, we propose a balanced pricing mechanism that computes the query price for data buyers and the compensation for data owners (whose data are utilised) as a function of their privacy loss; the main goal is to ensure fair trading for both parties. Finally, we conduct an experiment to evaluate the output of our proposed pricing mechanism in comparison with other previously proposed mechanisms.
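
    The sketch below is a generic, simplified instance of differential-privacy-based pricing, not the paper's balanced mechanism: the buyer's accuracy target fixes the privacy loss epsilon through the Laplace mechanism's noise variance, the query price scales with total privacy loss, and owners whose data is used split the post-fee revenue. All rates, fees and pricing functions here are hypothetical.

import math
import numpy as np

rng = np.random.default_rng(0)

def epsilon_for_variance(target_variance, sensitivity=1.0):
    """Laplace noise of scale b has variance 2*b**2, and b = sensitivity/eps."""
    b = math.sqrt(target_variance / 2.0)
    return sensitivity / b

def price_query(epsilon, n_owners, rate_per_unit=0.05):
    """Buyer pays in proportion to total privacy loss (hypothetical rate)."""
    return rate_per_unit * epsilon * n_owners

def compensate_owners(price, n_owners, market_fee=0.10):
    """Split the post-fee revenue equally among owners whose data was used."""
    return (price * (1.0 - market_fee)) / n_owners

def answer_query(true_count, epsilon, sensitivity=1.0):
    """Laplace mechanism: release the count with noise of scale sensitivity/eps."""
    return true_count + rng.laplace(0.0, sensitivity / epsilon)

eps = epsilon_for_variance(target_variance=8.0)  # buyer's accuracy target
price = price_query(eps, n_owners=100)
print(f"epsilon = {eps:.3f}, query price = {price:.2f}")
print(f"noisy answer: {answer_query(true_count=42, epsilon=eps):.2f}")
print(f"per-owner compensation: {compensate_owners(price, 100):.4f}")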