
    Location Privacy in Spatial Crowdsourcing

    Spatial crowdsourcing (SC) is a new platform that engages individuals in collecting and analyzing environmental, social and other spatiotemporal information. With SC, requesters outsource their spatiotemporal tasks to a set of workers, who will perform the tasks by physically traveling to the tasks' locations. This chapter identifies privacy threats toward both workers and requesters during the two main phases of spatial crowdsourcing, tasking and reporting. Tasking is the process of identifying which tasks should be assigned to which workers; this process is handled by a spatial crowdsourcing server (SC-server). The latter phase is reporting, in which workers travel to the tasks' locations, complete the tasks and upload their reports to the SC-server. The challenge is to enable effective and efficient tasking as well as reporting in SC without disclosing the actual locations of workers (at least until they agree to perform a task) and the tasks themselves (at least to workers who are not assigned to those tasks). This chapter aims to provide an overview of the state of the art in protecting users' location privacy in spatial crowdsourcing. We provide a comparative study of a diverse set of solutions in terms of task publishing modes (push vs. pull), problem focuses (tasking and reporting), threats (server, requester and worker), and underlying technical approaches (from pseudonymity, cloaking, and perturbation to exchange-based and encryption-based techniques). The strengths and drawbacks of the techniques are highlighted, leading to a discussion of open problems and future work.
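
    As a concrete illustration of the perturbation family mentioned above, the following is a minimal Python sketch of the planar Laplace mechanism from the geo-indistinguishability literature, one representative perturbation technique. The epsilon value, the metre-to-degree conversion, and the function name are illustrative assumptions, not a scheme taken from the chapter.

    ```python
    import math
    import random

    from scipy.special import lambertw  # Lambert W, used to invert the radial CDF


    def perturb_location(lat: float, lon: float, epsilon: float) -> tuple[float, float]:
        """Return a noisy (lat, lon): planar Laplace noise centred on the true point."""
        theta = random.uniform(0.0, 2.0 * math.pi)  # displacement direction
        p = random.random()                         # uniform draw for the radial CDF
        # Invert the planar-Laplace radial CDF using the W_{-1} branch of Lambert W.
        r = -(1.0 / epsilon) * (lambertw((p - 1.0) / math.e, k=-1).real + 1.0)
        # Convert the displacement in metres to degrees (rough equirectangular approximation).
        d_lat = (r * math.cos(theta)) / 111_320.0
        d_lon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
        return lat + d_lat, lon + d_lon


    # Example: a worker reports a perturbed position to the SC-server during
    # tasking and reveals the true location only after accepting a task.
    print(perturb_location(34.0224, -118.2851, epsilon=0.005))  # epsilon in 1/metres
    ```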

    Enhancing data privacy and security related process through machine learning

    In this thesis, we exploit the advantages of machine learning (ML) in the domains of data security and data privacy. ML is one of the most exciting technologies being developed in the world today. Its major advantages are its predictive capability and its ability to reduce the need for human effort in performing tasks. These benefits motivated us to exploit ML to improve users' data privacy and security. First, we use ML to predict the best privacy settings for users, since ML has strong predictive ability and the average user may find it difficult to configure privacy settings properly due to a lack of knowledge, and a consequent lack of decision-making ability, regarding the privacy of their data. Second, since ML can considerably cut down on manual human effort, we exploit it to redesign security mechanisms of social media environments that rely on human participation to provide such services. In particular, we use ML to train spam filters that identify and remove violent, insulting, aggressive, and harassing content creators (a.k.a. spammers) from a social media platform. This helps to address the violent and aggressive behaviour that has been growing in social media environments. The experimental results show that our proposals are efficient and effective.
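
    As a rough illustration of the spam-filtering idea, the sketch below trains a toy text classifier with scikit-learn. The tiny corpus, the TF-IDF features, and the logistic-regression model are assumptions chosen for illustration, not the pipeline actually used in the thesis.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled corpus: 1 = violent/harassing ("spammer") content, 0 = benign.
    posts = [
        "you are worthless, get off this site",      # abusive
        "great photo, thanks for sharing",           # benign
        "I will find you and make you regret it",    # abusive
        "does anyone know a good recipe for this?",  # benign
    ]
    labels = [1, 0, 1, 0]

    # TF-IDF features feeding a logistic-regression classifier.
    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    classifier.fit(posts, labels)

    # Probability that a new post is abusive; accounts whose posts repeatedly
    # score high could then be flagged for review or removal.
    print(classifier.predict_proba(["nobody wants you here"])[0][1])
    ```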

    Quantum surveillance and 'shared secrets'. A biometric step too far? CEPS Liberty and Security in Europe, July 2010

    It is no longer sensible to regard biometrics as having neutral socio-economic, legal and political impacts. Newer-generation biometrics are fluid and include behavioural and emotional data that can be combined with other data. Therefore, a range of issues needs to be reviewed in light of the increasing privatisation of ‘security’ that escapes effective, democratic parliamentary and regulatory control and oversight at national, international and EU levels, argues Juliet Lodge, Professor and co-Director of the Jean Monnet European Centre of Excellence at the University of Leeds, UK.

    Foucault in Cyberspace: Surveillance, Sovereignty, and Hardwired Censors

    This is an essay about law in cyberspace. I focus on three interdependent phenomena: a set of political and legal assumptions that I call the jurisprudence of digital libertarianism, a separate but related set of beliefs about the state's supposed inability to regulate the Internet, and a preference for technological solutions to hard legal issues online. I make the familiar criticism that digital libertarianism is inadequate because of its blindness towards the effects of private power, and the less familiar claim that digital libertarianism is also surprisingly blind to the state's own power in cyberspace. In fact, I argue that the conceptual structure and jurisprudential assumptions of digital libertarianism lead its practitioners to ignore the ways in which the state can often use privatized enforcement and state-backed technologies to evade some of the supposed practical (and constitutional) restraints on the exercise of legal power over the Net. Finally, I argue that the technological solutions which provide the keys to the first two phenomena are neither as neutral nor as benign as they are currently perceived to be. Some of my illustrations come from the current Administration's proposals for Internet copyright regulation, others from the Communications Decency Act and the cryptography debate. In the process, I make opportunistic and unsystematic use of the late Michel Foucault's work to criticise some of the jurisprudential orthodoxy of the Net.

    Regulating Access to Adult Content (with Privacy Preservation)

    In the physical world we have well-established mechanisms for keeping children out of adult-only areas. In the virtual world this is generally replaced by self-declaration. Some service providers resort to heavyweight identification mechanisms, judging adulthood as a side effect thereof. Collecting identification data arguably constitutes an unwarranted privacy invasion in this context if carried out merely to estimate adulthood. This paper presents a mechanism that exploits adults' more extensive exposure to public media, relying on the likelihood that they will be able to recall details when cued by a carefully chosen picture. We conducted an online study to gauge the viability of this scheme. With our prototype we were able to predict that a user was a child 99% of the time. Unfortunately, the scheme also misclassified too many adults as children. We discuss our results and suggest directions for future research.
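
    A minimal sketch of how such cued-recall scoring might be wired up, assuming a hypothetical cue set, answer matching, and decision threshold; the paper's actual pictures and decision rule are not reproduced here.

    ```python
    # Hypothetical cue set: each carefully chosen picture maps to a recall
    # question with a set of accepted answers.
    CUES = {
        "Who is pictured on this 1990s album cover?": {"kurt cobain"},
        "Which TV show used this title card?": {"twin peaks"},
        "What product did this jingle advertise?": {"coca-cola", "coke"},
    }


    def estimate_is_adult(answers: dict[str, str], threshold: float = 0.5) -> bool:
        """Classify the user as an adult if enough cued details are recalled."""
        correct = sum(
            answers.get(prompt, "").strip().lower() in accepted
            for prompt, accepted in CUES.items()
        )
        return correct / len(CUES) >= threshold


    # A user who can place only one cue falls below the threshold and is
    # classified as a child.
    print(estimate_is_adult({"Which TV show used this title card?": "Twin Peaks"}))
    ```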

    Platforms, the First Amendment and Online Speech: Regulating the Filters

    In recent years, online platforms have given rise to multiple discussions about what their role is, what it should be, and whether they should be regulated. The complex nature of these private entities makes it very challenging to place them in a single descriptive category with existing rules. In today’s information environment, social media platforms have become a platform press, providing hosting as well as navigation and delivery of public expression, much of it done through machine-learning algorithms. This article argues that there is a subset of algorithms that social media platforms use to filter public expression which can be regulated without constitutional objections. A distinction is drawn between algorithms that curate speech for hosting purposes and those that curate it for navigation purposes, and it is argued that content navigation algorithms, because of their function, deserve separate constitutional treatment. By analyzing the platforms’ functions independently from one another, the article constructs a doctrinal and normative framework that can be used to navigate some of this complexity. The First Amendment makes it problematic to interfere with how platforms decide what to host, because algorithms that implement content moderation policies perform functions analogous to an editorial role when deciding whether content should be censored or allowed on the platform. Content navigation algorithms, on the other hand, do not face the same doctrinal challenges; they operate outside of public discourse as mere information conduits and are thus not subject to core First Amendment doctrine. Their function is to facilitate the flow of information to an audience, which in turn participates in public discourse; if they have any constitutional status, it is derived from the value they provide to their audience as a delivery mechanism for information. This article asserts that we should regulate content navigation algorithms to an extent: they undermine autonomous choice in the selection and consumption of content, and their role in today’s information environment is not aligned with a functioning marketplace of ideas or with the prerequisites for citizens in a democratic society to perform their civic duties. The article concludes that any regulation directed at content navigation algorithms should be subject to a lower standard of scrutiny, similar to the standard for commercial speech.
