
    Machine Learning Against Terrorism: How Big Data Collection and Analysis Influences the Privacy-Security Dilemma

    Rapid advancements in machine learning techniques allow mass surveillance to be applied on larger scales and to draw on ever more personal data. These developments demand reconsideration of the privacy-security dilemma, which describes the tradeoff between national security interests and individual privacy concerns. By investigating mass surveillance techniques that use bulk data collection and machine learning algorithms, we show why these methods are unlikely to pinpoint terrorists in order to prevent attacks. The diverse characteristics of terrorist attacks, especially in the case of lone-wolf terrorism, lead to irregular and isolated (digital) footprints. This irregularity degrades the accuracy of machine learning algorithms, and thereby of the mass surveillance that depends on them, which can be explained by three well-known problems in machine learning theory: class imbalance, the curse of dimensionality, and spurious correlations. Proponents of mass surveillance often invoke the distinction between collecting data and metadata, in which the latter is understood as a lesser breach of privacy. Such arguments commonly overlook the ambiguity in the definitions of data and metadata, and they ignore the ability of machine learning techniques to infer the former from the latter. Given the sparsity of the datasets used for machine learning in counterterrorism and the privacy risks attendant on bulk data collection, policymakers and other relevant stakeholders should critically re-evaluate the likelihood of success of these algorithms and of the data collection on which they depend.
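
    The class-imbalance problem the abstract points to can be made concrete with a base-rate calculation. The sketch below uses purely illustrative numbers (none are taken from the paper) to show that even a highly accurate classifier produces almost exclusively false alarms when the positive class is extremely rare, as in terrorism detection.

```python
# Base-rate sketch: why a highly accurate classifier still fails when the
# positive class is extremely rare. All numbers here are illustrative
# assumptions, not figures from the paper.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(actual positive | flagged positive), via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# Suppose the classifier detects 99% of true cases and wrongly flags only
# 0.1% of innocent people, while genuine cases occur at 1 in 1,000,000.
ppv = positive_predictive_value(sensitivity=0.99,
                                specificity=0.999,
                                base_rate=1e-6)
print(f"Probability a flagged person is a true positive: {ppv:.4%}")
# ~0.0989% -- roughly 1 in 1,000 flags is correct; the rest are false alarms.
```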

    Achieving Sybil-Proofness in Distributed Work Systems

    In a multi-agent system where agents voluntarily provide quantifiable work for one another, reputation mechanisms are incorporated to induce cooperation: agents assign their peers numerical scores based on their reported transaction histories. In such systems, adversaries can launch an attack by creating fake identities, called Sybils, which report counterfeit transactions among one another with the aim of increasing the adversary's score in the eyes of others. This paper provides new results on the Sybil-proofness of reputation mechanisms. We revisit the impossibility result of Seuken and Parkes (2011), who show that strongly-beneficial Sybil attacks cannot be prevented in reputation mechanisms satisfying three particular requirements. We prove that, under a more rigorous set of definitions of Sybil-attack benefit, this result no longer holds. We characterise the properties under which reputation mechanisms are susceptible to strongly-beneficial Sybil attacks. Building on these results, we propose a minimal set of requirements under which reputation mechanisms resist such attacks, strengthening the results of Cheng and Friedman (2005), who show Sybil-proofness for certain asymmetric reputation mechanisms.
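
    The attack pattern the abstract describes can be illustrated with a toy example. The naive mechanism below, which scores an agent by summing the work it is reported to have provided, is an illustrative assumption and not one of the mechanisms analysed in the paper; it shows how counterfeit transactions with fake identities inflate an adversary's score.

```python
# Toy Sybil attack on a naive reputation mechanism that scores each agent
# by summing its reported provided work. The mechanism and all numbers are
# illustrative assumptions, not taken from the paper.
from collections import defaultdict

def naive_scores(transactions):
    """Score = total work an agent is reported to have provided.

    Each transaction is (provider, consumer, work)."""
    scores = defaultdict(float)
    for provider, consumer, work in transactions:
        scores[provider] += work
    return dict(scores)

# Honest history: A and B each provide 10 units of work to the other.
honest = [("A", "B", 10.0), ("B", "A", 10.0)]

# Adversary A spawns Sybils S1 and S2 that report counterfeit work by A.
sybil = honest + [("A", "S1", 50.0), ("A", "S2", 50.0)]

print(naive_scores(honest))  # {'A': 10.0, 'B': 10.0}
print(naive_scores(sybil))   # {'A': 110.0, 'B': 10.0} -- A's score inflated
```

    Mechanisms of the kind studied by Cheng and Friedman (2005), such as asymmetric flow-based scores, are designed so that edges among an adversary's own fake identities cannot raise how others score it, which is precisely what the naive summing rule above fails to guarantee.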