
    Controlled Data Sharing for Collaborative Predictive Blacklisting

    Although sharing data across organizations is often advocated as a promising way to enhance cybersecurity, collaborative initiatives are rarely put into practice owing to confidentiality, trust, and liability challenges. In this paper, we investigate whether collaborative threat mitigation can be realized via a controlled data sharing approach, whereby organizations make informed decisions as to whether or not, and how much, to share. Using appropriate cryptographic tools, entities can estimate the benefits of collaboration and agree on what to share in a privacy-preserving way, without having to disclose their datasets. We focus on collaborative predictive blacklisting, i.e., forecasting attack sources based on one's logs and those contributed by other organizations. We study the impact of different sharing strategies by experimenting on a real-world dataset of two billion suspicious IP addresses collected from Dshield over two months. We find that controlled data sharing yields up to 105% accuracy improvement on average, while also reducing the false positive rate.
    Comment: A preliminary version of this paper appears in DIMVA 2015. This is the full version. arXiv admin note: substantial text overlap with arXiv:1403.212
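
    As a rough illustration of the controlled-sharing idea (not the cryptographic protocol used in the paper), the sketch below lets two organizations compare blinded versions of their suspicious-IP logs to estimate how much they would gain from collaborating. The keyed hash stands in for a proper private set intersection primitive, and the shared key and example logs are made-up assumptions.

        # Toy benefit estimation for controlled data sharing (illustrative only).
        # An HMAC with a pre-shared key stands in for a real private set
        # intersection primitive; raw IP addresses are never exchanged directly.
        import hmac
        import hashlib

        SHARED_KEY = b"agreed-out-of-band"  # hypothetical pre-shared secret

        def blind(ips):
            """Map each suspicious IP to an opaque token."""
            return {hmac.new(SHARED_KEY, ip.encode(), hashlib.sha256).hexdigest()
                    for ip in ips}

        # Each organization blinds its own logs locally ...
        org_a_logs = {"198.51.100.7", "203.0.113.42", "192.0.2.10"}
        org_b_logs = {"203.0.113.42", "192.0.2.10", "198.51.100.99"}
        tokens_a, tokens_b = blind(org_a_logs), blind(org_b_logs)

        # ... and only the blinded tokens are compared to estimate the overlap,
        # i.e. the expected benefit of sharing, before deciding what to share.
        overlap = len(tokens_a & tokens_b)
        print(f"estimated common attackers: {overlap} of {len(org_a_logs)}")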

    What If? A Look at Integrity Pacts

    This note examines the Integrity Pact (IP) methodology proposed by Transparency International to confront the problem of corruption in public procurement. The examination draws from a decision model for participants developed elsewhere, in which the critical elements are shown to be the vulnerability of the conditions under which the tender is conducted and the risk of bribing. The IP methodology intends to interfere with these central elements in individual tender instantiations through a process of discussion leading to mutual trust; participants and public officials sign a pledge of honesty. Disputes are to be resolved by private arbitration, and enforcement is allegedly attained by force of a private contract between participants. Preferably, a civil society organization stimulates and monitors the process and acts as fiducial guarantor. Publicising proceedings stimulates discussion and enhances transparency. All this is held to favourably affect the process, leading to better results, which in turn is held to affect the overall environment over time. To accommodate the ethical dimension introduced by IPs, the present analysis incorporates an “ethical” factor operating over the conditions under which tenders are conducted; ascertaining the operation of this hypothetical factor is an empirical question. The examination of IP premises, together with evidence collected from instantiations of the methodology and the absence of comparative empirical data on bribery, leads to the conclusion that IPs do not heighten the risk of bribing for participants. Contrary to the methodology’s claim, enforcement, be it from arbitration or otherwise, is shown to depend on each particular environment. Conditions under which particular tenders are conducted might be bettered, but not unconditionally, as the institutional framework perforce dominates private agreements. The influence of the “ethical” factor cannot be assessed for lack of empirical evidence, and the honesty pledge IPs rely on is argued to be devoid of significance. Although the economic efficiency of the methodology cannot be ascertained for lack of data, there is no reason to suppose that IPs do not better the outcomes piecewise. The methodology fails to address the problem of cartelisation that affects public markets and, perhaps due to the low frequency of its application, does not discuss measures to counterbalance the action of cartels. Interpreting the premises behind the IP idea, it is argued that they stem from a perspective on corruption rooted in morality rather than in the mechanisms that propitiate bribery; thus, tackling individual instantiations is favoured over confronting systemic factors. IP guidelines stipulate that the absence of allegations of bribery in a tender authorises the sponsoring NGO to announce that the tender was “clean”. It is argued that such manifestations of overconfidence are hazardous for the reputation of NGOs that adopt the methodology, and that the continuous involvement of NGOs with IPs raises questions about their entitlement to it, the more so because NGOs are not bound by the oversight and accountability constraints that formally characterise State bodies.
    It is contended that for both governments and NGOs, promoting and participating in IPs is a strategic decision that should be balanced against their effectiveness towards the aim of changing the institutional environment.
    Keywords: control, corruption, integrity pact, public procurement, regulation, Transparency International

    Automatic Detection of Malware-Generated Domains with Recurrent Neural Models

    Modern malware families often rely on domain-generation algorithms (DGAs) to determine rendezvous points to their command-and-control server. Traditional defence strategies (such as blacklisting domains or IP addresses) are inadequate against such techniques due to the large and continuously changing list of domains produced by these algorithms. This paper demonstrates that a machine learning approach based on recurrent neural networks is able to detect domain names generated by DGAs with high precision. The neural models are estimated on a large training set of domains generated by various malware families. Experimental results show that this data-driven approach can detect malware-generated domain names with an F_1 score of 0.971. To put it differently, the model can automatically detect 93% of malware-generated domain names for a false positive rate of 1:100.
    Comment: Submitted to NISK 201
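
    The sketch below shows what such a character-level recurrent classifier could look like in PyTorch; the architecture, layer sizes, vocabulary, and the two toy domains are illustrative assumptions, not the models evaluated in the paper.

        # Character-level LSTM classifier for DGA detection (illustrative sketch).
        import torch
        import torch.nn as nn

        CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-."
        CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(CHARS)}  # 0 = padding
        MAX_LEN = 40

        def encode(domain: str) -> torch.Tensor:
            """Turn a domain name into a fixed-length sequence of char indices."""
            idx = [CHAR_TO_IDX.get(c, 0) for c in domain.lower()[:MAX_LEN]]
            idx += [0] * (MAX_LEN - len(idx))
            return torch.tensor(idx)

        class DGAClassifier(nn.Module):
            def __init__(self, vocab_size=len(CHARS) + 1, embed_dim=32, hidden_dim=64):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.out = nn.Linear(hidden_dim, 1)  # one logit: P(domain is DGA)

            def forward(self, x):
                emb = self.embed(x)
                _, (h, _) = self.lstm(emb)
                return self.out(h[-1]).squeeze(-1)

        # Toy usage: two hand-picked domains standing in for a labelled corpus.
        model = DGAClassifier()
        batch = torch.stack([encode("wikipedia.org"), encode("xjkq3vz0pd8r.net")])
        labels = torch.tensor([0.0, 1.0])
        loss = nn.BCEWithLogitsLoss()(model(batch), labels)
        loss.backward()  # a single gradient step on the (untrained) sketch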

    On Modeling the Costs of Censorship

    We argue that the evaluation of censorship evasion tools should depend upon economic models of censorship. We illustrate our position with a simple model of the costs of censorship. We show how this model suggests ways to evade censorship; in particular, from it we develop evaluation criteria. We examine how our criteria compare to the traditional methods of evaluation employed in prior works.

    The Abandoned Side of the Internet: Hijacking Internet Resources When Domain Names Expire

    The vulnerability of the Internet has been demonstrated by prominent IP prefix hijacking events. Major outages such as the China Telecom incident in 2010 stimulate speculation about malicious intentions behind such anomalies. Surprisingly, almost all discussions in the current literature assume that hijacking incidents are enabled by the lack of security mechanisms in the inter-domain routing protocol BGP. In this paper, we discuss an attacker model that accounts for the hijacking of network ownership information stored in Regional Internet Registry (RIR) databases. We show that such threats emerge from abandoned Internet resources (e.g., IP address blocks, AS numbers). When DNS names expire, attackers gain the opportunity to take resource ownership by re-registering domain names that are referenced by corresponding RIR database objects. We argue that this kind of attack is more attractive than conventional hijacking, since the attacker can act in full anonymity on behalf of a victim. Although corresponding incidents have been observed in the past, current detection techniques are not equipped to deal with these attacks. We show that such attacks are feasible with very little effort, and analyze the risk potential of abandoned Internet resources for the European service region: our findings reveal that currently 73 /24 IP prefixes and 7 ASes are vulnerable to being stealthily abused. We discuss countermeasures and outline research directions towards preventive solutions.
    Comment: Final version for TMA 201
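
    A rough sketch of the measurement idea, under simplifying assumptions: extract the e-mail contact domains referenced in RIR database (whois) objects and flag those that no longer resolve as candidates for re-registration. The sample object and the DNS-based check are stand-ins; a real study would confirm expiry via registry WHOIS.

        # Flag possibly abandoned contact domains referenced in RIR objects
        # (illustrative sketch, not the detection technique from the paper).
        import re
        import socket

        SAMPLE_RIR_OBJECTS = """
        inetnum: 192.0.2.0 - 192.0.2.255
        admin-c: EX123-RIPE
        e-mail: hostmaster@defunct-company.example
        mnt-by: EXAMPLE-MNT
        """  # made-up stand-in for bulk RIR database dumps

        def referenced_domains(whois_text: str) -> set:
            """Collect the domain part of every e-mail address in the objects."""
            return {m.group(1).lower()
                    for m in re.finditer(r"[\w.+-]+@([\w.-]+)", whois_text)}

        def looks_abandoned(domain: str) -> bool:
            """Heuristic: a contact domain that no longer resolves may be re-registrable."""
            try:
                socket.getaddrinfo(domain, None)
                return False
            except socket.gaierror:
                return True

        for dom in referenced_domains(SAMPLE_RIR_OBJECTS):
            status = "possibly abandoned" if looks_abandoned(dom) else "still resolves"
            print(dom, status)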

    Evaluating IP Blacklists Effectiveness

    IP blacklists are widely used to increase network security by preventing communications with peers that have been marked as malicious. There are several commercial offerings as well as several free-of-charge blacklists maintained by volunteers on the web. Despite their wide adoption, the effectiveness of the different IP blacklists in real-world scenarios is still not clear. In this paper, we conduct a large-scale network monitoring study which provides insightful findings regarding the effectiveness of blacklists. The results, collected over several hundred thousand IP hosts belonging to three distinct large production networks, highlight that blacklists are often tuned for precision, with the result that many malicious activities, such as scanning, go completely undetected. The proposed instrumentation approach to detect IP scanning and suspicious activities is implemented with home-grown and open-source software. Our tools enable the creation of blacklists without the security risks posed by the deployment of honeypots.
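
    As a rough illustration of the kind of instrumentation described above (not the authors' tooling), the sketch below flags sources that probe many distinct hosts on the same port in flow records; the records and the threshold are made-up assumptions.

        # Simple horizontal-scan detector over flow records (illustrative sketch).
        from collections import defaultdict

        # (source IP, destination IP, destination port), e.g. parsed from flow logs
        flows = [
            ("198.51.100.7", "192.0.2.1", 22),
            ("198.51.100.7", "192.0.2.2", 22),
            ("198.51.100.7", "192.0.2.3", 22),
            ("203.0.113.5", "192.0.2.1", 443),
        ]

        SCAN_THRESHOLD = 3  # distinct destinations on one port before flagging

        targets_per_source = defaultdict(set)
        for src, dst, port in flows:
            targets_per_source[(src, port)].add(dst)

        # Sources that touched many distinct hosts on one port are candidate
        # scanners and could feed a locally maintained blacklist, without the
        # risks of running honeypots.
        for (src, port), dsts in targets_per_source.items():
            if len(dsts) >= SCAN_THRESHOLD:
                print(f"possible scanner: {src} probed {len(dsts)} hosts on port {port}")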

    A first look at the misuse and abuse of the IPv4 Transfer Market

    The depletion of the unallocated address space, in combination with the slow pace of IPv6 deployment, has given rise to the IPv4 transfer market, namely the trading of allocated IPv4 prefixes between ASes. While RIRs have established detailed policies in an effort to regulate the IPv4 transfer market, for malicious networks such as spammers and bulletproof ASes, IPv4 transfers pose an opportunity to bypass the reputational penalties of abusive behaviour, since they can obtain "clean" address space or offload blacklisted address space. Additionally, IP transfers create a window of uncertainty about the legitimate ownership of prefixes, which adversaries can exploit to hijack parts of the transferred address space. In this paper, we provide the first detailed study of how transferred IPv4 prefixes are misused in the wild by synthesizing an array of longitudinal IP blacklists and lists of prefix hijacking incidents. Our findings yield evidence that transferred network blocks are used by malicious networks to address botnets and fraudulent sites at much higher rates compared to non-transferred addresses, while the timing of the attacks indicates efforts to evade filtering mechanisms.
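
    A minimal sketch of the cross-referencing step, with made-up prefixes and blacklist entries rather than the study's data: check which blacklisted addresses fall inside prefixes that changed hands on the transfer market, the raw material for comparing abuse rates in transferred versus non-transferred space.

        # Cross-reference blacklisted IPs with transferred prefixes (illustrative).
        import ipaddress

        transferred_prefixes = [ipaddress.ip_network(p)
                                for p in ("192.0.2.0/24", "198.51.100.0/22")]
        blacklisted_ips = [ipaddress.ip_address(a)
                           for a in ("192.0.2.77", "203.0.113.9", "198.51.100.200")]

        hits = {prefix: [] for prefix in transferred_prefixes}
        for ip in blacklisted_ips:
            for prefix in transferred_prefixes:
                if ip in prefix:
                    hits[prefix].append(ip)
                    break

        # Hit rates for transferred vs. non-transferred space are the kind of
        # longitudinal comparison the paper draws from IP blacklists.
        for prefix, ips in hits.items():
            print(prefix, "->", len(ips), "blacklisted addresses")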