
    An Iterative and Toolchain-Based Approach to Automate Scanning and Mapping Computer Networks

    As today's organizational computer networks keep evolving and become more and more complex, finding potential vulnerabilities and conducting security audits has become a crucial element in securing these networks. The first step in auditing a network is reconnaissance: mapping it to obtain a comprehensive overview of its structure. The growing complexity, however, makes this task increasingly laborious, all the more so because mapping (as opposed to plain scanning) still involves a great deal of manual work. The concept proposed in this paper therefore automates the scanning and mapping of unknown and non-cooperative computer networks in order to find security weaknesses or verify access controls. It further supports audits by allowing documented networks to be compared with actual ones, finding unauthorized network devices, and evaluating access control methods by conducting delta scans. It uses a novel approach of augmenting data from iteratively chained existing scanning tools with context, using dedicated analytics modules, so that a network's topology can be assessed instead of just generating a list of scanned devices. It further contains a visualization model that provides a clear topology map and a special graph for comparative analysis. The goal is to provide maximum insight with a minimum of a priori knowledge. Comment: 7 pages, 6 figures
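    As a rough sketch of the iterative chaining idea (not the paper's actual toolchain), the following fragment expands a topology map from a set of seed hosts, with a hypothetical discover_neighbors() helper standing in for the chained scanning tools.

```python
# Hypothetical sketch of an iterative scan-and-map worklist loop.
# discover_neighbors() is a stub standing in for chained scanning tools.
from collections import deque

def discover_neighbors(host):
    """Placeholder: return hosts and gateways reachable from `host`.

    In a real toolchain this would parse the output of chained scanners
    such as nmap or traceroute; here it only returns an empty list.
    """
    return []

def map_network(seed_hosts):
    """Iteratively expand a topology graph starting from a set of seed hosts."""
    topology = {}                    # host -> set of directly observed neighbors
    queue = deque(seed_hosts)
    seen = set(seed_hosts)
    while queue:
        host = queue.popleft()
        neighbors = discover_neighbors(host)
        topology[host] = set(neighbors)
        for n in neighbors:          # newly discovered hosts feed the next iteration
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return topology

print(map_network(["192.0.2.1"]))    # {'192.0.2.1': set()} with the stub above
```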

    Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection

    The distribution of malicious hosts over the IP address space is far from uniform. In fact, malicious hosts tend to be concentrated in certain portions of the IP address space, forming the so-called Bad Neighborhoods. This phenomenon has previously been exploited to filter Spam by means of Bad Neighborhood blacklists. In this paper, we evaluate how much a network administrator can rely upon different Bad Neighborhood blacklists generated by third-party sources to fight Spam. One could expect that Bad Neighborhood blacklists generated from different sources contain, to a varying degree, disjoint sets of entries. Therefore, we investigate (i) how specific a blacklist is to its source, and (ii) whether different blacklists can be used interchangeably to protect a target from Spam. We analyze five Bad Neighborhood blacklists generated from real-world measurements and study their effectiveness in protecting three production mail servers from Spam. Our findings lead to several operational considerations on how a network administrator could best benefit from Bad Neighborhood-based Spam filtering.
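    As a rough illustration (not the paper's methodology), the sketch below shows how a mail server might look up a sender's /24 neighborhood in a third-party blacklist, and how the overlap between two blacklists could be measured; all prefixes are made-up examples.

```python
# Illustrative only: consulting a /24 Bad Neighborhood blacklist and
# comparing two third-party blacklists for overlap. Prefixes are invented.
import ipaddress

def neighborhood(ip):
    """Map an IPv4 address to its /24 Bad Neighborhood prefix."""
    return ipaddress.ip_network(f"{ip}/24", strict=False)

def is_listed(ip, blacklist):
    """True if the sender's /24 neighborhood appears on the blacklist."""
    return neighborhood(ip) in blacklist

def overlap(bl_a, bl_b):
    """Fraction of entries the two blacklists have in common (Jaccard index)."""
    return len(bl_a & bl_b) / len(bl_a | bl_b)

bl_source_1 = {ipaddress.ip_network("203.0.113.0/24")}
bl_source_2 = {ipaddress.ip_network("203.0.113.0/24"),
               ipaddress.ip_network("198.51.100.0/24")}

print(is_listed("203.0.113.7", bl_source_1))   # True: same /24 neighborhood
print(overlap(bl_source_1, bl_source_2))       # 0.5: the lists are partially disjoint
```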

    Design and Research of New Network Address Coding


    Homomorphic Routing: Private Data Forwarding in the Internet

    We propose a new private routing and packet forwarding scheme for the Internet, Homomorphic Routing (HR), that enables endpoints to communicate with one another without divulging source or destination addresses to the routers or service providers along the path. This is achieved via homomorphic encryption, whereby domains can match encrypted address ranges with encrypted packet destinations without the need for decryption. Compared to approaches such as source or onion routing, HR is a hop-by-hop solution that allows current BGP-like decisions and traffic engineering techniques to remain largely unchanged, while routers need not maintain per-flow state. A preliminary performance evaluation shows that HR incurs a tolerable computational overhead compared to plaintext operations. Through aggregation we can compress inter-domain routing rules to around 5% of those required for current IPv6, and we can organize encrypted forwarding rules so that matching can be achieved in logarithmic time.
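    Purely as an illustration of the last claim, the sketch below organizes aggregated forwarding rules so that a candidate rule is located by binary search in logarithmic time. It operates on plaintext prefixes; the homomorphic comparison on encrypted ranges is deliberately left out, and all prefixes and next-hop names are invented.

```python
# Plaintext analogue of logarithmic-time rule matching over aggregated,
# sorted forwarding rules. In HR the comparison would operate on encrypted
# ranges; that step is abstracted away here.
import bisect
import ipaddress

# Aggregated (prefix, next_hop) rules; the next-hop names are invented.
rules = sorted(
    [(ipaddress.ip_network("2001:db8::/32"), "peer-A"),
     (ipaddress.ip_network("2001:db8:1::/48"), "peer-B"),
     (ipaddress.ip_network("2001:db8:2::/48"), "peer-C")],
    key=lambda r: (int(r[0].network_address), r[0].prefixlen),
)
starts = [int(net.network_address) for net, _ in rules]

def match(dst):
    """Return the next hop for `dst`, locating a candidate rule by binary search."""
    addr = ipaddress.ip_address(dst)
    i = bisect.bisect_right(starts, int(addr)) - 1
    while i >= 0:                 # walk back to the covering (most specific) rule
        net, hop = rules[i]
        if addr in net:
            return hop
        i -= 1
    return None

print(match("2001:db8:1::42"))    # peer-B: longest matching prefix
print(match("2001:db8:ffff::1"))  # peer-A: covered only by the /32
```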

    Internet Bad Neighborhoods Aggregation

    Internet Bad Neighborhoods have proven to be an innovative approach for fighting spam. They have also helped to understand how spammers are distributed on the Internet. In our previous works, the size of each bad neighborhood was fixed to a /24 subnetwork. In this paper, however, we investigate whether it is feasible to aggregate Internet bad neighborhoods not only at /24, but at any network prefix. To do that, we propose two different aggregation strategies: fixed prefix and variable prefix. The motivation is to reduce the number of entries in the bad neighborhood list, thus reducing memory storage requirements for intrusion detection solutions. We also introduce two error measures that quantify how much error is incurred by the aggregation process. An evaluation of both strategies was conducted by analyzing real-world data in our aggregation prototype.
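    The following sketch illustrates the fixed-prefix strategy in the simplest possible terms, assuming made-up /24 scores and using mean absolute deviation as a stand-in error measure; it is not necessarily one of the two measures defined in the paper.

```python
# Hypothetical sketch of fixed-prefix aggregation: /24 bad neighborhood scores
# are merged into coarser /16 blocks, with a simple error value recording how
# much detail the aggregation loses. Data and the error measure are illustrative.
import ipaddress
from collections import defaultdict

# /24 neighborhood -> score (e.g. number of spamming hosts observed); made-up data.
scores_24 = {
    ipaddress.ip_network("203.0.113.0/24"): 40,
    ipaddress.ip_network("203.0.112.0/24"): 38,
    ipaddress.ip_network("198.51.100.0/24"): 2,
}

def aggregate_fixed(scores, new_prefix=16):
    """Aggregate /24 scores into fixed /new_prefix blocks using the mean score."""
    groups = defaultdict(list)
    for net, score in scores.items():
        groups[net.supernet(new_prefix_length=new_prefix)].append(score)
    aggregated, errors = {}, {}
    for supernet, vals in groups.items():
        mean = sum(vals) / len(vals)
        aggregated[supernet] = mean
        errors[supernet] = sum(abs(v - mean) for v in vals) / len(vals)
    return aggregated, errors

agg, err = aggregate_fixed(scores_24)
for net in agg:
    print(net, "score:", round(agg[net], 1), "error:", round(err[net], 1))
```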

    An analysis of logical network distance on observed packet counts for network telescope data

    This paper investigates the relationship between the logical distance between two IP addresses on the Internet and the number of packets captured by a network telescope listening on a network containing one of the addresses. The need for a manageable measure quantifying this distance is presented, as an alternative to the raw difference that can be computed between two addresses using their integer representations. A number of graphical analysis tools and techniques are presented to aid in this analysis. Findings are presented based on a long-baseline data set collected at Rhodes University over the last three years, using a dedicated Class C (256 IP addresses) sensor network and comprising 19 million packets. Of this total, 27% by packet volume originates within the same natural class A network as the telescope, and as such can be seen to be logically close to the collector network.
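    As a small illustration of the quantities discussed above, the sketch below computes the raw integer difference between two hypothetical IPv4 addresses and checks whether they share the same natural class A network; the paper's own distance measure is not reproduced here.

```python
# Minimal sketch: raw integer-difference "distance" between two IPv4 addresses
# and a same-class-A check. Both addresses are invented examples.
import ipaddress

def raw_distance(a, b):
    """Absolute difference of the 32-bit integer representations."""
    return abs(int(ipaddress.ip_address(a)) - int(ipaddress.ip_address(b)))

def same_class_a(a, b):
    """True if both addresses fall in the same natural class A (/8) network."""
    return a.split(".")[0] == b.split(".")[0]

telescope = "196.0.2.1"         # hypothetical sensor address
source    = "196.200.113.7"     # hypothetical traffic source

print(raw_distance(telescope, source))   # large raw value, hard to interpret directly
print(same_class_a(telescope, source))   # True: logically "close" to the sensor
```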

    A Novel Incrementally-Deployable Multi-granularity Multihoming Framework for the Future Internet

    Multihoming practice in the current Internet is limited to hosts and autonomous systems (ASs). It is "connectivity-oriented", without support for user or data multihoming. However, the swift migration of the Internet from a "connectivity-oriented" to a "content-oriented" pattern urges the incorporation of user- and data-level multihoming support into architecture designs instead of just through ad-hoc patches. In this paper, building on our previous research experience, we expand the multihoming concept to both the user and data levels, based on "multiple points of attachment" in a way similar to host multihoming. We propose a new incrementally-deployable multihoming framework by introducing a "realm" concept. The high-level user and data multihoming support can be built on top of host- and AS-level multihoming in an incrementally-deployable and flexibly-assembled manner. Realms form a hierarchy of functionally dependable blocks. We define a new dimension of building block, the slice, which is an incrementally implementable functional unit for multihoming. Besides the long-term support for user and data multihoming, the first-step deployment of the new framework is also able to address the short-term routing scalability challenge by gradually reducing the total inter-domain routing table size.
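    As a loose, hypothetical illustration of "multiple points of attachment" extended beyond hosts, the sketch below maps user-, data- and host-level identifiers to several locators; the realm hierarchy and slice semantics of the paper are not modelled, and all names are invented.

```python
# Highly simplified, hypothetical data-structure sketch: identifiers at the
# user, data and host levels each resolve to multiple points of attachment.
from dataclasses import dataclass, field

@dataclass
class MultihomedEntity:
    level: str                                    # "host", "user" or "data"
    identifier: str                               # stable identifier within its realm
    locators: list = field(default_factory=list)  # current points of attachment

    def attach(self, locator):
        if locator not in self.locators:
            self.locators.append(locator)

    def resolve(self):
        """Return all usable attachment points for this identifier."""
        return list(self.locators)

# A data object reachable through two hosts, each of which may itself be multihomed.
doc = MultihomedEntity("data", "doc:report-2024")
doc.attach("host:mirror-a")
doc.attach("host:mirror-b")
print(doc.resolve())   # ['host:mirror-a', 'host:mirror-b']
```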

    Secure Routing Protocol for Integrated UMTS and WLAN Ad Hoc Networks

    Integrated UMTS and WLAN ad hoc networks are becoming more and more popular, as they offer substantial advantages for next-generation networks. We introduce a new secure, robust routing protocol specifically designed for next-generation technologies and evaluate its performance. The design of the SNAuth_SPERIPv2 secure routing protocol takes advantage of the integrated network, maintaining Quality of Service (QoS) under Wormhole Attack (WHA). This paper compares the performance of the newly developed secure routing protocol with other security schemes for a CBR video streaming service under WHA.