
    IPv6: a new security challenge

    Master's thesis in Information Security, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2011. IPv6 was developed to address the exhaustion of IPv4 addresses, but has not yet seen global deployment. Recent trends are now finally changing this picture and IPv6 is expected to take off soon. Contrary to the original, this new version of the Internet Protocol has security as a design goal, for example with its mandatory support for network layer security. However, due to the immaturity of the protocol and the complexity of the transition period, there are several security implications that have to be considered when deploying IPv6. In this project, our goal is to define a set of best practices for IPv6 security that can be used by IT staff and network administrators within an Internet Service Provider. To this end, an assessment of some of the available security techniques for IPv6 is made by means of a set of laboratory experiments using real equipment from an Internet Service Provider in Portugal. As the transition to IPv6 seems inevitable, this work can help ISPs understand the threats that exist in IPv6 networks and some of the prophylactic measures available, by offering recommendations to protect both internal and customers' networks.
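
    One concrete IPv6 mechanism with well-known security and privacy implications is stateless address autoconfiguration (SLAAC), which historically derived a stable interface identifier from the hardware MAC address. The following is a minimal illustrative sketch (not taken from the thesis) of the modified EUI-64 derivation:

```python
# Derive the modified EUI-64 interface identifier that classic SLAAC builds
# from a 48-bit MAC address: flip the universal/local bit of the first byte
# and insert ff:fe between the OUI and the device half. Because the result is
# stable, it allows host tracking across networks (the motivation for
# privacy-extension addresses).
def eui64_iid(mac: str) -> str:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02              # flip the universal/local bit
    b[3:3] = b"\xff\xfe"      # insert ff:fe in the middle
    return ":".join(f"{(b[i] << 8) | b[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_iid("00:1a:2b:3c:4d:5e"))  # prints 21a:2bff:fe3c:4d5e
```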

    Inferring malicious network events in commercial ISP networks using traffic summarisation

    With the recent increases in bandwidth available to home users, traffic rates for commercial national networks have also been increasing rapidly. This presents a problem for any network monitoring tool as the traffic rate they are expected to monitor is rising on a monthly basis. Security within these networks is paramount as they are now an accepted home of trade and commerce. Core networks have been demonstrably and repeatedly open to attack; these events have had significant material costs to high profile targets. Network monitoring is an important part of network security, providing information about potential security breaches and in understanding their impact. Monitoring at high data rates is a significant problem; both in terms of processing the information at line rates, and in terms of presenting the relevant information to the appropriate persons or systems. This thesis suggests that the use of summary statistics, gathered over a number of packets, is a sensible and effective way of coping with high data rates. A methodology for discovering which metrics are appropriate for classifying significant network events using statistical summaries is presented. It is shown that the statistical measures found with this methodology can be used effectively as a metric for defining periods of significant anomaly, and further classifying these anomalies as legitimate or otherwise. In a laboratory environment, these metrics were used to detect DoS traffic representing as little as 0.1% of the overall network traffic. The metrics discovered were then analysed to demonstrate that they are appropriate and rational metrics for the detection of network level anomalies. These metrics were shown to have distinctive characteristics during DoS by the analysis of live network observations taken during DoS events. This work was implemented and operated within a live system, at multiple sites within the core of a commercial ISP network. The statistical summaries are generated at city-based points of presence and gathered centrally to allow for spatial and topological correlation of security events. The architecture chosen was shown to be flexible in its application. The system was used to detect the level of VoIP traffic present on the network through the implementation of packet size distribution analysis in a multi-gigabit environment. It was also used to detect unsolicited SMTP generators injecting messages into the core. Monitoring in a commercial network environment is subject to data protection legislation. Accordingly the system presented processed only network and transport layer headers, all other data being discarded at the capture interface. The system described in this thesis was operational for a period of 6 months, during which a set of over 140 network anomalies, both malicious and benign, were observed over a range of localities. The system design, example anomalies and metric analysis form the majority of this thesis.
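
    The core idea of per-window summary statistics with threshold-based flagging can be sketched as follows. This is an illustrative toy (the thesis's actual metrics and thresholds are not reproduced here): a flood of small DoS packets drags down the mean packet size of its window relative to the other windows.

```python
import statistics

def summarize(window):
    """Summary statistics over the packet sizes (bytes) seen in one window."""
    return {"mean": statistics.mean(window), "stdev": statistics.pstdev(window)}

def flag_anomalies(windows, k=2.0):
    """Flag windows whose mean packet size deviates > k sigma from the rest."""
    means = [summarize(w)["mean"] for w in windows]
    mu, sigma = statistics.mean(means), statistics.pstdev(means)
    return [i for i, m in enumerate(means) if sigma and abs(m - mu) > k * sigma]

# Nine windows of ordinary traffic, then one flooded with small DoS packets.
windows = [[700] * 100 for _ in range(9)] + [[64] * 100]
print(flag_anomalies(windows))  # prints [9] -- the flood window stands out
```

    In a real deployment the statistic would be chosen by the discovery methodology described above, and the baseline would exclude the window under test; the sketch only shows the summarize-then-threshold shape of the approach.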

    Sharing Computer Network Logs for Security and Privacy: A Motivation for New Methodologies of Anonymization

    Logs are one of the most fundamental resources to any security professional. It is widely recognized by the government and industry that it is both beneficial and desirable to share logs for the purpose of security research. However, the sharing is not happening or not to the degree or magnitude that is desired. Organizations are reluctant to share logs because of the risk of exposing sensitive information to potential attackers. We believe this reluctance remains high because current anonymization techniques are weak and one-size-fits-all--or better put, one size tries to fit all. We must develop standards and make anonymization available at varying levels, striking a balance between privacy and utility. Organizations have different needs and trust other organizations to different degrees. They must be able to map multiple anonymization levels with defined risks to the trust levels they share with (would-be) receivers. It is not until there are industry standards for multiple levels of anonymization that we will be able to move forward and achieve the goal of widespread sharing of logs for security researchers. Comment: 17 pages, 1 figure.
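
    The "multiple anonymization levels mapped to trust levels" idea can be made concrete with a small sketch. The level names and salt below are hypothetical, not from the paper; they illustrate trading utility (subnet structure preserved) against privacy (keyed one-way mapping):

```python
import hashlib
import ipaddress

def anonymize_ip(ip: str, level: str, salt: str = "per-dataset-secret") -> str:
    """Hypothetical tiered IP anonymization, one tier per receiver trust level."""
    if level == "none":       # fully trusted receiver: raw data
        return ip
    if level == "truncate":   # partial trust: keep the /24, drop the host part
        net = ipaddress.ip_network(ip + "/24", strict=False)
        return str(net.network_address)
    if level == "hash":       # untrusted receiver: keyed, non-reversible label
        return hashlib.sha256((salt + ip).encode()).hexdigest()[:12]
    raise ValueError(f"unknown level: {level}")

print(anonymize_ip("192.0.2.77", "truncate"))  # prints 192.0.2.0
```

    A real scheme would also cover timestamps, ports, and payload fields, and would use a prefix-preserving scheme rather than plain truncation when cross-subnet correlation must survive anonymization.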

    Flow-oriented anomaly-based detection of denial of service attacks with flow-control-assisted mitigation

    Flooding-based distributed denial-of-service (DDoS) attacks present a serious and major threat to the targeted enterprises and hosts. Current protection technologies are still largely inadequate in mitigating such attacks, especially if they are large-scale. In this doctoral dissertation, the Computer Network Management and Control System (CNMCS) is proposed and investigated; it consists of the Flow-based Network Intrusion Detection System (FNIDS), the Flow-based Congestion Control (FCC) System, and the Server Bandwidth Management System (SBMS). These components form a composite defense system intended to protect against DDoS flooding attacks. The system as a whole adopts a flow-oriented and anomaly-based approach to the detection of these attacks, as well as a control-theoretic approach to adjust the flow rate of every link to sustain the high priority flow-rates at their desired level. The results showed that the misclassification rates of FNIDS are low, less than 0.1%, for the investigated DDoS attacks, while the fine-grained service differentiation and resource isolation provided within the FCC comprise a novel and powerful built-in protection mechanism that helps mitigate DDoS attacks.
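
    As an illustration of the control-theoretic idea only (the dissertation's actual controller design is not reproduced here), the simplest possible version is a proportional controller that repeatedly nudges a link's allowed rate toward the desired level for high-priority flows:

```python
def control_step(current: float, target: float, kp: float = 0.5) -> float:
    """One proportional-control step: move the allowed rate a fraction kp
    of the remaining error toward the target rate (illustrative units: Mbit/s)."""
    return current + kp * (target - current)

rate, target = 100.0, 800.0   # hypothetical starting and desired link rates
for _ in range(10):
    rate = control_step(rate, target)
# After ten steps the error has shrunk by a factor of 2**10.
```

    A production rate controller would add integral/derivative terms and anti-windup to handle persistent error and bursty cross-traffic, but the feedback loop has this shape.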

    Forward Pass: On the Security Implications of Email Forwarding Mechanism and Policy

    The critical role played by email has led to a range of extension protocols (e.g., SPF, DKIM, DMARC) designed to protect against the spoofing of email sender domains. These protocols are complex as is, but are further complicated by automated email forwarding -- used by individual users to manage multiple accounts and by mailing lists to redistribute messages. In this paper, we explore how such email forwarding and its implementations can break the implicit assumptions in widely deployed anti-spoofing protocols. Using large-scale empirical measurements of 20 email forwarding services (16 leading email providers and four popular mailing list services), we identify a range of security issues rooted in forwarding behavior and show how they can be combined to reliably evade existing anti-spoofing controls. We show how this allows attackers to not only deliver spoofed email messages to prominent email providers (e.g., Gmail, Microsoft Outlook, and Zoho), but also reliably spoof email on behalf of tens of thousands of popular domains including sensitive domains used by organizations in government (e.g., state.gov), finance (e.g., transunion.com), law (e.g., perkinscoie.com) and news (e.g., washingtonpost.com) among others.
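
    The basic assumption that forwarding breaks can be shown in a toy model (the domains and addresses below are illustrative, not the paper's measurement targets): SPF validates the *connecting* host against the envelope-from domain's published record, so a message relayed by a forwarder arrives from an IP the sender's domain never authorized.

```python
# Toy SPF model: a domain publishes the set of IPs allowed to send its mail.
# After plain forwarding, the receiver sees the forwarder's IP, not the
# original sender's, so a legitimately originated message fails the check --
# the mismatch that schemes like SRS and DMARC alignment try to paper over.
SPF_RECORDS = {"example.com": {"192.0.2.10"}}   # hypothetical published record

def spf_pass(envelope_from_domain: str, connecting_ip: str) -> bool:
    return connecting_ip in SPF_RECORDS.get(envelope_from_domain, set())

direct    = spf_pass("example.com", "192.0.2.10")   # original sending MTA
forwarded = spf_pass("example.com", "203.0.113.5")  # forwarding service's MTA
print(direct, forwarded)  # prints True False
```

    The paper's attacks run in the other direction: forwarders that *add* their own authorization (or rewrite sender identity permissively) let an attacker launder spoofed mail through an IP the receiver does trust.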

    Non-Trivial Off-Path Network Measurements without Shared Side-Channel Resource Exhaustion

    Most traditional network measurement scans and attacks are carried out through the use of direct, on-path network packet transmission. This requires that a machine be on-path (i.e., involved in the packet transmission process) and as a result have direct access to the data packets being transmitted. This limits network scans and attacks to situations where access can be gained to an on-path machine. If, for example, a researcher wanted to measure the round trip time between two machines they did not have access to, traditional scans would be of little help as they require access to an on-path machine to function. Instead the researcher would need to use an off-path measurement scan. Prior work using network side-channels to perform off-path measurements or attacks relied on techniques that either exhausted the shared, finite resource being used as a side-channel or only measured basic features such as connectivity. The work presented in this dissertation takes a different approach to using network side-channels. I describe research that carries out network side-channel measurements that are more complex than connectivity, such as packet round-trip-time or detecting active TCP connections, and do not require a shared, finite resource be fully exhausted to cause information to leak via a side-channel. My work is able to accomplish this by understanding the ways in which internal network stack state changes cause observable behavior changes from the machine. The goal of this dissertation is to show that information side-channels can be modulated to take advantage of dependent network-state behavior to enable non-trivial, off-path measurements without fully exhausting the shared, finite resources they use.
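
    A classic example of internal stack state leaking through observable behavior is the old-style globally incrementing IP ID counter. The toy simulation below (an illustration of the general principle, not the dissertation's specific technique) shows how two probes bracket a host's traffic to third parties, with nothing exhausted:

```python
class Host:
    """Toy network stack with a single global IP ID counter, as in older
    stacks that incremented one counter for every packet sent."""
    def __init__(self):
        self.ip_id = 0

    def send_packet(self) -> int:
        self.ip_id = (self.ip_id + 1) & 0xFFFF
        return self.ip_id            # the IP ID carried by the outgoing packet

host = Host()
before = host.send_packet()          # probe 1: e.g. the ID seen in a ping reply
for _ in range(5):
    host.send_packet()               # traffic to a third party, unseen by us
after = host.send_packet()           # probe 2
inferred = (after - before - 1) & 0xFFFF
print(inferred)  # prints 5 -- packets the host sent between our two probes
```

    Modern stacks randomize or scope such counters precisely because shared state of this kind turns an on-host implementation detail into an off-path measurement channel.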

    IP traceback with deterministic packet marking DPM

    In this dissertation, a novel approach to Internet Protocol (IP) Traceback - Deterministic Packet Marking (DPM) - is presented. The proposed approach is scalable, simple to implement, and introduces no bandwidth and practically no processing overhead on the network equipment. It is capable of tracing thousands of simultaneous attackers during a Distributed Denial of Service (DDoS) attack. Given sufficient deployment on the Internet, DPM is capable of tracing back to the slaves for DDoS attacks which involve reflectors. Most of the processing is done at the victim. The traceback process can be performed post-mortem, which allows for tracing the attacks that may not have been noticed initially or the attacks which would deny service to the victim, so that traceback is impossible in real time. Deterministic Packet Marking does not introduce the reassembly errors usually associated with other packet marking schemes. More than 99.99% of fragmented traffic will not be affected by DPM. The involvement of the Internet service providers (ISP) is very limited, and changes to the infrastructure and operation required to deploy DPM are minimal. Deterministic Packet Marking performs the traceback without revealing the internal topology of the provider's network, which is a desirable quality of a traceback scheme.
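
    The core DPM mechanic is that the ingress edge router deterministically stamps each packet with half of its own 32-bit address, since the reusable IP Identification field is only 16 bits wide; a flag bit tells the victim which half it received. A simplified sketch of that split-and-reassemble step (omitting DPM's handling of multiple ingress points and fragmented traffic):

```python
import ipaddress

def dpm_marks(ingress_ip: str):
    """Split the ingress router's 32-bit address into two (segment, value)
    marks; each value fits the 16-bit IP Identification field, and the
    segment bit says which half this packet carries."""
    addr = int(ipaddress.IPv4Address(ingress_ip))
    return [(0, addr >> 16), (1, addr & 0xFFFF)]

def reassemble(marks) -> str:
    """Victim side: once both segments have arrived, rebuild the address."""
    parts = dict(marks)
    return str(ipaddress.IPv4Address((parts[0] << 16) | parts[1]))

marks = dpm_marks("203.0.113.9")
print(reassemble(marks))  # prints 203.0.113.9
```

    Because every packet from a given ingress point carries one of the same two marks, the victim needs only a couple of packets per attacker to recover the ingress address, which is why most of the processing can sit at the victim.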