10 research outputs found

    Botnet Behavior Detection using Network Synchronism

    The diversity and dynamism of botnets challenge detection and classification algorithms, which depend heavily on the botnet's protocol and can quickly be evaded. A more general detection method is therefore needed. We propose an analysis of botnets' most inherent characteristics, such as synchronism and network load, combined with a detailed analysis of error rates. Without relying on any specific botnet technology or protocol, our classification approach seeks to detect synchronous behavioral patterns in network traffic flows and to cluster them based on botnet characteristics. We recorded different botnet and normal traffic captures and used a time-slice approach to successfully separate them. Results show that our approach can accurately distinguish botnet traffic from normal computer traffic, thus enhancing detection effectiveness.
    Sociedad Argentina de Informática e Investigación Operativa
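    To illustrate the time-slice idea, here is a minimal Python sketch (not from the paper; the window length, synchrony threshold, and flow format are assumptions). It buckets each host's flow start times into fixed windows and reports host pairs whose active windows overlap strongly, i.e., hosts behaving synchronously:

        # Hypothetical sketch of synchronism detection over time slices.
        # Flows are (timestamp_seconds, src_ip) tuples; WINDOW and
        # SYNC_THRESHOLD are illustrative values, not from the paper.
        from collections import defaultdict
        from itertools import combinations

        WINDOW = 60.0          # time-slice length in seconds (assumed)
        SYNC_THRESHOLD = 0.8   # Jaccard overlap needed to call two hosts synchronized

        def activity_vectors(flows):
            """Map each source host to the set of time slices in which it was active."""
            slices = defaultdict(set)
            for ts, src in flows:
                slices[src].add(int(ts // WINDOW))
            return slices

        def synchronized_pairs(flows):
            """Yield host pairs whose active time slices overlap strongly."""
            vec = activity_vectors(flows)
            for a, b in combinations(vec, 2):
                overlap = len(vec[a] & vec[b]) / len(vec[a] | vec[b])
                if overlap >= SYNC_THRESHOLD:
                    yield a, b, overlap

        if __name__ == "__main__":
            flows = [(0, "10.0.0.1"), (2, "10.0.0.2"), (65, "10.0.0.1"),
                     (66, "10.0.0.2"), (300, "10.0.0.3")]
            for a, b, sim in synchronized_pairs(flows):
                print(f"{a} and {b} act in sync (Jaccard={sim:.2f})")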

    A Historical Evaluation of C&C Complexity

    The actions of malware are often controlled through uniform communication mechanisms, which change regularly to evade detection techniques and remain prolific. Though geographically dispersed, malware-infected nodes controlled for a common purpose can be viewed as a logically joint network, now loosely referred to as a botnet. The evolution of the mechanisms for controlling these networks of malware-infected nodes can indicate their sophistication relative to a point of inception or discovery (when the inception time is unknown). Sampling botnet-related malware from different points of inception or discovery can accurately represent the variance in sophistication of command-and-control processes. To measure such a sample accurately, a matrix of sophistication, termed the Complexity Matrix (CM), was created to categorize the signifying characteristics of Command and Control (C&C) processes among a historically diverse selection of bot binaries. In this paper, a survey of botnets is conducted to identify C&C characteristics that accurately represent the level of sophistication implemented within a specified time frame. The results of the survey are collected in a CM and used to generate a roadmap of C&C milestones.
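    The abstract does not specify the matrix's contents, so the sketch below shows one plausible encoding: each bot sample carries a set of C&C characteristics whose weighted sum serves as its sophistication score, and sorting samples by inception or discovery date yields the milestone roadmap. The characteristic names and weights are hypothetical:

        # Hypothetical encoding of a Complexity Matrix (CM): rows are bot
        # samples, columns are C&C characteristics. Names and weights are
        # illustrative assumptions, not values from the paper's survey.
        from dataclasses import dataclass, field

        CHARACTERISTICS = {          # characteristic -> weight (assumed)
            "irc_based": 1,
            "custom_protocol": 2,
            "encrypted_channel": 3,
            "domain_generation": 4,
            "p2p_topology": 4,
        }

        @dataclass
        class BotSample:
            name: str
            year: int                # point of inception or discovery
            features: set = field(default_factory=set)

            def complexity(self) -> int:
                return sum(w for c, w in CHARACTERISTICS.items() if c in self.features)

        samples = [
            BotSample("early_irc_bot", 2003, {"irc_based"}),
            BotSample("modern_p2p_bot", 2010,
                      {"encrypted_channel", "domain_generation", "p2p_topology"}),
        ]

        # Roadmap of C&C milestones: samples in time order with their scores.
        for s in sorted(samples, key=lambda s: s.year):
            print(s.year, s.name, "complexity =", s.complexity())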

    Network Traffic Analysis Using Stochastic Grammars

    Network traffic analysis is widely used to infer information from Internet traffic, even when the traffic is encrypted. Previous work uses traffic characteristics, such as port numbers, packet sizes, and frequency, without looking for more subtle patterns in the network traffic. In this work, we use stochastic grammars, namely hidden Markov models (HMMs) and probabilistic context-free grammars (PCFGs), as pattern-recognition tools for traffic analysis. HMMs are widely used for pattern recognition and detection. We use an HMM inference approach. With inferred HMMs, we use confidence intervals (CIs) to detect whether a data sequence matches the HMM. To compare HMMs, we define a normalized Markov metric, and a statistical test is used to determine model equivalence. Our metric systematically removes the least likely events from both HMMs until the remaining models are statistically equivalent; this defines the distance between the models. We extend the use of HMMs to PCFGs, which have more expressive power. We estimate PCFG production probabilities from data, and a statistical test is used for detection. We present three applications of HMM and PCFG detection to network traffic analysis. First, we infer the presence of protocol tunneling through the Tor (The Onion Router) anonymization network: the Markov metric quantifies the similarity of network traffic HMMs in Tor to identify the protocol, and it also measures communication noise in the Tor network. Second, we use HMMs to detect centralized botnet traffic: we infer HMMs from botnet traffic data and detect botnet infections, and experimental results show that HMMs can accurately detect Zeus botnet traffic. Third, since newer botnets use P2P control structures to better hide their locations, and hierarchical P2P botnets contain recursive and hierarchical patterns, we use PCFGs to detect P2P botnet traffic. Experimentation on real-world traffic data shows that PCFGs can accurately differentiate between P2P botnet traffic and normal Internet traffic.
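    As a minimal sketch of the HMM half of this pipeline, the Python snippet below uses the third-party hmmlearn package (an assumption; the authors' tooling is not named) to infer an HMM from packet-size sequences and then applies an empirical confidence-interval test to decide whether a new sequence matches the model:

        # Sketch: HMM-based traffic detection with a confidence-interval test.
        # hmmlearn is an assumed stand-in for the authors' inference tooling,
        # and packet size as the observed feature is also an assumption.
        import numpy as np
        from hmmlearn import hmm   # pip install hmmlearn

        def train_hmm(sequences, n_states=3):
            """Fit a Gaussian HMM to a list of 1-D observation sequences."""
            X = np.concatenate(sequences).reshape(-1, 1)
            lengths = [len(s) for s in sequences]
            model = hmm.GaussianHMM(n_components=n_states, n_iter=100, random_state=0)
            model.fit(X, lengths)
            return model

        def per_symbol_score(model, seq):
            """Length-normalized log-likelihood of one sequence under the model."""
            return model.score(seq.reshape(-1, 1)) / len(seq)

        rng = np.random.default_rng(0)
        train = [rng.normal(200, 10, 50) for _ in range(20)]   # toy packet-size traces
        model = train_hmm(train)

        scores = [per_symbol_score(model, s) for s in train]
        lo, hi = np.percentile(scores, [2.5, 97.5])   # empirical 95% CI (assumed test)

        test = rng.normal(800, 50, 50)                # traffic from another source
        s = per_symbol_score(model, test)
        print("matches model" if lo <= s <= hi else "does not match model")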

    Analysis of Malicious Traffic Directed at a Honeynet with Deep Packet Inspection

    Undergraduate thesis (Trabalho de Conclusão de Curso), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2017. Any network connected to the Internet is subject to cyber attacks. Strong security measures, forensic tools, and investigators together contribute to detecting and mitigating those attacks, reducing the damage, enabling the network to be restored to normal operation, and increasing the cybersecurity of the networked environment. This work addresses the use of a forensic approach with Deep Packet Inspection to detect anomalies in network traffic. Since cyber attacks may occur at any layer of the TCP/IP networking model, Deep Packet Inspection is an effective technique for revealing suspicious content in the headers or payloads at any packet-processing layer, except where the payload is encrypted. Although efficient, this technique still faces big challenges. The contributions of this study lie in the association of Deep Packet Inspection with forensic analysis to evaluate different attacks directed at a Honeynet operating in the LATITUDE laboratory at the University of Brasília. In this perspective, this work identifies and maps the content and behavior of attacks such as the Mirai botnet and brute-force attacks targeting different network services. The results obtained demonstrate the behavior of automated attacks (such as worms and bots) and non-automated attacks (brute force conducted with different tools). The data collected and analyzed are then used to generate statistics on usernames and passwords, the distribution of IPs and services, and other elements. This work also discusses the importance of network forensics and Chain of Custody procedures in conducting an investigation and shows the effectiveness of the mentioned techniques in evaluating different network attacks.
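    A minimal sketch of the deep-packet-inspection step, using scapy (an assumed tool; the thesis does not name its software) to pull cleartext payloads from Telnet-port packets in a capture and tally them, approximating the username/password statistics described above; the capture filename is hypothetical:

        # Sketch: deep packet inspection of a honeynet capture with scapy
        # (an assumed tool). Pulls printable payloads from Telnet-port
        # packets and tallies them, approximating the credential statistics
        # described above. The capture filename is hypothetical.
        from collections import Counter
        from scapy.all import rdpcap, TCP, Raw   # pip install scapy

        TELNET_PORT = 23   # Mirai spreads over Telnet, so credentials appear in cleartext

        def telnet_tokens(pcap_path):
            """Yield printable payload fragments from packets to/from the Telnet port."""
            for pkt in rdpcap(pcap_path):
                if pkt.haslayer(TCP) and pkt.haslayer(Raw):
                    if TELNET_PORT in (pkt[TCP].sport, pkt[TCP].dport):
                        text = bytes(pkt[Raw].load).decode("ascii", errors="ignore").strip()
                        if text and text.isprintable():
                            yield text

        if __name__ == "__main__":
            counts = Counter(telnet_tokens("honeynet.pcap"))  # hypothetical capture
            for token, n in counts.most_common(10):
                print(n, repr(token))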

    Securing Enterprise Networks with Statistical Node Behavior Profiling

    The substantial proliferation of the Internet has made it the most critical infrastructure in today's world. However, it is still vulnerable to various kinds of attacks and malware and poses a number of great security challenges. Furthermore, the past decade has shown that attacks and malware evolve rapidly (e.g., from worms to botnets) in response to every success in network security. Network security thereby remains a hot topic in both research and industry and requires continuous, close attention. In this research, we consider two fundamental areas in network security, malware detection and background traffic modeling, from the new viewpoint of node behavior profiling in enterprise network environments. Our main objective is to extend and enhance current research in these two areas. Central to our research is the node behavior profiling approach, which groups the behaviors of different nodes by jointly considering temporal and spatial correlations. We also present an extensive study of botnets, which are believed to be the largest threat to the Internet. To better understand botnets, we propose a botnet framework and predict a new P2P botnet that is much stronger and stealthier than current ones. We then propose anomaly-based malware detection approaches derived directly from the insights (statistical characteristics) of the node behavior study and apply them to P2P botnet detection. Further, by considering the worst-case attack model, in which the botmaster knows all the parameter values used in detection, we propose a fast and optimized anomaly detection approach that formulates detection as an optimization problem. In addition, we propose a novel traffic modeling structure using behavior profiles for NIDS evaluations. It is efficient, accounts for node heterogeneity in traffic modeling, is compatible with most current modeling schemes, and helps generate more realistic background traffic. Last but not least, we evaluate the proposed approaches using real user traces from enterprise networks and achieve encouraging results. Our contributions in this research include: 1) a new node behavior profiling approach for studying normal node behavior; 2) a framework for botnets; 3) a new P2P botnet and performance comparisons with other P2P botnets; 4) two anomaly detection approaches based on node behavior profiles; 5) a fast and optimized anomaly detection approach under the worst-case attack model; 6) a new traffic modeling structure; and 7) simulations and evaluations of the above approaches on real user data from enterprise networks. To the best of our knowledge, we are the first in botnet-related research to propose such a botnet framework, to consider the worst-case attack model, and to propose a corresponding fast and optimized solution. We are also the first to propose efficient traffic modeling solutions that do not assume node homogeneity.
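    The abstract gives no formulas, but the following sketch shows the general shape of statistical node profiling: per-node feature vectors are summarized by a mean and covariance, and nodes at a large Mahalanobis distance from the normal profile are flagged. The features, threshold, and distance test are assumptions, not the thesis's exact method:

        # Sketch of statistical node behavior profiling. The features, the
        # Mahalanobis test, and the threshold are illustrative assumptions,
        # not the thesis's exact method.
        import numpy as np

        def mahalanobis(x, mean, cov_inv):
            d = x - mean
            return float(np.sqrt(d @ cov_inv @ d))

        def flag_anomalies(profiles, threshold=3.0):
            """profiles: (n_nodes, n_features) per-node statistics, e.g.
            flows/hour, distinct peers contacted, bytes-out/bytes-in ratio."""
            mean = profiles.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(profiles, rowvar=False))
            return [i for i, row in enumerate(profiles)
                    if mahalanobis(row, mean, cov_inv) > threshold]

        rng = np.random.default_rng(1)
        normal = rng.normal([100, 20, 1.0], [10, 3, 0.1], size=(50, 3))
        bot = np.array([[300, 200, 8.0]])      # chatty node talking to many peers
        # Expect index 50 (the bot); an occasional benign node may also appear.
        print(flag_anomalies(np.vstack([normal, bot])))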

    Tracking and Mitigation of Malicious Remote Control Networks

    Attacks against end users are one of the negative side effects of today's networks. The attacker's goal is to compromise the victim's machine and obtain control over it. The machine is then used to carry out denial-of-service attacks, to send spam, or for other nefarious purposes. From an attacker's point of view, this kind of attack is even more efficient if she manages to compromise a large number of machines in parallel. In order to control all these machines, she establishes a "malicious remote control network", i.e., a mechanism that gives an attacker control over a large number of compromised machines for illicit activities. The most common type of such network observed so far is the so-called botnet. Since these networks are one of the main factors behind current abuses on the Internet, we need to find novel approaches to stop them in an automated and efficient way. In this thesis we focus on this open problem and propose a general root-cause methodology to stop malicious remote control networks. The basic idea of our method consists of three steps. In the first step, we use honeypots to collect information. A honeypot is an information system resource whose value lies in unauthorized or illicit use of that resource. This technique enables us to study current attacks on the Internet; for example, we can capture samples of autonomously spreading malware ("malicious software") in an automated way. We then analyze the collected data to extract information about the remote control mechanism in an automated fashion. For example, we utilize an automated binary analysis tool to find the Command & Control (C&C) server that is used to send commands to the infected machines. In the second step, we use the extracted information to infiltrate the malicious remote control network. This can be implemented, for example, by impersonating a bot and infiltrating the remote control channel. Finally, in the third step we use the information collected during the infiltration phase to mitigate the network, e.g., by shutting down the remote control channel so that the attacker can no longer send commands to the compromised machines. In this thesis we show the practical feasibility of this method. We examine different kinds of malicious remote control networks and discuss how we can track all of them in an automated way. As a first example, we study botnets that use a central C&C server: we illustrate how the three steps can be implemented in practice and present empirical measurement results obtained on the Internet. Second, we investigate botnets that use a peer-to-peer communication channel. Mitigating these botnets is harder since no central C&C server exists that could be taken offline; nevertheless, our methodology can also be applied to this kind of network, and we present empirical measurement results substantiating our method. Third, we study fast-flux service networks, in which the attacker does not directly abuse the compromised machines but instead uses them as a proxy network that provides a robust hosting infrastructure. Our method can be applied to this novel kind of malicious remote control network, and we present empirical results supporting this claim. We anticipate that the methodology proposed in this thesis can also be used to track and mitigate other kinds of malicious remote control networks.
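    As an illustration of the second (infiltration) step, here is a minimal Python sketch that impersonates a bot in an IRC-based C&C channel and logs the commands the controller issues; the server, port, channel, and nickname are hypothetical placeholders that, in the methodology above, would come from automated binary analysis:

        # Sketch of the infiltration step: join an IRC-based C&C channel as a
        # fake bot and log observed commands. Server, port, channel, and nick
        # are placeholders; in practice they come from automated binary analysis.
        import socket

        SERVER, PORT = "cc.example.net", 6667
        CHANNEL, NICK = "#botchan", "bot12345"

        def send(sock, line):
            sock.sendall((line + "\r\n").encode())

        def infiltrate():
            sock = socket.create_connection((SERVER, PORT), timeout=30)
            send(sock, f"NICK {NICK}")
            send(sock, f"USER {NICK} 0 * :{NICK}")
            send(sock, f"JOIN {CHANNEL}")
            buf = b""
            while True:
                data = sock.recv(4096)
                if not data:
                    break                          # C&C server closed the connection
                buf += data
                while b"\r\n" in buf:
                    raw, buf = buf.split(b"\r\n", 1)
                    line = raw.decode(errors="ignore")
                    if line.startswith("PING"):    # keep the session alive
                        send(sock, "PONG" + line[4:])
                    elif "PRIVMSG" in line:        # controller issued a command
                        print("C&C command observed:", line)

        if __name__ == "__main__":
            infiltrate()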

    Immunology Inspired Detection of Data Theft from Autonomous Network Activity

    The threat of data theft posed by self-propagating, remotely controlled bot malware is increasing. Cyber criminals are motivated to steal sensitive data, such as user names, passwords, account numbers, and credit card numbers, because these items can be parlayed into cash. For anonymity and economy of scale, bot networks have become the cyber criminal's weapon of choice. In 2010 a single botnet included over one million compromised host computers, and one of the largest botnets in 2011 was specifically designed to harvest financial data from its victims. Unfortunately, current intrusion detection methods are unable to effectively detect the data extraction techniques employed by bot malware. The research described in this dissertation report addresses that problem. This work builds on a foundation of research into artificial immune systems (AIS) and botnet activity detection. It is the first to isolate and assess features derived from human-computer interaction in the detection of data theft by bot malware, and the first to report on a novel use of the HTTP protocol by a contemporary variant of the Zeus bot.
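    The dissertation's AIS algorithm is not spelled out in this abstract; below is a generic negative-selection sketch in its spirit: random detectors that match nothing in the "self" (benign) behavior set are kept, and any observation matched by a surviving detector is flagged as nonself. The feature encoding, detector count, and match radius are all assumptions:

        # Generic negative-selection sketch (an AIS staple; the dissertation's
        # exact algorithm and its human-computer-interaction features are not
        # given in the abstract). Encoding, counts, and radius are assumptions.
        import math
        import random

        DIM, RADIUS, N_DETECTORS = 3, 0.25, 500

        def generate_detectors(self_set):
            """Keep random points that match nothing in the self (benign) set."""
            detectors = []
            while len(detectors) < N_DETECTORS:
                d = [random.random() for _ in range(DIM)]
                if all(math.dist(d, s) > RADIUS for s in self_set):
                    detectors.append(d)
            return detectors

        def is_nonself(x, detectors):
            return any(math.dist(x, d) <= RADIUS for d in detectors)

        random.seed(0)
        benign = [[random.gauss(0.5, 0.05) for _ in range(DIM)] for _ in range(200)]
        detectors = generate_detectors(benign)
        exfil_like = [0.95, 0.9, 0.05]   # e.g. heavy upload with no user interaction
        print("flagged" if is_nonself(exfil_like, detectors) else "looks benign")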

    Correlation-based Botnet Detection in Enterprise Networks

    Most of the attacks and fraudulent activities on the Internet are carried out by malware. In particular, botnets, as state-of-the-art malware, are now considered the largest threat to Internet security. In this thesis, we focus on the botnet detection problem in an enterprise-like network environment. We present a comprehensive correlation-based framework for multi-perspective botnet detection, consisting of detection technologies demonstrated in four complementary systems: BotHunter, BotSniffer, BotMiner, and BotProbe. The common thread of these systems is correlation analysis: vertical (dialog) correlation, horizontal correlation, and cause-effect correlation. All of these Bot* systems have been evaluated in live networks and/or on real-world network traces. The evaluation results show that they can accurately detect real-world botnets for their intended detection purposes with a very low false positive rate. We find that correlation analysis techniques are of particular value for detecting advanced malware such as botnets: dialog correlation is effective as long as malware infections require multiple stages, and horizontal correlation is effective as long as malware tends to be distributed and coordinated. In addition, active techniques can greatly complement passive approaches if carefully used. We believe our experience and lessons will be of great benefit to future malware detection.
    Ph.D. Committee Chair: Wenke Lee; Committee Members: Mustaque Ahamad, Nick Feamster, Jonathon Giffin, Chuanyi Ji
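    To give a flavor of the vertical (dialog) correlation idea, the sketch below accumulates per-host evidence of infection-dialog stages and raises an alert once the weighted evidence crosses a threshold; the stage names, weights, and threshold are illustrative, not BotHunter's actual rule set:

        # Minimal dialog-correlation sketch in the spirit of BotHunter's
        # vertical correlation. Stage names, weights, and the threshold are
        # illustrative only, not BotHunter's actual rule set.
        from collections import defaultdict

        STAGE_WEIGHTS = {            # infection-dialog stages (assumed weights)
            "inbound_scan": 1,
            "inbound_exploit": 3,
            "egg_download": 3,
            "cnc_traffic": 3,
            "outbound_scan": 2,
        }
        ALERT_THRESHOLD = 6          # evidence needed to declare a host infected

        class DialogCorrelator:
            def __init__(self):
                self.evidence = defaultdict(set)   # host -> stages observed

            def observe(self, host, stage):
                self.evidence[host].add(stage)
                score = sum(STAGE_WEIGHTS[s] for s in self.evidence[host])
                if score >= ALERT_THRESHOLD:
                    print(f"ALERT: {host} shows an infection dialog (score {score})")

        c = DialogCorrelator()
        for host, stage in [("10.0.0.5", "inbound_exploit"),
                            ("10.0.0.5", "egg_download"),
                            ("10.0.0.9", "inbound_scan")]:
            c.observe(host, stage)   # fires for 10.0.0.5 once two stages combine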