
    Framework for botnet emulation and analysis

    Criminals use the anonymity and pervasiveness of the Internet to commit fraud, extortion, and theft. Botnets are used as the primary tool for this criminal activity. Botnets allow criminals to accumulate and covertly control multiple Internet-connected computers. They use this network of controlled computers to flood networks with traffic from multiple sources, send spam, spread infection, spy on users, commit click fraud, run adware, and host phishing sites. This presents serious privacy risks and financial burdens to businesses and individuals. Furthermore, all indicators show that the problem is worsening because the research and development cycle of the criminal industry is faster than that of security research. To enable researchers to measure botnet connection models and countermeasures, a flexible, rapidly augmentable framework for creating test botnets is provided. This botnet framework, written in the Ruby language, enables researchers to run a botnet on a closed network and to rapidly implement new communication, spreading, control, and attack mechanisms for study. This is a significant improvement over augmenting the C++ code bases of the most popular botnets, Agobot and SDBot. Rubot allows researchers to implement new threats and their corresponding defenses before the criminal industry can. The Rubot experiment framework includes models for some of the latest trends in botnet operation, such as peer-to-peer based control, fast-flux DNS, and periodic updates. Our approach implements the key network features of existing botnets and provides the infrastructure required to run the botnet in a closed environment.

    Ph.D. thesis. Committee Chair: Copeland, John; Committee Members: Durgin, Gregory; Goodman, Seymour; Owen, Henry; Riley, George.
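    Rubot's code is written in Ruby and is not reproduced in the abstract. Purely as an illustrative sketch of the pluggable command-dispatch design described above, the following Python fragment shows how new communication or control behaviours might be registered as handlers; the controller address, port, and command names are hypothetical, and such code belongs only on a closed research network, as in the thesis.

```python
# Illustrative Python analogue of a pluggable bot command loop; Rubot itself
# is a Ruby framework, and every name below is hypothetical. For use only on
# a closed, isolated research network.
import socket

CONTROLLER = ("10.0.0.1", 6667)  # hypothetical controller on a closed testbed

def cmd_ping(args):
    return "pong"

def cmd_report(args):
    return "status: idle"

# New communication, control, or attack behaviours under study would be
# added by registering further handlers here.
HANDLERS = {"ping": cmd_ping, "report": cmd_report}

def run_bot():
    # Connect to the controller and answer commands line by line.
    with socket.create_connection(CONTROLLER) as sock:
        stream = sock.makefile("rw")
        for line in stream:
            verb, _, rest = line.strip().partition(" ")
            handler = HANDLERS.get(verb)
            if handler:
                stream.write(handler(rest.split()) + "\n")
                stream.flush()

if __name__ == "__main__":
    run_bot()
```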

    Discovery of Web Attacks by Inspecting HTTPS Network Traffic with Machine Learning and Similarity Search

    Master's thesis in Information Security, Universidade de Lisboa, Faculdade de Ciências, 2022.

    Web applications are the building blocks of many services, from social networks to banks. Network security threats have remained a permanent concern since the advent of data communication. Notwithstanding, security breaches are still a serious problem, since web applications hold both company information and private client data. Traditional Intrusion Detection Systems (IDS) inspect the payload of packets looking for known intrusion signatures or deviations from normal behavior. However, this Deep Packet Inspection (DPI) approach cannot inspect the encrypted network traffic of Hypertext Transfer Protocol Secure (HTTPS), a protocol that is now widely adopted to protect data communication. We are interested in web application attacks, and to accurately detect them we must access the payload. Network flows aggregate traffic with common properties, so they can be employed to inspect large amounts of traffic. The main objective of this thesis is to develop a system that discovers anomalous HTTPS traffic and confirms that the payloads it carries contain web application attacks. We propose a new, reliable method and system to identify traffic that may include web application attacks by analysing HTTPS network flows (netflows) and discovering payload content similarities. We resort to unsupervised machine learning algorithms to cluster netflows and identify anomalous traffic, and to Locality-Sensitive Hashing (LSH) algorithms to create a Similarity Search Engine (SSE) capable of correctly identifying the presence of known web application attacks in this traffic. We embed the system in a continuous improvement process to keep detection reliable as new web application attacks are discovered. Our evaluation showed that the system could detect anomalous traffic, that the SSE was able to confirm the presence of web attacks within that anomalous traffic, and that the continuous improvement process increased the accuracy of the SSE.
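    The thesis's implementation is not reproduced in the abstract, so the following is only a minimal sketch of the similarity-search idea, assuming the datasketch library for MinHash-based LSH; the sample payloads, shingle size, and threshold are illustrative, and the netflow-clustering stage that selects anomalous traffic is omitted.

```python
# Sketch of an LSH-based similarity search over attack payloads, assuming
# the datasketch library; parameters are illustrative, not the thesis's.
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, k: int = 3, num_perm: int = 128) -> MinHash:
    # Hash character k-gram shingles so near-duplicate payloads collide.
    m = MinHash(num_perm=num_perm)
    for i in range(len(text) - k + 1):
        m.update(text[i:i + k].encode("utf8"))
    return m

# Index payloads of known web application attacks (illustrative samples).
known_attacks = {
    "sqli-1": "' OR 1=1 --",
    "xss-1": "<script>alert(document.cookie)</script>",
}
lsh = MinHashLSH(threshold=0.5, num_perm=128)
for name, payload in known_attacks.items():
    lsh.insert(name, minhash_of(payload))

# A payload drawn from an anomalous flow is matched against the index; the
# slightly mutated XSS below should still collide with "xss-1".
suspect = "<script >alert(document.cookie)</script >"
print(lsh.query(minhash_of(suspect)))
```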

    Improving security of web applications based on mainstream technology

    There is an enormous quantity of web applications with little or no security configuration at all that are accessible through the Internet. Many of these were academic exercises engineered from a purely functional perspective; this leads to poorly specialized configurations and a lack of security. Many of these applications have later been abandoned by their programmers or simply kept as initially deployed, while remaining public and fully accessible. The project focuses on the study and analysis of selected state-of-the-art web technologies and some of their related security flaws. As web programming technologies evolve, new flaws appear, and so do the recommended programming patterns to overcome them. Overall, cybersecurity in web systems is a dynamic and evolving process that requires much effort in continuous analysis of systems and web programming tools, and in systems prototyping. For this purpose, an initial basic setup of a web server will be put in place to analyze selected flaws in the field and selected common security misconfigurations. This work will be exemplified on a prototype application comprising a server and a set of IoT nodes monitored by the server. This basic prototype will help illustrate some security misconfigurations that are frequent and part of the OWASP Top 10. Then, a set of recommendations for their configuration and public setup will be designed and programmed.

    Song, L. (2022). Improving security of web applications based on mainstream technology. Universitat Politècnica de València. http://hdl.handle.net/10251/182467
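    The thesis does not fix a particular server stack, so as a minimal sketch of the kind of configuration recommendations it proposes, the following assumes a small Flask application standing in for the IoT-monitoring server and sets response headers that counter several misconfigurations from the OWASP Top 10.

```python
# Hardening sketch for an assumed Flask server; the thesis's actual stack
# and endpoints are not specified, so everything here is illustrative.
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # Address frequent misconfigurations: MIME sniffing, clickjacking,
    # missing HTTPS enforcement, and overly permissive script sources.
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    return response

@app.route("/")
def index():
    return "IoT monitor up"

if __name__ == "__main__":
    # Leaving debug mode on in a public deployment is itself a common
    # security misconfiguration, so it stays off here.
    app.run(host="127.0.0.1", port=8000, debug=False)
```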

    Ransomware: A New Era of Digital Terrorism

    This work studies ten notorious ransomware strains to reveal the analytical similarities and differences among them, which helps in understanding the mindset of the cyber crooks crawling the dark net. It also reviews the traps used by ransomware for its distribution, while examining new possibilities for its dispersal. It concludes by divulging the inter-relationships between the various distribution approaches adopted by ransomware and some attentive measures to hinder it, supporting alertness as the ultimate tool of defense in the user's hand.

    External servers security

    Romero Barrero, D. (2010). External servers security. http://hdl.handle.net/10251/9111

    A tool for penetration testing of web applications

    As hackers become more skilled and sophisticated, and with cyber-attacks becoming the norm, it is more important than ever to undertake regular vulnerability scans and penetration testing to identify vulnerabilities and to ensure, on a regular basis, that cyber controls are working. This thesis discusses the importance and workings of penetration testing in general and of web application penetration testing in particular, followed by a comparison of various testing tools and techniques with their advantages and disadvantages. The next section focuses on the past, current, and future state of penetration testing in computer systems and application security, and on the relevance of the General Data Protection Regulation (GDPR) and Content Management Systems (CMS). The main goal of the thesis is then addressed: an examination of existing automated tools for web application vulnerability detection, their techniques, the positive and negative results of the conducted tests, and their merits and demerits. Based on the comparison of existing tools, an appropriate algorithm is selected to demonstrate the importance of port scanning, an aspect that very few existing web application tools cover. The following section practically demonstrates port scanning, which reveals the state of each port and the service information running on the server. Finally, the results of the experiment are compared against existing web application tools.
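    The practical contribution described above is a port scan that reports port state and running service information. As a minimal sketch of that idea, not the thesis's actual tool, the following performs a plain TCP connect scan with banner grabbing; the target host and port range are placeholders, and such scans should only be run against systems one is authorised to test.

```python
# Simple TCP connect scan with banner grabbing; illustrative only, and to be
# run solely against hosts you are authorised to test.
import socket

def scan(host: str, ports: range, timeout: float = 0.5) -> dict:
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                try:
                    # Many services announce themselves on connect, which
                    # hints at what is running behind the port.
                    banner = s.recv(128).decode(errors="replace").strip()
                except socket.timeout:
                    banner = ""
                results[port] = ("open", banner)
        except (socket.timeout, ConnectionRefusedError, OSError):
            results[port] = ("closed/filtered", "")
    return results

if __name__ == "__main__":
    for port, (state, banner) in scan("127.0.0.1", range(20, 1025)).items():
        if state == "open":
            print(f"{port}/tcp {state} {banner}")
```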

    Towards understanding and mitigating attacks leveraging zero-day exploits

    Zero-day vulnerabilities are unknown and therefore unaddressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits, and attacks that make use of them. In recent years a number of leaks have published such attacks, using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method or process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that can prevent or detect the actions taken by attackers, an information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example by compromising syslog servers or by descending to lower system rings to gain access. Defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, and to misdirect attribution by planting false artefacts for forensic analysis and by attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques; an example is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. on a firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic), since these defences all have the potential to result in the attacker being discovered; attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down, and learn from attackers. By employing various tactics, defenders are able to increase their chance of detecting attacks and the time available to react to them, even for attacks exploiting hitherto unknown vulnerabilities. To summarise the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion into the SWIFT organisation; it shows that the firewalls were exploited with remote-code-execution zero-days, an attack with a striking parallel in the approach used by the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources may do in the future.
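    As a minimal sketch of two of the defences discussed above, honeytokens paired with logging that lives outside the protected system, the following assumes a hypothetical decoy account and remote syslog host; because no legitimate process ever uses the decoy, any authentication attempt that touches it signals an intruder.

```python
# Honeytoken sketch: the decoy account name and syslog host are hypothetical.
# Alerts leave the machine immediately, following the observation above that
# attackers try to disable local defences and erase collected evidence.
import logging
import logging.handlers

alert = logging.getLogger("honeytoken")
alert.addHandler(logging.handlers.SysLogHandler(address=("logs.example.org", 514)))
alert.setLevel(logging.WARNING)

HONEYTOKEN_USER = "backup_admin"  # decoy account, never used legitimately

def check_login_attempt(username: str, source_ip: str) -> None:
    if username == HONEYTOKEN_USER:
        # By construction, any use of the decoy is hostile.
        alert.warning("honeytoken tripped: user=%s from=%s", username, source_ip)

check_login_attempt("backup_admin", "203.0.113.7")
```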

    XSS attack detection based on machine learning

    As the popularity of web-based applications grows, so does the number of individuals who use them. The vulnerabilities of those applications, however, remain a concern. Cross-site scripting is a very prevalent attack that is simple to launch but difficult to defend against, which is why it merits study. The current study focuses on artificial systems, such as machine learning, which can function without human interaction. As technology advances, the need for maintenance increases, while the systems to be maintained grow more complex; this is why machine learning technologies are becoming increasingly important in our daily lives. This study uses supervised machine learning to protect against cross-site scripting, allowing the computer to find an algorithm that can identify vulnerabilities. A large collection of datasets serves as the foundation for this technique. The model is equipped with features extracted from the datasets that allow it to learn the pattern of such attacks, filtering input by common JavaScript symbols and possible Document Object Model (DOM) syntax. As the research proceeds, the best combinations of algorithms for fighting cross-site scripting are identified: the study makes multiple comparisons between different classification methods, on their own and in combination, to determine which performs best.
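    As a toy sketch of the supervised approach described above, the following trains a character n-gram classifier to separate XSS payloads from benign inputs, assuming scikit-learn; the handful of samples is purely illustrative, whereas the thesis compares several algorithms on large labelled datasets.

```python
# Toy supervised XSS detector: character n-grams capture JavaScript and DOM
# tokens such as "<script" or "onerror=". Samples are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

samples = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(document.cookie)>",
    "javascript:alert('xss')",
    "hello world",
    "search?q=shoes&page=2",
    "a plain comment about cats",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = XSS payload, 0 = benign input

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(samples, labels)

# A mutated payload and a benign query, neither seen during training.
print(model.predict(["<svg onload=alert(1)>", "buy shoes online"]))
```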