Self-adaptive Authorisation Infrastructures
Traditional approaches to access control rely on fixed criteria for deciding and awarding access. These approaches are limited, notably when an organisation's protected resources change, because they cannot accommodate the dynamic aspects of risk at runtime. An example of such risk is a user abusing their privileged access to perform insider attacks.
This thesis proposes self-adaptive authorisation, an approach that enables dynamic access control. A framework for developing self-adaptive authorisation is defined, in which autonomic controllers are deployed within legacy-based authorisation infrastructures to enable runtime management of access control. Essential to the approach is the use of models and model-driven engineering (MDE).
Models enable a controller to abstract from the authorisation infrastructure it seeks to control, reason about its state, and provide assurances over changes to access. For example, a modelled state of access may represent an active access control policy. Given the diverse nature of authorisation infrastructure implementations, MDE enables the creation and transformation of such models, whereby assets (e.g., policies) can be automatically generated and deployed at runtime.
A prototype of the framework was developed in which management of access control focuses on mitigating the abuse of access rights. The prototype implements a feedback loop that monitors an authorisation infrastructure by modelling the state of access control and user behaviour, analyses potential solutions for handling malicious behaviour, and acts upon the infrastructure to control future access control decisions.
The framework was evaluated against the mitigation of simulated insider attacks involving the abuse of access rights governed by access control methodologies. In addition, to investigate the framework's approach in a diverse and unpredictable environment, a live experiment was conducted. This evaluated the mitigation of abuse performed by real users and demonstrated the consequences of self-adaptation through observation of user responses.
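The monitor-analyse-act feedback loop described above can be sketched in a few lines. All class names, the access threshold, and the policy-restriction rule below are hypothetical illustrations, not the thesis's actual framework API:

```python
# Minimal sketch of a MAPE-style feedback loop over an authorisation
# infrastructure. All names and thresholds are illustrative assumptions.

class AuthorisationModel:
    """Runtime model of access-control state and observed user behaviour."""
    def __init__(self):
        self.active_policy = {"alice": {"read", "write"}, "bob": {"read"}}
        self.download_counts = {}          # user -> resources accessed this window

    def record_access(self, user, resource):
        self.download_counts[user] = self.download_counts.get(user, 0) + 1

def analyse(model, threshold=100):
    """Flag users whose access volume suggests abuse of privileges."""
    return [u for u, n in model.download_counts.items() if n > threshold]

def plan_and_execute(model, suspects):
    """Adapt: generate and 'deploy' a restricted policy for flagged users."""
    for user in suspects:
        current = model.active_policy.get(user, set())
        model.active_policy[user] = {"read"} if "read" in current else set()
    return model.active_policy

# One iteration of the loop: monitor -> analyse -> plan/execute.
model = AuthorisationModel()
for _ in range(150):                      # simulated mass download by alice
    model.record_access("alice", "report.pdf")
suspects = analyse(model)
policy = plan_and_execute(model, suspects)
print(suspects, policy["alice"])          # alice is restricted to read-only
```

In the thesis's MDE setting, the `plan_and_execute` step would instead transform the access-control model and generate a concrete policy artefact for deployment.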
Web application penetration testing: an analysis of a corporate application according to OWASP guidelines
During the past decade, web applications have become the most prevalent way to deliver services over the Internet. As they become deeply embedded in business activities and are required to support sophisticated functionalities, their design and implementation are becoming more and more complicated. This increasing popularity and complexity make web applications a primary target for hackers on the Internet. According to Internet Live Stats, as of February 2019 an enormous number of websites were being attacked every day, with a direct and significant impact on a huge number of people.
Even with support from security specialists, organisations continue to have trouble because of the complexity of penetration procedures and the vast number of test cases involved in both penetration testing and code review. As a result, the number of hacked websites per day is increasing.
The goal of this thesis is to summarise the most common and critical vulnerabilities found in web applications, provide a detailed description of each, explain how they can be exploited, and show how a cybersecurity tester can find them through the process of penetration testing.
To illustrate these concepts, the thesis also describes a case study: a penetration test performed on a company's web application.
User-Behavior Based Detection of Infection Onset
A major vector of computer infection is the exploitation of software or design flaws in networked applications such as the browser. Malicious code can be fetched and executed on a victim's machine without the user's permission, as in drive-by download (DBD) attacks. In this paper, we describe a new tool called DeWare for detecting the onset of infection delivered through vulnerable applications. DeWare explores and enforces causal relationships between computer-related human behaviors and system properties, such as file-system access and process execution. Our tool can be used to provide real-time protection of a personal computer, as well as for diagnosing and evaluating untrusted websites for forensic purposes. Besides the concrete DBD detection solution, we also formally define causal relationships between user actions and system events on a host. Identifying and enforcing correct causal relationships have important applications in realizing advanced and secure operating systems. We perform extensive experimental evaluation, including a user study with 21 participants, thousands of legitimate websites (for testing false alarms), as well as 84 malicious websites in the wild. Our results show that DeWare is able to correctly distinguish legitimate download events from unauthorized system events with a low false positive rate (< 1%).
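The kind of causal relationship the abstract describes — a browser-initiated file-system event should be preceded by a user action — can be illustrated with a minimal sketch. The two-second window and the event representation below are assumptions for illustration, not DeWare's actual parameters:

```python
# Sketch: classify a browser-initiated file creation as user-intended only if
# a user input event (click/keypress) occurred shortly before it. The window
# length and timestamp format are illustrative assumptions.

WINDOW = 2.0  # seconds: max allowed gap between user action and system event

def is_authorized(download_time, user_events):
    """user_events: timestamps of observed user actions (clicks, keypresses)."""
    return any(0 <= download_time - t <= WINDOW for t in user_events)

clicks = [10.0, 25.5]
print(is_authorized(10.8, clicks))   # True: follows the click at t=10.0
print(is_authorized(18.0, clicks))   # False: no preceding action -> drive-by?
```

A system event with no causally preceding user action would then be treated as a candidate sign of infection onset.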
Modeling Cyber Situational Awareness through Data Fusion
Cyber attacks are compromising networks faster than administrators can respond. Network defenders are unable to become oriented with these attacks, determine the potential impacts, and assess the damages in a timely manner. Since the observations of network sensors are normally disjointed, analysis of the data is overwhelming and time is not spent efficiently. Automation in defending cyber networks requires a level of reasoning for adequate response. Current automated systems are mostly limited to scripted responses. Better defense tools are required. This research develops a framework that aggregates data from heterogeneous network sensors. The collected data is correlated into a single model that is easily interpreted by decision-making entities. This research proposes and tests an impact rating system that estimates the feasibility of an attack and its potential level of impact against the targeted network host, as well as the other hosts that reside on the network. The impact assessments would allow decision makers to prioritize attacks in real time and attempt to mitigate the attacks in order of their estimated impact to the network. The ultimate goal of this system is to provide computer network defense tools the situational awareness required to make the right decisions to mitigate cyber attacks in real time.
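An impact rating that combines attack feasibility with per-host criticality, as the abstract describes, might be sketched as follows. The weighting scheme, host attributes, and the 0.5 discount for reachable hosts are invented for illustration, not the research's actual model:

```python
# Sketch of an impact rating: score = attack feasibility times the criticality
# of the targeted host, plus discounted impact on hosts reachable from it.
# All weights and fields are illustrative assumptions.

def impact(alert, hosts):
    """alert: dict with 'target' and exploit 'feasibility' in [0, 1].
    hosts: host -> {'criticality': float, 'reachable_from_target': bool}."""
    target = alert["target"]
    score = alert["feasibility"] * hosts[target]["criticality"]
    for name, h in hosts.items():       # secondary impact via lateral movement
        if name != target and h["reachable_from_target"]:
            score += 0.5 * alert["feasibility"] * h["criticality"]
    return score

hosts = {
    "web01": {"criticality": 0.6, "reachable_from_target": False},
    "db01":  {"criticality": 1.0, "reachable_from_target": True},
}
alerts = [{"target": "web01", "feasibility": 0.9},
          {"target": "web01", "feasibility": 0.2}]
# Prioritise alerts by estimated network-wide impact, highest first.
for a in sorted(alerts, key=lambda a: impact(a, hosts), reverse=True):
    print(a["feasibility"], round(impact(a, hosts), 2))
```

Sorting alerts by this score gives the prioritised mitigation order the abstract argues for.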
Security slicing for auditing XML, XPath, and SQL injection vulnerabilities
XML, XPath, and SQL injection vulnerabilities are among the most common and serious security issues for Web applications and Web services. Thus, it is important for security auditors to ensure that the implemented code is, to the extent possible, free from these vulnerabilities before deployment. Although existing taint analysis approaches could automatically detect potential vulnerabilities in source code, they tend to generate many false warnings. Furthermore, the produced traces, i.e. data-flow paths from input sources to security-sensitive operations, tend to be incomplete or to contain a great deal of irrelevant information. Therefore, it is difficult to identify real vulnerabilities and determine their causes. One suitable approach to support security auditing is to compute a program slice for each security-sensitive operation, since it would contain all the information required for performing security audits (Soundness). A limitation, however, is that such slices may also contain information that is irrelevant to security (Precision), thus raising scalability issues for security audits. In this paper, we propose an approach to assist security auditors by defining and experimenting with pruning techniques to reduce original program slices to what we refer to as security slices, which contain sound and precise information. To evaluate the proposed pruning mechanism by using a number of open source benchmarks, we compared our security slices with the slices generated by a state-of-the-art program slicing tool. On average, our security slices are 80% smaller than the original slices, thus suggesting a significant reduction in auditing costs.
Security slicing for auditing common injection vulnerabilities
Cross-site scripting and injection vulnerabilities are among the most common and serious security issues for Web applications. Although existing static analysis approaches can detect potential vulnerabilities in source code, they generate many false warnings and source-sink traces with irrelevant information, making their adoption impractical for security auditing.
One suitable approach to support security auditing is to compute a program slice for each sink, which contains all the information required for security auditing. However, such slices are likely to contain a large amount of information that is irrelevant to security, thus raising scalability issues for security audits.
In this paper, we propose an approach to assist security auditors by defining and experimenting with pruning techniques to reduce original program slices to what we refer to as security slices, which contain sound and precise information.
To evaluate the proposed approach, we compared our security slices to the slices generated by a state-of-the-art program slicing tool, based on a number of open-source benchmarks. On average, our security slices are 76% smaller than the original slices. More importantly, with security slicing, one needs to audit approximately 1% of the total code to fix all the vulnerabilities, thus suggesting a significant reduction in auditing costs.
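The core idea — compute a backward slice from a security-sensitive sink, then prune away statements irrelevant to security — can be sketched over a toy dependence graph. The graph encoding and the `security_relevant` labels below are illustrative assumptions, not the papers' actual pruning criteria:

```python
# Sketch: backward slice from a security-sensitive sink over a tiny
# data-dependence graph, then prune non-security nodes to a "security slice".

deps = {                       # statement -> statements it depends on
    "exec_query": ["build_sql"],
    "build_sql": ["user_input", "log_format"],
    "log_format": ["config"],
    "user_input": [],
    "config": [],
}
security_relevant = {"exec_query", "build_sql", "user_input"}

def backward_slice(sink):
    """All statements the sink transitively depends on, plus the sink itself."""
    seen, stack = set(), [sink]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(deps.get(node, []))
    return seen

full = backward_slice("exec_query")            # conventional program slice
security_slice = full & security_relevant      # pruned: drop non-security nodes
print(sorted(full))
print(sorted(security_slice))
```

Here the auditor would inspect three statements instead of five; the papers report that this kind of pruning shrinks real slices by 76–80%.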
Towards Protection Against Low-Rate Distributed Denial of Service Attacks in Platform-as-a-Service Cloud Services
Nowadays there is an abundant variety of technology for performing daily tasks, and different businesses and people benefit from this diversity. The more technology evolves, the more useful it becomes, and, in turn, the more attractive a target it is for malicious users. Cloud Computing is one of the technologies that companies worldwide have been adopting over the years. Its popularity is essentially due to its characteristics and the way it delivers its services. This expansion of the Cloud also means that malicious users may try to exploit it, as the research studies presented throughout this work reveal. According to these studies, the Denial of Service (DoS) attack is a type of threat that constantly tries to take advantage of Cloud Computing services.
Several companies have moved, or are moving, their services to hosted environments provided by Cloud Service Providers and are using several applications based on those services. The literature on the subject points out that, because of this expansion in Cloud adoption, the use of applications has increased. DoS threats are therefore increasingly aimed at the application layer, and more advanced variations, such as Low-Rate Distributed Denial of Service (DDoS) attacks, are being used.
Research is being conducted specifically on the detection and mitigation of this kind of threat, and the significant problem with this DDoS variant is the difficulty of differentiating malicious traffic from legitimate user traffic. The main goal of the attack is to exploit the communication behaviour of the HTTP protocol, sending legitimate-looking traffic with small changes that fills a server's request slots slowly, nearly stopping real users from accessing the server's resources during the attack.
This kind of attack usually has a small time window, but to be more efficient it is launched from infected computers forming a network of attackers, turning it into a distributed attack. In this work, the idea for battling Low-Rate Distributed Denial of Service attacks is to integrate different technologies into a Hybrid Application whose main goal is to identify and separate malicious traffic from legitimate traffic. First, a study is carried out to observe the behaviour of each type of Low-Rate attack, in order to gather specific information about its characteristics while the attack executes in real time. Then, using Tshark filters, that packet information is collected. The next step is to build combinations of the specific information obtained from packet filtering and compare them. Finally, each packet is analysed against these combination patterns. A log file is created to store, in a friendly format, the data gathered after the entropy calculation.
To test the efficiency of the application, a virtual Cloud infrastructure was built using the OpenNebula Sandbox and the Apache Web Server. Two tests were run against this infrastructure: the first verified the effectiveness of the tool against the Cloud environment created; based on its results, a second test was proposed to demonstrate how the Hybrid Application performs against the attacks launched. The tests showed how disruptive the types of Slow-Rate DDoS can be, and also exhibited promising results for the Hybrid Application's performance against Low-Rate Distributed Denial of Service attacks. The Hybrid Application successfully identified each type of Low-Rate DDoS, separated the traffic, and generated few false positives in the process. The results are presented in the form of parameters and graphs.
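The entropy calculation the abstract mentions is commonly a Shannon entropy over traffic features in a time window; a sharp drop (many requests concentrated on few sources) is one signal of low-rate DDoS traffic. The feature choice (source addresses) and the traffic shapes below are illustrative assumptions, not the thesis's exact method:

```python
# Sketch: Shannon entropy of source addresses in a traffic window. Low entropy
# (requests dominated by a few sources) is one indicator used to separate
# low-rate DDoS traffic from legitimate traffic.

import math
from collections import Counter

def entropy(values):
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

normal = ["10.0.0.%d" % (i % 50) for i in range(200)]   # many distinct clients
attack = ["10.0.0.1"] * 180 + ["10.0.0.2"] * 20         # two dominant sources

print(round(entropy(normal), 2))   # high entropy: diverse sources
print(round(entropy(attack), 2))   # low entropy: suspicious concentration
```

In practice the per-packet fields would come from Tshark filter output, and the computed entropy values would be written to the log file the abstract describes.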
Cross Site Scripting Attacks in Web-Based Applications
Web-based applications have become very prevalent due to the ubiquity of web browsers, which deliver service-oriented applications on demand to diverse clients over the Internet, and the cross-site scripting (XSS) attack is a foremost security risk that has continually ravaged web applications over the years. This paper critically examines the concept of XSS and some recent approaches for detecting and preventing XSS attacks in terms of architectural framework, algorithms used, solution location, and so on. The techniques were analysed, and the results showed that most of the available detection and prevention solutions for XSS attacks operate on the client side rather than the server side because of the peculiar nature of web application vulnerabilities, and that they also lack support for self-learning in order to detect new XSS attacks. A few researchers cited in this paper incorporated a self-learning ability to detect and prevent XSS attacks into their architectures using artificial neural networks and soft-computing approaches; considerable improvement is still needed to handle this web application security menace effectively and efficiently, as recommended.
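The signature-based detection that most of the surveyed solutions rely on can be sketched as a naive pattern check on request parameters. The three patterns below are a tiny illustrative sample; real scanners use far richer rule sets, and, as the paper argues, self-learning detectors are needed for novel payloads:

```python
# Sketch: a naive signature-based check for reflected XSS payloads in request
# parameters. The pattern list is an illustrative sample, not a complete or
# bypass-proof rule set.

import re

XSS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),      # injected script tags
    re.compile(r"on\w+\s*=", re.IGNORECASE),       # inline event handlers
    re.compile(r"javascript\s*:", re.IGNORECASE),  # javascript: URIs
]

def looks_like_xss(value):
    return any(p.search(value) for p in XSS_PATTERNS)

print(looks_like_xss("<script>alert(1)</script>"))      # True
print(looks_like_xss("<img src=x onerror=alert(1)>"))   # True
print(looks_like_xss("plain search term"))              # False
```

The weakness of such fixed signatures against obfuscated payloads is precisely why the paper highlights neural-network and soft-computing approaches.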
Cyber indicators of compromise: a domain ontology for security information and event management
It has been said that cyber attackers are attacking at wire speed (very fast), while cyber defenders are defending at human speed (very slow). Researchers have been working to improve this asymmetry by automating a greater portion of what has traditionally been very labor-intensive work. This work involves both the monitoring of live system events (to detect attacks) and the review of historical system events (to investigate attacks). One technology that is helping to automate this work is Security Information and Event Management (SIEM). In short, SIEM technology works by aggregating log information and then sifting through this information looking for event correlations that are highly indicative of attack activity. For example: Administrator successful local logon and (concurrently) Administrator successful remote logon. Such correlations are sometimes referred to as indicators of compromise (IOCs). Though IOCs for network-based data (i.e., packet headers and payload) are fairly mature (e.g., Snort's large rule base), the field of end-device IOCs is still evolving and lacks any well-defined go-to standard accepted by all. This report addresses ontological issues pertaining to the development of end-device IOCs, including what they are, how they are defined, and what dominant early standards already exist. http://archive.org/details/cyberindicatorso1094553041. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
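The correlation rule quoted in the abstract — an Administrator local logon concurrent with an Administrator remote logon — can be sketched as a simple match over aggregated log events. The event schema and the 60-second "concurrency" window are assumptions for illustration:

```python
# Sketch of a SIEM-style correlation: flag an indicator of compromise (IOC)
# when an Administrator local logon and remote logon occur within a short
# window of each other. Event format and window length are assumptions.

WINDOW = 60  # seconds within which two logons count as "concurrent"

def correlate(events):
    """events: list of (timestamp, user, logon_type) from aggregated logs."""
    iocs = []
    local = [(t, u) for t, u, kind in events if kind == "local"]
    remote = [(t, u) for t, u, kind in events if kind == "remote"]
    for t1, u1 in local:
        for t2, u2 in remote:
            if u1 == u2 == "Administrator" and abs(t1 - t2) <= WINDOW:
                iocs.append((u1, t1, t2))
    return iocs

events = [
    (100, "Administrator", "local"),
    (130, "Administrator", "remote"),   # 30 s later: concurrent -> IOC
    (500, "alice", "remote"),
]
print(correlate(events))
```

A production SIEM expresses such rules declaratively over normalized log fields; the ontology work the report describes aims to standardise what those end-device fields and indicators should be.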