DDoS victim service containment to minimize the internal collateral damages in cloud computing
Recent Distributed Denial of Service (DDoS) attacks on cloud services demonstrate new attack effects, including collateral and economic losses. In this work, we show that DDoS mitigation methods may not provide the expected timely mitigation due to the heavy resource outage created by the attacks. We observe an important Operating System (OS) level internal collateral damage, in which other critical services are also affected. We formulate DDoS mitigation as an OS-level resource management problem and argue that providing extra resources to the victim's server helps only if the availability of the other services can be ensured. To achieve these goals, we propose a novel resource containment approach that enforces the victim's resource limits. Our real-time experimental evaluations show that the proposed approach reduces the attack reporting time and the victim service downtime by providing isolated and timely resources that ensure the availability of other critical services.
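The containment idea above can be illustrated with a toy scheduler simulation (a hypothetical sketch, not the authors' implementation; the service names and numbers are invented): without a cap, a flooded victim consumes the whole CPU budget and starves co-located services, while a hard per-service limit keeps the neighbours available.

```python
def run_quantum_scheduler(demands, caps, total=100):
    """Allocate `total` CPU units among services for one interval.

    demands: units each service asks for this interval.
    caps:    hard per-service limits (missing key = uncapped).
    Services are granted in order, so an uncapped flooded victim
    can exhaust the budget before its neighbours are served.
    """
    alloc, remaining = {}, total
    for name, want in demands.items():
        cap = caps.get(name)
        budget = remaining if cap is None else min(cap, remaining)
        grant = min(want, budget)
        alloc[name] = grant
        remaining -= grant
    return alloc

# Victim under DDoS demands far more than the host can provide.
demands = {"victim": 1000, "monitor": 10, "db": 20}
uncapped = run_quantum_scheduler(demands, caps={})
capped = run_quantum_scheduler(demands, caps={"victim": 40})
```

With no cap the victim takes all 100 units and `monitor`/`db` get nothing; with the 40-unit containment limit, both co-located services receive their full demand.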
A Study of Very Short Intermittent DDoS Attacks on the Performance of Web Services in Clouds
Distributed Denial-of-Service (DDoS) attacks for web applications such as e-commerce are increasing in size, scale, and frequency. The emerging elastic cloud computing cannot defend against ever-evolving new types of DDoS attacks, since they exploit various newly discovered network or system vulnerabilities even in the cloud platform, bypassing not only the state-of-the-art defense mechanisms but also the elasticity mechanisms of cloud computing.
In this dissertation, we focus on a new type of low-volume DDoS attack, Very Short Intermittent DDoS Attacks, which can hurt the performance of web applications deployed in the cloud by transiently saturating the critical bottleneck resource of the target systems, either through external attack HTTP requests from outside the cloud or through internal resource contention inside the cloud. We explored external attacks by modeling n-tier web applications with queuing network theory and implementing an attacking framework based on feedback control theory. We explored internal attacks by investigating and exploiting resource contention and performance interference to locate a target VM (virtual machine) and degrade its performance.
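The transient-saturation effect can be reproduced with a toy single-server queue (a simplified sketch, not the dissertation's n-tier queuing model; the rates are illustrative): two workloads with the same average load behave very differently when one delivers its requests in very short bursts that briefly exceed capacity.

```python
def simulate_queue(arrivals, service_rate):
    """Discrete-time FIFO queue: each tick the server completes up to
    `service_rate` requests; `arrivals[t]` requests join at tick t.
    Returns the queue length after every tick."""
    q, lengths = 0, []
    for a in arrivals:
        q = max(0, q + a - service_rate)
        lengths.append(q)
    return lengths

capacity = 10                                   # server handles 10 req/tick
steady = [5] * 100                              # 5 req/tick, utilization 50%
bursty = [50 if t % 10 == 0 else 0 for t in range(100)]  # same average load

peak_steady = max(simulate_queue(steady, capacity))
peak_bursty = max(simulate_queue(bursty, capacity))
```

Both traces average 5 requests/tick, yet the steady load never queues at all while each 50-request pulse builds a backlog of 40 requests — exactly the kind of transient bottleneck saturation, invisible to average-utilization monitoring, that the dissertation exploits.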
Towards applying FCM with DBSCAN for Detecting DDoS Attack in Cloud Infrastructure to Improve Data Transmission Rate
Cloud is a pay-per-use technology that offers IT resources on demand instead of requiring the purchase of computer hardware, saving both time and cost. A DDoS attack targets the main host of a cloud infrastructure by flooding it with unwanted packets and is a major threat to network security. This paper analyzes DDoS attacks on cloud infrastructure and detects them using a hybrid FCM with DBSCAN algorithm, which classifies clusters of data packets and detects the outliers within them. The hybrid approach both assigns each data point to a specific cluster and detects outliers, so data beyond the noise level can be identified, helping to pinpoint the traffic responsible for a DDoS attack. The experimental outcome shows that the enhanced hybrid approach achieves better results in detecting DDoS attacks, which in turn improves the data transmission rate.
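A minimal NumPy-only sketch of the kind of hybrid the abstract describes (the paper's actual features and parameters are not given here, so the data, thresholds, `eps`, and `min_pts` below are illustrative): fuzzy c-means assigns soft cluster memberships to per-packet feature vectors, and a DBSCAN-style density test flags points that belong to no dense cluster as suspect traffic.

```python
import numpy as np

def fcm_memberships(X, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means: returns an (n, c) soft-membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))       # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

def dbscan_noise(X, eps, min_pts):
    """Flag DBSCAN noise: non-core points with no core point within eps."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    within = d <= eps
    core = within.sum(axis=1) >= min_pts            # neighbourhood incl. self
    reachable = (within & core[None, :]).any(axis=1)
    return ~core & ~reachable

# Two dense "normal traffic" clusters plus one far-away packet.
rng = np.random.default_rng(1)
normal = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(10, 0.3, (20, 2))])
X = np.vstack([normal, [[50.0, 50.0]]])

U = fcm_memberships(X, c=2)
noise = dbscan_noise(X, eps=2.0, min_pts=4)
# Hybrid rule (illustrative): flag density noise or points that fit no cluster well.
flagged = noise | (U.max(axis=1) < 0.6)
```

Here the isolated point is flagged by the density test even when its fuzzy memberships are ambiguous, which is the complementarity the hybrid relies on.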
Scale Inside-Out: Rapid Mitigation of Cloud DDoS Attacks
Distributed denial of service (DDoS) attacks in cloud computing require quick absorption of attack data. DDoS attack mitigation is usually achieved by dynamically scaling cloud resources so as to quickly identify the attack features and combat the attack. Resource scaling comes at an additional cost, which may prove hugely disruptive in the case of long, sophisticated, and repetitive attacks. In this work, we address an important question: does resource scaling during an attack always result in rapid DDoS mitigation? For this purpose, we conduct real-time DDoS attack experiments to study attack absorption and attack mitigation for various target services in the presence of dynamic cloud resource scaling. We find that activities such as attack absorption, which provide timely attack data to attack analytics, are adversely compromised by the heavy resource usage generated by the attack. We show that reducing OS-level local resource contention during attacks can expedite overall attack mitigation, which dynamic scaling of resources alone cannot complete. We identify a novel quantity, the "Resource Utilization Factor" of each incoming request, as the major component of this resource contention. To overcome these issues, we propose a new "Scale Inside-Out" approach that, during attacks, reduces the Resource Utilization Factor to a minimal value for quick absorption of the attack. The proposed approach sacrifices the victim service's resources and provides them to the mitigation service and other co-located services to ensure resource availability during the attack. Experimental evaluation shows up to a 95 percent reduction in the victim service's total attack downtime, in addition to considerable improvements in attack detection time, service reporting time, and the downtime of co-located services.
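The "scale inside-out" move can be sketched as a quota reassignment (a hypothetical illustration — the paper's actual Resource Utilization Factor formula and quota values are not reproduced here): during an attack, most of the victim's resource share is handed to the mitigation service, leaving the victim only a minimal floor.

```python
def scale_inside_out(quotas, victim, mitigation, victim_floor=0.05):
    """Reassign most of the victim's resource quota to the mitigation
    service, keeping only a minimal floor for the victim.
    `quotas` maps service name -> fraction of host resources;
    all names and fractions here are illustrative, not the paper's."""
    freed = quotas[victim] - victim_floor
    new_quotas = dict(quotas)
    new_quotas[victim] = victim_floor
    new_quotas[mitigation] += freed
    return new_quotas

quotas = {"web": 0.60, "mitigation": 0.10, "db": 0.30}
during_attack = scale_inside_out(quotas, victim="web", mitigation="mitigation")
```

The total allocation is unchanged — resources are moved inside the host rather than scaled out, which is what avoids the disruptive cost of dynamic scaling during long or repetitive attacks.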
Cyber-Storms Come from Clouds: Security of Cloud Computing in the IoT Era
The Internet of Things (IoT) is rapidly changing our society to a world where every "thing" is connected to the Internet, making computing pervasive like never before. This tsunami of connectivity and data collection relies more and more on the Cloud, where data analytics and intelligence actually reside. Cloud computing has indeed revolutionized the way computational resources and services can be used and accessed, implementing the concept of utility computing whose advantages are undeniable for every business. However, despite the benefits in terms of flexibility, economic savings, and support of new services, its widespread adoption is hindered by the security issues arising with its usage. From a security perspective, the technological revolution introduced by IoT and Cloud computing can represent a disaster, as each object might become inherently remotely hackable and, as a consequence, controllable by malicious actors. While the literature mostly focuses on the security of IoT and Cloud computing as separate entities, in this article we provide an up-to-date and well-structured survey of the security issues of Cloud computing in the IoT era. We give a clear picture of where security issues occur and what their potential impact is. As a result, we claim that it is not enough to secure IoT devices, as cyber-storms come from Clouds.
CLOUD COMPUTING STRATEGY FOR OVERFLOW OF DENIED DATA
The success of the cloud computing paradigm is due to its on-demand, self-service, pay-per-use nature. Under this paradigm, the effects of Denial of Service (DoS) attacks involve not only the quality of the delivered service but also the service maintenance costs in terms of resource consumption. Specifically, the longer the detection delay, the higher the costs incurred. Particular attention must therefore be paid to stealthy DoS attacks, which aim to minimize their visibility while being as harmful as brute-force attacks. These are sophisticated attacks tailored to trigger the worst-case performance of the target system through specific periodic, pulsing, low-rate traffic patterns. In this paper, we propose a strategy to orchestrate stealthy attack patterns that exhibit a slowly increasing intensity trend designed to inflict the maximum financial cost on the cloud customer, while respecting the job size and the service arrival rate imposed by the detection mechanisms. We describe both how to apply the proposed strategy and its effects on a target system deployed in the cloud.
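The pattern the paper orchestrates — short periodic pulses whose intensity grows slowly while staying under a detector's tolerated rate — can be sketched as a request schedule (the period, burst length, growth rate, and cap below are illustrative placeholders, not the paper's parameters):

```python
def stealthy_schedule(periods, period=60, burst_len=5,
                      base_rate=10, growth=1.05, cap=30):
    """Build a list of (tick, requests_per_tick) pairs: a pulse of
    `burst_len` ticks at the start of each `period`, with the pulse
    intensity multiplied by `growth` every period but never exceeding
    `cap` (the rate assumed to stay below the detection threshold)."""
    rate, sched = base_rate, []
    for p in range(periods):
        start = p * period
        for t in range(start, start + burst_len):
            sched.append((t, int(rate)))
        rate = min(cap, rate * growth)   # slowly-increasing-intensity trend
    return sched

sched = stealthy_schedule(periods=40)
pulse_rates = [r for _, r in sched[::5]]   # intensity of each pulse
```

The schedule is active for only 5 of every 60 ticks, so its average rate stays low even as the per-pulse intensity creeps upward — the low-visibility, worst-case-inducing shape the abstract describes.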
Hardening an Open-Source Governance Risk and Compliance Software: Eramba
Master's thesis, Information Security (Segurança Informática), Universidade de Lisboa, Faculdade de Ciências, 2020.
Historical lessons such as Chernobyl, Fukushima, or the collapse of the Mississippi bridge showcase the vital importance of risk management. In addition to managing risk, companies must develop plans to safeguard against, and offer resilience to, any threat they may face, from natural disasters and terrorism to cyber-attacks and the spread of viruses. These plans are called business continuity plans. The cruciality of these plans and the introduction of new laws such as the Sarbanes-Oxley Act, European Directive 2006/43/EC VIII, and, recently, the Data Protection Regulation have generated greater concern and sensitivity in companies, leading them to consolidate all these governance, risk, and compliance (GRC) processes. GRC integrates the implementation of risk management, business continuity plans, legal compliance, and good external and internal audit practices. Companies need a tool that provides an overall view of Governance, Risk, and Compliance. However, such tools are usually expensive, which means that small and medium-sized companies cannot afford them. Consequently, these companies tend to adopt open-source tools such as SimpleRisk, Envelop, or Eramba. Despite supporting GRC, applications of this type have several problems, such as lack of maintenance, migration problems, difficulty in scaling, the constant need for updates, and a steep learning curve. Ernst & Young, now known as EY, offers Consulting, Assurance, Tax, and Strategy and Transactions services to help solve its clients' most difficult challenges and create value.
To prepare for a future audit, an EY client in the banking sector seeks certification in ISO/IEC 27001 and ISO/IEC 22301, which cover Information Security Management Systems (ISMS) and Business Continuity Management Systems (BCMS), respectively. Additionally, the client aims to migrate its on-site infrastructure to a cloud infrastructure. With all these factors in mind, EY recommended an open-source GRC tool called Eramba. This thesis presents an in-depth study of the vulnerabilities Eramba may expose, as well as a solution to address them through cloud hosting. Following a pentesting methodology called PTES for the vulnerability study, it was possible to identify ten vulnerabilities, almost all of them low severity. The PTES methodology recommends adopting a threat model in order to understand how processes are correlated, where important data are stored, what the main assets are, and how a request is processed in the application. This modeling followed a methodology proposed by Microsoft named STRIDE, a mnemonic for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Microsoft proposes a four-step threat-modeling process: model the system through Data Flow Diagrams; find threats and classify them using the STRIDE nomenclature; address threats by mitigating or eliminating them; and validate that each one has actually been addressed successfully. To carry out these last two steps, and to meet the company's requirement of migrating to cloud hosting, a solution was developed to package Eramba as a container and then make use of the container orchestration provided by Kubernetes. As a result, any organization can adapt this GRC solution and host it in the cloud without difficulty. This work also made it possible to analyze the long-term viability of Eramba and to assess whether it is secure and scalable.